Sample records for time distribution method

  1. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks

    PubMed Central

    Lam, William H. K.; Li, Qingquan

    2017-01-01

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks. PMID:29210978
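
    The Dempster-Shafer fusion step named above can be illustrated with a small sketch. Below is a minimal Python implementation of Dempster's rule of combination; the travel-time bands and mass values are hypothetical, not taken from the paper.

    ```python
    # Minimal sketch of Dempster's rule of combination (hypothetical masses).
    def dempster_combine(m1, m2):
        """Combine two mass functions whose focal elements are frozensets."""
        combined, conflict = {}, 0.0
        for a, wa in m1.items():
            for b, wb in m2.items():
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb           # mass assigned to the empty set
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    # Hypothetical evidence on which travel-time band (minutes) a path falls in:
    # one mass function from point detectors, one from interval detectors.
    m_point = {frozenset({"10-15"}): 0.6, frozenset({"10-15", "15-20"}): 0.4}
    m_interval = {frozenset({"15-20"}): 0.5, frozenset({"10-15", "15-20"}): 0.5}
    print(dempster_combine(m_point, m_interval))
    ```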

  2. Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.

    PubMed

    Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan

    2017-12-06

    Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.

  3. Optimal methods for fitting probability distributions to propagule retention time in studies of zoochorous dispersal.

    PubMed

    Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi

    2016-02-01

    Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
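
    As a concrete illustration of the recommended approach (maximum likelihood on the cumulative distribution of interval-censored data), the sketch below fits a lognormal to simulated retention times binned into fixed sampling intervals. The distribution choice, interval width, and sample size are illustrative assumptions, not the authors' data.

    ```python
    # Hedged sketch: interval-censored maximum-likelihood fit of a lognormal
    # to binned retention times (simulated; all parameters illustrative).
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(0)
    true_times = rng.lognormal(mean=3.0, sigma=0.5, size=500)   # retention times (min)
    edges = np.arange(0, 120, 10.0)                             # 10-min sampling intervals
    counts, _ = np.histogram(true_times, bins=edges)
    # (propagules retained beyond the last interval are ignored in this toy example)

    def neg_log_lik(params):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        cdf = stats.lognorm.cdf(edges, s=sigma, scale=np.exp(mu))
        probs = np.clip(np.diff(cdf), 1e-12, None)   # probability mass of each interval
        return -np.sum(counts * np.log(probs))

    res = optimize.minimize(neg_log_lik, x0=[2.0, 1.0], method="Nelder-Mead")
    print("estimated mu, sigma:", res.x)             # should be near (3.0, 0.5)
    ```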

  4. Accuracy of time-domain and frequency-domain methods used to characterize catchment transit time distributions

    NASA Astrophysics Data System (ADS)

    Godsey, S. E.; Kirchner, J. W.

    2008-12-01

    The mean residence time - the average time that it takes rainfall to reach the stream - is a basic parameter used to characterize catchment processes. Heterogeneities in these processes lead to a distribution of travel times around the mean residence time. By examining this travel time distribution, we can better predict catchment response to contamination events. A catchment system with shorter residence times or narrower distributions will respond quickly to contamination events, whereas systems with longer residence times or longer-tailed distributions will respond more slowly to those same contamination events. The travel time distribution of a catchment is typically inferred from time series of passive tracers (e.g., water isotopes or chloride) in precipitation and streamflow. Variations in the tracer concentration in streamflow are usually damped compared to those in precipitation, because precipitation inputs from different storms (with different tracer signatures) are mixed within the catchment. Mathematically, this mixing process is represented by the convolution of the travel time distribution and the precipitation tracer inputs to generate the stream tracer outputs. Because convolution in the time domain is equivalent to multiplication in the frequency domain, it is relatively straightforward to estimate the parameters of the travel time distribution in either domain. In the time domain, the parameters describing the travel time distribution are typically estimated by maximizing the goodness of fit between the modeled and measured tracer outputs. In the frequency domain, the travel time distribution parameters can be estimated by fitting a power-law curve to the ratio of precipitation spectral power to stream spectral power. Differences between the methods of parameter estimation in the time and frequency domain mean that these two methods may respond differently to variations in data quality, record length and sampling frequency. Here we evaluate how well these two methods of travel time parameter estimation respond to different sources of uncertainty and compare the methods to one another. We do this by generating synthetic tracer input time series of different lengths and convolving these with specified travel-time distributions to generate synthetic output time series. We then sample both the input and output time series at various sampling intervals and corrupt the time series with realistic error structures. Using these 'corrupted' time series, we infer the apparent travel time distribution, and compare it to the known distribution that was used to generate the synthetic data in the first place. This analysis allows us to quantify how different record lengths, sampling intervals, and error structures in the tracer measurements affect the apparent mean residence time and the apparent shape of the travel time distribution.
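
    A minimal time-domain sketch of this procedure is shown below: a synthetic tracer input is convolved with a gamma-shaped travel time distribution to produce a synthetic output, and the distribution's parameters are then re-estimated by maximizing goodness of fit. The series, the gamma form, and the noise level are illustrative assumptions.

    ```python
    # Hedged sketch of time-domain travel-time-distribution fitting by convolution.
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(1)
    dt, n = 1.0, 730                       # daily steps, two years
    c_in = rng.normal(0.0, 1.0, n)         # synthetic precipitation tracer anomaly
    t = np.arange(n) * dt

    def convolve_ttd(params, c):
        shape, scale = params
        h = stats.gamma.pdf(t, a=shape, scale=scale) * dt   # discretized TTD
        return np.convolve(c, h)[:n]

    # "Measured" stream tracer: known TTD plus measurement noise.
    c_out = convolve_ttd((2.0, 30.0), c_in) + rng.normal(0, 0.02, n)

    def sse(params):
        if min(params) <= 0:
            return np.inf
        return np.sum((convolve_ttd(params, c_in) - c_out) ** 2)

    fit = optimize.minimize(sse, x0=[1.0, 50.0], method="Nelder-Mead")
    print("fitted shape, scale:", fit.x)   # mean transit time = shape * scale
    ```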

  5. A performance analysis method for distributed real-time robotic systems: A case study of remote teleoperation

    NASA Technical Reports Server (NTRS)

    Lefebvre, D. R.; Sanderson, A. C.

    1994-01-01

    Robot coordination and control systems for remote teleoperation applications are by necessity implemented on distributed computers. Modeling and performance analysis of these distributed robotic systems is difficult, but important for economic system design. Performance analysis methods originally developed for conventional distributed computer systems are often unsatisfactory for evaluating real-time systems. The paper introduces a formal model of distributed robotic control systems and a performance analysis method, based on scheduling theory, which can handle concurrent hard-real-time response specifications. Use of the method is illustrated by a case study of remote teleoperation which assesses the effect of communication delays and the allocation of robot control functions on control system hardware requirements.

  6. Estimation of modal parameters using bilinear joint time frequency distributions

    NASA Astrophysics Data System (ADS)

    Roshan-Ghias, A.; Shamsollahi, M. B.; Mobed, M.; Behzad, M.

    2007-07-01

    In this paper, a new method is proposed for modal parameter estimation using time-frequency representations. The smoothed pseudo Wigner-Ville distribution, a member of the Cohen class of distributions, is used to decouple vibration modes completely so that each mode can be studied separately. This distribution reduces the cross-terms which are troublesome in the Wigner-Ville distribution, while retaining resolution. The method was applied to highly damped systems, and results were superior to those obtained via other conventional methods.
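
    For readers who want to experiment, here is a compact numpy sketch of a pseudo Wigner-Ville distribution (lag-windowed; the smoothed pseudo variant used in the paper adds a second, time-direction smoothing window). The Hamming window and the test chirp are illustrative choices, not the authors' code.

    ```python
    # Hedged sketch: pseudo Wigner-Ville distribution as the FFT over lag of the
    # windowed instantaneous autocorrelation of an analytic signal.
    import numpy as np

    def pseudo_wvd(x, win_len=65):
        x = np.asarray(x, dtype=complex)
        n, half = len(x), win_len // 2
        w = np.hamming(win_len)                 # lag-direction smoothing window
        tfr = np.zeros((n, win_len))
        for t in range(n):
            kmax = min(half, t, n - 1 - t)
            k = np.arange(-kmax, kmax + 1)
            r = x[t + k] * np.conj(x[t - k]) * w[half + k]   # instantaneous autocorrelation
            acf = np.zeros(win_len, dtype=complex)
            acf[half + k] = r
            tfr[t] = np.abs(np.fft.fft(np.fft.ifftshift(acf)))  # lag 0 moved to index 0
        return tfr

    fs = 1000.0
    t = np.arange(0, 1, 1 / fs)
    chirp = np.exp(2j * np.pi * (50 * t + 100 * t ** 2))   # analytic linear chirp
    tfr = pseudo_wvd(chirp)
    print(tfr.shape)   # (time, frequency-bin) image; the ridge follows the chirp
    ```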

  7. Real-time modeling of heat distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamann, Hendrik F.; Li, Hongfei; Yarlanki, Srinivas

    Techniques for real-time modeling temperature distributions based on streaming sensor data are provided. In one aspect, a method for creating a three-dimensional temperature distribution model for a room having a floor and a ceiling is provided. The method includes the following steps. A ceiling temperature distribution in the room is determined. A floor temperature distribution in the room is determined. An interpolation between the ceiling temperature distribution and the floor temperature distribution is used to obtain the three-dimensional temperature distribution model for the room.

  8. Ultra-Wideband Radar Transient Detection using Time-Frequency and Wavelet Transforms.

    DTIC Science & Technology

    1992-12-01

    if p==2, mesh(flipud(abs(spdatamatrix).^2)) end 2. Wigner-Ville Distribution function P = wvd(data, winlenstep, begintheendp) % Filename: wvd.m % Title...short time Fourier transform (STFT), the Instantaneous Power Spectrum and the Wigner-Ville distribution, and time-scale methods, such as the à trous...such as the short time Fourier transform (STFT), the Instantaneous Power Spectrum and the Wigner-Ville distribution [1], and time-scale methods, such

  9. Applications of fractional lower order S transform time frequency filtering algorithm to machine fault diagnosis

    PubMed Central

    Wang, Haibin; Zha, Daifeng; Li, Peng; Xie, Huicheng; Mao, Lili

    2017-01-01

    The Stockwell transform (ST) time-frequency representation (ST-TFR) is a time-frequency analysis method which combines the short time Fourier transform with the wavelet transform, and the ST time-frequency filtering (ST-TFF) method, which takes advantage of time-frequency localized spectra, can separate signals from Gaussian noise. The ST-TFR and ST-TFF methods are used to analyze fault signals, which is reasonable and effective in general Gaussian noise cases. However, this paper shows that the mechanical bearing fault signal follows an Alpha (α) stable distribution process (1 < α < 2), and in some special cases the noise also follows an α stable distribution. The performance of the ST-TFR method degrades under α stable distribution noise, and the ST-TFF method consequently fails. Hence, a new fractional lower order ST time-frequency representation (FLOST-TFR) method employing fractional lower order moments and the ST, together with an inverse FLOST (IFLOST), is proposed in this paper. A new FLOST time-frequency filtering (FLOST-TFF) algorithm based on the FLOST-TFR method and IFLOST is also proposed, and a simplified version is presented. The discrete implementation of the FLOST-TFF algorithm is deduced, and the relevant steps are summarized. Simulation results demonstrate that the FLOST-TFR algorithm is clearly better than the existing ST-TFR algorithm under α stable distribution noise, still works well under Gaussian noise, and is robust. The FLOST-TFF method can effectively filter out α stable distribution noise and restore the original signal. The performance of the FLOST-TFF algorithm is better than that of the ST-TFF method, with smaller mixed MSEs as α and the generalized signal-to-noise ratio (GSNR) change. Finally, the FLOST-TFR and FLOST-TFF methods are applied to analyze an outer race fault signal and extract its fault features under α stable distribution noise, where excellent performance is shown. PMID:28406916

  10. Radar signal analysis of ballistic missile with micro-motion based on time-frequency distribution

    NASA Astrophysics Data System (ADS)

    Wang, Jianming; Liu, Lihua; Yu, Hua

    2015-12-01

    The micro-motion of ballistic missile targets induces micro-Doppler modulation on the radar return signal, which is a unique feature for the warhead discrimination during flight. In order to extract the micro-Doppler feature of ballistic missile targets, time-frequency analysis is employed to process the micro-Doppler modulated time-varying radar signal. The images of time-frequency distribution (TFD) reveal the micro-Doppler modulation characteristic very well. However, there are many existing time-frequency analysis methods to generate the time-frequency distribution images, including the short-time Fourier transform (STFT), Wigner distribution (WD) and Cohen class distribution, etc. Under the background of ballistic missile defence, the paper aims at working out an effective time-frequency analysis method for ballistic missile warhead discrimination from the decoys.
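
    As a starting point for such comparisons, the sketch below computes the simplest of the listed options, the STFT, for a toy sinusoidally phase-modulated return standing in for a micro-Doppler signature; the signal parameters are invented for illustration.

    ```python
    # Hedged sketch: STFT of a toy micro-Doppler return (all parameters invented).
    import numpy as np
    from scipy import signal

    fs = 2000.0
    t = np.arange(0, 2, 1 / fs)
    # Toy return: 300 Hz line whose phase is sinusoidally modulated at 2 Hz,
    # mimicking micro-motion of a precessing target.
    x = np.exp(1j * (2 * np.pi * 300 * t + 40 * np.sin(2 * np.pi * 2 * t)))

    f, tau, Z = signal.stft(x, fs=fs, nperseg=256, noverlap=192,
                            return_onesided=False)
    ridge = f[np.argmax(np.abs(Z), axis=0)]   # dominant frequency vs. time
    print(ridge[:10])  # oscillates around 300 Hz at the 2 Hz micro-motion rate
    ```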

  11. Provably secure time distribution for the electric grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith IV, Amos M; Evans, Philip G; Williams, Brian P

    We demonstrate a quantum time distribution (QTD) method that combines the precision of optical timing techniques with the integrity of quantum key distribution (QKD). Critical infrastructure is dependent on microprocessor- and programmable-logic-based monitoring and control systems. The distribution of timing information across the electric grid is accomplished by GPS signals, which are known to be vulnerable to spoofing. We demonstrate a method for synchronizing remote clocks based on the arrival time of photons in a modified QKD system. This has the advantage that the signal can be verified by examining the quantum states of the photons, similar to QKD.

  12. A New Approach in Time-Frequency Analysis with Applications to Experimental High Range Resolution Radar Data

    DTIC Science & Technology

    2003-11-01

    Distributions In contrast to the linear time-frequency transforms such as the short-time Fourier transform, the Wigner-Ville distribution (WVD) is... Results of nine TFDs: (a) Wigner-Ville distribution, (b) Born-Jordan distribution, (c) Choi-Williams distribution, (d) bilinear TFD...are applied in the Wigner-Ville class of time-frequency transforms and the reassignment methods, which are applied to any time-frequency distribution

  13. Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices

    DOEpatents

    Chassin, David P [Pasco, WA; Donnelly, Matthew K [Kennewick, WA; Dagle, Jeffery E [Richland, WA

    2011-12-06

    Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices are described. In one aspect, an electrical power distribution control method includes providing electrical energy from an electrical power distribution system, applying the electrical energy to a load, providing a plurality of different values for a threshold at a plurality of moments in time and corresponding to an electrical characteristic of the electrical energy, and adjusting an amount of the electrical energy applied to the load responsive to an electrical characteristic of the electrical energy triggering one of the values of the threshold at the respective moment in time.
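
    A toy sketch of this control idea follows: a measured electrical characteristic (here, line frequency) is compared against a threshold that takes different values at different moments in time, and the applied load is adjusted when the threshold is triggered. All numbers are hypothetical.

    ```python
    # Illustrative sketch of threshold-triggered load adjustment (values invented).
    def control_step(measured_hz, threshold_hz, load_kw, shed_fraction=0.25):
        """Return the adjusted load after one threshold comparison."""
        if measured_hz < threshold_hz:          # under-frequency event triggers shedding
            return load_kw * (1.0 - shed_fraction)
        return load_kw

    # A plurality of different threshold values at a plurality of moments in time.
    schedule = [(0, 59.95), (1, 59.90), (2, 59.95), (3, 59.98)]
    measurements = [59.97, 59.85, 59.96, 59.90]
    load = 10.0
    for (step, thresh), hz in zip(schedule, measurements):
        load = control_step(hz, thresh, load)
        print(f"t={step}: f={hz} Hz, threshold={thresh} Hz, load={load:.2f} kW")
    ```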

  14. Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices

    DOEpatents

    Chassin, David P.; Donnelly, Matthew K.; Dagle, Jeffery E.

    2006-12-12

    Electrical power distribution control methods, electrical energy demand monitoring methods, and power management devices are described. In one aspect, an electrical power distribution control method includes providing electrical energy from an electrical power distribution system, applying the electrical energy to a load, providing a plurality of different values for a threshold at a plurality of moments in time and corresponding to an electrical characteristic of the electrical energy, and adjusting an amount of the electrical energy applied to the load responsive to an electrical characteristic of the electrical energy triggering one of the values of the threshold at the respective moment in time.

  15. Real-Time Optimization and Control of Next-Generation Distribution Infrastructure

    Science.gov Websites

    This project develops innovative, real-time optimization and control methods for next-generation distribution infrastructure.

  16. Coordinated control of active and reactive power of distribution network with distributed PV cluster via model predictive control

    NASA Astrophysics Data System (ADS)

    Ji, Yu; Sheng, Wanxing; Jin, Wei; Wu, Ming; Liu, Haitao; Chen, Feng

    2018-02-01

    A coordinated optimal control method of active and reactive power for distribution networks with distributed PV clusters, based on model predictive control, is proposed in this paper. The method divides the control process into long-time-scale optimal control and short-time-scale optimal control with multi-step optimization. Because the optimization models are non-convex and nonlinear and therefore hard to solve, they are transformed into a second-order cone programming problem. An improved IEEE 33-bus distribution network system is used to analyse the feasibility and effectiveness of the proposed control method.

  17. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). In contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  18. Optimal distribution of integration time for intensity measurements in degree of linear polarization polarimetry.

    PubMed

    Li, Xiaobo; Hu, Haofeng; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie

    2016-04-04

    We consider the degree of linear polarization (DOLP) polarimetry system, which performs two intensity measurements at orthogonal polarization states to estimate the DOLP. We show that if the total integration time of intensity measurements is fixed, the variance of the DOLP estimator depends on the distribution of integration time between the two intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the DOLP estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time in an approximate way by employing the Delta method and the Lagrange multiplier method. According to the theoretical analyses and real-world experiments, it is shown that the variance of the DOLP estimator can be decreased for any value of DOLP. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetry system.

  19. Radar Imaging Using The Wigner-Ville Distribution

    NASA Astrophysics Data System (ADS)

    Boashash, Boualem; Kenny, Owen P.; Whitehouse, Harper J.

    1989-12-01

    The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. This paper first discusses the radar equation in terms of the time-frequency representation of the signal received from a radar system. It then presents a method of tomographic reconstruction for time-frequency images to estimate the scattering function of the aircraft. An optical architecture is then discussed for the real-time implementation of the analysis method based on the WVD.

  20. An Empirical Analysis of the Cascade Secret Key Reconciliation Protocol for Quantum Key Distribution

    DTIC Science & Technology

    2011-09-01

    performance with the parity checks within each pass increasing and as a result, the processing time is expected to increase as well. A conclusion is drawn... timely manner has driven efforts to develop new key distribution methods. The most promising method is Quantum Key Distribution (QKD) and is... thank the QKD Project Team for all of the insight and support they provided in such a short time period. Thanks are especially in order for my

  1. Assessing the size distribution of droplets in a cloud chamber from light extinction data during a transient regime

    NASA Astrophysics Data System (ADS)

    Vâjâiac, Sorin Nicolae; Filip, Valeriu; Štefan, Sabina; Boscornea, Andreea

    2014-03-01

    The paper describes a method of assessing the size distribution of fog droplets in a cloud chamber, based on measuring the time variation of the transmission of a light beam during the gravitational settling of droplets. Using a model of light extinction by floating spherical particles, the size distribution of droplets is retrieved, along with characteristic structural parameters of the fog (total droplet concentration, liquid water content and effective radius). Moreover, the time variation of the effective radius can be readily extracted from the model. The errors of the method are also estimated and fall within acceptable limits. The method proves sensitive enough to resolve various modes in the droplet distribution and to point out changes in the distribution due to diverse types of aerosol present in the chamber or to the thermal condition of the fog. It is speculated that the method can be further simplified to reach an in-situ version for real-time field measurements.

  2. EMD-WVD time-frequency distribution for analysis of multi-component signals

    NASA Astrophysics Data System (ADS)

    Chai, Yunzi; Zhang, Xudong

    2016-10-01

    A time-frequency distribution (TFD) is a two-dimensional function that indicates the time-varying frequency content of a one-dimensional signal, and the Wigner-Ville distribution (WVD) is an important and effective time-frequency analysis method. The WVD can efficiently show the characteristics of a mono-component signal. However, a major drawback is the extra cross-terms that arise when multi-component signals are analyzed by the WVD. In order to eliminate the cross-terms, we first decompose signals into single-frequency components - Intrinsic Mode Functions (IMFs) - using Empirical Mode Decomposition (EMD), and then use the WVD to analyze each IMF. In this paper, we define this new time-frequency distribution as EMD-WVD. The experimental results show that the proposed time-frequency method can solve the cross-terms problem effectively and improve the accuracy of WVD time-frequency analysis.

  3. Avalanche statistics from data with low time resolution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.

    Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distribution of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.
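
    The resolution effect is easy to reproduce. The toy sketch below builds a synthetic velocity signal from power-law-distributed avalanche pulses, then extracts avalanche sizes before and after block-averaging the signal by a factor of 100; nearby events merge at low resolution, distorting the size distribution. The event train and parameters are synthetic, not the authors' data.

    ```python
    # Toy demonstration of avalanche merging under low time resolution.
    import numpy as np

    rng = np.random.default_rng(2)
    n, dt = 200_000, 1.0
    v = np.zeros(n)
    sizes = (1 - rng.random(3000)) ** (-1 / 1.5)        # power-law sizes, exponent 2.5
    starts = rng.integers(0, n - 50, size=sizes.size)
    for s, sz in zip(starts, sizes):
        dur = rng.integers(5, 30)
        v[s:s + dur] += sz / dur                        # pulse whose area is the size

    def extract_sizes(signal, step):
        """Integrate contiguous active runs of the signal into avalanche sizes."""
        idx = np.flatnonzero(signal > 0)
        if idx.size == 0:
            return np.array([])
        runs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
        return np.array([signal[r].sum() * step for r in runs])

    hi = extract_sizes(v, dt)
    low_res = v.reshape(-1, 100).mean(axis=1)           # 100x coarser acquisition
    lo = extract_sizes(low_res, dt * 100)
    print(len(hi), "avalanches at full resolution vs", len(lo), "after downsampling")
    ```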

  4. Avalanche statistics from data with low time resolution

    DOE PAGES

    LeBlanc, Michael; Nawano, Aya; Wright, Wendelin J.; ...

    2016-11-22

    Extracting avalanche distributions from experimental microplasticity data can be hampered by limited time resolution. We compute the effects of low time resolution on avalanche size distributions and give quantitative criteria for diagnosing and circumventing problems associated with low time resolution. We show that traditional analysis of data obtained at low acquisition rates can lead to avalanche size distributions with incorrect power-law exponents or no power-law scaling at all. Furthermore, we demonstrate that it can lead to apparent data collapses with incorrect power-law and cutoff exponents. We propose new methods to analyze low-resolution stress-time series that can recover the size distribution of the underlying avalanches even when the resolution is so low that naive analysis methods give incorrect results. We test these methods on both downsampled simulation data from a simple model and downsampled bulk metallic glass compression data and find that the methods recover the correct critical exponents.

  5. Distributed systems status and control

    NASA Technical Reports Server (NTRS)

    Kreidler, David; Vickers, David

    1990-01-01

    Concepts are investigated for an automated status and control system for a distributed processing environment. System characteristics, data requirements for health assessment, data acquisition methods, system diagnosis methods and control methods were investigated in an attempt to determine the high-level requirements for a system which can be used to assess the health of a distributed processing system and implement control procedures to maintain an accepted level of health for the system. A potential concept for automated status and control includes the use of expert system techniques to assess the health of the system, detect and diagnose faults, and initiate or recommend actions to correct the faults. Therefore, this research included the investigation of methods by which expert systems were developed for real-time environments and distributed systems. The focus is on the features required by real-time expert systems and the tools available to develop real-time expert systems.

  6. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE PAGES

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat; ...

    2017-09-20

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  7. Electric Power Distribution System Model Simplification Using Segment Substitution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reiman, Andrew P.; McDermott, Thomas E.; Akcakaya, Murat

    Quasi-static time-series (QSTS) simulation is used to simulate the behavior of distribution systems over long periods of time (typically hours to years). The technique involves repeatedly solving the load-flow problem for a distribution system model and is useful for distributed energy resource (DER) planning. When a QSTS simulation has a small time step and a long duration, the computational burden of the simulation can be a barrier to integration into utility workflows. One way to relieve the computational burden is to simplify the system model. The segment substitution method of simplifying distribution system models introduced in this paper offers model bus reduction of up to 98% with a simplification error as low as 0.2% (0.002 pu voltage). Finally, in contrast to existing methods of distribution system model simplification, which rely on topological inspection and linearization, the segment substitution method uses black-box segment data and an assumed simplified topology.

  8. Nonparametric methods for analyzing recurrent gap time data with application to infections after hematopoietic cell transplant.

    PubMed

    Lee, Chi Hyun; Luo, Xianghua; Huang, Chiung-Yu; DeFor, Todd E; Brunstein, Claudio G; Weisdorf, Daniel J

    2016-06-01

    Infection is one of the most common complications after hematopoietic cell transplantation. Many patients experience infectious complications repeatedly after transplant. Existing statistical methods for recurrent gap time data typically assume that patients are enrolled due to the occurrence of an event of interest, and subsequently experience recurrent events of the same type; moreover, for one-sample estimation, the gap times between consecutive events are usually assumed to be identically distributed. Applying these methods to analyze the post-transplant infection data will inevitably lead to incorrect inferential results because the time from transplant to the first infection has a different biological meaning than the gap times between consecutive recurrent infections. Some unbiased yet inefficient methods include univariate survival analysis methods based on data from the first infection or bivariate serial event data methods based on the first and second infections. In this article, we propose a nonparametric estimator of the joint distribution of time from transplant to the first infection and the gap times between consecutive infections. The proposed estimator takes into account the potentially different distributions of the two types of gap times and better uses the recurrent infection data. Asymptotic properties of the proposed estimators are established. © 2015, The International Biometric Society.

  9. Nonparametric methods for analyzing recurrent gap time data with application to infections after hematopoietic cell transplant

    PubMed Central

    Lee, Chi Hyun; Huang, Chiung-Yu; DeFor, Todd E.; Brunstein, Claudio G.; Weisdorf, Daniel J.

    2015-01-01

    Infection is one of the most common complications after hematopoietic cell transplantation. Many patients experience infectious complications repeatedly after transplant. Existing statistical methods for recurrent gap time data typically assume that patients are enrolled due to the occurrence of an event of interest, and subsequently experience recurrent events of the same type; moreover, for one-sample estimation, the gap times between consecutive events are usually assumed to be identically distributed. Applying these methods to analyze the post-transplant infection data will inevitably lead to incorrect inferential results because the time from transplant to the first infection has a different biological meaning than the gap times between consecutive recurrent infections. Some unbiased yet inefficient methods include univariate survival analysis methods based on data from the first infection or bivariate serial event data methods based on the first and second infections. In this paper, we propose a nonparametric estimator of the joint distribution of time from transplant to the first infection and the gap times between consecutive infections. The proposed estimator takes into account the potentially different distributions of the two types of gap times and better uses the recurrent infection data. Asymptotic properties of the proposed estimators are established. PMID:26575402

  10. Parallel computing method for simulating hydrological processesof large rivers under climate change

    NASA Astrophysics Data System (ADS)

    Wang, H.; Chen, Y.

    2016-12-01

    Climate change is one of the most prominent global environmental problems. It has altered the spatial and temporal distribution of watershed hydrological processes, especially in the world's large rivers. Watershed hydrological process simulation based on physically based distributed hydrological models can give better results than lumped models. However, such simulation involves a large amount of calculation, especially in large rivers, and thus needs huge computing resources that may not be steadily available to researchers, or only at high expense; this has seriously restricted research and application. To address this problem, current parallel methods mostly parallelize the computation in the space and time dimensions: they calculate the natural features in order, based on a distributed hydrological model, by grid (unit, sub-basin) from upstream to downstream. This article proposes a high-performance computing method for hydrological process simulation with a high speedup ratio and parallel efficiency. It combines the temporal and spatial runoff characteristics of a distributed hydrological model with methods adopting distributed data storage, an in-memory database, distributed computing, and parallel computing based on computing power units. The method has strong adaptability and extensibility, which means it can make full use of computing and storage resources under the condition of limited computing resources, and computing efficiency improves linearly as computing resources increase. This method can satisfy the parallel computing requirements of hydrological process simulation in small, medium and large rivers.
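
    A minimal sketch of the parallelization idea, assuming independent sub-basins distributed across worker processes with a placeholder linear-reservoir runoff model (the paper's actual hydrological model, data storage, and routing are more involved):

    ```python
    # Hedged sketch: distribute sub-basin runoff computations across processes.
    import numpy as np
    from multiprocessing import Pool

    def simulate_subbasin(args):
        basin_id, rainfall = args
        # Placeholder "hydrological process": a simple linear-reservoir response.
        kernel = np.exp(-np.arange(50) / 10.0)
        runoff = np.convolve(rainfall, kernel)[: len(rainfall)]
        return basin_id, runoff

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        tasks = [(i, rng.gamma(0.3, 5.0, size=365)) for i in range(16)]  # 16 sub-basins
        with Pool(processes=4) as pool:
            results = dict(pool.map(simulate_subbasin, tasks))
        total = sum(results.values())      # naive aggregation at the outlet
        print(total[:5])
    ```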

  11. Self spectrum window method in wigner-ville distribution.

    PubMed

    Liu, Zhongguo; Liu, Changchun; Liu, Boqiang; Lv, Yangsheng; Lei, Yinsheng; Yu, Mengsun

    2005-01-01

    The Wigner-Ville distribution (WVD) is an important type of time-frequency analysis in biomedical signal processing. The cross-term interference in the WVD has a disadvantageous influence on its application. In this research, the Self Spectrum Window (SSW) method was put forward to suppress the cross-term interference, based on the fact that the cross-terms and auto-WVD terms in the integral kernel function are orthogonal. With the Self Spectrum Window (SSW) algorithm, a real auto-WVD function was used as a template to cross-correlate with the integral kernel function, and the Short Time Fourier Transform (STFT) spectrum of the signal was used as a window function to process the WVD in the time-frequency plane. The SSW method was confirmed by computer simulation with good analysis results. A satisfactory time-frequency distribution was obtained.

  12. Bilinear Time-frequency Analysis for Lamb Wave Signal Detected by Electromagnetic Acoustic Transducer

    NASA Astrophysics Data System (ADS)

    Sun, Wenxiu; Liu, Guoqiang; Xia, Hui; Xia, Zhengwu

    2018-03-01

    Accurate acquisition of the detection signal travel time plays a very important role in cross-hole tomography. An experimental platform of an aluminum plate under a perpendicular magnetic field is established, and two bilinear time-frequency analysis methods, the Wigner-Ville distribution (WVD) and the pseudo-Wigner-Ville distribution (PWVD), are applied to analyse the Lamb wave signals detected by an electromagnetic acoustic transducer (EMAT). By extracting the time-frequency spectrum component at the excitation frequency, the travel time information can be obtained. In comparison with traditional linear time-frequency analysis methods such as the short-time Fourier transform (STFT), the bilinear time-frequency analysis method PWVD is more appropriate for extracting travel time and recognizing patterns of Lamb waves.

  13. Uncertainty propagation for statistical impact prediction of space debris

    NASA Astrophysics Data System (ADS)

    Hoogendoorn, R.; Mooij, E.; Geul, J.

    2018-01-01

    Predictions of the impact time and location of space debris in a decaying trajectory are highly influenced by uncertainties. The traditional Monte Carlo (MC) method can be used to perform accurate statistical impact predictions, but requires a large computational effort. A method is investigated that directly propagates a Probability Density Function (PDF) in time, which has the potential to obtain more accurate results with less computational effort. The decaying trajectory of Delta-K rocket stages was used to test the methods using a six degrees-of-freedom state model. The PDF of the state of the body was propagated in time to obtain impact-time distributions. This Direct PDF Propagation (DPP) method results in a multi-dimensional scattered dataset of the PDF of the state, which is highly challenging to process. No accurate results could be obtained, because of the structure of the DPP data and the high dimensionality. Therefore, the DPP method is less suitable for practical uncontrolled entry problems and the traditional MC method remains superior. Additionally, the MC method was used with two improved uncertainty models to obtain impact-time distributions, which were validated using observations of true impacts. For one of the two uncertainty models, statistically more valid impact-time distributions were obtained than in previous research.
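
    For orientation, a toy version of the baseline MC workflow looks like this: sample uncertain initial states, integrate a crude one-dimensional decay model to the ground, and histogram the impact times. The dynamics, ballistic coefficient, and uncertainty values are illustrative assumptions, not the paper's six degrees-of-freedom model.

    ```python
    # Toy Monte Carlo sketch of impact-time prediction under uncertainty.
    import numpy as np

    rng = np.random.default_rng(4)

    def impact_time(h0, v0, beta, dt=0.5):
        """Euler integration of vertical descent with altitude-dependent drag."""
        h, v, t = h0, v0, 0.0
        while h > 0.0:
            rho = 1.225 * np.exp(-h / 8500.0)          # exponential atmosphere
            a = -9.81 + 0.5 * rho * v * v / beta       # drag opposes the descent
            v += a * dt
            h += v * dt
            t += dt
        return t

    samples = [impact_time(h0=rng.normal(80e3, 2e3),     # altitude (m)
                           v0=rng.normal(-1800.0, 50.0), # descent rate (m/s)
                           beta=rng.normal(300.0, 30.0)) # ballistic coeff. (kg/m^2)
               for _ in range(2000)]
    print(np.percentile(samples, [5, 50, 95]))           # impact-time spread (s)
    ```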

  14. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE PAGES

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...

    2018-04-30

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPNSPN or SNSN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solutionmore » parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.« less

  15. Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.

    The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPNSPN or SNSN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solutionmore » parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.« less

  16. Modeling highway travel time distribution with conditional probability models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oliveira Neto, Francisco Moraes; Chin, Shih-Miao; Hwang, Ho-Ling

    Under the sponsorship of the Federal Highway Administration's Office of Freight Management and Operations, the American Transportation Research Institute (ATRI) has developed performance measures through the Freight Performance Measures (FPM) initiative. Under this program, travel speed information is derived from data collected using wireless-based global positioning systems. These telemetric data systems are subscribed to and used by the trucking industry as an operations management tool. More than one telemetric operator submits their data dumps to ATRI on a regular basis. Each data transmission contains the truck location, its travel time, and a clock time/date stamp. Data from the FPM program provides a unique opportunity for studying the upstream-downstream speed distributions at different locations, as well as at different times of the day and days of the week. This research is focused on the stochastic nature of successive link travel speed data on the continental United States Interstate network. Specifically, a method to estimate route probability distributions of travel time is proposed. This method uses the concepts of convolution of probability distributions and bivariate, link-to-link, conditional probability to estimate the expected distributions for the route travel time. The major contribution of this study is the consideration of speed correlation between upstream and downstream contiguous Interstate segments through conditional probability. The established conditional probability distributions, between successive segments, can be used to provide travel time reliability measures. This study also suggests an adaptive method for calculating and updating the route travel time distribution as new data or information is added. This methodology can be useful to estimate performance measures as required by the recent Moving Ahead for Progress in the 21st Century Act (MAP-21).
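
    The convolution building block is simple to state in code. The sketch below convolves two hypothetical link travel-time PMFs to obtain the route distribution; it assumes independent links, whereas the study's contribution is precisely to replace that assumption with link-to-link conditional probabilities.

    ```python
    # Sketch: route travel-time PMF as the convolution of link PMFs
    # (independence assumed here for illustration; PMF values are hypothetical).
    import numpy as np

    link_a = np.array([0.0, 0.1, 0.5, 0.3, 0.1])    # P(T_a = 0..4 min)
    link_b = np.array([0.0, 0.2, 0.6, 0.2])         # P(T_b = 0..3 min)
    route = np.convolve(link_a, link_b)             # P(T_a + T_b = 0..7 min)

    mean = np.dot(np.arange(route.size), route)
    print("route PMF:", np.round(route, 3), "mean:", mean, "min")
    ```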

  17. Maximum likelihood estimation for life distributions with competing failure modes

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1979-01-01

    Systems that are placed on test at time zero, function for a period, and fail at some random time were studied. Failure may be due to one of several causes or modes. The parameters of the life distribution may depend upon the levels of various stress variables the item is subjected to. Maximum likelihood estimation methods are discussed. Specific methods are reported for the smallest extreme-value distributions of life. Monte Carlo results indicate the methods to be promising. Under appropriate conditions, the location parameters are nearly unbiased, the scale parameter is slightly biased, and the asymptotic covariances are rapidly approached.
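
    A minimal sketch of the estimation setting follows: failure times are the minimum over two competing modes, and a smallest extreme-value (Gumbel minimum) life distribution is fitted by maximum likelihood via scipy. The simulated modes and parameters are illustrative, and fitting the pooled minimum with a single SEV distribution is a simplification of a full cause-specific treatment.

    ```python
    # Hedged sketch: ML fit of a smallest extreme-value life distribution to
    # failure times generated by two competing modes (parameters invented).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    mode1 = stats.gumbel_l.rvs(loc=100.0, scale=12.0, size=400, random_state=rng)
    mode2 = stats.gumbel_l.rvs(loc=110.0, scale=8.0, size=400, random_state=rng)
    observed = np.minimum(mode1, mode2)     # each item fails at its earliest mode

    loc, scale = stats.gumbel_l.fit(observed)   # MLE of location and scale
    print(f"location={loc:.1f}, scale={scale:.1f}")
    ```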

  18. Survival curve estimation with dependent left truncated data using Cox's model.

    PubMed

    Mackenzie, Todd

    2012-10-19

    The Kaplan-Meier and closely related Lynden-Bell estimators are used to provide nonparametric estimation of the distribution of a left-truncated random variable. These estimators assume that the left-truncation variable is independent of the time-to-event. This paper proposes a semiparametric method for estimating the marginal distribution of the time-to-event that does not require independence. It models the conditional distribution of the time-to-event given the truncation variable using Cox's model for left truncated data, and uses inverse probability weighting. We report the results of simulations and illustrate the method using a survival study.
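
    For contrast with the proposed semiparametric method, the independence-assuming baseline mentioned above can be sketched in a few lines: a Lynden-Bell-type product-limit estimator whose risk set at each event time counts only subjects already entered (entry time at or before t). The data here are hypothetical and uncensored.

    ```python
    # Hedged sketch: product-limit survival estimate for left-truncated data.
    import numpy as np

    def lynden_bell_survival(entry, time):
        """Risk sets count subjects with entry <= t <= event time."""
        entry, time = np.asarray(entry), np.asarray(time)
        surv, out = 1.0, []
        for t in np.sort(np.unique(time)):
            d = np.sum(time == t)                        # events at t
            r = np.sum((entry <= t) & (time >= t))       # at risk at t
            surv *= 1.0 - d / r
            out.append((t, surv))
        return out

    entry = [0.0, 1.0, 2.0, 0.5, 3.0]   # hypothetical truncation (entry) times
    time = [4.0, 3.0, 6.0, 2.0, 7.0]    # event times (all observed here)
    for t, s in lynden_bell_survival(entry, time):
        print(f"t={t}: S(t)={s:.3f}")
    ```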

  19. Novel characterization method of impedance cardiography signals using time-frequency distributions.

    PubMed

    Escrivá Muñoz, Jesús; Pan, Y; Ge, S; Jensen, E W; Vallverdú, M

    2018-03-16

    The purpose of this document is to describe a methodology for selecting the most adequate time-frequency distribution (TFD) kernel for the characterization of impedance cardiography (ICG) signals. The predominant ICG beat was extracted from a patient and was synthesized using time-frequency variant Fourier approximations. These synthesized signals were used to optimize several TFD kernels according to a performance maximization. The optimized kernels were tested for noise resistance on a clinical database. The resulting optimized TFD kernels are presented with their performance calculated using newly proposed methods. The procedure explained in this work showcases a new method to select an appropriate kernel for ICG signals and compares the performance of different time-frequency kernels found in the literature for the case of ICG signals. We conclude that, for ICG signals, the performance (P) of the spectrogram with either Hanning or Hamming windows (P = 0.780) and the extended modified beta distribution (P = 0.765) provided similar results, higher than the rest of the analyzed kernels. Graphical abstract: Flowchart for the optimization of time-frequency distribution kernels for impedance cardiography signals.

  20. Velocity distribution of electrons in time-varying low-temperature plasmas: progress in theoretical procedures over the past 70 years

    NASA Astrophysics Data System (ADS)

    Makabe, Toshiaki

    2018-03-01

    A time-varying low-temperature plasma sustained by electrical power at various kinds of frequencies has played a key role in the historical development of new technologies, such as gas lasers, ozonizers, micro display panels, dry processing of materials, medical care, and so on, since World War II. Electrons in a time-modulated low-temperature plasma have a proper velocity spectrum, i.e. a velocity distribution dependent on the microscopic quantum characteristics of the feed gas molecule and on the external field strength and frequency. In order to solve and evaluate the time-varying velocity distribution, there are mainly two types of theoretical methods based on the classical and linear Boltzmann equations, namely, the expansion method using orthogonal functions and the procedure of non-expansional temporal evolution. Both methods have been developed discontinuously and progressively in synchronization with those technological developments. In this review, we explore the historical development of the theoretical procedures to evaluate the electron velocity distribution in a time-varying low-temperature plasma over the past 70 years.

  1. Optimal distribution of integration time for intensity measurements in Stokes polarimetry.

    PubMed

    Li, Xiaobo; Liu, Tiegen; Huang, Bingjing; Song, Zhanjie; Hu, Haofeng

    2015-10-19

    We consider the typical Stokes polarimetry system, which performs four intensity measurements to estimate a Stokes vector. We show that if the total integration time of intensity measurements is fixed, the variance of the Stokes vector estimator depends on the distribution of the integration time among the four intensity measurements. Therefore, by optimizing the distribution of integration time, the variance of the Stokes vector estimator can be decreased. In this paper, we obtain the closed-form solution of the optimal distribution of integration time by employing the Lagrange multiplier method. According to the theoretical analysis and a real-world experiment, it is shown that the total variance of the Stokes vector estimator can be decreased by about 40% in the case discussed in this paper. The method proposed in this paper can effectively decrease the measurement variance and thus statistically improve the measurement accuracy of the polarimetric system.
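
    A numerical counterpart to the closed-form result can be sketched as a constrained minimization: split a fixed total integration time across the four measurements to minimize a variance objective. The shot-noise-style objective var_i = I_i / t_i and the intensity values are illustrative assumptions, not the paper's exact variance expression.

    ```python
    # Hedged sketch: optimize the split of a fixed total integration time.
    import numpy as np
    from scipy.optimize import minimize

    I = np.array([1.0, 0.3, 0.65, 0.65])    # hypothetical mean intensities (4 states)
    T = 1.0                                  # fixed total integration time

    def estimator_variance(t):
        # Toy figure of merit: summed shot-noise variances of the four intensities.
        return np.sum(I / t)

    cons = {"type": "eq", "fun": lambda t: np.sum(t) - T}
    res = minimize(estimator_variance, x0=np.full(4, T / 4),
                   bounds=[(1e-6, T)] * 4, constraints=cons, method="SLSQP")
    print("optimal split:", np.round(res.x, 4))  # t_i ~ sqrt(I_i): brighter gets more
    ```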

  2. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude [Livermore, CA; Folta, James Allen [Livermore, CA; Tan, Swie-In [San Jose, CA; Reiss, Ira [New City, NY

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  3. An Analysis of Variance Approach for the Estimation of Response Time Distributions in Tests

    ERIC Educational Resources Information Center

    Attali, Yigal

    2010-01-01

    Generalizability theory and analysis of variance methods are employed, together with the concept of objective time pressure, to estimate response time distributions and the degree of time pressure in timed tests. By estimating response time variance components due to person, item, and their interaction, and fixed effects due to item types and…

  4. Dynamical Aspects of Quasifission Process in Heavy-Ion Reactions

    NASA Astrophysics Data System (ADS)

    Knyazheva, G. N.; Itkis, I. M.; Kozulin, E. M.

    2015-06-01

    The study of mass-energy distributions of binary fragments obtained in the reactions of 36S, 48Ca, 58Fe and 64Ni ions with 232Th, 238U, 244Pu and 248Cm at energies below and above the Coulomb barrier is presented. For all the reactions, the main component of the distributions corresponds to asymmetric mass division typical of the asymmetric quasifission process. To describe the quasifission mass distribution, a simple method has been proposed. This method is based on the driving potential of the system and a time-dependent mass drift. The procedure allows the QF time scale to be estimated from the measured mass distributions. It has been found that the QF time decreases exponentially as the reaction Coulomb factor Z1Z2 increases.

  5. Method and system using power modulation and velocity modulation producing sputtered thin films with sub-angstrom thickness uniformity or custom thickness gradients

    DOEpatents

    Montcalm, Claude [Livermore, CA; Folta, James Allen [Livermore, CA; Walton, Christopher Charles [Berkeley, CA

    2003-12-23

    A method and system for determining a source flux modulation recipe for achieving a selected thickness profile of a film to be deposited (e.g., with highly uniform or highly accurate custom graded thickness) over a flat or curved substrate (such as concave or convex optics) by exposing the substrate to a vapor deposition source operated with time-varying flux distribution as a function of time. Preferably, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. Preferably, the method includes the steps of measuring the source flux distribution (using a test piece held stationary while exposed to the source with the source operated at each of a number of different applied power levels), calculating a set of predicted film thickness profiles, each film thickness profile assuming the measured flux distribution and a different one of a set of source flux modulation recipes, and determining from the predicted film thickness profiles a source flux modulation recipe which is adequate to achieve a predetermined thickness profile. Aspects of the invention include a computer-implemented method employing a graphical user interface to facilitate convenient selection of an optimal or nearly optimal source flux modulation recipe to achieve a desired thickness profile on a substrate. The method enables precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  6. Comparison of the different probability distributions for earthquake hazard assessment in the North Anatolian Fault Zone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yilmaz, Şeyda, E-mail: seydayilmaz@ktu.edu.tr; Bayrak, Erdem, E-mail: erdmbyrk@gmail.com; Bayrak, Yusuf, E-mail: bayrak@ktu.edu.tr

    In this study we examined and compared three different probability distribution methods to determine the most suitable model for the probabilistic assessment of earthquake hazards. We analyzed a reliable homogeneous earthquake catalogue for the time period 1900-2015 and magnitudes M ≥ 6.0 and estimated the probabilistic seismic hazard in the North Anatolian Fault zone (39°-41° N, 30°-40° E) using three distribution methods, namely the Weibull distribution, the Frechet distribution and the three-parameter Weibull distribution. The suitability of the distribution parameters was evaluated using the Kolmogorov-Smirnov (K-S) goodness-of-fit test. We also compared the estimated cumulative probability and the conditional probabilities of occurrence of earthquakes for different elapsed times using these three distribution methods. We used Easyfit and Matlab software to calculate the distribution parameters and plotted the conditional probability curves. We concluded that the Weibull distribution method was more suitable than the other distribution methods in this region.
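
    The comparison step can be sketched with scipy: fit each candidate distribution to inter-event times by maximum likelihood and score it with the K-S statistic (scipy's invweibull plays the role of the Frechet distribution). The simulated catalogue below is illustrative, not the North Anatolian data.

    ```python
    # Hedged sketch: fit candidate distributions and score with the K-S test.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    inter_event_years = stats.weibull_min.rvs(1.4, scale=8.0, size=60,
                                              random_state=rng)

    for name, dist in [("Weibull", stats.weibull_min),
                       ("Frechet", stats.invweibull),
                       ("3-par Weibull", stats.weibull_min)]:
        if name == "3-par Weibull":
            params = dist.fit(inter_event_years)            # loc free: 3 parameters
        else:
            params = dist.fit(inter_event_years, floc=0.0)  # 2-parameter fit
        ks = stats.kstest(inter_event_years, dist.name, args=params)
        print(f"{name}: params={np.round(params, 3)}, KS D={ks.statistic:.3f}")
    ```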

  7. Time difference of arrival estimation of microseismic signals based on alpha-stable distribution

    NASA Astrophysics Data System (ADS)

    Jia, Rui-Sheng; Gong, Yue; Peng, Yan-Jun; Sun, Hong-Mei; Zhang, Xing-Li; Lu, Xin-Ming

    2018-05-01

    Microseismic signals are generally assumed to follow a Gaussian distribution. A comparison of the dynamic characteristics of the sample variance and of the symmetry of microseismic signals with those of signals following an α-stable distribution reveals that microseismic signals have obvious pulse characteristics and that the probability density curve of a microseismic signal is approximately symmetric. Thus, the hypothesis that microseismic signals follow a symmetric α-stable distribution is proposed. Under this hypothesis, the characteristic exponent α of the microseismic signals is obtained using fractional low-order statistics, and a new method for time difference of arrival (TDOA) estimation of microseismic signals based on the fractional low-order covariance (FLOC) is proposed. Applying this method to the TDOA estimation of Ricker wavelet simulation signals and of real microseismic signals, experimental results show that the FLOC method, based on the symmetric α-stable assumption, yields enhanced spatial resolution of the TDOA estimation relative to the generalized cross correlation (GCC) method, which rests on the Gaussian assumption.
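
    A minimal sketch of the FLOC idea, not the authors' implementation: replace the usual correlation with a fractional low-order covariance built from signed fractional powers, and take the lag that maximizes it. A Student-t source stands in for an alpha-stable one, and the orders a, b are chosen so that a + b stays below the assumed characteristic exponent.

    ```python
    import numpy as np

    def sig_pow(x, a):
        """Signed fractional power: x^<a> = sign(x) * |x|^a."""
        return np.sign(x) * np.abs(x) ** a

    def floc_tdoa(x, y, max_lag, a=0.4, b=0.4):
        """Lag maximizing the fractional low-order covariance of x and y."""
        xs, ys = sig_pow(x, a), sig_pow(y, b)
        lags = np.arange(-max_lag, max_lag + 1)
        floc = [np.mean(xs[max(0, -l):len(xs) - max(0, l)] *
                        ys[max(0, l):len(ys) - max(0, -l)]) for l in lags]
        return int(lags[np.argmax(np.abs(floc))])

    rng = np.random.default_rng(2)
    n, true_delay = 4000, 37
    s = rng.standard_t(df=1.5, size=n)                 # heavy-tailed source surrogate
    x = s + 0.1 * rng.standard_normal(n)
    y = np.concatenate([np.zeros(true_delay), s])[:n] + 0.1 * rng.standard_normal(n)
    print("estimated delay:", floc_tdoa(x, y, max_lag=100), "samples (true: 37)")
    ```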

  8. Burst wait time simulation of CALIBAN reactor at delayed super-critical state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.; Authier, N.; Richard, B.

    2012-07-01

    In the past, the super-prompt-critical wait time probability distribution was measured on the CALIBAN fast burst reactor [4]. Afterwards, these experiments were simulated with very good agreement by solving the non-extinction probability equation [5]. Recently, the burst wait time probability distribution has been measured at CEA-Valduc on CALIBAN at different delayed super-critical states [6]. In the delayed super-critical case, however, the non-extinction probability does not give access to the wait time distribution; it is then necessary to compute the time-dependent evolution of the full neutron count number probability distribution. In this paper we present the point-model deterministic method used to calculate the probability distribution of the wait time before a prescribed count level is reached, taking into account prompt neutrons and delayed neutron precursors. The method is based on the solution of the time-dependent adjoint Kolmogorov master equations for the number of detections, using the generating function methodology [8,9,10] and inverse discrete Fourier transforms. The results are then compared to the measurements and to Monte Carlo calculations based on the algorithm presented in [7]. (authors)
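
    The generating-function-plus-inverse-DFT machinery is generic, so it can be illustrated without the reactor model: evaluate a probability generating function at the roots of unity and invert with an FFT. The sketch below uses a Poisson PGF, whose answer is known, purely as a check of the inversion step.

    ```python
    import numpy as np
    from scipy import stats

    def pgf_to_pmf(G, N):
        """P(n) = (1/N) * sum_k G(z_k) * z_k^{-n}, with z_k the N-th roots of unity."""
        z = np.exp(2j * np.pi * np.arange(N) / N)
        return np.real(np.fft.fft(G(z))) / N   # fft computes sum_k G(z_k) e^{-2pi i k n / N}

    lam = 3.0
    pmf = pgf_to_pmf(lambda z: np.exp(lam * (z - 1.0)), N=64)   # Poisson PGF
    print(np.allclose(pmf[:10], stats.poisson.pmf(np.arange(10), lam)))   # True
    ```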

  9. Feedback-Based Projected-Gradient Method for Real-Time Optimization of Aggregations of Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Bernstein, Andrey; Simonetto, Andrea

    This paper develops an online optimization method to maximize operational objectives of distribution-level distributed energy resources (DERs), while adjusting the aggregate power generated (or consumed) in response to services requested by grid operators. The design of the online algorithm is based on a projected-gradient method, suitably modified to accommodate appropriate measurements from the distribution network and the DERs. By virtue of this approach, the resultant algorithm can cope with inaccuracies in the representation of the AC power flows, avoids pervasive metering to gather the state of noncontrollable resources, and naturally lends itself to a distributed implementation. Optimality claims are established in terms of tracking of the solution of a well-posed time-varying convex optimization problem.
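
    A toy version of a feedback-based projected-gradient loop is easy to write down. The sketch below assumes box constraints on DER setpoints and a quadratic tracking objective, and replaces the paper's measurement-based AC network model with the trivial surrogate of summing the setpoints.

    ```python
    import numpy as np

    n_der = 5
    p_min = np.zeros(n_der)
    p_max = np.array([3.0, 2.0, 4.0, 1.5, 2.5])   # per-DER kW limits (made up)
    alpha = 0.2                                   # gradient step size
    p = np.zeros(n_der)                           # DER setpoints

    for step in range(200):
        setpoint = 8.0                            # aggregate power requested by the operator
        measured = p.sum()                        # stands in for a network measurement
        # gradient of 0.5*(sum(p) - setpoint)^2 + 0.05*||p||^2
        grad = (measured - setpoint) + 0.1 * p
        p = np.clip(p - alpha * grad, p_min, p_max)   # projection onto the box

    print("setpoints:", np.round(p, 3), " aggregate:", round(float(p.sum()), 3))
    ```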

  10. Feedback-Based Projected-Gradient Method For Real-Time Optimization of Aggregations of Energy Resources: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Bernstein, Andrey; Simonetto, Andrea

    This paper develops an online optimization method to maximize the operational objectives of distribution-level distributed energy resources (DERs) while adjusting the aggregate power generated (or consumed) in response to services requested by grid operators. The design of the online algorithm is based on a projected-gradient method, suitably modified to accommodate appropriate measurements from the distribution network and the DERs. By virtue of this approach, the resultant algorithm can cope with inaccuracies in the representation of the AC power flows, avoids pervasive metering to gather the state of noncontrollable resources, and naturally lends itself to a distributed implementation. Optimality claims are established in terms of tracking of the solution of a well-posed time-varying optimization problem.

  11. Reliability analysis of component of affination centrifugal 1 machine by using reliability engineering

    NASA Astrophysics Data System (ADS)

    Sembiring, N.; Ginting, E.; Darnello, T.

    2017-12-01

    A company producing refined sugar has not reached the target availability level of its critical machines on the production floor because they frequently break down, causing sudden losses of production time and production opportunities. This problem can be addressed with reliability engineering, in which a statistical analysis of historical failure data reveals the underlying distribution pattern. The method provides the reliability, failure rate, and availability of a machine over the scheduled maintenance interval. Distribution tests on the time-between-failures (MTTF) data show that the flexible hose component follows a lognormal distribution, while the teflon cone lifting component follows a Weibull distribution. For the repair-time (MTTR) data, the flexible hose component follows an exponential distribution and the teflon cone lifting component a Weibull distribution. On its actual replacement schedule of 720 hours, the flexible hose component has a reliability of 0.2451 and an availability of 0.9960, while on its 1944-hour replacement schedule the critical teflon cone lifting component has a reliability of 0.4083 and an availability of 0.9927.
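
    Numbers of this kind follow from the fitted distributions in a standard way, R(t) = P(T > t) and A = MTTF/(MTTF + MTTR). A sketch with invented parameters (the study's fitted values are not reproduced here):

    ```python
    from scipy import stats

    ttf = stats.lognorm(s=0.9, scale=1200.0)   # hypothetical time-between-failures model, hours
    mttr = 4.8                                 # hypothetical mean time to repair, hours
    mttf = ttf.mean()

    t = 720.0                                  # replacement interval, hours
    reliability = ttf.sf(t)                    # R(t) = P(no failure before t)
    availability = mttf / (mttf + mttr)        # steady-state availability

    print(f"R({t:.0f} h) = {reliability:.4f}, A = {availability:.4f}")
    ```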

  12. A Simple Method for Estimating Informative Node Age Priors for the Fossil Calibration of Molecular Divergence Time Analyses

    PubMed Central

    Nowak, Michael D.; Smith, Andrew B.; Simpson, Carl; Zwickl, Derrick J.

    2013-01-01

    Molecular divergence time analyses often rely on the age of fossil lineages to calibrate node age estimates. Most divergence time analyses are now performed in a Bayesian framework, where fossil calibrations are incorporated as parametric prior probabilities on node ages. It is widely accepted that an ideal parameterization of such node age prior probabilities should be based on a comprehensive analysis of the fossil record of the clade of interest, but there is currently no generally applicable approach for calculating such informative priors. We provide here a simple and easily implemented method that employs fossil data to estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade, which can be used to fit an informative parametric prior probability distribution on a node age. Specifically, our method uses the extant diversity and the stratigraphic distribution of fossil lineages confidently assigned to a clade to fit a branching model of lineage diversification. Conditioning this on a simple model of fossil preservation, we estimate the likely amount of missing history prior to the oldest fossil occurrence of a clade. The likelihood surface of missing history can then be translated into a parametric prior probability distribution on the age of the clade of interest. We show that the method performs well with simulated fossil distribution data, but that the likelihood surface of missing history can at times be too complex for the distribution-fitting algorithm employed by our software tool. An empirical example of the application of our method is performed to estimate echinoid node ages. A simulation-based sensitivity analysis using the echinoid data set shows that node age prior distributions estimated under poor preservation rates are significantly less informative than those estimated under high preservation rates. PMID:23755303

  13. Software Tools for Formal Specification and Verification of Distributed Real-Time Systems.

    DTIC Science & Technology

    1997-09-30

    set of software tools for specification and verification of distributed real-time systems using formal methods. The task of this SBIR Phase II effort...to be used by designers of real-time systems for early detection of errors. The mathematical complexity of formal specification and verification has

  14. A Study on Human Oriented Autonomous Distributed Manufacturing System —Real-time Scheduling Method Based on Preference of Human Operators

    NASA Astrophysics Data System (ADS)

    Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro

    Recently, new distributed architectures have been proposed for manufacturing systems, aiming at more flexible control structures. Much research has addressed distributed architectures for planning and control of manufacturing systems; however, human operators have not yet been considered as autonomous components of such distributed systems. In this research, a real-time scheduling method is proposed to select suitable combinations of human operators, resources, and jobs for the manufacturing processes. The proposed method consists of three steps. In the first step, the human operators select, based on their preferences, the manufacturing processes they will carry out in the next time period. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps use the utility-value-based and dispatching-rule-based methods proposed in previous research. Case studies have been carried out to verify the effectiveness of the proposed method.

  15. The time scale of quasifission process in reactions with heavy ions

    NASA Astrophysics Data System (ADS)

    Knyazheva, G. N.; Itkis, I. M.; Kozulin, E. M.

    2014-05-01

    The study of mass-energy distributions of binary fragments obtained in the reactions of 36S, 48Ca, 58Fe and 64Ni ions with 232Th, 238U, 244Pu and 248Cm targets at energies below and above the Coulomb barrier is presented. The data were measured with the double-arm time-of-flight spectrometer CORSET, whose mass resolution of about 3 u in these measurements allows the features of the mass distributions to be investigated with good accuracy. The dependence of the mass and total kinetic energy (TKE) of quasifission (QF) fragments on the interaction energy has been investigated and compared with the characteristics of the fusion-fission process. A simple method is proposed to describe the quasifission mass distribution, based on the driving potential of the system and a time-dependent mass drift. This procedure allows the QF time scale to be estimated from the measured mass distributions. It has been found that the QF time decreases exponentially as the reaction Coulomb factor Z1Z2 increases.

  16. Real time quantitative phase microscopy based on single-shot transport of intensity equation (ssTIE) method

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Tian, Xiaolin; He, Xiaoliang; Song, Xiaojun; Xue, Liang; Liu, Cheng; Wang, Shouyu

    2016-08-01

    Microscopy based on the transport of intensity equation provides quantitative phase distributions, opening another perspective for cellular observation. However, it requires multi-focal image capture, and the mechanical or electrical scanning involved limits its real-time capability in sample detection. Here, to break through this restriction, real-time quantitative phase microscopy based on the single-shot transport of intensity equation (ssTIE) method is proposed. A programmed phase mask is designed to record multi-focal images simultaneously without any scanning; thus, phase distributions can be quantitatively retrieved in real time. The proposed method can potentially be applied in various biological and medical applications, especially live cell imaging.

  17. Development of optimal models of porous media by combining static and dynamic data: the permeability and porosity distributions.

    PubMed

    Hamzehpour, Hossein; Rasaei, M Reza; Sahimi, Muhammad

    2007-05-01

    We describe a method for developing optimal spatial distributions of the porosity phi and permeability k of a large-scale porous medium. The optimal distributions are constrained by static and dynamic data. The static data are limited measurements of phi and k, which the method honors in the optimal model, and whose correlation functions it utilizes in the optimization process. The dynamic data include the first-arrival (FA) times, at a number of receivers, of seismic waves that have propagated in the porous medium, and the time-dependent production rates of a fluid that flows in the medium. The method combines simulated annealing with a simulator that numerically solves the three-dimensional (3D) acoustic wave equation and computes the FA times, and a second simulator that solves the 3D governing equation for the fluid's pressure as a function of time. To our knowledge, this is the first time that an optimization method has been developed to determine simultaneously the global minima of two distinct total-energy functions. As a stringent test of the method's accuracy, we solve for the flow of two immiscible fluids in the same porous medium, without using any data for the two-phase flow problem in the optimization process. We show that the optimal model, in addition to honoring the data, also yields accurate spatial distributions of phi and k, as well as accurate quantitative predictions for the single- and two-phase flow problems. The efficiency of the computations is discussed in detail.

  18. Wigner time-delay distribution in chaotic cavities and freezing transition.

    PubMed

    Texier, Christophe; Majumdar, Satya N

    2013-06-21

    Using the joint distribution for proper time delays of a chaotic cavity derived by Brouwer, Frahm, and Beenakker [Phys. Rev. Lett. 78, 4737 (1997)], we obtain, in the limit of the large number of channels N, the large deviation function for the distribution of the Wigner time delay (the sum of proper times) by a Coulomb gas method. We show that the existence of a power law tail originates from narrow resonance contributions, related to a (second order) freezing transition in the Coulomb gas.

  19. Computing travel time when the exact address is unknown: a comparison of point and polygon ZIP code approximation methods.

    PubMed

    Berke, Ethan M; Shi, Xun

    2009-04-29

    Travel time is an important metric of geographic access to health care. We compared strategies of estimating travel times when only subject ZIP code data were available. Using simulated data from New Hampshire and Arizona, we estimated travel times to nearest cancer centers by using: 1) geometric centroid of ZIP code polygons as origins, 2) population centroids as origin, 3) service area rings around each cancer center, assigning subjects to rings by assuming they are evenly distributed within their ZIP code, 4) service area rings around each center, assuming the subjects follow the population distribution within the ZIP code. We used travel times based on street addresses as true values to validate estimates. Population-based methods have smaller errors than geometry-based methods. Within categories (geometry or population), centroid and service area methods have similar errors. Errors are smaller in urban areas than in rural areas. Population-based methods are superior to the geometry-based methods, with the population centroid method appearing to be the best choice for estimating travel time. Estimates in rural areas are less reliable.

  20. Application of Statistical Methods of Rain Rate Estimation to Data From The TRMM Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Jones, J. A.; Iguchi, T.; Okamoto, K.; Liao, L.; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    The TRMM Precipitation Radar is well suited to statistical methods in that the measurements over any given region are sparsely sampled in time. Moreover, the instantaneous rain rate estimates are often of limited accuracy at high rain rates, because of attenuation effects, and at light rain rates, because of receiver sensitivity. For estimating the time-averaged rain characteristics over an area, both errors are relevant. Enlarging the space-time region over which the data are collected reduces the sampling error; however, the bias and distortion of the estimated rain distribution generally remain unless the estimates at the high and low rain rates are corrected. In this paper we use TRMM PR data to investigate the behavior of two statistical methods whose purpose is to estimate the rain rate over large space-time domains. Examination of large-scale rain characteristics provides a useful starting point: the high correlation between the mean and standard deviation of rain rate implies that the conditional distribution of this quantity can be approximated by a one-parameter distribution. This property is used to explore the behavior of area-time-integral (ATI) methods, in which the fractional area above a threshold is related to the mean rain rate. In the usual application of the ATI method a correlation is established between these quantities; however, if a particular form of the rain rate distribution is assumed and the ratio of the mean to the standard deviation is known, then not only the mean but the full distribution can be extracted from a measurement of the fractional area above a threshold. The second method is an extension of this idea, in which the distribution is estimated from data over a range of rain rates chosen in an intermediate regime where the effects of attenuation and poor sensitivity can be neglected. The advantage of estimating the distribution itself, rather than the mean value, is that it yields the fraction of rain contributed by the light and heavy rain rates. This is useful in estimating the fraction of rainfall contributed by rain rates that go undetected by the radar, and the results at high rain rates provide a cross-check on the usual attenuation correction methods applied at the highest resolution of the instrument.
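
    The "full distribution from fractional area" idea can be made concrete for the lognormal case: the mean/standard-deviation ratio fixes the log-scale spread, and the fractional area above a threshold then pins down the remaining parameter. A round-trip sketch with made-up numbers:

    ```python
    import numpy as np
    from scipy import stats

    def mean_rain_from_fractional_area(F, tau, cv):
        """Mean rain rate from the fraction F of area exceeding tau, given cv = std/mean."""
        sigma = np.sqrt(np.log(1.0 + cv**2))            # lognormal log-scale spread
        mu = np.log(tau) - sigma * stats.norm.ppf(1.0 - F)
        return np.exp(mu + 0.5 * sigma**2)              # lognormal mean

    rng = np.random.default_rng(3)
    rates = stats.lognorm.rvs(s=1.1, scale=2.0, size=200_000, random_state=rng)  # mm/h
    tau = 5.0
    F = np.mean(rates > tau)
    cv = rates.std() / rates.mean()
    est = mean_rain_from_fractional_area(F, tau, cv)
    print(f"estimated mean {est:.3f} mm/h vs true mean {rates.mean():.3f} mm/h")
    ```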

  1. Stepwise inference of likely dynamic flux distributions from metabolic time series data.

    PubMed

    Faraji, Mojdeh; Voit, Eberhard O

    2017-07-15

    Most metabolic pathways contain more reactions than metabolites and therefore have a wide stoichiometric matrix that corresponds to infinitely many flux distributions perfectly compatible with the dynamics of the metabolites in a given dataset. This under-determinedness poses a challenge for the quantitative characterization of flux distributions from time series data, and thus for the design of adequate, predictive models. Here we propose a method that reduces the degrees of freedom in a stepwise manner and leads to a dynamic flux distribution that is, in a statistical sense, likely to be close to the true distribution. We applied the proposed method to the lignin biosynthesis pathway in switchgrass. The system consists of 16 metabolites and 23 enzymatic reactions; it has seven degrees of freedom and therefore admits a large space of dynamic flux distributions that all fit a set of metabolic time series data equally well. The proposed method reduces this space in a systematic and biologically reasonable manner and converges to a likely dynamic flux distribution in just a few iterations. The estimated solution and the true flux distribution, which is known in this case, show excellent agreement, lending support to the method. The computational model was implemented in MATLAB (version R2014a, The MathWorks, Natick, MA). The source code is available at https://github.gatech.edu/VoitLab/Stepwise-Inference-of-Likely-Dynamic-Flux-Distributions and www.bst.bme.gatech.edu/research.php. Contact: mojdeh@gatech.edu or eberhard.voit@bme.gatech.edu. Supplementary data are available at Bioinformatics online.

  2. Eliminating the influence of source spectrum of white light scanning interferometry through time-delay estimation algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yunfei; Cai, Hongzhi; Zhong, Liyun; Qiu, Xiang; Tian, Jindong; Lu, Xiaoxu

    2017-05-01

    In white light scanning interferometry (WLSI), the accuracy of profile measurement with the conventional zero optical path difference (ZOPD) locating method is closely tied to the shape of the interference signal envelope (ISE), which is mainly determined by the spectral distribution of the illumination source. For a broadband source with a Gaussian spectral distribution, the ISE is symmetric and the ZOPD position can be located accurately and easily. However, if the spectral distribution of the source is irregular, the ISE becomes asymmetric or develops a complex multi-peak shape, and WLSI based on ZOPD locating no longer works well. To address this problem, we propose a time-delay estimation (TDE) based WLSI method, in which the surface profile is obtained from the relative displacement of the interference signal between different pixels instead of from ZOPD locating. Because all the spectral information of the interference signal (envelope and phase) is utilized, the proposed method not only offers high accuracy but also performs accurate profile measurement when the ISE is irregular and ZOPD locating fails. In other words, the proposed method effectively eliminates the influence of the source spectrum.
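
    The core TDE step, estimating the relative displacement of one pixel's interference signal with respect to another's, can be sketched with a plain cross-correlation plus sub-sample parabolic refinement. The asymmetric-envelope test signal below is invented, and the paper's actual estimator may differ; note that cross-correlation peaks at the true shift regardless of envelope asymmetry.

    ```python
    import numpy as np

    def tde(sig_a, sig_b, dz):
        """Shift of sig_b relative to sig_a, in the units of the scan step dz."""
        a, b = sig_a - sig_a.mean(), sig_b - sig_b.mean()
        corr = np.correlate(b, a, mode="full")
        k = int(np.argmax(corr))
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # parabolic sub-sample refinement
        return ((k + frac) - (len(a) - 1)) * dz

    z = np.linspace(-6.0, 6.0, 2001)                    # scan position, micrometers
    def interferogram(z0):
        """Fringes under a deliberately asymmetric envelope centered at z0."""
        env = np.exp(-((z - z0) / 1.5) ** 2) * (1.0 + 0.5 * np.tanh(z - z0))
        return env * np.cos(2.0 * np.pi * (z - z0) / 0.3)

    shift = tde(interferogram(0.0), interferogram(0.75), z[1] - z[0])
    print(f"estimated shift: {shift:.4f} um (true: 0.75)")
    ```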

  3. Lattice QCD exploration of parton pseudo-distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orginos, Kostas; Radyushkin, Anatoly; Karpie, Joseph

    Here, we demonstrate a new method of extracting parton distributions from lattice calculations. The starting idea is to treat the generic equal-time matrix element $\mathcal{M}(Pz_3, z_3^2)$ as a function of the Ioffe time $\nu = Pz_3$ ...

  4. Lattice QCD exploration of parton pseudo-distribution functions

    DOE PAGES

    Orginos, Kostas; Radyushkin, Anatoly; Karpie, Joseph; ...

    2017-11-08

    Here, we demonstrate a new method of extracting parton distributions from lattice calculations. The starting idea is to treat the generic equal-time matrix element $\mathcal{M}(Pz_3, z_3^2)$ as a function of the Ioffe time $\nu = Pz_3$ ...

  5. Optimal Power Flow Pursuit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea

    This paper considers distribution networks featuring inverter-interfaced distributed energy resources, and develops distributed feedback controllers that continuously drive the inverter output powers to solutions of AC optimal power flow (OPF) problems. Particularly, the controllers update the power setpoints based on voltage measurements as well as given (time-varying) OPF targets, and entail elementary operations implementable on low-cost microcontrollers that accompany the power-electronics interfaces of gateways and inverters. The design of the control framework is based on suitable linear approximations of the AC power-flow equations as well as Lagrangian regularization methods. Convergence and OPF-target tracking capabilities of the controllers are analytically established. Overall, the proposed method makes it possible to bypass traditional hierarchical setups, in which feedback control and optimization operate at distinct time scales, and to enable real-time optimization of distribution systems.

  6. Exponential blocking-temperature distribution in ferritin extracted from magnetization measurements

    NASA Astrophysics Data System (ADS)

    Lee, T. H.; Choi, K.-Y.; Kim, G.-H.; Suh, B. J.; Jang, Z. H.

    2014-11-01

    We developed a direct method to extract the zero-field, zero-temperature anisotropy energy barrier distribution of magnetic particles in the form of a blocking-temperature distribution. The key idea is to modify the measurement procedure slightly so that nonequilibrium magnetization calculations (including the time evolution of magnetization) become easier. We applied this method to the biomagnetic molecule ferritin and successfully reproduced the field-cooled magnetization using the extracted distribution. We find that the resulting distribution is closer to an exponential type and cannot be related simply to the widely known log-normal particle-size distribution. The method also allows us to determine the values of the zero-temperature coercivity and the Bloch coefficient, which are in good agreement with those determined by other techniques.

  7. A Comparison of Two Tree Construction Methods for Obtaining Proximity Measures among Words. Number 47.

    ERIC Educational Resources Information Center

    Rapoport, Amnon

    The prediction that two different methods of constructing linear, tree graphs will yield the same formal structure of semantic space and measurement of word proximity was tested by comparing the distribution of node degree, the distribution of the number of pairs of nodes connected y times, and the distribution of adjective degree in trees…

  8. Intercoalescence time distribution of incomplete gene genealogies in temporally varying populations, and applications in population genetic inference.

    PubMed

    Chen, Hua

    2013-03-01

    Tracing back to a specific time T in the past, the genealogy of a sample of haplotypes may not have reached its common ancestor, leaving m lineages extant. For such an incomplete genealogy truncated at time T, the distribution and expectation of the intercoalescence times conditional on T are derived in exact form in this paper for populations of deterministically time-varying size, specifically for exponentially growing populations. The derived intercoalescence time distribution can be integrated into the coalescent-based joint allele frequency spectrum (JAFS) theory, and is useful for population genetic inference from large-scale genomic data without relying on computationally intensive approaches such as importance sampling and Markov chain Monte Carlo (MCMC) methods. Inference of several important parameters using this derived conditional distribution is demonstrated: quantifying population growth rate and onset time, and estimating the number of ancestral lineages at a specific ancient time. Simulation studies confirm the validity of the derivation and the statistical efficiency of methods using the derived intercoalescence time distribution. Two real-data examples are given: inference of the population growth rate of a European sample from the NIEHS Environmental Genome Project, and of the number of ancient lineages of 31 mitochondrial genomes from Tibetan populations.

  9. Euthanasia Method for Mice in Rapid Time-Course Pulmonary Pharmacokinetic Studies

    PubMed Central

    Schoell, Adam R; Heyde, Bruce R; Weir, Dana E; Chiang, Po-Chang; Hu, Yiding; Tung, David K

    2009-01-01

    To develop a means of euthanasia to support rapid time-course pharmacokinetic studies in mice, we compared retroorbital and intravenous lateral tail vein injection of ketamine–xylazine with regard to preparation time, utility, tissue distribution, and time to onset of euthanasia. Tissue distribution and time to onset of euthanasia did not differ between administration methods. However, retroorbital injection could be performed more rapidly than intravenous injection and was considered to be a technically simple and superior alternative for mouse euthanasia. Retroorbital ketamine–xylazine, CO2 gas, and intraperitoneal pentobarbital then were compared as euthanasia agents in a rapid time-point pharmacokinetic study. Retroorbital ketamine–xylazine was the most efficient and consistent of the 3 methods, with an average time to death of approximately 5 s after injection. In addition, euthanasia by retroorbital ketamine–xylazine enabled accurate sample collection at closely spaced time points and satisfied established criteria for acceptable euthanasia technique. PMID:19807971

  10. Euthanasia method for mice in rapid time-course pulmonary pharmacokinetic studies.

    PubMed

    Schoell, Adam R; Heyde, Bruce R; Weir, Dana E; Chiang, Po-Chang; Hu, Yiding; Tung, David K

    2009-09-01

    To develop a means of euthanasia to support rapid time-course pharmacokinetic studies in mice, we compared retroorbital and intravenous lateral tail vein injection of ketamine-xylazine with regard to preparation time, utility, tissue distribution, and time to onset of euthanasia. Tissue distribution and time to onset of euthanasia did not differ between administration methods. However, retroorbital injection could be performed more rapidly than intravenous injection and was considered to be a technically simple and superior alternative for mouse euthanasia. Retroorbital ketamine-xylazine, CO2 gas, and intraperitoneal pentobarbital then were compared as euthanasia agents in a rapid time-point pharmacokinetic study. Retroorbital ketamine-xylazine was the most efficient and consistent of the 3 methods, with an average time to death of approximately 5 s after injection. In addition, euthanasia by retroorbital ketamine-xylazine enabled accurate sample collection at closely spaced time points and satisfied established criteria for acceptable euthanasia technique.

  11. Rapid optimization of tension distribution for cable-driven parallel manipulators with redundant cables

    NASA Astrophysics Data System (ADS)

    Ouyang, Bo; Shang, Weiwei

    2016-03-01

    The set of feasible tension distributions is infinite for cable-driven parallel manipulators (CDPMs) with redundant cables. A rapid optimization method for determining the optimal tension distribution is presented. The new method is primarily based on the geometric properties of a polyhedron and on convex analysis. The computational efficiency of the optimization is improved by the designed projection algorithm, and a fast algorithm is proposed to determine which two of the lines intersect at the optimal point. Moreover, a method for keeping the operating point off the lower tension limit is developed. Simulation experiments on a six-degree-of-freedom (6-DOF) CDPM with eight cables indicate that the new method is one order of magnitude faster than the standard simplex method, so the optimal tension distribution can be established rapidly in real time.

  12. Multiplication and Presence of Shielding Material from Time-Correlated Pulse-Height Measurements of Subcritical Plutonium Assemblies

    DOE PAGES

    Monterial, Mateusz; Marleau, Peter; Paff, Marc; ...

    2017-01-20

    Here, we present the results from the first measurements of the Time-Correlated Pulse-Height (TCPH) distributions from a 4.5 kg sphere of α-phase weapons-grade plutonium metal in five configurations: bare, reflected by 1.27 cm and 2.54 cm of tungsten, and by 2.54 cm and 7.62 cm of polyethylene. A new method for characterizing source multiplication and shielding configuration is also demonstrated. The method relies on solving for the underlying fission chain timing distribution that drives the spreading of the measured TCPH distribution. We found that a gamma distribution fits the fission chain timing distribution well and that the fit parameters correlate with both multiplication (rate parameter) and shielding material type (shape parameter). The source-to-detector distance was another free parameter that we were able to optimize, and it proved to be the best-constrained parameter. MCNPX-PoliMi simulations were used to complement the measurements and help illustrate trends in these parameters and their relation to multiplication and to the amount and type of material coupled to the subcritical assembly.

  13. Multiplication and Presence of Shielding Material from Time-Correlated Pulse-Height Measurements of Subcritical Plutonium Assemblies

    NASA Astrophysics Data System (ADS)

    Monterial, Mateusz; Marleau, Peter; Paff, Marc; Clarke, Shaun; Pozzi, Sara

    2017-04-01

    We present the results from the first measurements of the Time-Correlated Pulse-Height (TCPH) distributions from a 4.5 kg sphere of α-phase weapons-grade plutonium metal in five configurations: bare, reflected by 1.27 cm and 2.54 cm of tungsten, and by 2.54 cm and 7.62 cm of polyethylene. A new method for characterizing source multiplication and shielding configuration is also demonstrated. The method relies on solving for the underlying fission chain timing distribution that drives the spreading of the measured TCPH distribution. We found that a gamma distribution fits the fission chain timing distribution well and that the fit parameters correlate with both multiplication (rate parameter) and shielding material type (shape parameter). The source-to-detector distance was another free parameter that we were able to optimize, and it proved to be the best-constrained parameter. MCNPX-PoliMi simulations were used to complement the measurements and help illustrate trends in these parameters and their relation to multiplication and to the amount and type of material coupled to the subcritical assembly.

  14. Dynamic Singularity Spectrum Distribution of Sea Clutter

    NASA Astrophysics Data System (ADS)

    Xiong, Gang; Yu, Wenxian; Zhang, Shuning

    2015-12-01

    Fractal and multifractal theory have provided new approaches for radar signal processing and target detection against an ocean background. However, related research has mainly focused on the fractal dimension or the multifractal spectrum (MFS) of sea clutter. In this paper, a new dynamic singularity analysis method for sea clutter using the MFS distribution is developed, based on moving detrending analysis (DMA-MFSD). Theoretically, we introduce time information by using the cyclic auto-correlation of sea clutter. For the transient correlation series, the instantaneous singularity spectrum is calculated with the multifractal detrending moving analysis (MF-DMA) algorithm, and the dynamic singularity spectrum distribution of sea clutter is acquired. In addition, we analyze the time-varying singularity exponent ranges and the maximum-position function in the DMA-MFSD of sea clutter. For real sea clutter data in sea state III, we analyze the dynamic singularity spectrum distribution and conclude that radar sea clutter is non-stationary, with time-varying scale characteristics and a time-varying singularity spectrum distribution under the proposed DMA-MFSD method. DMA-MFSD will also provide a reference for nonlinear dynamics and multifractal signal processing.

  15. Chemical Continuous Time Random Walks

    NASA Astrophysics Data System (ADS)

    Aquino, T.; Dentz, M.

    2017-12-01

    Traditional methods for modeling solute transport through heterogeneous media employ Eulerian schemes to solve for solute concentration. More recently, Lagrangian methods have removed the need for spatial discretization through the use of Monte Carlo implementations of Langevin equations for solute particle motions. While there have been recent advances in modeling chemically reactive transport with recourse to Lagrangian methods, these remain less developed than their Eulerian counterparts, and many open problems such as efficient convergence and reconstruction of the concentration field remain. We explore a different avenue and consider the question: In heterogeneous chemically reactive systems, is it possible to describe the evolution of macroscopic reactant concentrations without explicitly resolving the spatial transport? Traditional Kinetic Monte Carlo methods, such as the Gillespie algorithm, model chemical reactions as random walks in particle number space, without the introduction of spatial coordinates. The inter-reaction times are exponentially distributed under the assumption that the system is well mixed. In real systems, transport limitations lead to incomplete mixing and decreased reaction efficiency. We introduce an arbitrary inter-reaction time distribution, which may account for the impact of incomplete mixing. This process defines an inhomogeneous continuous time random walk in particle number space, from which we derive a generalized chemical Master equation and formulate a generalized Gillespie algorithm. We then determine the modified chemical rate laws for different inter-reaction time distributions. We trace Michaelis-Menten-type kinetics back to finite-mean delay times, and predict time-nonlocal macroscopic reaction kinetics as a consequence of broadly distributed delays. Non-Markovian kinetics exhibit weak ergodicity breaking and show key features of reactions under local non-equilibrium.
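
    The generalized algorithm differs from the classical Gillespie scheme essentially in the waiting-time draw. A toy sketch for a single reaction channel A + B -> C, with a gamma waiting time whose mean is matched to the classical 1/propensity (shape < 1 mimics the broadly distributed delays of incomplete mixing); the rate constant and populations are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    state = {"A": 500, "B": 400, "C": 0}
    t, rate_const, shape = 0.0, 1e-3, 0.5
    while state["A"] > 0 and state["B"] > 0 and t < 50.0:
        propensity = rate_const * state["A"] * state["B"]   # mass-action rate of A+B->C
        # classical Gillespie would draw Exponential(mean 1/propensity); a gamma
        # waiting time with shape < 1 (same mean) stands in for mixing-limited delays
        t += rng.gamma(shape, 1.0 / (shape * propensity))
        state["A"] -= 1; state["B"] -= 1; state["C"] += 1

    print(f"t = {t:.3f}: {state}")
    ```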

  16. A Numerical Method for Calculating the Wave Drag of a Configuration from the Second Derivative of the Area Distribution of a Series of Equivalent Bodies of Revolution

    NASA Technical Reports Server (NTRS)

    Levy, Lionel L., Jr.; Yoshikawa, Kenneth K.

    1959-01-01

    A method based on linearized and slender-body theories, which is easily adapted to electronic-machine computing equipment, is developed for calculating the zero-lift wave drag of single- and multiple-component configurations from a knowledge of the second derivative of the area distribution of a series of equivalent bodies of revolution. The accuracy and computational time required of the method to calculate zero-lift wave drag is evaluated relative to another numerical method which employs the Tchebichef form of harmonic analysis of the area distribution of a series of equivalent bodies of revolution. The results of the evaluation indicate that the total zero-lift wave drag of a multiple-component configuration can generally be calculated most accurately as the sum of the zero-lift wave drag of each component alone plus the zero-lift interference wave drag between all pairs of components. The accuracy and computational time required of both methods to calculate total zero-lift wave drag at supersonic Mach numbers is comparable for airplane-type configurations. For systems of bodies of revolution both methods yield similar results with comparable accuracy; however, the present method only requires up to 60 percent of the computing time required of the harmonic-analysis method for two bodies of revolution and less time for a larger number of bodies.
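
    Both methods discretize the same slender-body wave-drag integral, D = -(rho U^2 / 4 pi) \int\int S''(x_1) S''(x_2) \ln|x_1 - x_2| \, dx_1 dx_2. A numerical sketch for an invented smoothly closed body (not either paper's scheme), handling the logarithmic singularity with the exact cell average on the diagonal:

    ```python
    import numpy as np

    L, n = 1.0, 400
    h = L / n
    x = (np.arange(n) + 0.5) * h                  # midpoint grid on [0, L]
    S_max = 0.02                                  # max cross-section area, m^2 (made up)
    # S(x) = 16*S_max*(x*(1-x))^2 closes smoothly (S' = 0 at both ends); its S'':
    Spp = 16.0 * S_max * (2.0 - 12.0 * x + 12.0 * x**2)

    X1, X2 = np.meshgrid(x, x, indexing="ij")
    K = np.log(np.abs(X1 - X2) + np.eye(n))       # diagonal placeholder (ln 1 = 0)
    np.fill_diagonal(K, np.log(h / 2.0) - 1.0)    # exact cell average of ln|u| over one cell

    I = h * h * (Spp @ K @ Spp)                   # the double integral
    rho, U = 1.225, 600.0                         # sea-level air, speed in m/s (made up)
    D = -rho * U**2 / (4.0 * np.pi) * I
    print(f"wave drag ~ {D:.1f} N (integral = {I:.6f})")
    ```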

  17. Modeling utilization distributions in space and time

    USGS Publications Warehouse

    Keating, K.A.; Cherry, S.

    2009-01-01

    W. Van Winkle defined the utilization distribution (UD) as a probability density that gives an animal's relative frequency of occurrence in a two-dimensional (x, y) plane. We extend Van Winkle's work by redefining the UD as the relative frequency distribution of an animal's occurrence in all four dimensions of space and time. We then describe a product kernel model estimation method, devising a novel kernel from the wrapped Cauchy distribution to handle circularly distributed temporal covariates, such as day of year. Using Monte Carlo simulations of animal movements in space and time, we assess estimator performance. Although not unbiased, the product kernel method yields models highly correlated (Pearson's r = 0.975) with true probabilities of occurrence and successfully captures temporal variations in density of occurrence. In an empirical example, we estimate the expected UD in three dimensions (x, y, and t) for animals belonging to each of two distinct bighorn sheep (Ovis canadensis) social groups in Glacier National Park, Montana, USA. Results show the method can yield ecologically informative models that successfully depict temporal variations in density of occurrence for a seasonally migratory species. Some implications of this new approach to UD modeling are discussed.
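
    A minimal sketch of the product-kernel idea with a wrapped Cauchy kernel on the circular day-of-year covariate; the bandwidths, concentration, and data below are all invented:

    ```python
    import numpy as np

    def wrapped_cauchy(theta, mu, rho):
        """Wrapped Cauchy density on [0, 2*pi); rho in (0, 1) controls concentration."""
        return (1 - rho**2) / (2 * np.pi * (1 + rho**2 - 2 * rho * np.cos(theta - mu)))

    def gaussian(u, h):
        return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))

    def ud_estimate(x, y, day, obs, hx=0.5, hy=0.5, rho=0.9):
        """Product-kernel UD value at (x, y, day) from observations (x_i, y_i, day_i)."""
        theta = 2 * np.pi * day / 365.0
        theta_i = 2 * np.pi * obs[:, 2] / 365.0
        k = (gaussian(x - obs[:, 0], hx) * gaussian(y - obs[:, 1], hy)
             * wrapped_cauchy(theta, theta_i, rho) * (2 * np.pi / 365.0))  # per-day units
        return k.mean()

    rng = np.random.default_rng(5)
    obs = np.column_stack([rng.normal(0, 1, 300), rng.normal(0, 1, 300),
                           rng.uniform(0, 365, 300)])   # (x_km, y_km, day of year)
    print(f"density at (0, 0, day 180): {ud_estimate(0.0, 0.0, 180.0, obs):.5f}")
    ```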

  18. A Review and Comparison of Methods for Recreating Individual Patient Data from Published Kaplan-Meier Survival Curves for Economic Evaluations: A Simulation Study

    PubMed Central

    Wan, Xiaomin; Peng, Liubao; Li, Yuanjian

    2015-01-01

    Background In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. Methods A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. Results All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. Conclusions The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method. PMID:25803659
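
    As a flavor of the simplest ("least squares") approach reviewed above: fit a Weibull survival function to points digitized from a published curve and report the implied mean survival, lambda * Gamma(1 + 1/k). The digitized points below are simulated, not taken from any trial:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import gamma

    def weibull_surv(t, lam, k):
        return np.exp(-((t / lam) ** k))

    t_pts = np.linspace(0.5, 10.0, 20)             # months at which the curve was read
    true_lam, true_k = 6.0, 1.4
    s_pts = (weibull_surv(t_pts, true_lam, true_k)
             + np.random.default_rng(6).normal(0, 0.01, 20))   # digitization noise

    (lam, k), _ = curve_fit(weibull_surv, t_pts, s_pts, p0=(5.0, 1.0))
    mean = lam * gamma(1.0 + 1.0 / k)              # Weibull mean survival
    print(f"lambda = {lam:.2f}, k = {k:.2f}, mean survival = {mean:.2f} months "
          f"(true {true_lam * gamma(1.0 + 1.0 / true_k):.2f})")
    ```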

  19. A Study of the Application of the Lognormal and Gamma Distributions to Corrective Maintenance Repair Time Data.

    DTIC Science & Technology

    1982-10-01

    For the lognormal methods, the test methods sometimes give different results. The K-S test and the chi-square...significant difference among the three test methods. A previous study has been done using 24 data sets of electronic systems and equipments, using only the W...are suitable descriptors for corrective maintenance repair times, and to estimate the difference caused by assuming an exponential distribution for

  20. Distributed synchronization of networked drive-response systems: A nonlinear fixed-time protocol.

    PubMed

    Zhao, Wen; Liu, Gang; Ma, Xi; He, Bing; Dong, Yunfeng

    2017-11-01

    The distributed synchronization of networked drive-response systems is investigated in this paper. A novel nonlinear protocol is proposed to ensure that the tracking errors converge to zero in a fixed time. Compared with previous synchronization methods, the present method considers more practical conditions, and the synchronization time does not depend on arbitrary initial conditions but can be pre-assigned offline according to the task assignment. Finally, the feasibility and validity of the presented protocol are illustrated by a numerical simulation.
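
    Protocols of this kind typically build on a fixed-time stabilizing term of the form de/dt = -k1*sig(e)^p - k2*sig(e)^q with 0 < p < 1 < q, whose settling time is bounded independently of e(0). The sketch below (not the paper's exact controller) illustrates that property on a scalar error with invented gains:

    ```python
    import numpy as np

    def sig(e, a):
        return np.sign(e) * np.abs(e) ** a

    def settle_time(e0, k1=2.0, k2=2.0, p=0.5, q=1.5, dt=1e-4):
        """Integrate de/dt = -k1*sig(e)^p - k2*sig(e)^q until |e| < 1e-6."""
        e, t = float(e0), 0.0
        while abs(e) > 1e-6:
            e -= dt * (k1 * sig(e, p) + k2 * sig(e, q))
            t += dt
        return t

    # the settling time stays below a fixed bound no matter how large e(0) is
    for e0 in (0.1, 10.0, 1e4):
        print(f"e(0) = {e0:>8}: converged in {settle_time(e0):.3f} s")
    ```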

  1. A review and comparison of methods for recreating individual patient data from published Kaplan-Meier survival curves for economic evaluations: a simulation study.

    PubMed

    Wan, Xiaomin; Peng, Liubao; Li, Yuanjian

    2015-01-01

    In general, the individual patient-level data (IPD) collected in clinical trials are not available to independent researchers to conduct economic evaluations; researchers only have access to published survival curves and summary statistics. Thus, methods that use published survival curves and summary statistics to reproduce statistics for economic evaluations are essential. Four methods have been identified: two traditional methods 1) least squares method, 2) graphical method; and two recently proposed methods by 3) Hoyle and Henley, 4) Guyot et al. The four methods were first individually reviewed and subsequently assessed regarding their abilities to estimate mean survival through a simulation study. A number of different scenarios were developed that comprised combinations of various sample sizes, censoring rates and parametric survival distributions. One thousand simulated survival datasets were generated for each scenario, and all methods were applied to actual IPD. The uncertainty in the estimate of mean survival time was also captured. All methods provided accurate estimates of the mean survival time when the sample size was 500 and a Weibull distribution was used. When the sample size was 100 and the Weibull distribution was used, the Guyot et al. method was almost as accurate as the Hoyle and Henley method; however, more biases were identified in the traditional methods. When a lognormal distribution was used, the Guyot et al. method generated noticeably less bias and a more accurate uncertainty compared with the Hoyle and Henley method. The traditional methods should not be preferred because of their remarkable overestimation. When the Weibull distribution was used for a fitted model, the Guyot et al. method was almost as accurate as the Hoyle and Henley method. However, if the lognormal distribution was used, the Guyot et al. method was less biased compared with the Hoyle and Henley method.

  2. pyCTQW: A continuous-time quantum walk simulator on distributed memory computers

    NASA Astrophysics Data System (ADS)

    Izaac, Josh A.; Wang, Jingbo B.

    2015-01-01

    In the general field of quantum information and computation, quantum walks are playing an increasingly important role in constructing physical models and quantum algorithms. We have recently developed a distributed memory software package pyCTQW, with an object-oriented Python interface, that allows efficient simulation of large multi-particle CTQW (continuous-time quantum walk)-based systems. In this paper, we present an introduction to the Python and Fortran interfaces of pyCTQW, discuss various numerical methods of calculating the matrix exponential, and demonstrate the performance behavior of pyCTQW on a distributed memory cluster. In particular, the Chebyshev and Krylov-subspace methods for calculating the quantum walk propagation are provided, as well as methods for visualization and data analysis.
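
    A small serial analogue of what such a simulator computes, |psi(t)> = exp(-iAt)|psi(0)> for a graph adjacency matrix A, using SciPy's expm_multiply in place of pyCTQW's distributed Chebyshev/Krylov propagators (this is not the pyCTQW API):

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import expm_multiply

    n = 64
    # adjacency matrix of a cycle graph (nearest neighbors plus the wrap-around links)
    A = sp.diags(
        [np.ones(n - 1), np.ones(n - 1), [1.0], [1.0]],
        [1, -1, n - 1, -(n - 1)],
        format="csc", dtype=complex,
    )

    psi0 = np.zeros(n, dtype=complex)
    psi0[n // 2] = 1.0                                 # walker starts at one vertex
    psi_t = expm_multiply(-1j * 2.0 * A, psi0)         # state at time t = 2

    prob = np.abs(psi_t) ** 2
    print(f"norm = {prob.sum():.6f}, most likely vertex: {int(np.argmax(prob))}")
    ```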

  3. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prasad, Mano K.; Snyderman, Neal J.; Rowland, Mark S.

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  4. Analysis and Modeling of Complex Geomorphic Systems: Technique Development, Data Collection, and Application to Rangeland Terrain

    DTIC Science & Technology

    2008-10-01

    attempts to measure the long-term distribution of storage time have relied on unrealistic assumptions, but two recent studies suggest a new approach. As...sediment age. Everitt (1968) mapped the age distribution of cottonwoods along a 34 km stretch of the Little Missouri River in North Dakota...Dietrich et al. (1982) applied Eriksson's (1971) method to estimate the residence time distribution from Everitt's age distribution. Somewhat mysteriously

  5. Absolute nuclear material assay using count distribution (LAMBDA) space

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-06-05

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  6. Bivariate sub-Gaussian model for stock index returns

    NASA Astrophysics Data System (ADS)

    Jabłońska-Sabuka, Matylda; Teuerle, Marek; Wyłomańska, Agnieszka

    2017-11-01

    Financial time series are commonly modeled with methods assuming data normality. However, the real distribution can be nontrivial, also not having an explicitly formulated probability density function. In this work we introduce novel parameter estimation and high-powered distribution testing methods which do not rely on closed form densities, but use the characteristic functions for comparison. The approach applied to a pair of stock index returns demonstrates that such a bivariate vector can be a sample coming from a bivariate sub-Gaussian distribution. The methods presented here can be applied to any nontrivially distributed financial data, among others.
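
    The characteristic-function comparison can be sketched directly: form the empirical characteristic function of standardized returns and measure its distance from the Gaussian characteristic function. The grid, statistic, and simulated "returns" below are all invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    returns = rng.standard_t(df=3, size=5000)          # heavy-tailed stand-in for returns
    z = (returns - returns.mean()) / returns.std()

    t = np.linspace(-3, 3, 121)
    ecf = np.exp(1j * np.outer(t, z)).mean(axis=1)     # empirical characteristic function
    gauss_cf = np.exp(-0.5 * t**2)                     # CF of the standard normal

    stat = np.max(np.abs(ecf - gauss_cf))
    print(f"max CF deviation from normality: {stat:.4f}")
    ```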

  7. Broadband Time-Frequency Analysis Using a Multicomputer

    DTIC Science & Technology

    2004-09-30

    Smoothed Pseudo Wigner-Ville Distribution: one of many interference reduction...The Wigner-Ville distribution, the scalogram, and the discrete Gabor transform are among the most well-known of these methods. Due to specific...based upon the FFT Accumulation Method • Continuous Wavelet Transform (Scalogram) • Discrete Wigner-Ville Distribution with a selected set of interference

  8. Investigations into distribution of lidocaine in human autopsy material.

    PubMed

    Oertel, Reinhard; Arenz, Norman; Zeitz, Sten Gunnar; Pietsch, Jörg

    2015-08-01

    Drugs are often detected in autopsy material with the screening methods used in legal medicine. In this study the antiarrhythmic and local anesthetic drug lidocaine was detected in fifty-one cases and quantified in different autopsy materials. For the first time, the comparison of so many distribution patterns of lidocaine across human compartments was possible. A liquid-liquid extraction procedure, a standard addition method and LC/MS/MS were used for the analysis. The measured blood concentrations were in the therapeutic range or lower. The time between lidocaine application and death was known in twenty-nine cases; these data were very helpful in estimating and interpreting the distribution of lidocaine between application and death. This time exerted a crucial influence on the distribution of lidocaine among the compartments. Most of the intravenously administered lidocaine was found in heart blood after a very short distribution time; afterwards, the highest concentrations were measured in brain, and later in the kidney samples or in urine. If the time between lidocaine application and death is known, the results of this study can be used to deepen the knowledge of its pharmacokinetics; if this time is unknown, the circumstances and causes of death can be better explained.

  9. Fast Determination of Distribution-Connected PV Impacts Using a Variable Time-Step Quasi-Static Time-Series Approach: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry

    The increasing deployment of distribution-connected photovoltaic (DPV) systems requires utilities to complete complex interconnection studies. Relatively simple interconnection study methods worked well for low penetrations of photovoltaic systems, but more complicated quasi-static time-series (QSTS) analysis is required to make better interconnection decisions as DPV penetration levels increase, and tools and methods must be developed to support this. This paper presents a variable-time-step solver for QSTS analysis that significantly shortens the computational time and effort needed for a detailed analysis of the operation of a distribution circuit with many DPV systems. Specifically, it demonstrates that the proposed variable-time-step solver can reduce the required computational time by as much as 84% without introducing any important errors in metrics such as the highest and lowest voltage occurring on the feeder, the number of voltage regulator tap operations, and the total losses in the distribution circuit during a 1-yr period. Further improvement in computational speed is possible with the introduction of only modest errors in these metrics, such as a 91% reduction with less than 5% error when predicting voltage regulator operations.
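
    The variable-time-step idea reduces to: solve coarsely while the driving input is quiet, and refine to 1-second resolution when it is not. A toy sketch with a surrogate in place of a real power-flow solver (this is not the paper's tool or OpenDSS):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    seconds = 3600
    irradiance = np.clip(np.cumsum(rng.normal(0, 0.01, seconds)) + 0.8, 0.0, 1.0)

    def solve_voltage(g):
        """Surrogate for a full power-flow solve (per-unit feeder voltage)."""
        return 1.0 + 0.05 * (g - 0.8)

    coarse, tol = 60, 0.02
    solves, t = 0, 0
    voltages = np.empty(seconds)
    while t < seconds:
        step = min(coarse, seconds - t)
        window = irradiance[t:t + step]
        if window.max() - window.min() < tol:      # input ~constant: one coarse solve
            voltages[t:t + step] = solve_voltage(window.mean())
            solves += 1
        else:                                      # input varies: refine to 1-s steps
            voltages[t:t + step] = solve_voltage(window)
            solves += step
        t += step

    print(f"{solves} solves instead of {seconds} ({100 * (1 - solves / seconds):.0f}% saved)")
    ```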

  10. A study of dynamic data placement for ATLAS distributed data management

    NASA Astrophysics Data System (ADS)

    Beermann, T.; Stewart, G. A.; Maettig, P.

    2015-12-01

    This contribution presents a study on the applicability and usefulness of dynamic data placement methods for data-intensive systems such as ATLAS distributed data management (DDM). In this system jobs are sent to the data, so a good distribution of data is significant. Ways of forecasting workload patterns are examined, which are then used to redistribute data to achieve better overall utilisation of computing resources and to reduce the time jobs wait before they can run on the grid. The method is based on a tracer infrastructure that monitors and stores historical data accesses, from which popularity reports are created. These reports provide detailed summaries of past data accesses, including the files accessed, the users involved, and the sites. From these past data it is possible to make near-term forecasts of data popularity. This study evaluates simple prediction methods as well as more complex methods such as neural networks. Based on the outcome of the predictions, a redistribution algorithm deletes unused replicas and adds new replicas for potentially popular datasets. Finally, a grid simulator is used to examine the effects of the redistribution: it replays workload on different data distributions while measuring job waiting time and site usage. The study examines how the average waiting time is affected by the amount of data moved, how it differs across the forecasting methods, and how it compares to the optimal data distribution.

  11. Mixed quantum/classical investigation of the photodissociation of NH3(Ã) and a practical method for maintaining zero-point energy in classical trajectories

    NASA Astrophysics Data System (ADS)

    Bonhommeau, David; Truhlar, Donald G.

    2008-07-01

    The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2=0,…,6 quanta of vibration) in the à electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU/SD + trajectory projection onto ZPE orbit (TRAPZ) and FSTU/SD + minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2=0 and n2>1, as observed in experiments. Distributions obtained for n2=1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2=0 and n2=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.

  12. Mixed quantum/classical investigation of the photodissociation of NH3(A) and a practical method for maintaining zero-point energy in classical trajectories.

    PubMed

    Bonhommeau, David; Truhlar, Donald G

    2008-07-07

    The photodissociation dynamics of ammonia upon excitation of the out-of-plane bending mode (mode ν2 with n2=0,…,6 quanta of vibration) in the A electronic state is investigated by means of several mixed quantum/classical methods, and the calculated final-state properties are compared to experiments. Five mixed quantum/classical methods are tested: one mean-field approach (the coherent switching with decay of mixing method), two surface-hopping methods [the fewest switches with time uncertainty (FSTU) and FSTU with stochastic decay (FSTU/SD) methods], and two surface-hopping methods with zero-point energy (ZPE) maintenance [the FSTU/SD + trajectory projection onto ZPE orbit (TRAPZ) and FSTU/SD + minimal TRAPZ (mTRAPZ) methods]. We found a qualitative difference between final NH2 internal energy distributions obtained for n2=0 and n2>1, as observed in experiments. Distributions obtained for n2=1 present an intermediate behavior between distributions obtained for smaller and larger n2 values. The dynamics is found to be highly electronically nonadiabatic with all these methods. NH2 internal energy distributions may have a negative energy tail when the ZPE is not maintained throughout the dynamics. The original TRAPZ method was designed to maintain ZPE in classical trajectories, but we find that it leads to unphysically high internal vibrational energies. The mTRAPZ method, which is new in this work and provides a general method for maintaining ZPE in either single-surface or multisurface trajectories, does not lead to unphysical results and is much less time consuming. The effect of maintaining ZPE in mixed quantum/classical dynamics is discussed in terms of agreement with experimental findings. The dynamics for n2=0 and n2=6 are also analyzed to reveal details not available from experiment, in particular, the time required for quenching of electronic excitation and the adiabatic energy gap and geometry at the time of quenching.

  13. Computing Wigner distributions and time correlation functions using the quantum thermal bath method: application to proton transfer spectroscopy.

    PubMed

    Basire, Marie; Borgis, Daniel; Vuilleumier, Rodolphe

    2013-08-14

    Langevin dynamics coupled to a quantum thermal bath (QTB) allows for the inclusion of vibrational quantum effects in molecular dynamics simulations at virtually no additional computer cost. We investigate here the ability of the QTB method to reproduce the quantum Wigner distribution of a variety of model potentials, designed to assess the performances and limits of the method. We further compute the infrared spectrum of a multidimensional model of proton transfer in the gas phase and in solution, using classical trajectories sampled initially from the Wigner distribution. It is shown that for this type of system involving large anharmonicities and strong nonlinear coupling to the environment, the quantum thermal bath is able to sample the Wigner distribution satisfactorily and to account for both zero point energy and tunneling effects. It leads to quantum time correlation functions having the correct short-time behavior, and the correct associated spectral frequencies, but that are slightly too overdamped. This is attributed to the classical propagation approximation rather than the generation of the quantized initial conditions themselves.

  14. Global finite-time attitude consensus tracking control for a group of rigid spacecraft

    NASA Astrophysics Data System (ADS)

    Li, Penghua

    2017-10-01

    The problem of finite-time attitude consensus for multiple rigid spacecraft with a leader-follower architecture is investigated in this paper. To achieve finite-time attitude consensus, in the first step a distributed finite-time convergent observer is proposed for each follower to estimate the leader's attitude in a finite time. Then, based on the terminal sliding mode control method, a new finite-time attitude tracking controller is designed such that the leader's attitude can be tracked in a finite time. Finally, a finite-time observer-based distributed control strategy is proposed. It is shown that attitude consensus can be achieved in a finite time under the proposed controller. Simulation results are given to show the effectiveness of the proposed method.

  15. Establishment of HPC(R2A) for regrowth control in non-chlorinated distribution systems.

    PubMed

    Uhl, Wolfgang; Schaule, Gabriela

    2004-05-01

    Drinking water distributed without disinfection and without regrowth problems for many years may show bacterial regrowth when the residence time and/or temperature in the distribution system increases, or when the substrate and/or bacterial concentration in the treated water increases. An example of a regrowth event in a major German city is discussed. Regrowth of HPC bacteria occurred unexpectedly at the end of a very hot summer. No pathogenic or potentially pathogenic bacteria were identified. Increased residence times in the distribution system and temperatures up to 25 degrees C were identified as the most probable causes, and the regrowth event was successfully overcome by changing flow regimes and decreasing residence times. Standard plate counts of HPC bacteria using the spread plate technique on nutrient-rich agar according to the German Drinking Water Regulations (GDWR) had proven to be a very good indicator of hygienically safe drinking water and of the effectiveness of water treatment. However, the method proved insensitive for early regrowth detection. Regrowth experiments in the lab and sampling of the distribution system during two summers showed that spread plate counts on nutrient-poor R2A agar after 7-day incubation yielded 100 to 200 times higher counts. Counts on R2A after 3-day incubation were about one third of those after 7 days. As the precision of plate count methods is very poor for counts below 10 cfu/plate, a method yielding higher counts is better suited to detect upcoming regrowth than a method yielding low counts. It is shown that for the identification of regrowth events, HPC(R2A) gives a reaction margin of about 2 weeks ahead of HPC(GDWR). Copyright 2003 Elsevier B.V.

  16. Determination of hyporheic travel time distributions and other parameters from concurrent conservative and reactive tracer tests by local-in-global optimization

    NASA Astrophysics Data System (ADS)

    Knapp, Julia L. A.; Cirpka, Olaf A.

    2017-06-01

    The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free model gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, which are hindered by the large number of parameter values to be estimated. We present a local-in-global optimization approach, in which we use a Markov-chain Monte Carlo method as the global-search method to estimate a set of in-stream and hyporheic parameters. Nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections, and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.
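
    The nesting can be sketched under strong simplifications (a hypothetical illustration, not the authors' implementation: `A_builder` stands for a function returning the matrix that maps the shape-free travel-time weights, in which the model is linear, onto predicted concentrations, and a unit-variance Gaussian error model is assumed). The outer Metropolis walk explores the nonlinear in-stream parameters globally; the inner non-negative least-squares fit of the travel-time weights plays the role of the local step:

```python
import numpy as np
from scipy.optimize import nnls

def fit_local_in_global(A_builder, obs, theta0, n_iter=2000, step=0.05, rng=None):
    """Sketch of a local-in-global fit: Metropolis over nonlinear parameters
    theta, with a nested non-negative least-squares fit of the linear
    travel-time weights w at every proposal."""
    rng = rng or np.random.default_rng(0)

    def misfit(theta):
        A = A_builder(theta)          # design matrix for this theta
        w, rnorm = nnls(A, obs)       # local, shape-free inner fit
        return rnorm, w

    theta = np.asarray(theta0, float)
    f, w = misfit(theta)
    best = (theta.copy(), w, f)
    for _ in range(n_iter):
        cand = theta + step * rng.standard_normal(theta.size)
        fc, wc = misfit(cand)
        # Metropolis acceptance under a unit-variance Gaussian error model.
        if np.log(rng.uniform()) < 0.5 * (f ** 2 - fc ** 2):
            theta, f, w = cand, fc, wc
            if fc < best[2]:
                best = (theta.copy(), wc, fc)
    return best  # (theta_hat, travel-time weights, residual norm)
```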

  17. Characterization of Fissile Assemblies Using Low-Efficiency Detection Systems

    DOE PAGES

    Chapline, George F.; Verbeke, Jerome M.

    2017-02-02

    Here, we have investigated the possibility that the amount, chemical form, multiplication, and shape of the fissile material in an assembly can be passively assayed using scintillator detection systems by measuring only the fast neutron pulse height distribution and the distribution of time intervals Δt between fast neutrons. We have previously demonstrated that the alpha-ratio can be obtained from the observed pulse height distribution for fast neutrons. In this paper we report that when the distribution of time intervals is plotted as a function of log Δt, the position of the correlated neutron peak is nearly independent of detector efficiency and determines the internal relaxation rate for fast neutrons. If this information is combined with knowledge of the alpha-ratio, then the position of the minimum between the correlated and uncorrelated peaks can be used to rapidly estimate the mass, multiplication, and shape of fissile material. This method does not require a priori knowledge of either the efficiency for neutron detection or the alpha-ratio. Although our method neglects 3-neutron correlations, we have used previously obtained experimental data for metallic and oxide forms of Pu to demonstrate that our method yields good estimates for multiplications as large as 2, and that the only constraint on detector efficiency/observation time is that a peak in the interval time distribution due to correlated neutrons is visible.

  18. [An EMD based time-frequency distribution and its application in EEG analysis].

    PubMed

    Li, Xiaobing; Chu, Meng; Qiu, Tianshuang; Bao, Haiping

    2007-10-01

    Hilbert-Huang transform (HHT) is a new time-frequency analytic method to analyze the nonlinear and the non-stationary signals. The key step of this method is the empirical mode decomposition (EMD), with which any complicated signal can be decomposed into a finite and small number of intrinsic mode functions (IMF). In this paper, a new EMD based method for suppressing the cross-term of Wigner-Ville distribution (WVD) is developed and is applied to analyze the epileptic EEG signals. The simulation data and analysis results show that the new method suppresses the cross-term of the WVD effectively with an excellent resolution.

  19. An Observationally-Centred Method to Quantify the Changing Shape of Local Temperature Distributions

    NASA Astrophysics Data System (ADS)

    Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.

    2014-12-01

    For climate sensitive decisions and adaptation planning, guidance on how local climate is changing is needed at the specific thresholds relevant to particular impacts or policy endeavours. This requires the quantification of how the distributions of variables, such as daily temperature, are changing at specific quantiles. These temperature distributions are non-normal and vary both geographically and in time. We present a method [1,2] for analysing local climatic time series data to assess which quantiles of the local climatic distribution show the greatest and most robust changes. We have demonstrated this approach using the E-OBS gridded dataset [3] which consists of time series of local daily temperature across Europe over the last 60 years. Our method extracts the changing cumulative distribution function over time and uses a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. The change in temperature can be tracked at a temperature threshold, at a likelihood, or at a given return time, independently for each geographical location. Geographical correlations are thus an output of our method and reflect both climatic properties (local and synoptic), and spatial correlations inherent in the observation methodology. We find as an output many regionally consistent patterns of response of potential value in adaptation planning. For instance, in a band from Northern France to Denmark the hottest days in the summer temperature distribution have seen changes of at least 2°C over a 43-year period; over four times the global mean change over the same period. We discuss methods to quantify the robustness of these observed sensitivities and their statistical likelihood. This approach also quantifies the level of detail at which one might wish to see agreement between climate models and observations if such models are to be used directly as tools to assess climate change impacts at local scales. [1] S C Chapman, D A Stainforth, N W Watkins, 2013, Phil. Trans. R. Soc. A, 371 20120287. [2] D A Stainforth, S C Chapman, N W Watkins, 2013, Environ. Res. Lett. 8, 034031 [3] Haylock, M.R. et al., 2008, J. Geophys. Res (Atmospheres), 113, D20119

  20. Multisite-multivariable sensitivity analysis of distributed watershed models: enhancing the perceptions from computationally frugal methods

    USDA-ARS?s Scientific Manuscript database

    This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...

  1. Results of the Verification of the Statistical Distribution Model of Microseismicity Emission Characteristics

    NASA Astrophysics Data System (ADS)

    Cianciara, Aleksander

    2016-09-01

    The paper presents the results of research aimed at verifying the hypothesis that the Weibull distribution is an appropriate statistical model of microseismicity emission characteristics, namely the energy of phenomena and the inter-event time. It is understood that the emission under consideration is induced by natural rock mass fracturing. Because the recorded emission contains noise, it is subjected to appropriate filtering. The study has been conducted using statistical verification of the null hypothesis that the Weibull distribution fits the empirical cumulative distribution function. As the model describing the cumulative distribution function is given in analytical form, its verification may be performed using the Kolmogorov-Smirnov goodness-of-fit test. Interpretations by means of probabilistic methods require specifying a correct model of the statistical distribution of the data, because these methods use not the measurement data directly but their statistical distributions, as in methods based on hazard analysis or on maximum value statistics.
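
    The fit-then-test scheme can be sketched with SciPy on synthetic data (the shape and scale values below are hypothetical, not results from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Surrogate "emission" data: inter-event times drawn from a Weibull distribution.
data = stats.weibull_min.rvs(c=1.3, scale=2.0, size=500, random_state=rng)

# Fit the two-parameter Weibull model (location fixed at zero).
c_hat, loc, scale_hat = stats.weibull_min.fit(data, floc=0)

# Kolmogorov-Smirnov goodness-of-fit test against the fitted model.
ks_stat, p_value = stats.kstest(data, "weibull_min", args=(c_hat, loc, scale_hat))
print(f"shape={c_hat:.3f}, scale={scale_hat:.3f}, KS D={ks_stat:.4f}, p={p_value:.3f}")
```

    Note that when the Weibull parameters are estimated from the same sample being tested, the nominal KS p-value is optimistic; critical values should then be obtained by simulation (a Lilliefors-type correction).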

  2. Real-time Visualization and Quantification of Retrograde Cardioplegia Delivery using Near Infrared Fluorescent Imaging

    PubMed Central

    Rangaraj, Aravind T.; Ghanta, Ravi K.; Umakanthan, Ramanan; Soltesz, Edward G.; Laurence, Rita G.; Fox, John; Cohn, Lawrence H.; Bolman, R. M.; Frangioni, John V.; Chen, Frederick Y.

    2009-01-01

    Background and Aim of the Study Homogeneous delivery of cardioplegia is essential for myocardial protection during cardiac surgery. Presently, there exist no established methods to quantitatively assess cardioplegia distribution intraoperatively and determine when retrograde cardioplegia is required. In this study, we evaluate the feasibility of near infrared (NIR) imaging for real-time visualization of cardioplegia distribution in a porcine model. Methods A portable, intraoperative, real-time NIR imaging system was utilized. NIR fluorescent cardioplegia solution was developed by incorporating indocyanine green (ICG) into crystalloid cardioplegia solution. Real-time NIR imaging was performed while the fluorescent cardioplegia solution was infused via the retrograde route in 5 ex-vivo normal porcine hearts and in 5 ex-vivo porcine hearts status post left anterior descending (LAD) coronary artery ligation. Horizontal cross-sections of the hearts were obtained at proximal, middle, and distal LAD levels. Videodensitometry was performed to quantify distribution of fluorophore content. Results The progressive distribution of cardioplegia was clearly visualized with NIR imaging. Complete visualization of retrograde distribution occurred within 4 minutes of infusion. Videodensitometry revealed that retrograde cardioplegia primarily distributed to the left ventricle and anterior septum. In hearts with LAD ligation, antegrade cardioplegia did not distribute to the anterior left ventricle. This deficiency was compensated for with retrograde cardioplegia supplementation. Conclusions Incorporation of ICG into cardioplegia allows real-time visualization of cardioplegia delivery via NIR imaging. This technology may prove useful in guiding intraoperative decisions pertaining to when retrograde cardioplegia is mandated. PMID:19016995

  3. [Evoked Potential Blind Extraction Based on Fractional Lower Order Spatial Time-Frequency Matrix].

    PubMed

    Long, Junbo; Wang, Haibin; Zha, Daifeng

    2015-04-01

    The impulsive electroencephalograph (EEG) noise in evoked potential (EP) signals is very strong, usually with heavy-tailed and infinite-variance characteristics, as observed under acceleration impact, hypoxia, and other special tests. The noise can be described by an α-stable distribution model. In this paper, the Wigner-Ville distribution (WVD) and pseudo Wigner-Ville distribution (PWVD) time-frequency distributions are improved based on fractional lower order moments. We obtain the fractional lower order WVD (FLO-WVD) and fractional lower order PWVD (FLO-PWVD) time-frequency distributions, which are suitable for an α-stable distribution process. We also propose the fractional lower order spatial time-frequency distribution matrix (FLO-STFM) concept. Therefore, combining with time-frequency underdetermined blind source separation (TF-UBSS), we propose a new fractional lower order spatial time-frequency underdetermined blind source separation (FLO-TF-UBSS) method which can work in an α-stable distribution environment. We used the FLO-TF-UBSS algorithm to extract EPs. Simulations showed that the proposed method could effectively extract EPs in EEG noise, and the separated EPs and EEG signals based on FLO-TF-UBSS were almost the same as the original signals, whereas blind separation based on TF-UBSS had a certain deviation. The correlation coefficient of the FLO-TF-UBSS algorithm was higher than that of the TF-UBSS algorithm, and approximately equal to 1, when the generalized signal-to-noise ratio (GSNR) changed from 10 dB to 30 dB and α varied from 1.06 to 1.94. Hence, the proposed FLO-TF-UBSS method may be better than the second-order TF-UBSS algorithm for extracting EP signals in an EEG noise environment.

  4. Estimation of surface area concentration of workplace incidental nanoparticles based on number and mass concentrations

    NASA Astrophysics Data System (ADS)

    Park, J. Y.; Ramachandran, G.; Raynor, P. C.; Kim, S. W.

    2011-10-01

    Surface area was estimated by three different methods using number and/or mass concentrations obtained from either two or three instruments that are commonly used in the field. The estimated surface area concentrations were compared with reference surface area concentrations (SAREF) calculated from the particle size distributions obtained from a scanning mobility particle sizer and an optical particle counter (OPC). The first estimation method (SAPSD) used particle size distribution measured by a condensation particle counter (CPC) and an OPC. The second method (SAINV1) used an inversion routine based on PM1.0, PM2.5, and number concentrations to reconstruct assumed lognormal size distributions by minimizing the difference between measurements and calculated values. The third method (SAINV2) utilized a simpler inversion method that used PM1.0 and number concentrations to construct a lognormal size distribution with an assumed value of geometric standard deviation. All estimated surface area concentrations were calculated from the reconstructed size distributions. These methods were evaluated using particle measurements obtained in a restaurant, an aluminum die-casting factory, and a diesel engine laboratory. SAPSD was 0.7-1.8 times higher and SAINV1 and SAINV2 were 2.2-8 times higher than SAREF in the restaurant and diesel engine laboratory. In the die casting facility, all estimated surface area concentrations were lower than SAREF. However, the estimated surface area concentration using all three methods had qualitatively similar exposure trends and rankings to those using SAREF within a workplace. This study suggests that surface area concentration estimation based on particle size distribution (SAPSD) is a more accurate and convenient method to estimate surface area concentrations than estimation methods using inversion routines and may be feasible to use for classifying exposure groups and identifying exposure trends.

  5. Staged fluidized bed

    DOEpatents

    Mallon, Richard G.

    1984-01-01

    Method and apparatus for narrowing the distribution of residence times of any size particle and equalizing the residence times of large and small particles in fluidized beds. Particles are moved up one fluidized column and down a second fluidized column with the relative heights selected to equalize residence times of large and small particles. Additional pairs of columns are staged to narrow the distribution of residence times and provide complete processing of the material.

  6. A Multiple Period Problem in Distributed Energy Management Systems Considering CO2 Emissions

    NASA Astrophysics Data System (ADS)

    Muroda, Yuki; Miyamoto, Toshiyuki; Mori, Kazuyuki; Kitamura, Shoichi; Yamamoto, Takaya

    Consider a special district (group) composed of multiple companies (agents), where each agent must meet an energy demand and has a CO2 emission allowance imposed. A distributed energy management system (DEMS) optimizes the energy consumption of the group through energy trading within the group. In this paper, we extended the energy distribution decision and optimal planning problem in DEMSs from a single-period problem to a multiple-period one. The extension enabled us to consider more realistic constraints such as demand patterns, start-up costs, and minimum running/outage times of equipment. First, we extended the market-oriented programming (MOP) method for deciding energy distribution to the multiple-period problem; the bidding strategy of each agent is formulated as a 0-1 mixed non-linear programming problem. Second, we proposed decomposing the problem into a set of single-period problems in order to solve it faster; to decompose the problem, we proposed a CO2 emission allowance distribution method called the EP method. Computational experiments confirmed that the proposed method produces solutions whose group costs are close to the lower-bound group costs. In addition, we verified that the EP method reduces computational time without losing solution quality.

  7. Simulation of product distribution at PT Anugrah Citra Boga by using capacitated vehicle routing problem method

    NASA Astrophysics Data System (ADS)

    Lamdjaya, T.; Jobiliong, E.

    2017-01-01

    PT Anugrah Citra Boga is a food processing company that produces meatballs as its main product. The distribution of its products must be considered carefully, because it needs to be more efficient in order to reduce shipment cost. The purpose of this research is to optimize the distribution time by simulating the distribution channels with the capacitated vehicle routing problem method. First, the distribution routes are observed in order to calculate the average speed, time capacity, and shipping costs. Then the model is built using the AIMMS software; the inputs required to simulate the model are customer locations, distances, and process times. Finally, the total distribution cost obtained by the simulation is compared with the historical data. It is concluded that the company can reduce shipping cost by around 4.1%, or Rp 529,800 per month. By using this model, the utilization rate can be made more optimal: the current value for the first vehicle is 104.6%, and after the simulation it becomes 88.6%, while the utilization rate of the second vehicle increases from 59.8% to 74.1%. The simulation model is able to produce the optimal shipping route under time restrictions, vehicle capacity, and the number of vehicles.
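
    The flavour of the computation can be conveyed by a tiny construction heuristic (an illustrative sketch; the coordinates and demands are invented, and the study's AIMMS model optimizes the routes rather than building them greedily):

```python
import math

def greedy_cvrp(depot, customers, demand, capacity):
    """Nearest-neighbour construction heuristic for the CVRP (illustrative only).

    depot: (x, y); customers: {name: (x, y)}; demand: {name: units}.
    Returns a list of routes, each implicitly starting and ending at the depot.
    """
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unserved, routes = set(customers), []
    while unserved:
        pos, load, route = depot, 0, []
        while True:
            feasible = [c for c in unserved if load + demand[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(pos, customers[c]))
            route.append(nxt)
            load += demand[nxt]
            pos = customers[nxt]
            unserved.discard(nxt)
        routes.append(route)
    return routes

routes = greedy_cvrp((0, 0), {"A": (1, 5), "B": (4, 1), "C": (5, 5), "D": (1, 1)},
                     {"A": 3, "B": 4, "C": 2, "D": 3}, capacity=6)
print(routes)  # [['D', 'A'], ['B', 'C']] with these inputs
```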

  8. Adaptive Sequential Monte Carlo for Multiple Changepoint Analysis

    DOE PAGES

    Heard, Nicholas A.; Turcotte, Melissa J. M.

    2016-05-21

    Process monitoring and control requires detection of structural changes in a data stream in real time. This paper introduces an efficient sequential Monte Carlo algorithm designed for learning unknown changepoints in continuous time. The method is intuitively simple: new changepoints for the latest window of data are proposed by conditioning only on data observed since the most recent estimated changepoint, as these observations carry most of the information about the current state of the process. The proposed method shows improved performance over the current state of the art. Another advantage of the proposed algorithm is that it can be made adaptive, varying the number of particles according to the apparent local complexity of the target changepoint probability distribution. This saves valuable computing time when changes in the changepoint distribution are negligible, and enables re-balancing of the importance weights of existing particles when a significant change in the target distribution is encountered. The plain and adaptive versions of the method are illustrated using the canonical continuous time changepoint problem of inferring the intensity of an inhomogeneous Poisson process, although the method is generally applicable to any changepoint problem. Performance is demonstrated using both conjugate and non-conjugate Bayesian models for the intensity. Lastly, appendices to the article are available online, illustrating the method on other models and applications.
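
    The canonical test problem mentioned above, an inhomogeneous Poisson process whose intensity jumps at changepoints, is straightforward to simulate by Lewis-Shedler thinning (a minimal sketch; the rates and changepoint locations are hypothetical, and the SMC inference itself is not reproduced here):

```python
import numpy as np

def simulate_inhomogeneous_poisson(intensity, t_max, lam_max, rng=None):
    """Simulate event times on [0, t_max] by Lewis-Shedler thinning.

    intensity: callable t -> rate, bounded above by lam_max.
    """
    rng = rng or np.random.default_rng(1)
    events, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)         # candidate from the dominating process
        if t > t_max:
            break
        if rng.uniform() < intensity(t) / lam_max:  # accept with prob intensity(t)/lam_max
            events.append(t)
    return np.array(events)

# Piecewise-constant intensity with changepoints at t = 30 and t = 70 (hypothetical).
lam = lambda t: 2.0 if t < 30 else (8.0 if t < 70 else 3.0)
events = simulate_inhomogeneous_poisson(lam, t_max=100.0, lam_max=8.0)
print(len(events), "events")
```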

  9. Fractal analysis of the short time series in a visibility graph method

    NASA Astrophysics Data System (ADS)

    Li, Ruixue; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Chen, Yingyuan

    2016-05-01

    The aim of this study is to evaluate the performance of the visibility graph (VG) method on short fractal time series. In this paper, time series of fractional Brownian motion (fBm), characterized by different Hurst exponents H, are simulated and then mapped into scale-free visibility graphs, whose degree distributions show the power-law form. Maximum likelihood estimation (MLE) is applied to estimate the power-law index of the degree distribution, and in this process the Kolmogorov-Smirnov (KS) statistic is used to test the performance of the estimation, aiming to avoid the influence of the drooping head and heavy tail of the degree distribution. As a result, we find that MLE gives an optimal estimation of the power-law index when the KS statistic reaches its first local minimum. Based on the results from the KS statistic, the relationship between the power-law index and the Hurst exponent is reexamined and then amended to suit short time series. Thus, a method combining VG, MLE and the KS statistic is proposed to estimate Hurst exponents from short time series. Lastly, this paper also offers an example to verify the effectiveness of the combined method. In addition, the corresponding results show that the VG can provide a reliable estimation of Hurst exponents.
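
    The time-series-to-graph mapping is compact enough to state directly (a reference O(n^2) sketch of the standard natural visibility criterion; the surrogate series below is a cumulative sum of Gaussian noise, i.e. the H = 0.5 case of fBm):

```python
import numpy as np

def visibility_graph_degrees(x):
    """Natural visibility graph of a time series; returns the degree of each node.

    Nodes i < j are connected if every intermediate sample lies strictly below
    the line joining (i, x[i]) and (j, x[j]) (the Lacasa et al. criterion).
    """
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            slope = (x[j] - x[i]) / (j - i)
            if all(x[k] < x[i] + slope * (k - i) for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
    return deg

rng = np.random.default_rng(7)
series = np.cumsum(rng.standard_normal(200))   # short fBm-like surrogate
degrees = visibility_graph_degrees(series)
print("max degree:", degrees.max())
```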

  10. NMR relaxation in natural soils: Fast Field Cycling and T1-T2 Determination by IR-MEMS

    NASA Astrophysics Data System (ADS)

    Haber-Pohlmeier, S.; Pohlmeier, A.; Stapf, S.; van Dusschoten, D.

    2009-04-01

    Soils are natural porous media of the highest importance for food production and the sustainment of water resources. For these functions, prominent properties are their ability to retain and transport water, which is mainly controlled by the pore size distribution. The latter is related to the NMR relaxation times of water molecules, of which the longitudinal relaxation time can be determined non-invasively by fast field cycling (FFC) relaxometry, and both are obtainable by inversion recovery multi-echo imaging (IR-MEMS) methods. The advantage of the FFC method is the determination of the field-dependent dispersion of the spin-lattice relaxation rate, whereas MRI at high field is capable of yielding spatially resolved T1 and T2 times. Here we present results of T1 relaxation time distributions of water in three natural soils, obtained by analysing FFC data by means of the inverse Laplace transformation (CONTIN). Kaldenkirchen soil shows relatively broad bimodal distribution functions D(T1) which shift to higher relaxation rates with increasing relaxation field. These data are compared to spatially resolved T1 and T2 distributions obtained by IR-MEMS. The distribution of T1 corresponds well to that obtained by FFC.

  11. Dead time corrections for inbeam γ-spectroscopy measurements

    NASA Astrophysics Data System (ADS)

    Boromiza, M.; Borcea, C.; Negret, A.; Olacel, A.; Suliman, G.

    2017-08-01

    Relatively high counting rates were registered in a proton inelastic scattering experiment on 16O and 28Si using HPGe detectors, performed at the Tandem facility of IFIN-HH, Bucharest. Consequently, dead time corrections were needed in order to determine the absolute γ-production cross sections. Considering that the real counting rate follows a Poisson distribution, the dead time correction procedure is reformulated in statistical terms: the arrival time interval between incoming events (Δt) obeys an exponential distribution with a single parameter, the mean of the associated Poisson distribution. We use this mathematical connection to calculate and implement the dead time corrections for the counting rates of the mentioned experiment. Also, exploiting an idea introduced by Pommé et al., we describe a consistent method for calculating the dead time correction which completely avoids the complicated problem of measuring the dead time of a given detection system. Several comparisons are made between the corrections implemented through this method and those using standard (phenomenological) dead time models, and we show how these results were used to correct our experimental cross sections.
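
    The statistical picture, exponential inter-arrival times with events lost inside the dead time, can be sketched as follows (illustrative only: the rates are hypothetical, and the simple non-paralyzable model with the textbook correction n = m/(1 - m*tau) is assumed here, which is not the Pommé-style procedure of the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
true_rate, tau = 5.0e4, 2.0e-6   # true count rate (1/s) and dead time (s), hypothetical

# Poisson arrivals: inter-event times are exponential with parameter true_rate.
arrivals = np.cumsum(rng.exponential(1.0 / true_rate, size=100_000))

# Apply a non-paralyzable dead time: drop events closer than tau to the last accepted one.
accepted, last = 0, -np.inf
for t in arrivals:
    if t - last >= tau:
        accepted += 1
        last = t
measured_rate = accepted / arrivals[-1]

# Standard non-paralyzable correction: n = m / (1 - m * tau).
corrected_rate = measured_rate / (1.0 - measured_rate * tau)
print(f"measured {measured_rate:.0f}/s, corrected {corrected_rate:.0f}/s, true {true_rate:.0f}/s")
```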

  12. Iterative generalized time-frequency reassignment for planetary gearbox fault diagnosis under nonstationary conditions

    NASA Astrophysics Data System (ADS)

    Chen, Xiaowang; Feng, Zhipeng

    2016-12-01

    Planetary gearboxes are widely used in many sorts of machinery owing to their large transmission ratio and high load-bearing capacity in a compact structure. Their fault diagnosis relies on effective identification of fault characteristic frequencies. However, in addition to the vibration complexity caused by intricate mechanical kinematics, volatile external conditions result in time-varying running speed and/or load, and therefore nonstationary vibration signals. This usually leads to time-varying complex fault characteristics and adds difficulty to planetary gearbox fault diagnosis. Time-frequency analysis is an effective approach to extracting the frequency components of nonstationary signals and their variation in time. Nevertheless, the commonly used time-frequency analysis methods suffer from poor time-frequency resolution as well as outer and inner interferences, which hinder accurate identification of time-varying fault characteristic frequencies. Although time-frequency reassignment improves time-frequency readability, it is essentially subject to the constraints of mono-component signals and of time-frequency distributions symmetric about the true instantaneous frequency. Hence, it is still susceptible to erroneous energy reallocation, or even generates pseudo interferences, particularly for multi-component signals of highly nonlinear instantaneous frequency. In this paper, to overcome the limitations of time-frequency reassignment, we propose an improvement, exploiting the merits of iterative generalized demodulation, that achieves fine time-frequency resolution and freedom from interference for highly nonstationary multi-component signals. The signal is first decomposed into mono-components of constant frequency by iterative generalized demodulation. Time-frequency reassignment is then applied to each generalized demodulated mono-component, obtaining a fine time-frequency distribution. Finally, the time-frequency distribution of each signal component is restored and superposed to get the time-frequency distribution of the original signal. The proposed method is validated using both numerically simulated and lab experimental planetary gearbox vibration signals. The time-varying gear fault symptoms are successfully extracted, showing the effectiveness of the proposed iterative generalized time-frequency reassignment method in planetary gearbox fault diagnosis under nonstationary conditions.
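
    The generalized demodulation step at the heart of the method can be illustrated on a single chirp component (a minimal sketch: the phase law is taken as known rather than estimated by iteration, and the subsequent reassignment step is omitted):

```python
import numpy as np
from scipy.signal import stft, hilbert

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Mono-component with quadratic phase: instantaneous frequency 50 + 50 t Hz.
phase = 2 * np.pi * (50 * t + 25 * t ** 2)
x = np.cos(phase)

# Generalized demodulation: the analytic signal is multiplied by
# exp(-j (phi(t) - 2 pi f0 t)), mapping the component to the constant frequency f0.
f0 = 50.0
demod = hilbert(x) * np.exp(-1j * (phase - 2 * np.pi * f0 * t))

f, frames, Z = stft(demod, fs=fs, nperseg=256)
peak = f[np.abs(Z).argmax(axis=0)]
print(peak[2:8])   # ~50 Hz in every frame after demodulation
```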

  13. Time evolution of a Gaussian class of quasi-distribution functions under quadratic Hamiltonian.

    PubMed

    Ginzburg, D; Mann, A

    2014-03-10

    A Lie algebraic method for propagation of the Wigner quasi-distribution function (QDF) under quadratic Hamiltonian was presented by Zoubi and Ben-Aryeh. We show that the same method can be used in order to propagate a rather general class of QDFs, which we call the "Gaussian class." This class contains as special cases the well-known Wigner, Husimi, Glauber, and Kirkwood-Rihaczek QDFs. We present some examples of the calculation of the time evolution of those functions.

  14. Distributed optimization system and method

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  15. Ultrasound shear wave simulation based on nonlinear wave propagation and Wigner-Ville Distribution analysis

    NASA Astrophysics Data System (ADS)

    Bidari, Pooya Sobhe; Alirezaie, Javad; Tavakkoli, Jahan

    2017-03-01

    This paper presents a method for modeling and simulation of shear wave generation from a nonlinear Acoustic Radiation Force Impulse (ARFI) that is considered as a distributed force applied at the focal region of a HIFU transducer radiating in the nonlinear regime. The shear wave propagation is simulated by solving Navier's equation with the distributed nonlinear ARFI as the source of the shear wave. Then, the Wigner-Ville Distribution (WVD) as a time-frequency analysis method is used to detect the shear wave at different local points in the region of interest. The WVD yields an estimate of the shear wave time of arrival, its mean frequency and local attenuation, which can be utilized to estimate the medium's shear modulus and shear viscosity using the Voigt model.

  16. Spatio-temporal precipitation climatology over complex terrain using a censored additive regression model.

    PubMed

    Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim

    2017-06-15

    Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square-root-transformed monthly or annual means, for which a normal distribution seems to be appropriate. This assumption becomes invalid on a daily time scale, as the observations involve large fractions of zeros and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution on a very high spatial and temporal resolution, and is competitive with, or even outperforms, existing methods, even for arbitrary locations.
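
    In its simplest form, fitting a left-censored normal distribution to daily data with many exact zeros is a Tobit-type likelihood problem (a minimal sketch on synthetic data; the real model additionally includes spatio-temporal covariates and a transformed scale):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(11)
# Latent daily "precipitation potential"; observed rain is censored at zero.
latent = rng.normal(loc=-0.5, scale=1.2, size=2000)
y = np.maximum(latent, 0.0)           # large fraction of exact zeros (dry days)

def negloglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    zero = y <= 0.0
    # Censored contribution: P(latent <= 0); uncensored: normal density.
    ll = stats.norm.logcdf(0.0, mu, sigma) * zero.sum()
    ll += stats.norm.logpdf(y[~zero], mu, sigma).sum()
    return -ll

res = optimize.minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"mu={mu_hat:.3f} (true -0.5), sigma={sigma_hat:.3f} (true 1.2)")
```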

  17. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data.

    PubMed

    O'Reilly, Joseph E; Donoghue, Philip C J

    2018-03-01

    Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data.

  18. The Efficacy of Consensus Tree Methods for Summarizing Phylogenetic Relationships from a Posterior Sample of Trees Estimated from Morphological Data

    PubMed Central

    O’Reilly, Joseph E; Donoghue, Philip C J

    2018-01-01

    Abstract Consensus trees are required to summarize trees obtained through MCMC sampling of a posterior distribution, providing an overview of the distribution of estimated parameters such as topology, branch lengths, and divergence times. Numerous consensus tree construction methods are available, each presenting a different interpretation of the tree sample. The rise of morphological clock and sampled-ancestor methods of divergence time estimation, in which times and topology are coestimated, has increased the popularity of the maximum clade credibility (MCC) consensus tree method. The MCC method assumes that the sampled, fully resolved topology with the highest clade credibility is an adequate summary of the most probable clades, with parameter estimates from compatible sampled trees used to obtain the marginal distributions of parameters such as clade ages and branch lengths. Using both simulated and empirical data, we demonstrate that MCC trees, and trees constructed using the similar maximum a posteriori (MAP) method, often include poorly supported and incorrect clades when summarizing diffuse posterior samples of trees. We demonstrate that the paucity of information in morphological data sets contributes to the inability of MCC and MAP trees to accurately summarize the posterior distribution. Conversely, majority-rule consensus (MRC) trees represent a lower proportion of incorrect nodes when summarizing the same posterior samples of trees. Thus, we advocate the use of MRC trees, in place of MCC or MAP trees, in attempts to summarize the results of Bayesian phylogenetic analyses of morphological data. PMID:29106675

  19. Wigner-Hough/Radon Transform for GPS Post-Correlation Integration (Preprint)

    DTIC Science & Technology

    2007-09-01

    The Wigner-Ville distribution (WVD) is a well-known method to estimate instantaneous frequency [Barbarossa, 1996]. In this method, the WVD is used to represent the signal energy in the time-frequency plane. For a signal x(t), its Wigner-Ville distribution is computed as $W(t,f)=\int_{-\infty}^{+\infty} x\left(t+\tfrac{\tau}{2}\right)\,x^{*}\left(t-\tfrac{\tau}{2}\right)\,e^{-j2\pi f\tau}\,d\tau$ (4), where * stands for complex conjugation.

  20. Data quantile-quantile plots: quantifying the time evolution of space climatology

    NASA Astrophysics Data System (ADS)

    Tindale, Elizabeth; Chapman, Sandra

    2017-04-01

    The solar wind is inherently variable across a wide range of spatio-temporal scales; embedded in the flow are the signatures of distinct non-linear physical processes from evolving turbulence to the dynamical solar corona. In-situ satellite observations of solar wind magnetic field and velocity are at minute and below time resolution and now extend over several solar cycles. Each solar cycle is unique, and the space climatology challenge is to quantify how solar wind variability changes within, and across, each distinct solar cycle, and how this in turn drives space weather at earth. We will demonstrate a novel statistical method, that of data-data quantile-quantile (DQQ) plots, which quantifies how the underlying statistical distribution of a given observable is changing in time. Importantly this method does not require any assumptions concerning the underlying functional form of the distribution and can identify multi-component behaviour that is changing in time. This can be used to determine when a sub-range of a given observable is undergoing a change in statistical distribution, or where the moments of the distribution only are changing and the functional form of the underlying distribution is not changing in time. The method is quite general; for this application we use data from the WIND satellite to compare the solar wind across the minima and maxima of solar cycles 23 and 24 [1], and how these changes are manifest in parameters that quantify coupling to the earth's magnetosphere. [1] Tindale, E., and S.C. Chapman (2016), Geophys. Res. Lett., 43(11), doi: 10.1002/2016GL068920.
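
    The construction itself is simple (a minimal sketch; the lognormal samples stand in for observables from two solar cycles and are not WIND data): compute matching quantiles of both samples and compare them pairwise, so departures from the diagonal localize where in the distribution the change occurred.

```python
import numpy as np

def dqq(sample_a, sample_b, probs=np.linspace(0.01, 0.99, 99)):
    """Data-data quantile-quantile pairs for two observation samples.

    Points on the diagonal mean the two distributions agree at that quantile;
    departures show at which quantiles the distribution has changed.
    """
    return np.quantile(sample_a, probs), np.quantile(sample_b, probs)

rng = np.random.default_rng(5)
cycle_a = rng.lognormal(mean=1.5, sigma=0.5, size=5000)   # surrogate "cycle 23" data
cycle_b = rng.lognormal(mean=1.4, sigma=0.6, size=5000)   # lower bulk, heavier tail
qa, qb = dqq(cycle_a, cycle_b)
print(np.column_stack([qa, qb])[[0, 49, 98]])   # low, median, and high quantile pairs
```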

  1. Comparison between wavelet transform and moving average as filter method of MODIS imagery to recognize paddy cropping pattern in West Java

    NASA Astrophysics Data System (ADS)

    Dwi Nugroho, Kreshna; Pebrianto, Singgih; Arif Fatoni, Muhammad; Fatikhunnada, Alvin; Liyantono; Setiawan, Yudi

    2017-01-01

    Information on the area and spatial distribution of paddy fields is needed to support sustainable agriculture and food security programs. Mapping the distribution of paddy cropping patterns is important for sustaining paddy field areas, and can be done by direct observation or by remote sensing. This paper discusses remote sensing for paddy field monitoring based on MODIS time-series data. Time-series MODIS data are difficult to classify directly because of temporal noise, so wavelet transform and moving average filters are needed. The objective of this study is to recognize paddy cropping patterns in West Java with wavelet transform and moving average filtering of MODIS imagery (MOD13Q1) from 2001 to 2015, and to compare the two methods. The results showed almost the same spatial distribution of cropping patterns for both methods. The accuracy of the wavelet transform (75.5%) is higher than that of the moving average (70.5%). Both methods showed that the majority of the cropping patterns in West Java follow a paddy-fallow-paddy-fallow pattern with various planting times. The differences in planting schedule occur because of the availability of irrigation water.
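
    Both filters are easy to sketch on a noisy vegetation-index-like series (illustrative only; the surrogate series, window length, wavelet choice, and threshold are invented rather than taken from the study):

```python
import numpy as np
import pywt

# Surrogate 16-day NDVI series: two cropping seasons per year plus noise.
t = np.arange(0, 23 * 2)                       # two years of 16-day composites
ndvi = (0.3 + 0.25 * np.sin(2 * np.pi * t / 11.5)
        + 0.05 * np.random.default_rng(2).standard_normal(t.size))

# Filter 1: centred moving average over ~3 composites.
ma = np.convolve(ndvi, np.ones(3) / 3, mode="same")

# Filter 2: wavelet shrinkage -- decompose, soft-threshold details, reconstruct.
coeffs = pywt.wavedec(ndvi, "db4", level=2)
coeffs[1:] = [pywt.threshold(c, value=0.05, mode="soft") for c in coeffs[1:]]
wav = pywt.waverec(coeffs, "db4")[: ndvi.size]

print(np.round(ma[:5], 3), np.round(wav[:5], 3))
```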

  2. Robust Hydrological Forecasting for High-resolution Distributed Models Using a Unified Data Assimilation Approach

    NASA Astrophysics Data System (ADS)

    Hernandez, F.; Liang, X.

    2017-12-01

    Reliable real-time hydrological forecasting, to predict important phenomena such as floods, is invaluable to society. However, modern high-resolution distributed models have faced challenges when dealing with uncertainties that are caused by the large number of parameters and initial state estimations involved. Therefore, to rely on these high-resolution models for critical real-time forecast applications, considerable improvements to parameter and initial state estimation techniques must be made. In this work we present a unified data assimilation algorithm called Optimized PareTo Inverse Modeling through Inverse STochastic Search (OPTIMISTS) to deal with the challenge of robust flood forecasting for high-resolution distributed models. This new algorithm combines the advantages of particle filters and variational methods in a unique way to overcome their individual weaknesses. The analysis of candidate particles compares model results with observations in a flexible time frame, and a multi-objective approach is proposed which attempts to simultaneously minimize differences with the observations and departures from the background states by using both Bayesian sampling and non-convex evolutionary optimization. Moreover, the resulting Pareto front is given a probabilistic interpretation through kernel density estimation to create a non-Gaussian distribution of the states. OPTIMISTS was tested on a low-resolution distributed land surface model using VIC (Variable Infiltration Capacity) and on a high-resolution distributed hydrological model using the DHSVM (Distributed Hydrology Soil Vegetation Model). In the tests, streamflow observations are assimilated. OPTIMISTS was also compared with a traditional particle filter and a variational method. Results show that our method can reliably produce adequate forecasts and that it outperforms those resulting from assimilating the observations using a particle filter or an evolutionary 4D variational method alone. In addition, our method is shown to be efficient in tackling high-resolution applications with robust results.

  3. Wigner-Ville distribution and Gabor transform in Doppler ultrasound signal processing.

    PubMed

    Ghofrani, S; Ayatollahi, A; Shamsollahi, M B

    2003-01-01

    Time-frequency distributions have been used extensively for nonstationary signal analysis; they describe how the frequency content of a signal changes in time. The Wigner-Ville distribution (WVD) is the best known. The drawback of the WVD is cross-term artifacts. An alternative to the WVD is the Gabor transform (GT), a signal decomposition method which displays the time-frequency energy of a signal on a joint t-f plane without generating considerable cross-terms. In this paper the WVD and GT of ultrasound echo signals are computed analytically.
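
    A direct discrete implementation shows both the sharp auto-terms and the cross-term artifact mentioned above (a reference sketch, quadratic in the signal length; the two-tone test signal is invented):

```python
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x, fs):
    """Discrete Wigner-Ville distribution of a real signal (illustrative).

    Uses the analytic signal to suppress aliasing; returns an (N x N) array
    with rows = frequency bins spanning 0..fs/2 and columns = time samples.
    """
    z = hilbert(np.asarray(x, float))
    N = len(z)
    W = np.zeros((N, N), dtype=complex)
    for n in range(N):
        L = min(n, N - 1 - n)                     # largest symmetric lag at this time
        k = np.arange(-L, L + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[k % N] = z[n + k] * np.conj(z[n - k])
        W[:, n] = np.fft.fft(kernel)              # FFT over the lag variable
    return np.real(W), np.linspace(0, fs / 2, N, endpoint=False)

fs = 256.0
t = np.arange(0, 1, 1 / fs)
sig = np.cos(2 * np.pi * 40 * t) + np.cos(2 * np.pi * 90 * t)  # two tones
W, freqs = wigner_ville(sig, fs)
top2 = np.argsort(W.sum(axis=1))[-2:]
print(sorted(freqs[top2]))   # [40.0, 90.0]: the oscillating 65 Hz cross-term averages out
```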

  4. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2012-05-15

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  5. Absolute nuclear material assay

    DOEpatents

    Prasad, Manoj K [Pleasanton, CA; Snyderman, Neal J [Berkeley, CA; Rowland, Mark S [Alamo, CA

    2010-07-13

    A method of absolute nuclear material assay of an unknown source comprising counting neutrons from the unknown source and providing an absolute nuclear material assay utilizing a model to optimally compare to the measured count distributions. In one embodiment, the step of providing an absolute nuclear material assay comprises utilizing a random sampling of analytically computed fission chain distributions to generate a continuous time-evolving sequence of event-counts by spreading the fission chain distribution in time.

  6. Rapid detection of Naegleria fowleri in water distribution pipeline biofilms and drinking water samples.

    PubMed

    Puzon, Geoffrey J; Lancaster, James A; Wylie, Jason T; Plumb, Iason J

    2009-09-01

    Rapid detection of pathogenic Naegleria fowleri in water distribution networks is critical for water utilities. Current detection methods rely on sampling drinking water followed by culturing and molecular identification of purified strains. This culture-based method takes an extended amount of time (days), detects both nonpathogenic and pathogenic species, and does not account for N. fowleri cells associated with pipe wall biofilms. In this study, a total DNA extraction technique coupled with a real-time PCR method using primers specific for N. fowleri was developed and validated. The method readily detected N. fowleri without preculturing, with the lowest detection limit for N. fowleri cells spiked in biofilm being one cell (66% detection rate) and five cells (100% detection rate). For drinking water, the detection limit was five cells (66% detection rate) and 10 cells (100% detection rate). By comparison, culture-based methods were less sensitive for detection of cells spiked into both biofilm (66% detection for <10 cells) and drinking water (0% detection for <10 cells). In mixed cultures of N. fowleri and nonpathogenic Naegleria, the method identified N. fowleri in 100% of all replicates, whereas tests with the current consensus primers detected N. fowleri in only 5% of all replicates. Application of the new method to drinking water and pipe wall biofilm samples obtained from a distribution network enabled the detection of N. fowleri in under 6 h, versus 3+ days for the culture-based method. Further, comparison of the real-time PCR data from the field samples with the standard curves enabled an approximation of the number of N. fowleri cells in the biofilm and drinking water. The use of such a method will further aid water utilities in detecting and managing the persistence of N. fowleri in water distribution networks.

  7. The current duration design for estimating the time to pregnancy distribution: a nonparametric Bayesian perspective.

    PubMed

    Gasbarra, Dario; Arjas, Elja; Vehtari, Aki; Slama, Rémy; Keiding, Niels

    2015-10-01

    This paper was inspired by the studies of Niels Keiding and co-authors on estimating the waiting time-to-pregnancy (TTP) distribution, and in particular on using the current duration design in that context. In this design, a cross-sectional sample of women is collected from those who are currently attempting to become pregnant, and then by recording from each the time she has been attempting. Our aim here is to study the identifiability and the estimation of the waiting time distribution on the basis of current duration data. The main difficulty in this stems from the fact that very short waiting times are only rarely selected into the sample of current durations, and this renders their estimation unstable. We introduce here a Bayesian method for this estimation problem, prove its asymptotic consistency, and compare the method to some variants of the non-parametric maximum likelihood estimators, which have been used previously in this context. The properties of the Bayesian estimation method are studied also empirically, using both simulated data and TTP data on current durations collected by Slama et al. (Hum Reprod 27(5):1489-1498, 2012).
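
    The selection effect that makes short waits hard to estimate is easy to reproduce (a minimal sketch under a stationary renewal assumption; the gamma TTP distribution is hypothetical): an ongoing attempt is intercepted with probability proportional to its total length, and the elapsed duration at interview is uniform within it.

```python
import numpy as np

rng = np.random.default_rng(9)

# Underlying time-to-pregnancy distribution (hypothetical): gamma, mean 6 cycles.
ttp = rng.gamma(shape=1.5, scale=4.0, size=200_000)

# Current-duration sampling under stationarity: an attempt is intercepted with
# probability proportional to its length (length bias), and the elapsed time
# at interview is uniform on [0, T].
pick = rng.uniform(size=ttp.size) < ttp / ttp.max()   # length-biased acceptance
intercepted = ttp[pick]
current = rng.uniform(0.0, intercepted)

# Attempts with short total waiting times are strongly under-represented:
print(f"P(T < 1 cycle) overall: {np.mean(ttp < 1):.3f}; "
      f"among intercepted attempts: {np.mean(intercepted < 1):.3f}")
```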

  8. Variable population exposure and distributed travel speeds in least-cost tsunami evacuation modelling

    NASA Astrophysics Data System (ADS)

    Fraser, S. A.; Wood, N. J.; Johnston, D. M.; Leonard, G. S.; Greening, P. D.; Rossetto, T.

    2014-11-01

    Evacuation of the population from a tsunami hazard zone is vital to reduce life-loss due to inundation. Geospatial least-cost distance modelling provides one approach to assessing tsunami evacuation potential. Previous models have generally used two static exposure scenarios and fixed travel speeds to represent population movement. Some analyses have assumed immediate departure or a common evacuation departure time for all exposed population. Here, a method is proposed to incorporate time-variable exposure, distributed travel speeds, and uncertain evacuation departure time into an existing anisotropic least-cost path distance framework. The method is demonstrated for hypothetical local-source tsunami evacuation in Napier City, Hawke's Bay, New Zealand. There is significant diurnal variation in pedestrian evacuation potential at the suburb level, although the total number of people unable to evacuate is stable across all scenarios. Whilst some fixed travel speeds approximate a distributed speed approach, others may overestimate evacuation potential. The impact of evacuation departure time is a significant contributor to total evacuation time. This method improves least-cost modelling of evacuation dynamics for evacuation planning, casualty modelling, and development of emergency response training scenarios. However, it requires detailed exposure data, which may preclude its use in many situations.

  9. Variable population exposure and distributed travel speeds in least-cost tsunami evacuation modelling

    USGS Publications Warehouse

    Fraser, Stuart A.; Wood, Nathan J.; Johnston, David A.; Leonard, Graham S.; Greening, Paul D.; Rossetto, Tiziana

    2014-01-01

    Evacuation of the population from a tsunami hazard zone is vital to reduce life-loss due to inundation. Geospatial least-cost distance modelling provides one approach to assessing tsunami evacuation potential. Previous models have generally used two static exposure scenarios and fixed travel speeds to represent population movement. Some analyses have assumed immediate departure or a common evacuation departure time for all exposed population. Here, a method is proposed to incorporate time-variable exposure, distributed travel speeds, and uncertain evacuation departure time into an existing anisotropic least-cost path distance framework. The method is demonstrated for hypothetical local-source tsunami evacuation in Napier City, Hawke's Bay, New Zealand. There is significant diurnal variation in pedestrian evacuation potential at the suburb level, although the total number of people unable to evacuate is stable across all scenarios. Whilst some fixed travel speeds approximate a distributed speed approach, others may overestimate evacuation potential. The impact of evacuation departure time is a significant contributor to total evacuation time. This method improves least-cost modelling of evacuation dynamics for evacuation planning, casualty modelling, and development of emergency response training scenarios. However, it requires detailed exposure data, which may preclude its use in many situations.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diwaker, E-mail: diwakerphysics@gmail.com; Chakraborty, Aniruddha

    The Smoluchowski equation with a time-dependent sink term is solved exactly. In this method, knowing the probability distribution P(0, s) at the origin allows deriving the probability distribution P(x, s) at all positions. Exact solutions of the Smoluchowski equation are also provided in different cases where the sink term has linear, constant, inverse, and exponential variation in time.
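
    The record does not reproduce the equation itself; a commonly studied form with a localized, time-dependent sink at the origin (an assumption here) is

    ```latex
    \frac{\partial P(x,t)}{\partial t}
      = D\,\frac{\partial^{2} P(x,t)}{\partial x^{2}}
      - k(t)\,\delta(x)\,P(x,t).
    ```

    Laplace transforming in t (with s the transform variable) confines the sink term to the origin, which is why knowledge of P(0, s) suffices to express P(x, s) everywhere through the free-diffusion Green's function.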

  11. A Technical Survey on Optimization of Processing Geo Distributed Data

    NASA Astrophysics Data System (ADS)

    Naga Malleswari, T. Y. J.; Ushasukhanya, S.; Nithyakalyani, A.; Girija, S.

    2018-04-01

    With the growth of cloud services and technology, geographically distributed data centers have emerged to store large amounts of data. Analysis of geo-distributed data is required in various services for data processing, storage of essential information, etc. Processing this geo-distributed data and performing analytics on it is a challenging task. The distributed data processing is accompanied by issues in storage, computation and communication. The key issues to be dealt with are time efficiency, cost minimization and utility maximization. This paper describes various optimization methods like end-to-end multiphase, G-MR, etc., using techniques like Map-Reduce, CDS (Community Detection based Scheduling), ROUT, Workload-Aware Scheduling, SAGE and AMP (Ant Colony Optimization) to handle these issues. The various optimization methods and techniques used are analyzed. It has been observed that end-to-end multiphase achieves time efficiency; cost minimization methods concentrate on achieving Quality of Service and on reducing computation and communication costs; and SAGE achieves performance improvement in processing geo-distributed data sets.

  12. Quantile Regression Models for Current Status Data

    PubMed Central

    Ou, Fang-Shu; Zeng, Donglin; Cai, Jianwen

    2016-01-01

    Current status data arise frequently in demography, epidemiology, and econometrics where the exact failure time cannot be determined but is only known to have occurred before or after a known observation time. We propose a quantile regression model to analyze current status data, because it does not require distributional assumptions and the coefficients can be interpreted as direct regression effects on the distribution of failure time in the original time scale. Our model assumes that the conditional quantile of failure time is a linear function of covariates. We assume conditional independence between the failure time and observation time. An M-estimator, computed using the concave-convex procedure, is developed for parameter estimation, and its confidence intervals are constructed using a subsampling method. Asymptotic properties for the estimator are derived and proven using modern empirical process theory. The small-sample performance of the proposed method is demonstrated via simulation studies. Finally, we apply the proposed method to analyze data from the Mayo Clinic Study of Aging. PMID:27994307
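
    For reference, the standard quantile-regression building blocks assumed by the record (textbook definitions, not specific to this paper) are the linear conditional-quantile model and the check loss

    ```latex
    Q_{T}(\tau \mid Z) = Z^{\top}\beta(\tau), \qquad
    \rho_{\tau}(u) = u\left(\tau - \mathbf{1}\{u < 0\}\right),
    ```

    where, with fully observed failure times, \beta(\tau) would minimize \sum_i \rho_\tau(T_i - Z_i^\top \beta); current status data only reveal whether T_i exceeds the observation time, hence the modified M-estimator described above.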

  13. An improvement of the measurement of time series irreversibility with visibility graph approach

    NASA Astrophysics Data System (ADS)

    Wu, Zhenyu; Shang, Pengjian; Xiong, Hui

    2018-07-01

    We propose a method to improve the measurement of real-valued time series irreversibility which contains two tools: the directed horizontal visibility graph and the Kullback-Leibler divergence. The degree of time irreversibility is estimated by the Kullback-Leibler divergence between the in- and out-degree distributions of the associated visibility graph. In our work, we reframe the in- and out-degree distributions by encoding them with the different embedded dimensions used in calculating permutation entropy (PE). With this improved method, we can not only estimate time series irreversibility efficiently, but also detect it across multiple dimensions. We verify the validity of our method and then estimate the amount of time irreversibility of series generated by chaotic maps as well as global stock markets over the period 2005-2015. The results show that the amount of time irreversibility peaks at embedded dimension d = 3 both in the experiments and in the financial-market data.
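
    A minimal sketch of the baseline measure the paper builds on (the directed horizontal visibility graph plus the KL divergence between degree distributions); the embedding-dimension refinement described above is not reproduced, and the series lengths, degree cutoff and test maps are illustrative assumptions.

    ```python
    import numpy as np

    def dhvg_in_out_degrees(x):
        """Directed horizontal visibility graph of a series x (time arrow
        left -> right): i links to j > i iff every intermediate value lies
        below min(x[i], x[j]). Returns in- and out-degree sequences."""
        n = len(x)
        k_in = np.zeros(n, dtype=int)
        k_out = np.zeros(n, dtype=int)
        for i in range(n - 1):
            m = -np.inf                      # running max of x[i+1..j-1]
            for j in range(i + 1, n):
                if m < x[i] and m < x[j]:
                    k_out[i] += 1
                    k_in[j] += 1
                m = max(m, x[j])
                if m >= x[i]:                # nothing farther can see i
                    break
        return k_in, k_out

    def kl_irreversibility(x, kmax=20, eps=1e-12):
        """KL divergence between out- and in-degree distributions; near
        zero for a statistically time-reversible series."""
        k_in, k_out = dhvg_in_out_degrees(x)
        bins = np.arange(1, kmax + 2)
        p_out = np.histogram(k_out, bins=bins)[0] / len(x)
        p_in = np.histogram(k_in, bins=bins)[0] / len(x)
        mask = p_out > 0
        return float(np.sum(p_out[mask] * np.log(p_out[mask] / (p_in[mask] + eps))))

    # the (dissipative, chaotic) logistic map should score higher than noise
    rng = np.random.default_rng(0)
    x = np.empty(3000); x[0] = 0.4
    for t in range(2999):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    print(kl_irreversibility(x), kl_irreversibility(rng.random(3000)))
    ```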

  14. Precise Hypocenter Determination around Palu Koro Fault: a Preliminary Results

    NASA Astrophysics Data System (ADS)

    Fawzy Ismullah, M. Muhammad; Nugraha, Andri Dian; Ramdhan, Mohamad; Wandono

    2017-04-01

    Sulawesi is located in a complex tectonic setting. The high seismicity in central Sulawesi is related to the Palu Koro fault (PKF). In this study, we determined precise hypocenters around the PKF by applying the double-difference method. We attempt to investigate the seismicity rate, the geometry of the fault, and the distribution of focal depths around the PKF. We first re-picked the P- and S-wave arrival times of the PKF events to determine the initial hypocenter locations using the Hypoellipse method with an updated 1-D seismic velocity model. Later on, we relocated the events using the double-difference method. Our preliminary results show that the relocated events are distributed around the PKF and have smaller residual times than the initial locations. We will enhance the hypocenter locations by updating the arrival times with a waveform cross-correlation method as input for the double-difference relocation.

  15. Leveraging AMI data for distribution system model calibration and situational awareness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peppanen, Jouni; Reno, Matthew J.; Thakkar, Mohini

    The many new distributed energy resources being installed at the distribution system level require increased visibility into system operations that will be enabled by distribution system state estimation (DSSE) and situational awareness applications. Reliable and accurate DSSE requires both robust methods for managing the big data provided by smart meters and quality distribution system models. This paper presents intelligent methods for detecting and dealing with missing or inaccurate smart meter data, as well as ways to process the data for different applications. It also presents an efficient and flexible parameter estimation method based on the voltage drop equation and regression analysis to enhance distribution system model accuracy. Finally, it presents a 3-D graphical user interface for advanced visualization of the system state and events. The methods are demonstrated for a university distribution network with a state-of-the-art real-time and historical smart meter data infrastructure.

  16. Leveraging AMI data for distribution system model calibration and situational awareness

    DOE PAGES

    Peppanen, Jouni; Reno, Matthew J.; Thakkar, Mohini; ...

    2015-01-15

    The many new distributed energy resources being installed at the distribution system level require increased visibility into system operations that will be enabled by distribution system state estimation (DSSE) and situational awareness applications. Reliable and accurate DSSE requires both robust methods for managing the big data provided by smart meters and quality distribution system models. This paper presents intelligent methods for detecting and dealing with missing or inaccurate smart meter data, as well as ways to process the data for different applications. It also presents an efficient and flexible parameter estimation method based on the voltage drop equation and regression analysis to enhance distribution system model accuracy. Finally, it presents a 3-D graphical user interface for advanced visualization of the system state and events. The methods are demonstrated for a university distribution network with a state-of-the-art real-time and historical smart meter data infrastructure.

  17. Semiparametric methods to contrast gap time survival functions: Application to repeat kidney transplantation.

    PubMed

    Shu, Xu; Schaubel, Douglas E

    2016-06-01

    Times between successive events (i.e., gap times) are of great importance in survival analysis. Although many methods exist for estimating covariate effects on gap times, very few existing methods allow for comparisons between gap times themselves. Motivated by the comparison of primary and repeat transplantation, our interest is specifically in contrasting the gap time survival functions and their integration (restricted mean gap time). Two major challenges in gap time analysis are non-identifiability of the marginal distributions and the existence of dependent censoring (for all but the first gap time). We use Cox regression to estimate the (conditional) survival distributions of each gap time (given the previous gap times). Combining fitted survival functions based on those models, along with multiple imputation applied to censored gap times, we then contrast the first and second gap times with respect to average survival and restricted mean lifetime. Large-sample properties are derived, with simulation studies carried out to evaluate finite-sample performance. We apply the proposed methods to kidney transplant data obtained from a national organ transplant registry. Mean 10-year graft survival of the primary transplant is significantly greater than that of the repeat transplant, by 3.9 months (p=0.023), a result that may lack clinical importance. © 2015, The International Biometric Society.

  18. Analytical and numerical treatment of the heat conduction equation obtained via time-fractional distributed-order heat conduction law

    NASA Astrophysics Data System (ADS)

    Želi, Velibor; Zorica, Dušan

    2018-02-01

    Generalization of the heat conduction equation is obtained by considering the system of equations consisting of the energy balance equation and a fractional-order constitutive heat conduction law, assumed in the form of the distributed-order Cattaneo type. The Cauchy problem for the system of energy balance equation and constitutive heat conduction law is treated analytically through Fourier and Laplace integral transform methods, as well as numerically by the method of finite differences, using the Adams-Bashforth and Grünwald-Letnikov schemes for approximating derivatives in the temporal domain and the leap-frog scheme for spatial derivatives. Numerical examples, showing the time evolution of temperature and heat flux spatial profiles, demonstrate the applicability and good agreement of both methods in cases of multi-term and power-type distributed-order heat conduction laws.
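
    As a reminder of the temporal discretization named above (the standard Grünwald-Letnikov definition, independent of this paper), a fractional derivative of order \alpha on a uniform grid of step h is approximated by

    ```latex
    {}^{\mathrm{GL}}D^{\alpha} f(t_{n}) \approx h^{-\alpha}
      \sum_{k=0}^{n} (-1)^{k} \binom{\alpha}{k} f(t_{n-k}),
    \qquad
    \binom{\alpha}{k} = \frac{\alpha(\alpha-1)\cdots(\alpha-k+1)}{k!}.
    ```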

  19. Bidirectional light-scattering image processing method for high-concentration jet sprays

    NASA Astrophysics Data System (ADS)

    Shimizu, I.; Emori, Y.; Yang, W.-J.; Shimoda, M.; Suzuki, T.

    1985-01-01

    In order to study the distributions of droplet size and volume density in high-concentration jet sprays, a new technique is developed which combines the forward and backward light scattering method with an image processing method. A pulsed ruby laser is used as the light source. The Mie scattering theory is applied to the results obtained from image processing of the scattering photographs. The time history is obtained for the droplet size and volume density distributions, and the method is demonstrated on diesel fuel sprays under various injection conditions. The validity of the technique is verified by good agreement between the injected fuel volume distributions obtained by the present method and those from injection rate measurements.

  20. The Application of Auto-Disturbance Rejection Control Optimized by Least Squares Support Vector Machines Method and Time-Frequency Representation in Voltage Source Converter-High Voltage Direct Current System.

    PubMed

    Liu, Ying-Pei; Liang, Hai-Ping; Gao, Zhong-Ke

    2015-01-01

    In order to improve the performance of the voltage source converter-high voltage direct current (VSC-HVDC) system, we propose an improved auto-disturbance rejection control (ADRC) method based on least squares support vector machines (LSSVM) on the rectifier side. Firstly, we deduce the high frequency transient mathematical model of the VSC-HVDC system. Then we investigate the ADRC and LSSVM principles. We omit the tracking differentiator in the ADRC controller, aiming to improve the system dynamic response speed. On this basis, we derive the mathematical model of the ADRC controller optimized by LSSVM for the direct current voltage loop. Finally, we carry out simulations to verify the feasibility and effectiveness of our proposed control method. In addition, we employ time-frequency representation methods, i.e., the Wigner-Ville distribution (WVD) and the adaptive optimal kernel (AOK) time-frequency representation, to demonstrate that our proposed method performs better than the traditional method from the perspective of energy distribution in the time-frequency plane.

  1. The Application of Auto-Disturbance Rejection Control Optimized by Least Squares Support Vector Machines Method and Time-Frequency Representation in Voltage Source Converter-High Voltage Direct Current System

    PubMed Central

    Gao, Zhong-Ke

    2015-01-01

    In order to improve the performance of the voltage source converter-high voltage direct current (VSC-HVDC) system, we propose an improved auto-disturbance rejection control (ADRC) method based on least squares support vector machines (LSSVM) on the rectifier side. Firstly, we deduce the high frequency transient mathematical model of the VSC-HVDC system. Then we investigate the ADRC and LSSVM principles. We omit the tracking differentiator in the ADRC controller, aiming to improve the system dynamic response speed. On this basis, we derive the mathematical model of the ADRC controller optimized by LSSVM for the direct current voltage loop. Finally, we carry out simulations to verify the feasibility and effectiveness of our proposed control method. In addition, we employ time-frequency representation methods, i.e., the Wigner-Ville distribution (WVD) and the adaptive optimal kernel (AOK) time-frequency representation, to demonstrate that our proposed method performs better than the traditional method from the perspective of energy distribution in the time-frequency plane. PMID:26098556

  2. Damage diagnosis algorithm using a sequential change point detection method with an unknown distribution for damage

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.

    2012-04-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
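
    The record does not include the algorithm itself; the sketch below is a simplified Gaussian stand-in for the idea of a sequential (CUSUM-type) test whose post-change distribution is estimated as data arrive, here by re-estimating only a shifted mean. The threshold, minimum-shift guard and all parameters are illustrative assumptions.

    ```python
    import numpy as np

    def adaptive_cusum(x, mu0, sigma, threshold, min_shift=0.5):
        """One-sided CUSUM on a Gaussian log-likelihood ratio in which the
        post-change mean is re-estimated from the samples accumulated since
        the last reset. Returns the index of the first alarm, or None."""
        x = np.asarray(x, dtype=float)
        w, start = 0.0, 0
        for t, xt in enumerate(x):
            # crude online estimate of the post-change mean, kept at least
            # min_shift * sigma above mu0 so the test retains power
            mu1 = max(x[start:t + 1].mean(), mu0 + min_shift * sigma)
            llr = (xt - mu0) * (mu1 - mu0) / sigma**2 \
                  - 0.5 * (mu1 - mu0) ** 2 / sigma**2
            w = max(0.0, w + llr)
            if w == 0.0:
                start = t + 1               # reset the estimation window
            if w > threshold:
                return t
        return None

    rng = np.random.default_rng(2)
    x = np.concatenate([rng.normal(0.0, 1.0, 300),   # undamaged features
                        rng.normal(1.2, 1.0, 200)])  # post-damage shift
    print(adaptive_cusum(x, mu0=0.0, sigma=1.0, threshold=8.0))  # shortly after 300
    ```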

  3. Intertime jump statistics of state-dependent Poisson processes.

    PubMed

    Daly, Edoardo; Porporato, Amilcare

    2007-01-01

    A method to obtain the probability distribution of the interarrival times of jump occurrences in systems driven by state-dependent Poisson noise is proposed. Such a method uses the survivor function obtained by a modified version of the master equation associated with the stochastic process under analysis. A model for the timing of human activities shows the capability of state-dependent Poisson noise to generate power-law distributions. The application of the method to a model for neuron dynamics and to a hydrological model accounting for land-atmosphere interaction elucidates the origin of characteristic recurrence intervals and possible persistence in state-dependent Poisson models.

  4. An intercomparison of methods for solving the stochastic collection equation with a focus on cloud radar Doppler spectra in drizzling stratocumulus

    NASA Astrophysics Data System (ADS)

    Lee, H.; Fridlind, A. M.; Ackerman, A. S.; Kollias, P.

    2017-12-01

    Cloud radar Doppler spectra provide rich information for evaluating the fidelity of particle size distributions from cloud models. The intrinsic simplifications of bulk microphysics schemes generally preclude the generation of plausible Doppler spectra, unlike bin microphysics schemes, which develop particle size distributions more organically at substantial computational expense. However, bin microphysics schemes face the difficulty of numerical diffusion leading to overly rapid large drop formation, particularly while solving the stochastic collection equation (SCE). Because such numerical diffusion can cause an even greater overestimation of radar reflectivity, an accurate method for solving the SCE is essential for bin microphysics schemes to accurately simulate Doppler spectra. While several methods have been proposed to solve the SCE, here we examine those of Berry and Reinhardt (1974, BR74), Jacobson et al. (1994, J94), and Bott (2000, B00). Using a simple box model to simulate drop size distribution evolution during precipitation formation with a realistic kernel, it is shown that each method yields a converged solution as the resolution of the drop size grid increases. However, the BR74 and B00 methods yield nearly identical size distributions in time, whereas the J94 method produces consistently larger drops throughout the simulation. In contrast to an earlier study, the performance of the B00 method is found to be satisfactory; it converges at relatively low resolution and long time steps, and its computational efficiency is the best among the three methods considered here. Finally, a series of idealized stratocumulus large-eddy simulations are performed using the J94 and B00 methods. The reflectivity size distributions and Doppler spectra obtained from the different SCE solution methods are presented and compared with observations.

  5. Connections between residence time distributions and watershed characteristics across the continental US

    NASA Astrophysics Data System (ADS)

    Condon, L. E.; Maxwell, R. M.; Kollet, S. J.; Maher, K.; Haggerty, R.; Forrester, M. M.

    2016-12-01

    Although previous studies have demonstrated fractal residence time distributions in small watersheds, analyzing residence time scaling over large spatial areas is difficult with existing observational methods. For this study we use a fully integrated groundwater-surface water simulation combined with Lagrangian particle tracking to evaluate connections between residence time distributions and watershed characteristics such as geology, topography and climate. Our simulation spans more than six million square kilometers of the continental US, encompassing a broad range of watershed sizes and physiographic settings. Simulated results demonstrate power law residence time distributions with peak ages ranging from 1.5 to 10.5 years. These ranges agree well with previous observational work and demonstrate the feasibility of using integrated models to simulate residence times. Comparing behavior between eight major watersheds, we show spatial variability in both the peak and the variance of the residence time distributions that can be related to model inputs. Peak age is well correlated with basin-averaged hydraulic conductivity, and the semi-variance corresponds to aridity. While power law age distributions have previously been attributed to fractal topography, these results illustrate the importance of subsurface characteristics and macroclimate as additional controls on groundwater configuration and residence times.

  6. 28 CFR 48.2 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... following: Printing; time, method, and field of publication; allocation of production facilities; distribution; advertising solicitation; circulation solicitation; business department; establishment of advertising rates; establishment of circulation rates and revenue distribution: Provided, That there is no...

  7. Time-resolved ion velocity distribution in a cylindrical Hall thruster: heterodyne-based experiment and modeling.

    PubMed

    Diallo, A; Keller, S; Shi, Y; Raitses, Y; Mazouffre, S

    2015-03-01

    Time-resolved variations of the ion velocity distribution function (IVDF) are measured in the cylindrical Hall thruster using a novel heterodyne method based on the laser-induced fluorescence technique. This method consists of inducing modulations of the discharge plasma at frequencies that enable coupling to the breathing mode. Using a harmonic decomposition of the IVDF, one can extract each harmonic component, from which the time-resolved IVDF is reconstructed. In addition, simulations assuming a sloshing of the IVDF during the modulation have been performed and show agreement between the simulated and measured first-order perturbations of the IVDF.

  8. Estimation of Staphylococcus aureus growth parameters from turbidity data: characterization of strain variation and comparison of methods.

    PubMed

    Lindqvist, R

    2006-07-01

    Turbidity methods offer possibilities for generating data required for addressing microorganism variability in risk modeling given that the results of these methods correspond to those of viable count methods. The objectives of this study were to identify the best approach for determining growth parameters based on turbidity data and use of a Bioscreen instrument and to characterize variability in growth parameters of 34 Staphylococcus aureus strains of different biotypes isolated from broiler carcasses. Growth parameters were estimated by fitting primary growth models to turbidity growth curves or to detection times of serially diluted cultures either directly or by using an analysis of variance (ANOVA) approach. The maximum specific growth rates in chicken broth at 17 degrees C estimated by time to detection methods were in good agreement with viable count estimates, whereas growth models (exponential and Richards) underestimated growth rates. Time to detection methods were selected for strain characterization. The variation of growth parameters among strains was best described by either the logistic or lognormal distribution, but definitive conclusions require a larger data set. The distribution of the physiological state parameter ranged from 0.01 to 0.92 and was not significantly different from a normal distribution. Strain variability was important, and the coefficient of variation of growth parameters was up to six times larger among strains than within strains. It is suggested to apply a time to detection (ANOVA) approach using turbidity measurements for convenient and accurate estimation of growth parameters. The results emphasize the need to consider implications of strain variability for predictive modeling and risk assessment.

  9. Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.

    PubMed

    Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué

    2018-02-15

    We present a novel method for characterizing in near real-time the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Due to the simplicity of the principle of operation, this method has the potential of circumventing potential biases of current real-time particle analyzers (e.g. Time of Flight analysis), while offering a cost effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we have conducted benchmark experiments whereby aerodynamic particle size distributions are obtained from several commercially-available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) that are closely in line with those obtained from Time of Flight analysis and cascade impactors suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.
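
    The conversion underlying such a measurement links settling velocity to aerodynamic diameter through Stokes drag on a unit-density sphere; the sketch below uses this textbook relation (slip correction neglected, an assumption that is fair above roughly 1 um) and is not the authors' implementation.

    ```python
    import numpy as np

    MU_AIR = 1.81e-5     # dynamic viscosity of air, Pa*s (~20 degC)
    RHO_0 = 1000.0       # unit density, kg/m^3 (aerodynamic-diameter convention)
    G = 9.81             # gravitational acceleration, m/s^2

    def aerodynamic_diameter(v_settle):
        """Aerodynamic diameter (m) from terminal settling velocity (m/s),
        Stokes regime: v = rho_0 * d^2 * g / (18 * mu), solved for d."""
        return np.sqrt(18.0 * MU_AIR * np.asarray(v_settle) / (RHO_0 * G))

    v = np.array([2.7e-4, 1.1e-3, 7.5e-3])      # example velocities, m/s
    print(aerodynamic_diameter(v) * 1e6)         # -> roughly 3, 6, 16 um
    ```

    MMAD and GSD then follow as the median and geometric standard deviation of the mass-weighted diameter distribution measured particle by particle.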

  10. Tomographic reconstruction of storm time RC ion distribution from ENA images on board multiple spacecraft

    NASA Astrophysics Data System (ADS)

    Ma, Shu-Ying; Yan, Wei-Nan; Xu, Liang

    2015-11-01

    A quantitative retrieval of 3-D distribution of energetic ions as energetic neutral atoms (ENA) sources is a challenging task. In this paper the voxel computerized tomography (CT) method is initially applied to reconstruct the 3-D distribution of energetic ions in the magnetospheric ring current (RC) region from ENA emission images on board multiple spacecraft. To weaken the influence of low-altitude emission (LAE) on the reconstruction, the LAE-associated ENA intensities are corrected by invoking the thick-target approximation. To overcome the divergence in iteration due to discordant instrument biases, a differential ENA voxel CT method is developed. The method is proved reliable and advantageous by numerical simulation for the case of constant bias independent of viewing angle. Then this method is implemented with ENA data measured by the Two Wide-angle Imaging Neutral-atom Spectrometers mission which performs stereoscopic ENA imaging. The 3-D spatial distributions and energy spectra of RC ion flux intensity are reconstructed for energies of 4-50 keV during the main phase of a major magnetic storm. The retrieved ion flux distributions seem to correspond to an asymmetric partial RC, located mainly around midnight favoring the postmidnight with L = 3.5-7.0 in the equatorial plane. The RC ion distributions with magnetic local time depend on energy, with major equatorial flux peak for lower energy located east of that for higher energy. In comparison with the ion energy spectra measured by Time History of Events and Macroscale Interactions during Substorms-D satellite flying in the RC region, the retrieved spectrum from remotely sensed ENA images are well matched with the in situ measurements.

  11. Using New Theory and Experimental Methods to Understand the Relative Controls of Storage, Antecedent Conditions and Precipitation Intensity on Transit Time Distributions through a Sloping Soil Lysimeter

    NASA Astrophysics Data System (ADS)

    Kim, M.; Pangle, L. A.; Cardoso, C.; Lora, M.; Wang, Y.; Harman, C. J.; Troch, P. A. A.

    2014-12-01

    Transit time distributions (TTD) are an efficient way of characterizing transport through the complex flow dynamics of a hydrologic system, and can serve as a basis for spatially-integrated solute transport modeling. Recently there has been progress in the development of a theory of time-variable TTDs that captures the effect of temporal variability in the timing of fluxes as well as changes in flow pathways. Furthermore, a new formulation of this theory allows the essential transport properties of a system to be parameterized by a physically meaningful time-variable probability distribution, the Ω function. This distribution determines how the age distribution of water in storage is sampled by the outflow. The form of the Ω function varies if the flow pathways change, but is not determined by the timing of fluxes (unlike the TTD). In this study, we use this theory to characterize transport by transient flows through a homogeneously packed 1 m3 sloping soil lysimeter. The transit time distribution associated with each of four irrigation periods (repeated daily for 24 days) are compared to examine the significance of changes in the Ω function due to variations in total storage, antecedent conditions, and precipitation intensity. We observe both the time-variable TTD and the Ω function experimentally by applying the PERTH method (Harman and Kim, 2014, GRL, 41, 1567-1575). The method allows us to observe multiple overlapping time-variable TTD in controlled experiments using only two conservative tracers. We hypothesize that both the TTD and the Ω function will vary in time, even in this small scale, because water will take different flow pathways depending on the initial state of the lysimeter and irrigation intensity. However, based on primarily modeling, we conjecture that major variability in the Ω function will be limited to a period during and immediately after each irrigation. We anticipate the Ω function is almost time-invariant (or scales simply with total storage) during the recession period because flow pathways are stable during this period. This is one of the first experimental studies of this type, and the results offer insights into solute transport in transient, variably-saturated systems.

  12. Cooperative Optimal Coordination for Distributed Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Tao; Wu, Di; Ren, Wei

    In this paper, we consider the optimal coordination problem for distributed energy resources (DERs) including distributed generators and energy storage devices. We propose an algorithm based on the push-sum and gradient method to optimally coordinate storage devices and distributed generators in a distributed manner. In the proposed algorithm, each DER only maintains a set of variables and updates them through information exchange with a few neighbors over a time-varying directed communication network. We show that the proposed distributed algorithm solves the optimal DER coordination problem if the time-varying directed communication network is uniformly jointly strongly connected, which is a mild condition on the connectivity of communication topologies. The proposed distributed algorithm is illustrated and validated by numerical simulations.
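
    The full algorithm couples push-sum with gradient steps; the sketch below shows only the push-sum consensus building block over a fixed directed ring (the time-varying network of the paper is simplified away), with the mixing matrix and local values as illustrative assumptions.

    ```python
    import numpy as np

    # column-stochastic mixing matrix for a 4-node directed ring:
    # node j sends equal shares to itself and to node (j+1) mod 4
    A = np.array([[0.5, 0.0, 0.0, 0.5],
                  [0.5, 0.5, 0.0, 0.0],
                  [0.0, 0.5, 0.5, 0.0],
                  [0.0, 0.0, 0.5, 0.5]])

    x0 = np.array([3.0, 7.0, 1.0, 9.0])   # local values (e.g. marginal costs)
    s = x0.copy()                          # push-sum numerator
    w = np.ones(4)                         # push-sum weight

    for _ in range(100):
        s = A @ s                          # push shares of the numerator
        w = A @ w                          # push shares of the weight
    z = s / w                              # converges to mean(x0) = 5.0 everywhere
    print(z)
    ```

    Column-stochasticity preserves the total of s and w, so the ratio at every node converges to the network-wide average even though no node ever knows the full network.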

  13. The Effect of Non-Normal Distributions on the Integrated Moving Average Model of Time-Series Analysis.

    ERIC Educational Resources Information Center

    Doerann-George, Judith

    The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations: normal,…

  14. Conditional optimal spacing in exponential distribution.

    PubMed

    Park, Sangun

    2006-12-01

    In this paper, we propose the conditional optimal spacing, defined as the optimal spacing after specifying a predetermined order statistic. If we specify a censoring time, then the optimal inspection times for grouped inspection can be determined from this conditional optimal spacing. We take the exponential distribution as an example and provide a simple method of finding the conditional optimal spacing.

  15. Comparison of the effectiveness of some common animal data scaling techniques in estimating human radiation dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sparks, R.B.; Aydogan, B.

    In the development of new radiopharmaceuticals, animal studies are typically performed to get a first approximation of the expected radiation dose in humans. This study evaluates the performance of some commonly used data extrapolation techniques to predict residence times in humans using data collected from animals. Residence times were calculated using animal and human data, and distributions of ratios of the animal results to human results were constructed for each extrapolation method. Four methods using animal data to predict human residence times were examined: (1) using no extrapolation, (2) using relative organ mass extrapolation, (3) using physiological time extrapolation, and (4) using a combination of the mass and time methods. The residence time ratios were found to be log normally distributed for the nonextrapolated and extrapolated data sets. The use of relative organ mass extrapolation yielded no statistically significant change in the geometric mean or variance of the residence time ratios as compared to using no extrapolation. Physiologic time extrapolation yielded a statistically significant improvement (p < 0.01, paired t test) in the geometric mean of the residence time ratio from 0.5 to 0.8. Combining mass and time methods did not significantly improve the results of using time extrapolation alone. 63 refs., 4 figs., 3 tabs.

  16. Fatigue assessment of vibrating rail vehicle bogie components under non-Gaussian random excitations using power spectral densities

    NASA Astrophysics Data System (ADS)

    Wolfsteiner, Peter; Breuer, Werner

    2013-10-01

    The assessment of fatigue load under random vibrations is usually based on load spectra. Typically they are computed with counting methods (e.g. Rainflow) based on a time domain signal. Alternatively, methods are available (e.g. Dirlik) enabling the estimation of load spectra directly from power spectral densities (PSDs) of the corresponding time signals; knowledge of the time signal is then not necessary. These PSD-based methods have the enormous advantage that, if for example the signal to be assessed results from a finite-element-based vibration analysis, the simulation of PSDs in the frequency domain is by far faster than the simulation of time signals in the time domain. This is especially true for random vibrations with very long signals in the time domain. The disadvantage of the PSD-based simulation of vibrations, and also of the PSD-based load spectra estimation, is their limitation to Gaussian-distributed time signals. Deviations from this Gaussian distribution cause relevant deviations in the estimated load spectra. In these cases usually only computation-intensive time domain calculations produce accurate results. This paper presents a method dealing with non-Gaussian signals with real statistical properties that is still able to use the efficient PSD approach with its computation time advantages. Essentially it is based on a decomposition of the non-Gaussian signal into Gaussian-distributed parts. The PSDs of these rearranged signals are then used to perform the usual PSD analyses. In particular, detailed methods are described for the decomposition of time signals and the derivation of PSDs and cross power spectral densities (CPSDs) from multiple real measurements without using inaccurate standard procedures. Furthermore, the basic intention is to design a general and integrated method that is not just able to analyse a certain single load case for a small time interval, but to generate representative PSD and CPSD spectra replacing extensive measured loads in the time domain without losing the necessary accuracy for the fatigue load results. These long measurements may even represent the whole application range of the railway vehicle. The presented work demonstrates the application of this method to railway vehicle components subjected to random vibrations caused by the wheel-rail contact. Extensive measurements of axle box accelerations have been used to verify the proposed procedure for this class of railway vehicle applications. The linearity assumption is not a real limitation, because the structural vibrations caused by the random excitations are usually small for rail vehicle applications. The impact of nonlinearities is usually covered by separate nonlinear models and is only needed for the deterministic part of the loads. Linear vibration systems subjected to Gaussian vibrations respond with vibrations that also have a Gaussian distribution. A non-Gaussian distribution in the excitation signal produces a non-Gaussian response with statistical properties different from those of the excitation. A drawback is the fact that there is no simple mathematical relation between excitation and response concerning these deviations from the Gaussian distribution (see e.g. Ito calculus [6], which is usually not part of commercial codes!).
    There are a couple of well-established procedures for the prediction of fatigue load spectra from PSDs designed for Gaussian loads (see [4]); the question of the impact of non-Gaussian distributions on the fatigue load prediction has been studied for decades (see e.g. [3,4,11-13]) and is still the subject of ongoing research; e.g. [13] proposed a procedure capable of considering non-Gaussian broadband loads. It is based on knowledge of the response PSD and some statistical data defining the non-Gaussian character of the underlying time signal. As already described above, these statistical data are usually not available for a PSD vibration response that has been calculated in the frequency domain. In summary, the highly non-Gaussian excitations on railway vehicles caused by the wheel-rail contact mean that the fast PSD analysis in the frequency domain cannot simply be combined with PSD-based load spectra prediction methods.
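
    The PSD-based load-spectra estimators referenced above reduce to evaluating a closed-form rainflow-range density from a handful of spectral moments. The sketch below follows the commonly cited Dirlik formula as an illustration of that Gaussian-only baseline (the PSD shape is an arbitrary assumption); it is not the decomposition method proposed in the paper.

    ```python
    import numpy as np

    def _trapz(y, x):
        """Trapezoidal integral (avoids NumPy version differences)."""
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

    def spectral_moments(f, G):
        """Spectral moments m_k = integral of f^k G(f) df, one-sided PSD."""
        return {k: _trapz(f ** k * G, f) for k in (0, 1, 2, 4)}

    def dirlik_pdf(S, m):
        """Dirlik (1985) closed-form rainflow range density from m0..m4."""
        m0, m1, m2, m4 = m[0], m[1], m[2], m[4]
        Z = S / (2.0 * np.sqrt(m0))
        gam = m2 / np.sqrt(m0 * m4)               # irregularity factor
        xm = (m1 / m0) * np.sqrt(m2 / m4)         # mean-frequency parameter
        D1 = 2.0 * (xm - gam ** 2) / (1.0 + gam ** 2)
        R = (gam - xm - D1 ** 2) / (1.0 - gam - D1 + D1 ** 2)
        D2 = (1.0 - gam - D1 + D1 ** 2) / (1.0 - R)
        D3 = 1.0 - D1 - D2
        Q = 1.25 * (gam - D3 - D2 * R) / D1
        return (D1 / Q * np.exp(-Z / Q)
                + D2 * Z / R ** 2 * np.exp(-Z ** 2 / (2.0 * R ** 2))
                + D3 * Z * np.exp(-Z ** 2 / 2.0)) / (2.0 * np.sqrt(m0))

    # hypothetical broadband one-sided PSD of a response quantity
    f = np.linspace(0.1, 100.0, 2000)
    G = 1e-3 / (1.0 + (f / 30.0) ** 4)
    m = spectral_moments(f, G)
    S = np.linspace(1e-3, 8.0 * np.sqrt(m[0]), 400)
    p = dirlik_pdf(S, m)   # load spectrum follows by scaling with the peak rate
    ```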

  17. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampère equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrie, Michael; Shadwick, B. A.

    2016-01-04

    Here, we present a time-implicit numerical method to solve the relativistic Vlasov–Ampère system of equations on a two-dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions while keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for relativistic Landau damping, for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently of the shape of the distribution function, the relativistic treatment features collective behaviors that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.

  18. A time-implicit numerical method and benchmarks for the relativistic Vlasov–Ampère equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carrié, Michael, E-mail: mcarrie2@unl.edu; Shadwick, B. A., E-mail: shadwick@mailaps.org

    2016-01-15

    We present a time-implicit numerical method to solve the relativistic Vlasov–Ampère system of equations on a two-dimensional phase space grid. The time-splitting algorithm we use allows the generalization of the work presented here to higher dimensions while keeping the linear aspect of the resulting discrete set of equations. The implicit method is benchmarked against linear theory results for relativistic Landau damping, for which analytical expressions using the Maxwell-Jüttner distribution function are derived. We note that, independently of the shape of the distribution function, the relativistic treatment features collective behaviours that do not exist in the nonrelativistic case. The numerical study of the relativistic two-stream instability completes the set of benchmarking tests.

  19. An estimation of distribution method for infrared target detection based on Copulas

    NASA Astrophysics Data System (ADS)

    Wang, Shuo; Zhang, Yiqun

    2015-10-01

    Track-before-detect (TBD) based target detection involves a hypothesis test of merit functions which measure each track as a possible target track. Its accuracy depends on the precision of the distribution of merit functions, which determines the threshold for the test. Generally, merit functions are regarded as Gaussian, and on this basis the distribution is estimated, which is true for most methods such as multiple hypothesis tracking (MHT). However, merit functions for some other methods such as the dynamic programming algorithm (DPA) are non-Gaussian and cross-correlated. Since existing methods cannot reasonably measure the correlation, the exact distribution can hardly be estimated. If merit functions are assumed Gaussian and independent, the error between the actual distribution and its approximation may occasionally exceed 30 percent, and it diverges under propagation. Hence, in this paper, we propose a novel estimation of distribution method based on Copulas, by which the distribution can be estimated precisely, with an error of less than 1 percent and no propagation. Moreover, the estimation merely depends on the form of the merit functions and the structure of the tracking algorithm, and is invariant to measurements. Thus, the distribution can be estimated in advance, greatly reducing the demand for real-time calculation of distribution functions.
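
    As a minimal illustration of the copula idea (not the paper's estimator, which the record does not reproduce): a Gaussian copula can capture the cross-correlation of non-Gaussian merit functions by correlating their normal scores; the lognormal margins below are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import stats

    def gaussian_copula_corr(u, v):
        """Correlation of the Gaussian copula fitted to samples (u, v):
        rank-transform each margin to uniforms, map through the normal
        quantile function, then correlate the normal scores."""
        n = len(u)
        zu = stats.norm.ppf(stats.rankdata(u) / (n + 1.0))
        zv = stats.norm.ppf(stats.rankdata(v) / (n + 1.0))
        return float(np.corrcoef(zu, zv)[0, 1])

    # two correlated, skewed (non-Gaussian) merit-function samples
    rng = np.random.default_rng(3)
    g = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=5000)
    u, v = np.exp(g[:, 0]), np.exp(g[:, 1])      # lognormal margins
    print(gaussian_copula_corr(u, v))            # ~0.6, recovered despite skew
    ```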

  20. Method and system for spatial data input, manipulation and distribution via an adaptive wireless transceiver

    NASA Technical Reports Server (NTRS)

    Wang, Ray (Inventor)

    2009-01-01

    A method and system for spatial data input, manipulation, and distribution via an adaptive wireless transceiver. The method and system include a wireless transceiver for automatically and adaptively controlling wireless transmissions using a Waveform-DNA method. The wireless transceiver can operate simultaneously over both short and long distances. The wireless transceiver is automatically adaptive, and wireless devices can send and receive wireless digital and analog data from various sources rapidly, in real-time, via available networks and network services.

  1. A novel method for 3D measurement of RFID multi-tag network based on matching vision and wavelet

    NASA Astrophysics Data System (ADS)

    Zhuang, Xiao; Yu, Xiaolei; Zhao, Zhimin; Wang, Donghua; Zhang, Wenjie; Liu, Zhenlu; Lu, Dongsheng; Dong, Dingbang

    2018-07-01

    In the field of radio frequency identification (RFID), the three-dimensional (3D) distribution of RFID multi-tag networks has a significant impact on their reading performance. At the same time, in order to realize anti-collision of RFID multi-tag networks in practical engineering applications, the 3D distribution of RFID multi-tag networks must be measured. In this paper, a novel method for the 3D measurement of RFID multi-tag networks is proposed. A dual-CCD system (vertical and horizontal cameras) is used to obtain images of RFID multi-tag networks from different angles. Then, the wavelet threshold denoising method is used to remove noise in the obtained images. The template matching method is used to determine the two-dimensional coordinates and the vertical coordinate of each tag, from which the 3D coordinates of each tag are obtained. Finally, a model of the nonlinear relation between the 3D coordinate distribution of the RFID multi-tag network and the corresponding reading distance is established using the wavelet neural network. The experimental results show that the average prediction relative error is 0.71% and the time cost is 2.17 s. Both values are smaller than those of the particle swarm optimization neural network and the genetic algorithm-back propagation neural network; the time cost of the wavelet neural network is about 1% of that of the other two methods. The proposed method thus has a smaller relative error and can improve the real-time performance of RFID multi-tag networks and the overall dynamic performance of multi-tag networks.

  2. Comparing the structure of an emerging market with a mature one under global perturbation

    NASA Astrophysics Data System (ADS)

    Namaki, A.; Jafari, G. R.; Raei, R.

    2011-09-01

    In this paper we investigate the Tehran stock exchange (TSE) and the Dow Jones Industrial Average (DJIA) in terms of perturbed correlation matrices. There are two methods for perturbing a stock market, namely local and global perturbation. In the local method, we replace a correlation coefficient of the cross-correlation matrix with one calculated from two Gaussian-distributed time series, whereas in the global method, we reconstruct the correlation matrix after replacing the original return series with Gaussian-distributed time series. The local perturbation is purely a technical exercise. We analyze these markets through two statistical approaches, random matrix theory (RMT) and the correlation coefficient distribution. Using RMT, we find that the largest eigenvalue is an influence common to all stocks, and that this eigenvalue peaks during financial shocks. We find that a few highly correlated stocks provide the essential robustness of the stock market, but by replacing their return time series with Gaussian-distributed time series, the mean values of the correlation coefficients, the largest eigenvalues of the stock markets, and the fraction of eigenvalues deviating from the RMT prediction all fall sharply in both markets. By comparing the two markets, we can see that the DJIA is more sensitive to global perturbations. These findings are crucial for risk management and portfolio selection.
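
    A compact sketch of the RMT test used in such studies, run on synthetic data (the TSE/DJIA inputs are not reproduced here): eigenvalues of the return-correlation matrix are compared with the Marchenko-Pastur bounds expected for purely random returns.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N, T = 50, 1000                                  # stocks, time points
    Q = T / N
    lam_minus = (1.0 - np.sqrt(1.0 / Q)) ** 2        # Marchenko-Pastur bounds
    lam_plus = (1.0 + np.sqrt(1.0 / Q)) ** 2

    market = rng.normal(size=T)                      # common "market mode"
    returns = 0.3 * market[None, :] + rng.normal(size=(N, T))
    returns = (returns - returns.mean(1, keepdims=True)) / returns.std(1, keepdims=True)
    C = returns @ returns.T / T                      # sample correlation matrix
    eig = np.linalg.eigvalsh(C)

    print("largest eigenvalue:", eig[-1])            # far above lam_plus
    print("fraction outside RMT band:",
          np.mean((eig < lam_minus) | (eig > lam_plus)))
    ```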

  3. An alternative method for determining particle-size distribution of forest road aggregate and soil with large-sized particles

    Treesearch

    Hakjun Rhee; Randy B. Foltz; James L. Fridley; Finn Krogstad; Deborah S. Page-Dumroese

    2014-01-01

    Measurement of particle-size distribution (PSD) of soil with large-sized particles (e.g., 25.4 mm diameter) requires a large sample and numerous particle-size analyses (PSAs). A new method is needed that would reduce time, effort, and cost for PSAs of the soil and aggregate material with large-sized particles. We evaluated a nested method for sampling and PSA by...

  4. Exploring super-Gaussianity toward robust information-theoretical time delay estimation.

    PubMed

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee

    2013-03-01

    Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most existing systems are based on the generalized cross-correlation method, assuming Gaussianity of the source. It has been shown that the distribution of speech captured with far-field microphones is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate depending on the underlying assumption for the speech distribution, being also subject to multi-path reflections and competing background noise. This paper investigates the effect upon TDE of modeling the source signal with different speech-based distributions. An information-theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of a Gaussian-distributed source has been replaced by a generalized Gaussian distribution, which allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
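
    For reference, a minimal sketch of the generalized cross-correlation baseline the paper starts from, here with the common PHAT weighting; the information-theoretical estimator itself is not reproduced, and the sampling rate, noise level and Laplacian "speech-like" source are illustrative assumptions.

    ```python
    import numpy as np

    def gcc_phat(x, y, fs):
        """Estimate the delay of y relative to x (seconds) via GCC-PHAT."""
        n = len(x) + len(y)
        X, Y = np.fft.rfft(x, n=n), np.fft.rfft(y, n=n)
        R = Y * np.conj(X)
        R /= np.abs(R) + 1e-15                 # PHAT: keep phase only
        cc = np.fft.irfft(R, n=n)
        half = n // 2
        cc = np.concatenate((cc[-half:], cc[:half + 1]))  # center zero lag
        return (np.argmax(np.abs(cc)) - half) / fs

    fs = 16000
    rng = np.random.default_rng(5)
    s = rng.laplace(size=fs)                   # heavy-tailed, speech-like source
    delay = 23                                  # samples
    x = s + 0.05 * rng.normal(size=fs)
    y = np.concatenate((np.zeros(delay), s[:-delay])) + 0.05 * rng.normal(size=fs)
    print(gcc_phat(x, y, fs) * fs)             # ~ +23 samples
    ```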

  5. Time Series Remote Sensing in Monitoring the Spatio-Temporal Dynamics of Plant Invasions: A Study of Invasive Saltcedar (Tamarix Spp.)

    NASA Astrophysics Data System (ADS)

    Diao, Chunyuan

    In today's big data era, the increasing availability of satellite and airborne platforms at various spatial and temporal scales creates unprecedented opportunities to understand the complex and dynamic systems (e.g., plant invasion). Time series remote sensing is becoming more and more important to monitor the earth system dynamics and interactions. To date, most of the time series remote sensing studies have been conducted with the images acquired at coarse spatial scale, due to their relatively high temporal resolution. The construction of time series at fine spatial scale, however, is limited to few or discrete images acquired within or across years. The objective of this research is to advance the time series remote sensing at fine spatial scale, particularly to shift from discrete time series remote sensing to continuous time series remote sensing. The objective will be achieved through the following aims: 1) Advance intra-annual time series remote sensing under the pure-pixel assumption; 2) Advance intra-annual time series remote sensing under the mixed-pixel assumption; 3) Advance inter-annual time series remote sensing in monitoring the land surface dynamics; and 4) Advance the species distribution model with time series remote sensing. Taking invasive saltcedar as an example, four methods (i.e., phenological time series remote sensing model, temporal partial unmixing method, multiyear spectral angle clustering model, and time series remote sensing-based spatially explicit species distribution model) were developed to achieve the objectives. Results indicated that the phenological time series remote sensing model could effectively map saltcedar distributions through characterizing the seasonal phenological dynamics of plant species throughout the year. The proposed temporal partial unmixing method, compared to conventional unmixing methods, could more accurately estimate saltcedar abundance within a pixel by exploiting the adequate temporal signatures of saltcedar. The multiyear spectral angle clustering model could guide the selection of the most representative remotely sensed image for repetitive saltcedar mapping over space and time. Through incorporating spatial autocorrelation, the species distribution model developed in the study could identify the suitable habitats of saltcedar at a fine spatial scale and locate appropriate areas at high risk of saltcedar infestation. Among 10 environmental variables, the distance to the river and the phenological attributes summarized by the time series remote sensing were regarded as the most important. These methods developed in the study provide new perspectives on how the continuous time series can be leveraged under various conditions to investigate the plant invasion dynamics.

  6. A Distributed Algorithm for Economic Dispatch Over Time-Varying Directed Networks With Delays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Tao; Lu, Jie; Wu, Di

    In power system operation, the economic dispatch problem (EDP) is designed to minimize the total generation cost while meeting the demand and satisfying generator capacity limits. This paper proposes an algorithm based on the gradient-push method to solve the EDP in a distributed manner over communication networks potentially with time-varying topologies and communication delays. It has been shown that the proposed method is guaranteed to solve the EDP if the time-varying directed communication network is uniformly jointly strongly connected. Moreover, the proposed algorithm is also able to handle arbitrarily large but bounded time delays on communication links. Numerical simulations are used to illustrate and validate the proposed algorithm.

  7. Radar modulation classification using time-frequency representation and nonlinear regression

    NASA Astrophysics Data System (ADS)

    De Luigi, Christophe; Arques, Pierre-Yves; Lopez, Jean-Marc; Moreau, Eric

    1999-09-01

    In the naval electronic environment, pulses emitted by radars are collected by ESM receivers. For most of them, the intrapulse signal is modulated by a particular law. To aid the classical identification process, classification and estimation of this modulation law are applied to the intrapulse signal measurements. To estimate the time-varying frequency of a signal corrupted by additive noise with good accuracy, one method has been chosen. This method consists of calculating the Wigner distribution; the instantaneous frequency is then estimated from the peak location of the distribution. Bias and variance of the estimator are evaluated by computer simulations. In an estimated sequence of frequencies, we assume the presence of both falsely and correctly estimated values; the errors are assumed to be Gaussian distributed. A robust nonlinear regression method based on the Levenberg-Marquardt algorithm is then applied to these estimated frequencies using a maximum likelihood estimator. The performance of the method is tested using various modulation laws and different signal-to-noise ratios.
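
    A simplified sketch of the estimate-then-regress pipeline described above: for brevity it uses a short-time Fourier spectrogram in place of the Wigner distribution and an ordinary polynomial fit in place of the Levenberg-Marquardt estimator; the chirp parameters, sample rate and noise level are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import signal

    fs = 1e6                                   # sample rate, Hz (assumed)
    t = np.arange(4096) / fs
    f0, k = 50e3, 5e7                          # linear-FM law IF(t) = f0 + k*t (assumed)
    x = np.cos(2 * np.pi * (f0 * t + 0.5 * k * t**2))
    x += 0.3 * np.random.default_rng(6).normal(size=t.size)

    # time-frequency representation; the ridge (peak) gives IF estimates
    f, tt, S = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=192)
    f_hat = f[np.argmax(S, axis=0)]

    # regression of the modulation law on the estimated frequencies;
    # a linear FM law needs only a degree-1 fit
    k_hat, f0_hat = np.polyfit(tt, f_hat, 1)
    print(f0_hat, k_hat)                       # roughly 5e4 and 5e7
    ```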

  8. Battery charging control methods, electric vehicle charging methods, battery charging apparatuses and rechargeable battery systems

    DOEpatents

    Tuffner, Francis K [Richland, WA; Kintner-Meyer, Michael C. W. [Richland, WA; Hammerstrom, Donald J [West Richland, WA; Pratt, Richard M [Richland, WA

    2012-05-22

    Battery charging control methods, electric vehicle charging methods, battery charging apparatuses and rechargeable battery systems. According to one aspect, a battery charging control method includes accessing information regarding a presence of at least one of a surplus and a deficiency of electrical energy upon an electrical power distribution system at a plurality of different moments in time, and using the information, controlling an adjustment of an amount of the electrical energy provided from the electrical power distribution system to a rechargeable battery to charge the rechargeable battery.

  9. Solving the transient water age distribution problem in environmental flow systems

    NASA Astrophysics Data System (ADS)

    Cornaton, F. J.

    2011-12-01

    The temporal evolution of groundwater age and its frequency distributions can display important changes as flow regimes vary due to natural changes in climate and hydrologic conditions and/or to human-induced pressures on the resource to satisfy water demand. Groundwater age being nowadays frequently used to investigate reservoir properties and recharge conditions, special attention needs to be paid to the way this property is characterized, be it using isotopic methods, multiple tracer techniques, or mathematical modelling. Steady-state age frequency distributions can be modelled using standard numerical techniques, since the general balance equation describing age transport under steady-state flow conditions is exactly equivalent to a standard advection-dispersion equation. The time-dependent problem is however described by an extended transport operator that incorporates an additional coordinate for water age. The consequence is that numerical solutions can hardly be achieved, especially for real 3-D applications over the large time periods of interest. The absence of any robust method has thus left us in the quantitative hydrogeology community dodging the issue of transience. Novel algorithms for solving the age distribution problem under time-varying flow regimes are presented and, for some specific configurations, extended to the problem of generalized component exposure time. The solution strategy is based on the combination of the Laplace transform technique, applied to the age (or exposure time) coordinate, with standard time-marching schemes. The method is well-suited for groundwater problems with possible density-dependency of fluid flow (e.g. coupled flow and heat/salt concentration problems), but is also relevant to the homogeneous flow (compressible case) problem. The approach is validated using 1-D analytical solutions and exercised on some demonstration problems that are relevant to topical issues in groundwater age, including analysis of transfer times in the vadose zone, aquifer-aquitard interactions and the induction of transient age distributions when a well pump is started.

  10. Regularity of a renewal process estimated from binary data.

    PubMed

    Rice, John D; Strawderman, Robert L; Johnson, Brent A

    2017-10-09

    Assessment of the regularity of a sequence of events over time is important for clinical decision-making as well as informing public health policy. Our motivating example involves determining the effect of an intervention on the regularity of HIV self-testing behavior among high-risk individuals when exact self-testing times are not recorded. Assuming that these unobserved testing times follow a renewal process, the goals of this work are to develop suitable methods for estimating its distributional parameters when only the presence or absence of at least one event per subject in each of several observation windows is recorded. We propose two approaches to estimation and inference: a likelihood-based discrete survival model using only time to first event; and a potentially more efficient quasi-likelihood approach based on the forward recurrence time distribution using all available data. Regularity is quantified and estimated by the coefficient of variation (CV) of the interevent time distribution. Focusing on the gamma renewal process, where the shape parameter of the corresponding interevent time distribution has a monotone relationship with its CV, we conduct simulation studies to evaluate the performance of the proposed methods. We then apply them to our motivating example, concluding that the use of text message reminders significantly improves the regularity of self-testing, but not its frequency. A discussion on interesting directions for further research is provided. © 2017, The International Biometric Society.
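    For the gamma renewal process the paper focuses on, the monotone link between shape and regularity is simply CV = 1/sqrt(shape). A small simulation sketch under the simplifying assumption of fully observed interevent times (unlike the binary data the paper handles):

```python
import numpy as np

rng = np.random.default_rng(1)
shape, scale = 4.0, 7.0                       # gamma interevent times (e.g. days)
gaps = rng.gamma(shape, scale, size=10_000)   # simulated self-testing gaps

cv_empirical = gaps.std() / gaps.mean()
cv_theoretical = 1.0 / np.sqrt(shape)         # CV of a gamma(shape) variable
print(cv_empirical, cv_theoretical)           # both near 0.5: fairly regular
```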

  11. On the efficacy of procedures to normalize Ex-Gaussian distributions.

    PubMed

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2014-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed. Hence, it is acknowledged by many that the normality assumption is often not met. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier-elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than the elimination methods at normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are. Specifically, the transformation with parameter lambda = -1 leads to the best results.
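    A sketch of that comparison under stated assumptions (one simulated Ex-Gaussian sample, a 3-SD elimination rule, the reciprocal transform that lambda = -1 corresponds to, and Shapiro-Wilk as the normality check):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rt = rng.normal(400, 40, 2000) + rng.exponential(150, 2000)  # Ex-Gaussian RTs (ms)

print(stats.shapiro(rt[:500]).pvalue)             # raw data: strongly non-normal
trimmed = rt[rt < rt.mean() + 3 * rt.std()]       # outlier elimination
print(stats.shapiro(trimmed[:500]).pvalue)
transformed = -1.0 / rt                           # Box-Cox-style lambda = -1
print(stats.shapiro(transformed[:500]).pvalue)    # typically closest to normal
```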

  12. Online Optimization Method for Operation of Generators in a Micro Grid

    NASA Astrophysics Data System (ADS)

    Hayashi, Yasuhiro; Miyamoto, Hideki; Matsuki, Junya; Iizuka, Toshio; Azuma, Hitoshi

    Recently, many studies and developments concerning distributed generators such as photovoltaic generation systems, wind turbine generation systems and fuel cells have been carried out against the background of global environmental issues and deregulation of the electricity market, and the technology of these distributed generators has progressed. In particular, the micro grid, which consists of several distributed generators, loads and a storage battery, is expected to be one of the new operation systems for distributed generation. However, since precipitous load fluctuations occur in a micro grid because its capacity is smaller than that of a conventional power system, high-accuracy load forecasting and a control scheme to balance supply and demand are needed. Namely, it is necessary to improve the precision of operation in the micro grid by observing load fluctuations and correcting the start-stop schedule and output of generators online. But it is not easy to determine the operation schedule of each generator in a short time, because determining the start-up, shut-down and output of each generator in a micro grid is a mixed integer programming problem. In this paper, the authors propose an online optimization method for the optimal operation schedule of generators in a micro grid. The proposed method is based on an enumeration method and particle swarm optimization (PSO). In the proposed method, after picking up all unit commitment patterns of each generator that satisfy the minimum up-time and minimum down-time constraints by using the enumeration method, the optimal schedule and output of the generators are determined under the other operational constraints by using PSO. A numerical simulation is carried out for a micro grid model with five generators and a photovoltaic generation system in order to examine the validity of the proposed method.
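    The enumeration step can be sketched as follows (a simplified single-generator illustration, not the authors' code): list every on/off pattern over the horizon and keep those whose interior run lengths respect the minimum up/down times; PSO would then optimize the outputs over the feasible patterns.

```python
from itertools import groupby, product

def feasible_patterns(horizon, min_up, min_down):
    """All on/off schedules whose every interior 'on' run is >= min_up
    and every interior 'off' run is >= min_down (boundary runs exempt)."""
    keep = []
    for pattern in product((0, 1), repeat=horizon):
        runs = [(state, sum(1 for _ in grp)) for state, grp in groupby(pattern)]
        if all(length >= (min_up if state else min_down)
               for state, length in runs[1:-1]):
            keep.append(pattern)
    return keep

patterns = feasible_patterns(horizon=6, min_up=2, min_down=2)
print(len(patterns), "feasible schedules out of", 2 ** 6)
```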

  13. Inverse Planning Approach for 3-D MRI-Based Pulse-Dose Rate Intracavitary Brachytherapy in Cervix Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chajon, Enrique; Dumas, Isabelle; Touleimat, Mahmoud B.Sc.

    2007-11-01

    Purpose: The purpose of this study was to evaluate the inverse planning simulated annealing (IPSA) software for the optimization of dose distribution in patients with cervix carcinoma treated with MRI-based pulsed-dose rate intracavitary brachytherapy. Methods and Materials: Thirty patients treated with a technique using a customized vaginal mold were selected. Dose-volume parameters obtained using the IPSA method were compared with the classic manual optimization method (MOM). Target volumes and organs at risk were delineated according to the Gynecological Brachytherapy Group/European Society for Therapeutic Radiology and Oncology recommendations. Because the pulsed-dose rate program was based on clinical experience with low dose rate, dwell time values were required to be as homogeneous as possible. To achieve this goal, different modifications of the IPSA program were applied. Results: The first dose distribution calculated by the IPSA algorithm proposed a heterogeneous distribution of dwell time positions. The mean D90, D100, and V100 calculated with both methods did not differ significantly when the constraints were applied. For the bladder, doses calculated at the ICRU reference point derived from the MOM differed significantly from the doses calculated by the IPSA method (mean, 58.4 vs. 55 Gy, respectively; p = 0.0001). For the rectum, the doses calculated at the ICRU reference point were also significantly lower with the IPSA method. Conclusions: The inverse planning method provided fast and automatic solutions for the optimization of dose distribution. However, the straightforward use of IPSA generated significant heterogeneity in dwell time values. Caution is therefore recommended in the use of inverse optimization tools, together with clinically relevant study of new dosimetric rules.

  14. On the statistical properties of viral misinformation in online social media

    NASA Astrophysics Data System (ADS)

    Bessi, Alessandro

    2017-03-01

    The massive diffusion of online social media allows for the rapid and uncontrolled spreading of conspiracy theories, hoaxes, unsubstantiated claims, and false news. Such an impressive amount of misinformation can influence policy preferences and encourage behaviors strongly divergent from recommended practices. In this paper, we study the statistical properties of viral misinformation in online social media. By means of methods belonging to Extreme Value Theory, we show that the number of extremely viral posts over time follows a homogeneous Poisson process, and that the interarrival times between such posts are independent and identically distributed, following an exponential distribution. Moreover, we characterize the uncertainty around the rate parameter of the Poisson process through Bayesian methods. Finally, we are able to derive the predictive posterior probability distribution of the number of posts exceeding a certain threshold of shares over a finite interval of time.
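    A sketch of the Bayesian step under a conjugate Gamma prior on the Poisson rate (the prior here is my choice, not the paper's): the posterior is again Gamma, and the posterior predictive for the number of viral posts in a future window is negative binomial.

```python
from scipy import stats

a, b = 1.0, 1.0              # Gamma(a, b) prior on the Poisson rate (posts/day)
k, T = 37, 365.0             # 37 extremely viral posts observed over a year

posterior = stats.gamma(a + k, scale=1.0 / (b + T))   # conjugate update
print(posterior.mean(), posterior.interval(0.95))     # rate and credible interval

w = 30.0                     # look-ahead window (days)
r, p = a + k, (b + T) / (b + T + w)                   # Gamma-Poisson mixture
print(1 - stats.nbinom.cdf(4, r, p))  # P(more than 4 viral posts in the window)
```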

  15. Sequential structural damage diagnosis algorithm using a change point detection method

    NASA Astrophysics Data System (ADS)

    Noh, H.; Rajagopal, R.; Kiremidjian, A. S.

    2013-11-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected, using the maximum likelihood and Bayesian methods. We also applied an approximate method to reduce the computational load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions were explored, and the algorithm was able to identify damage with low false alarm rates, particularly when it used multidimensional damage-sensitive features and the post-damage feature distribution was known. For unknown feature distribution cases, the post-damage distribution was consistently estimated, and the detection delays were only a few time steps longer than the delays from the general method that assumes the post-damage feature distribution is known. We confirmed that the Bayesian method is particularly efficient at declaring damage with minimal memory requirements, but the maximum likelihood method provides an insightful heuristic approach.
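    A minimal sketch of the known-distributions case using a CUSUM-style sequential test on the log-likelihood ratio (Gaussian features and all parameter values assumed here for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pre = stats.norm(0.0, 1.0)            # known pre-damage feature distribution
post = stats.norm(1.0, 1.0)           # known (or estimated) post-damage distribution

x = np.concatenate([pre.rvs(200, random_state=rng),
                    post.rvs(100, random_state=rng)])   # damage occurs at t = 200

s, h = 0.0, 10.0                      # CUSUM statistic and decision threshold
for t, xt in enumerate(x):
    s = max(0.0, s + post.logpdf(xt) - pre.logpdf(xt))
    if s > h:
        print("damage declared at time step", t)        # short delay after 200
        break
```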

  16. Dynamic phase differences based on quantitative phase imaging for the objective evaluation of cell behavior.

    PubMed

    Krizova, Aneta; Collakova, Jana; Dostal, Zbynek; Kvasnica, Lukas; Uhlirova, Hana; Zikmund, Tomas; Vesely, Pavel; Chmelik, Radim

    2015-01-01

    Quantitative phase imaging (QPI) brought innovation to noninvasive observation of live cell dynamics seen as cell behavior. Unlike the Zernike phase contrast or differential interference contrast, QPI provides quantitative information about cell dry mass distribution. We used such data for objective evaluation of live cell behavioral dynamics by the advanced method of dynamic phase differences (DPDs). The DPDs method is considered a rational instrument offered by QPI. By subtracting the antecedent from the subsequent image in a time-lapse series, only the changes in mass distribution in the cell are detected. The result is either visualized as a two-dimensional color-coded projection of these two states of the cell or as a time dependence of changes quantified in picograms. Then in a series of time-lapse recordings, the chain of cell mass distribution changes that would otherwise escape attention is revealed. Consequently, new salient features of live cell behavior should emerge. Construction of the DPDs method and results exhibiting the approach are presented. Advantage of the DPDs application is demonstrated on cells exposed to an osmotic challenge. For time-lapse acquisition of quantitative phase images, the recently developed coherence-controlled holographic microscope was employed.

  17. Dynamic phase differences based on quantitative phase imaging for the objective evaluation of cell behavior

    NASA Astrophysics Data System (ADS)

    Krizova, Aneta; Collakova, Jana; Dostal, Zbynek; Kvasnica, Lukas; Uhlirova, Hana; Zikmund, Tomas; Vesely, Pavel; Chmelik, Radim

    2015-11-01

    Quantitative phase imaging (QPI) brought innovation to noninvasive observation of live cell dynamics seen as cell behavior. Unlike the Zernike phase contrast or differential interference contrast, QPI provides quantitative information about cell dry mass distribution. We used such data for objective evaluation of live cell behavioral dynamics by the advanced method of dynamic phase differences (DPDs). The DPDs method is considered a rational instrument offered by QPI. By subtracting the antecedent from the subsequent image in a time-lapse series, only the changes in mass distribution in the cell are detected. The result is either visualized as a two-dimensional color-coded projection of these two states of the cell or as a time dependence of changes quantified in picograms. Then in a series of time-lapse recordings, the chain of cell mass distribution changes that would otherwise escape attention is revealed. Consequently, new salient features of live cell behavior should emerge. Construction of the DPDs method and results exhibiting the approach are presented. Advantage of the DPDs application is demonstrated on cells exposed to an osmotic challenge. For time-lapse acquisition of quantitative phase images, the recently developed coherence-controlled holographic microscope was employed.
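    The core DPD operation reduces to differencing consecutive frames of a time-lapse stack; a sketch assuming the quantitative phase images have already been converted to dry-mass maps in picograms per pixel:

```python
import numpy as np

def dynamic_phase_differences(stack):
    """stack: (T, H, W) array of dry-mass maps (pg per pixel).
    Returns difference images and the total mass turnover per step (pg)."""
    diffs = np.diff(stack, axis=0)               # subsequent minus antecedent
    turnover = np.abs(diffs).sum(axis=(1, 2))    # scalar change per time step
    return diffs, turnover

stack = np.random.default_rng(0).random((5, 64, 64))   # placeholder time-lapse
diffs, turnover = dynamic_phase_differences(stack)
print(diffs.shape, turnover.round(1))
```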

  18. Restraint Method of Voltage Total Harmonic Distortion in Distribution Network by Power Conditioner Systems using Measured Data from IT Switches

    NASA Astrophysics Data System (ADS)

    Kawasaki, Shoji; Shimoda, Kazuki; Tanaka, Motohiro; Taoka, Hisao; Matsuki, Junya; Hayashi, Yasuhiro

    Recently, the amount of distributed generation (DG), such as photovoltaic systems and wind power generation systems, installed in distribution systems has been increasing in order to reduce effects on the environment. However, harmonic problems in the distribution system are a concern against the background of the increasing connection of DGs through inverters and the spread of power electronics equipment. In this paper, the authors propose a method to restrain the voltage total harmonic distortion (THD) in a whole distribution network by active filter (AF) operation of plural power conditioner systems (PCSs). Moreover, the authors propose a determination method for the optimal gain of the AF operation so as to minimize the maximum value of voltage THD in the distribution network by real-time feedback control with measured data from the information technology (IT) switches. In order to verify the validity of the proposed method, numerical calculations are carried out using an analytical model of a distribution network interconnected with DGs through PCSs.

  19. An improved approximate-Bayesian model-choice method for estimating shared evolutionary history

    PubMed Central

    2014-01-01

    Background: To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results: By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions: The results demonstrate that the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
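    The Dirichlet-process prior over divergence models can be sampled with the Chinese restaurant process; a sketch (the concentration parameter is an arbitrary choice) of how taxa are grouped into shared divergence events under this prior:

```python
import random

def crp_partition(n_taxa, alpha, seed=42):
    """One draw from a Dirichlet-process (Chinese restaurant process)
    prior: a partition of taxa into shared divergence events."""
    rng = random.Random(seed)
    events = []                               # each event = list of taxon indices
    for taxon in range(n_taxa):
        weights = [len(ev) for ev in events] + [alpha]
        pick = rng.choices(range(len(events) + 1), weights=weights)[0]
        if pick == len(events):
            events.append([taxon])            # taxon founds a new divergence event
        else:
            events[pick].append(taxon)
    return events

print(crp_partition(n_taxa=9, alpha=1.5))     # e.g. a handful of clustered events
```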

  20. Automated segmentation of linear time-frequency representations of marine-mammal sounds.

    PubMed

    Dadouchi, Florian; Gervaise, Cedric; Ioana, Cornel; Huillery, Julien; Mars, Jérôme I

    2013-09-01

    Many marine mammals produce highly nonlinear frequency modulations. Determining the time-frequency support of these sounds offers various applications, including recognition, localization, and density estimation. This study introduces an automated spectrogram segmentation method with few parameters that is based on a theoretical probabilistic framework. In the first step, the background noise in the spectrogram is fitted with a Chi-squared distribution and thresholded using a Neyman-Pearson approach. In the second step, the number of false detections in time-frequency regions is modeled as a binomial distribution, and then, through a Neyman-Pearson strategy, the time-frequency bins are gathered into regions of interest. The proposed method is validated on real data comprising large sequences of whistles from common dolphins, collected in the Bay of Biscay (France). The proposed method is also compared with two alternative approaches: the first is smoothing and thresholding of the spectrogram; the second is thresholding of the spectrogram followed by the use of morphological operators to gather the time-frequency bins and to remove false positives. The proposed method is shown to increase the probability of detection for the same probability of false alarm.
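    A sketch of the first (per-bin) step under common simplifying assumptions: for Gaussian noise, spectrogram power bins follow a scaled Chi-squared distribution with 2 degrees of freedom, so a Neyman-Pearson threshold at false-alarm probability pfa is a Chi-squared quantile scaled by a (here, median-estimated) noise level.

```python
import numpy as np
from scipy import signal, stats

rng = np.random.default_rng(7)
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * (50 + 100 * t) * t) + rng.normal(0, 1, t.size)  # chirp + noise

f, tt, sxx = signal.spectrogram(x, fs=fs, nperseg=128)
noise_level = np.median(sxx) / np.log(2)    # robust mean noise power estimate
pfa = 1e-3                                  # per-bin false-alarm probability
threshold = noise_level * stats.chi2.ppf(1 - pfa, df=2) / 2
mask = sxx > threshold                      # detected time-frequency support
print(mask.mean())                          # fraction of bins flagged
```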

  1. Autonomous Decentralized Voltage Profile Control of Super Distributed Energy System using Multi-agent Technology

    NASA Astrophysics Data System (ADS)

    Tsuji, Takao; Hara, Ryoichi; Oyama, Tsutomu; Yasuda, Keiichiro

    A super distributed energy system is a future energy system in which a large part of its demand is fed by a huge number of distributed generators. At one time some nodes in the super distributed energy system behave as loads; at other times they behave as generators, since the characteristic of each node depends on the customers' decisions. In such a situation, it is very difficult to regulate the voltage profile over the system due to the complexity of the power flows. This paper proposes a novel control method for distributed generators that achieves autonomous decentralized voltage profile regulation by using multi-agent technology. The proposed multi-agent system employs two types of agent: a control agent and a mobile agent. Control agents generate or consume reactive power to regulate the voltage profile of neighboring nodes, and mobile agents transmit the information necessary for VQ control among the control agents. The proposed control method is tested through numerical simulations.

  2. Are anesthesia start and end times randomly distributed? The influence of electronic records.

    PubMed

    Deal, Litisha G; Nyland, Michael E; Gravenstein, Nikolaus; Tighe, Patrick

    2014-06-01

    To perform a frequency analysis of start minute digits (SMD) and end minute digits (EMD) taken from the electronic, computer-assisted, and manual anesthesia billing-record systems. Retrospective cross-sectional review. University medical center. This cross-sectional review was conducted on billing records from a single healthcare institution over a 15-month period. A total of 30,738 cases were analyzed. For each record, the start time and end time were recorded. Distributions of SMD and EMD were tested against the null hypothesis of a frequency distribution spread uniformly between zero and nine. SMD and EMD aggregate distributions each differed from uniformity (P < 0.0001). When stratified by type of anesthetic record, no differences were found between the recorded and expected uniform distribution patterns for electronic anesthesia records for start minute (P < 0.98) or end minute (P < 0.55). Manual and computer-assisted records maintained nonuniform distribution patterns for SMD and EMD (P < 0.0001 for each comparison). Comparison of cumulative distributions between SMD and EMD distributions suggested a significant difference between the two patterns (P < 0.0001). An electronic anesthesia record system, with automated time capture of events verified by the user, produces a more uniform distribution of billing times than do more traditional methods of entering billing times. Copyright © 2014 Elsevier Inc. All rights reserved.
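    The underlying digit-preference test is a Chi-squared goodness-of-fit of the ten minute-digit counts against a uniform expectation; a sketch on synthetic data with the rounding-to-0-and-5 bias typical of manually entered times:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# manual records: minute digits biased toward 0 and 5
digits = rng.choice(10, size=5000,
                    p=[.25, .05, .05, .05, .05, .25, .08, .08, .07, .07])
observed = np.bincount(digits, minlength=10)
print(stats.chisquare(observed))        # tiny p-value: the uniform null is rejected
```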

  3. Finite-time hybrid projective synchronization of the drive-response complex networks with distributed-delay via adaptive intermittent control

    NASA Astrophysics Data System (ADS)

    Cheng, Lin; Yang, Yongqing; Li, Li; Sui, Xin

    2018-06-01

    This paper studies the finite-time hybrid projective synchronization of the drive-response complex networks. In the model, general transmission delays and distributed delays are also considered. By designing the adaptive intermittent controllers, the response network can achieve hybrid projective synchronization with the drive system in finite time. Based on finite-time stability theory and several differential inequalities, some simple finite-time hybrid projective synchronization criteria are derived. Two numerical examples are given to illustrate the effectiveness of the proposed method.

  4. Improved first-order uncertainty method for water-quality modeling

    USGS Publications Warehouse

    Melching, C.S.; Anmangandla, S.

    1992-01-01

    Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output because of their simplicity. Each method has its drawbacks: for Monte Carlo simulation, mainly the computational time; for first-order analysis, mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, in which the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distributions of critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation, using less computer time (by two orders of magnitude), regardless of the probability distributions assumed for the uncertain model parameters.
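    A sketch of the baseline contrast, using the classical Streeter-Phelps critical deficit and plain central-value first-order propagation (the simple variant the advanced method improves on); all parameter values and the independence assumption are invented for illustration:

```python
import numpy as np

def critical_deficit(kd, ka, L0):
    """Maximum dissolved-oxygen deficit (mg/L) of the Streeter-Phelps sag curve."""
    t = np.linspace(0.0, 20.0, 2001)   # days
    D = kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t))
    return D.max()

mean = np.array([0.35, 0.70, 20.0])    # kd, ka (1/day), L0 (mg/L)
sd = np.array([0.03, 0.06, 2.0])

# first-order: var(Dc) ~= sum_i (dDc/dx_i * sd_i)^2, gradient taken numerically
eps = 1e-4
grad = np.array([(critical_deficit(*(mean + eps * np.eye(3)[i]))
                  - critical_deficit(*(mean - eps * np.eye(3)[i]))) / (2 * eps)
                 for i in range(3)])
print(critical_deficit(*mean), np.sqrt(np.sum((grad * sd) ** 2)))

# Monte Carlo reference (independent normal parameters assumed)
rng = np.random.default_rng(1)
samples = rng.normal(mean, sd, size=(5000, 3))
print(np.std([critical_deficit(*s) for s in samples]))
```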

  5. Empirical estimation of a distribution function with truncated and doubly interval-censored data and its application to AIDS studies.

    PubMed

    Sun, J

    1995-09-01

    In this paper we discuss the non-parametric estimation of a distribution function based on incomplete data for which the measurement origin of a survival time, or the date of enrollment in a study, is known only to belong to an interval. Also, the survival time of interest itself is observed from a truncated distribution and is known only to lie in an interval. To estimate the distribution function, a simple self-consistency algorithm, a generalization of Turnbull's (1976, Journal of the Royal Statistical Society, Series B 38, 290-295) self-consistency algorithm, is proposed. This method is then used to analyze two AIDS cohort studies, for which direct use of the EM algorithm (Dempster, Laird and Rubin, 1977, Journal of the Royal Statistical Society, Series B 39, 1-38), which is computationally complicated, has previously been the usual method of analysis.
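    For ordinary interval-censored data (without the truncation and double censoring this paper adds), Turnbull's self-consistency iteration has a compact form; a sketch over a fixed support grid:

```python
import numpy as np

def turnbull(intervals, support, n_iter=500):
    """Self-consistent NPMLE masses over `support`, given observation
    intervals [l, r] each known to contain the true event time."""
    A = np.array([[l <= s <= r for s in support] for l, r in intervals], float)
    p = np.full(len(support), 1.0 / len(support))
    for _ in range(n_iter):
        denom = A @ p                                   # P(interval) under current p
        p = p * (A.T @ (1.0 / denom)) / len(intervals)  # self-consistency update
    return p

support = np.arange(1, 11)                              # candidate event times
intervals = [(1, 3), (2, 5), (4, 4), (6, 9), (2, 8)]
print(dict(zip(support.tolist(), turnbull(intervals, support).round(3))))
```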

  6. Software Tools for Formal Specification and Verification of Distributed Real-Time Systems

    DTIC Science & Technology

    1994-07-29

    The goals of Phase 1 are to design in detail a toolkit environment based on formal methods for the specification and verification of distributed real-time systems, and to evaluate the design. The evaluation of the design includes investigation of both the capability and potential usefulness of the toolkit environment and the feasibility of its implementation.

  7. Cascaded clocks measurement and simulation findings

    NASA Technical Reports Server (NTRS)

    Chislow, Don; Zampetti, George

    1994-01-01

    This paper examines aspects of network synchronization distribution and the cascading of timing elements. Methods of timing distribution have become a much debated topic in standards forums and among network service providers, both domestically and internationally. Essentially, these concerns focus on the need to migrate existing network synchronization plans (and capabilities) to those required for the next generation of transport technologies, namely the Synchronous Digital Hierarchy (SDH), Synchronous Optical Network (SONET), and Asynchronous Transfer Mode (ATM). The particular choices of synchronization distribution network architecture are now being evaluated and are demonstrating that they can indeed have a profound effect on the overall service performance levels that will be delivered to the customer. The salient aspects of these concerns reduce to the following: (1) identifying that the devil is in the details of the timing element specifications and the distribution of timing information (i.e., small design choices can have a large performance impact); (2) developing a standardized method of performance verification that will yield unambiguous results; and (3) presenting those results. Specifically, this is done for two general cases: an ideal input, and a noisy input to a cascaded chain of slave clocks.

  8. Extinction time of a stochastic predator-prey model by the generalized cell mapping method

    NASA Astrophysics Data System (ADS)

    Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao

    2018-03-01

    The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.

  9. A new method for calculating ecological flow: Distribution flow method

    NASA Astrophysics Data System (ADS)

    Tan, Guangming; Yi, Ran; Chang, Jianbo; Shu, Caiwen; Yin, Zhi; Han, Shasha; Feng, Zhiyong; Lyu, Yiwei

    2018-04-01

    A distribution flow method (DFM), together with an ecological flow index and an evaluation grade standard, is proposed to study the ecological flow of rivers based on broadening kernel density estimation. The proposed DFM, index and grade standard are applied to the calculation of ecological flow in the middle reaches of the Yangtze River and compared with traditional hydrological methods for calculating ecological flow, a flow evaluation method, and the calculated fish ecological flow. Results show that the DFM considers the intra- and inter-annual variations in natural runoff, thereby reducing the influence of extreme flows and uneven flow distributions during the year. The method also satisfies the actual runoff demand of river ecosystems, demonstrates superiority over the traditional hydrological methods, and shows high space-time applicability and application value.
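    A sketch of the kernel-density idea the DFM builds on (the "broadening" refinement, the index and the grade standard are not reproduced here): estimate the density of observed flows and read off a low quantile as an ecological-flow candidate. All numbers are invented:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
flows = rng.lognormal(mean=8.5, sigma=0.6, size=360)  # 30 years of monthly flows (m3/s)

kde = gaussian_kde(flows)                     # smooth estimate of the flow distribution
grid = np.linspace(flows.min(), flows.max(), 2000)
cdf = np.cumsum(kde(grid))
cdf /= cdf[-1]
eco_flow = grid[np.searchsorted(cdf, 0.10)]   # 10th-percentile flow as a candidate
print(round(eco_flow, 1))
```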

  10. Multidisciplinary design optimisation of a recurve bow based on applications of the autogenetic design theory and distributed computing

    NASA Astrophysics Data System (ADS)

    Fritzsche, Matthias; Kittel, Konstantin; Blankenburg, Alexander; Vajna, Sándor

    2012-08-01

    The focus of this paper is to present a method of multidisciplinary design optimisation based on the autogenetic design theory (ADT), whose methods are partially implemented in the optimisation software described here. The main thesis of the ADT is that biological evolution and the process of developing products are substantially similar, i.e. procedures from biological evolution can be transferred into product development. In order to fulfil requirements and boundary conditions of any kind (which may change at any time), both biological evolution and product development look for appropriate solution possibilities in a certain area and try to optimise those that are actually promising by varying parameters and combinations of these solutions. As the time necessary for multidisciplinary design optimisations is a critical aspect in product development, ways to distribute the optimisation process, making effective use of otherwise unused computing capacity, can reduce the optimisation time drastically. Finally, a practical example shows how ADT methods and distributed optimisation are applied to improve a product.

  11. Integrable Floquet dynamics, generalized exclusion processes and "fused" matrix ansatz

    NASA Astrophysics Data System (ADS)

    Vanicat, Matthieu

    2018-04-01

    We present a general method for constructing integrable stochastic processes, with two-step discrete time Floquet dynamics, from the transfer matrix formalism. The models can be interpreted as a discrete time parallel update. The method can be applied for both periodic and open boundary conditions. We also show how the stationary distribution can be built as a matrix product state. As an illustration we construct parallel discrete time dynamics associated with the R-matrix of the SSEP and of the ASEP, and provide the associated stationary distributions in a matrix product form. We use this general framework to introduce new integrable generalized exclusion processes, where a fixed number of particles is allowed on each lattice site in opposition to the (single particle) exclusion process models. They are constructed using the fusion procedure of R-matrices (and K-matrices for open boundary conditions) for the SSEP and ASEP. We develop a new method, that we named "fused" matrix ansatz, to build explicitly the stationary distribution in a matrix product form. We use this algebraic structure to compute physical observables such as the correlation functions and the mean particle current.

  12. Time series behaviour of the number of Air Asia passengers: A distributional approach

    NASA Astrophysics Data System (ADS)

    Asrah, Norhaidah Mohd; Djauhari, Maman Abdurachman

    2013-09-01

    The common practice in time series analysis is to fit a model and then conduct further analysis on the residuals. However, if we know the distributional behavior of the time series, the analyses in model identification, parameter estimation, and model checking are more straightforward. In this paper, we show that the number of Air Asia passengers can be represented as a geometric Brownian motion process. Therefore, instead of using the standard approach to model fitting, we use an appropriate transformation to come up with a stationary, normally distributed and even independent time series. An example in forecasting the number of Air Asia passengers is given to illustrate the advantages of the method.
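    The transformation is the standard one for a geometric Brownian motion: log-differences of a GBM path are independent, identically distributed normal variables, hence stationary. A sketch on simulated data (parameter values invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
mu, sigma, n = 0.01, 0.05, 200
log_path = np.cumsum(rng.normal(mu - sigma**2 / 2, sigma, n))
passengers = 1e6 * np.exp(log_path)            # monthly passenger counts as a GBM

increments = np.diff(np.log(passengers))       # stationary and iid normal under GBM
print(stats.shapiro(increments).pvalue)        # large p-value: consistent with normal
print(np.corrcoef(increments[:-1], increments[1:])[0, 1])  # near-zero lag-1 correlation
```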

  13. Temporal Decomposition of a Distribution System Quasi-Static Time-Series Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Hunsberger, Randolph J

    This paper documents the first phase of an investigation into reducing the runtimes of complex OpenDSS models through parallelization. As the method seems promising, future work will quantify, and further mitigate, the errors arising from this process. In this initial report, we demonstrate how, through the use of temporal decomposition, the run time of a complex distribution-system-level quasi-static time-series simulation can be reduced roughly in proportion to the level of parallelization. Using this method, the monolithic model runtime of 51 hours was reduced to a minimum of about 90 minutes. As expected, this comes at the expense of control and voltage errors at the time-slice boundaries. All evaluations were performed using a real distribution circuit model with the addition of 50 PV systems, representing a mock complex PV impact study. We are able to reduce the induced transition errors through the addition of controls initialization, though small errors persist. The time savings with parallelization are so significant that we feel additional investigation to reduce control errors is warranted.

  14. A quality-of-life-oriented endpoint for comparing therapies.

    PubMed

    Gelber, R D; Gelman, R S; Goldhirsch, A

    1989-09-01

    An endpoint, time without symptoms of disease and toxicity of treatment (TWiST), is defined to provide a single measure of length and quality of survival. Time with subjective side effects of treatment and time with unpleasant symptoms of disease are subtracted from overall survival time to calculate TWiST for each patient. The purpose of this paper is to describe the construction of this endpoint, and to elaborate on its interpretation for patient care decision-making. Estimating the distribution of TWiST using actuarial methods is shown by simulation studies to be biased as a result of induced dependency between TWiST and its censoring distribution. Considering the distribution of TWiST accumulated within a specified time from start of therapy, L, allows one to reduce this bias by substituting estimated TWiST for censored values and provides a method to evaluate the "payback" period for early toxic effects. Quantile distance plots provide graphical representations for treatment comparisons. The analysis of Ludwig Trial III evaluating toxic adjuvant therapies versus a no-treatment control group for postmenopausal women with node-positive breast cancer illustrates the methodology.

  15. Application of the Fractional Fourier Transform and S-Method in Doppler Radar Tomography

    DTIC Science & Technology

    2010-08-01

    [Only fragments of this report's abstract survive extraction; report DSTO-RR-0357.] Time-frequency analysis methods for chirped signals include bilinear techniques such as the Wigner-Ville Distribution (WVD), the Cohen's class, and related time-frequency distributions.

  16. Characterization of intermittency in renewal processes: Application to earthquakes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji

    2010-03-15

    We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point in the piecewise linear intermittent map. Thus, we provide a framework for a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalogs. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variables, but that the conditional probability distribution functions in the tail obey the Weibull distribution.

  17. Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method

    NASA Astrophysics Data System (ADS)

    Taitano, William; Knoll, Dana; Chacon, Luis

    2009-11-01

    The use of fully implicit time integration methods in kinetic simulation is still an area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward that rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to the time accuracy of these methods remain. We are pursuing a route to implicit plasma kinetic simulation that eliminates extrapolation, eliminates phase space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we are advancing the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single-species electrostatic problems and coupled ion-electron electrostatic problems. [1] H. J. Kim, L. Chacón, G. Lapenta, "Fully implicit particle in cell algorithm," 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO.
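    The Jacobian-free idea is available off the shelf in SciPy; a sketch on a toy nonlinear boundary-value problem (this residual merely stands in for the coupled field/distribution equations, it is not the authors' Vlasov solver):

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    """Toy nonlinear residual: steady 1-D reaction-diffusion,
    u'' - exp(u) = 0 on a grid with u = 0 at both boundaries."""
    h = 1.0 / (len(u) + 1)
    upad = np.concatenate(([0.0], u, [0.0]))
    return (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h**2 - np.exp(u)

u0 = np.zeros(50)
# Newton outer iteration; the Krylov (LGMRES) inner solves use only
# finite-difference Jacobian-vector products, never the Jacobian itself
sol = newton_krylov(residual, u0, method='lgmres')
print(np.abs(residual(sol)).max())    # converged nonlinear residual
```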

  18. Detecting changes in the spatial distribution of nitrate contamination in ground water

    USGS Publications Warehouse

    Liu, Z.-J.; Hallberg, G.R.; Zimmerman, D.L.; Libra, R.D.

    1997-01-01

    Many studies of ground water pollution in general and nitrate contamination in particular have often relied on a one-time investigation, tracking of individual wells, or aggregate summaries. Studies of changes in spatial distribution of contaminants over time are lacking. This paper presents a method to compare spatial distributions for possible changes over time. The large-scale spatial distribution at a given time can be considered as a surface over the area (a trend surface). The changes in spatial distribution from period to period can be revealed by the differences in the shape and/or height of surfaces. If such a surface is described by a polynomial function, changes in surfaces can be detected by testing statistically for differences in their corresponding polynomial functions. This method was applied to nitrate concentration in a population of wells in an agricultural drainage basin in Iowa, sampled in three different years. For the period of 1981-1992, the large-scale spatial distribution of nitrate concentration did not show significant change in the shape of spatial surfaces; while the magnitude of nitrate concentration in the basin, or height of the computed surfaces showed significant fluctuations. The change in magnitude of nitrate concentration is closely related to climatic variations, especially in precipitation. The lack of change in the shape of spatial surfaces means that either the influence of land use/nitrogen management was overshadowed by climatic influence, or the changes in land use/management occurred in a random fashion.

  19. A robust hybrid fuzzy-simulated annealing-intelligent water drops approach for tuning a distribution static compensator nonlinear controller in a distribution system

    NASA Astrophysics Data System (ADS)

    Bagheri Tolabi, Hajar; Hosseini, Rahil; Shakarami, Mahmoud Reza

    2016-06-01

    This article presents a novel hybrid optimization approach for a nonlinear controller of a distribution static compensator (DSTATCOM). The DSTATCOM is connected to a distribution system with the distributed generation units. The nonlinear control is based on partial feedback linearization. Two proportional-integral-derivative (PID) controllers regulate the voltage and track the output in this control system. In the conventional scheme, the trial-and-error method is used to determine the PID controller coefficients. This article uses a combination of a fuzzy system, simulated annealing (SA) and intelligent water drops (IWD) algorithms to optimize the parameters of the controllers. The obtained results reveal that the response of the optimized controlled system is effectively improved by finding a high-quality solution. The results confirm that using the tuning method based on the fuzzy-SA-IWD can significantly decrease the settling and rising times, the maximum overshoot and the steady-state error of the voltage step response of the DSTATCOM. The proposed hybrid tuning method for the partial feedback linearizing (PFL) controller achieved better regulation of the direct current voltage for the capacitor within the DSTATCOM. Furthermore, in the event of a fault the proposed controller tuned by the fuzzy-SA-IWD method showed better performance than the conventional controller or the PFL controller without optimization by the fuzzy-SA-IWD method with regard to both fault duration and clearing times.

  20. Change Semantic Constrained Online Data Cleaning Method for Real-Time Observational Data Stream

    NASA Astrophysics Data System (ADS)

    Ding, Yulin; Lin, Hui; Li, Rongrong

    2016-06-01

    Recent breakthroughs in sensor networks have made it possible to collect and assemble increasing amounts of real-time observational data by observing dynamic phenomena at previously impossible time and space scales. Real-time observational data streams present potentially profound opportunities for real-time applications in disaster mitigation and emergency response by providing accurate and timely estimates of the environment's status. However, the data are always subject to inevitable anomalies (including errors and anomalous changes/events) caused by various effects of the environment they are monitoring. The "big but dirty" real-time observational data streams can rarely achieve their full potential in subsequent real-time models or applications due to low data quality. Therefore, timely and meaningful online data cleaning is a necessary prerequisite to ensure the quality, reliability, and timeliness of real-time observational data. In general, a straightforward streaming data cleaning approach is to define various types of models/classifiers representing the normal behavior of sensor data streams and then declare any deviation from this model to be erroneous or anomalous data. The effectiveness of these models is affected by dynamic changes in the deployed environments. Due to the changing nature of the complicated process being observed, real-time observational data are characterized by diversity and dynamism, showing typical Big (Geo) Data characteristics. Dynamism and diversity are reflected not only in the data values, but also in the complicated changing patterns of the data distributions. This means the pattern of the real-time observational data distribution is not stationary or static but changing and dynamic. After the data pattern changes, it is necessary to adapt the model over time to cope with the changing patterns of real-time data streams. Otherwise, the model will not fit the subsequent observational data streams, which may lead to large estimation errors. In order to achieve the best generalization error, it is an important challenge for a data cleaning methodology to be able to characterize the behavior of data stream distributions and adaptively update the model to include new information and remove old information. However, this complicated changing behavior invalidates traditional data cleaning methods, which rely on the assumption of a stationary data distribution, and drives the need for more dynamic and adaptive online data cleaning methods. To overcome these shortcomings, this paper presents a change-semantics-constrained online filtering method for real-time observational data. Based on the principle that the filter parameter should vary in accordance with the data change patterns, this paper embeds a semantic description, which quantitatively depicts the change patterns in the data distribution, to self-adapt the filter parameter automatically. Real-time observational water-level data streams from different precipitation scenarios are selected for testing. Experimental results prove that by means of this method, more accurate and reliable water level information can be made available, which is a prerequisite for scientific and prompt flood assessment and decision-making.
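    A much-reduced sketch of the change-adaptive idea (not the paper's semantics-constrained filter): an online filter that rejects isolated large innovations as sensor errors but re-anchors when the deviation persists, i.e. when the change pattern looks like a genuine shift. All thresholds are arbitrary:

```python
import numpy as np

def adaptive_clean(stream, alpha=0.05, k=4.0, persist=3):
    """Online cleaning sketch: isolated large innovations are flagged as
    errors; persistent ones are treated as a real pattern change, and the
    filter re-anchors instead of rejecting them."""
    est, sigma, streak, out = stream[0], 1.0, 0, []
    for x in stream:
        innov = x - est
        if abs(innov) > k * sigma:
            streak += 1
            if streak >= persist:                    # sustained change: adapt
                est, streak = x, 0
        else:
            streak = 0
            est += alpha * innov                     # normal smoothing update
            sigma = 0.95 * sigma + 0.05 * abs(innov) # slow noise-scale update
        out.append(est)
    return np.array(out)

rng = np.random.default_rng(4)
level = np.r_[np.zeros(200), np.full(200, 5.0)]      # real level shift (flood onset)
obs = level + rng.normal(0, 0.3, 400)
obs[50] = 25.0                                       # isolated sensor error
clean = adaptive_clean(obs)
print(clean[[49, 51, 205, 399]].round(2))            # spike rejected, shift tracked
```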

  1. A log-Weibull spatial scan statistic for time to event data.

    PubMed

    Usman, Iram; Rosychuk, Rhonda J

    2018-06-13

    Spatial scan statistics have been used for the identification of geographic clusters of elevated numbers of cases of a condition, such as disease outbreaks. Accompanied by the appropriate distribution, these statistics can also identify geographic areas with either longer or shorter times to events. Other authors have proposed spatial scan statistics based on the exponential and Weibull distributions. We propose the log-Weibull as an alternative distribution for the spatial scan statistic for time-to-event data and compare and contrast the log-Weibull and Weibull distributions through simulation studies. The effects of type I differential censoring and power have been investigated through simulated data. The methods are also illustrated on time-to-specialist-visit data for discharged patients presenting to emergency departments for atrial fibrillation and flutter in Alberta during 2010-2011. We found that northern regions of Alberta had longer times to specialist visit than other areas. We propose the spatial scan statistic for the log-Weibull distribution as a new approach for detecting spatial clusters for time-to-event data. The simulation studies suggest that the test performs well for log-Weibull data.

  2. Fortran programs for the time-dependent Gross-Pitaevskii equation in a fully anisotropic trap

    NASA Astrophysics Data System (ADS)

    Muruganandam, P.; Adhikari, S. K.

    2009-10-01

    Here we develop simple numerical algorithms for both stationary and non-stationary solutions of the time-dependent Gross-Pitaevskii (GP) equation describing the properties of Bose-Einstein condensates at ultra low temperatures. In particular, we consider algorithms involving real- and imaginary-time propagation based on a split-step Crank-Nicolson method. In a one-space-variable form of the GP equation we consider the one-dimensional, two-dimensional circularly-symmetric, and the three-dimensional spherically-symmetric harmonic-oscillator traps. In the two-space-variable form we consider the GP equation in two-dimensional anisotropic and three-dimensional axially-symmetric traps. The fully-anisotropic three-dimensional GP equation is also considered. Numerical results for the chemical potential and root-mean-square size of stationary states are reported using imaginary-time propagation programs for all the cases and compared with previously obtained results. Also presented are numerical results of non-stationary oscillation for different trap symmetries using real-time propagation programs. A set of convenient working codes developed in Fortran 77 are also provided for all these cases (twelve programs in all). In the case of two or three space variables, Fortran 90/95 versions provide some simplification over the Fortran 77 programs, and these programs are also included (six programs in all).

    Program summary
    Program title: (i) imagetime1d, (ii) imagetime2d, (iii) imagetime3d, (iv) imagetimecir, (v) imagetimesph, (vi) imagetimeaxial, (vii) realtime1d, (viii) realtime2d, (ix) realtime3d, (x) realtimecir, (xi) realtimesph, (xii) realtimeaxial
    Catalogue identifier: AEDU_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDU_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 122 907
    No. of bytes in distributed program, including test data, etc.: 609 662
    Distribution format: tar.gz
    Programming language: Fortran 77 and Fortran 90/95
    Computer: PC
    Operating system: Linux, Unix
    RAM: 1 GByte (i, iv, v), 2 GByte (ii, vi, vii, x, xi), 4 GByte (iii, viii, xii), 8 GByte (ix)
    Classification: 2.9, 4.3, 4.12
    Nature of problem: These programs are designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in one-, two- or three-space dimensions with a harmonic, circularly-symmetric, spherically-symmetric, axially-symmetric or anisotropic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate.
    Solution method: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation, in either imaginary or real time, over small time steps. The method yields the solution of stationary and/or non-stationary problems.
    Additional comments: This package consists of 12 programs, see "Program title" above. Fortran 77 versions are provided for each of the 12 and, in addition, Fortran 90/95 versions are included for ii, iii, vi, viii, ix, xii. For the particular purpose of each program, please see the per-program summaries below.
    Running time: Minutes on a medium PC (i, iv, v, vii, x, xi), a few hours on a medium PC (ii, vi, viii, xii), days on a medium PC (iii, ix).
Program summary (1)Title of program: imagtime1d.F Title of electronic file: imagtime1d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 1 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in one-space dimension with a harmonic trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (2)Title of program: imagtimecir.F Title of electronic file: imagtimecir.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 1 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in two-space dimensions with a circularly-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (3)Title of program: imagtimesph.F Title of electronic file: imagtimesph.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Distribution format: tar.gz Computers: PC/Linux, workstation/UNIX Maximum RAM memory: 1 GByte Programming language used: Fortran 77 Typical running time: Minutes on a medium PC Unusual features: None Nature of physical problem: This program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in three-space dimensions with a spherically-symmetric trap. The Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: The time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time. The discretized equation is then solved by propagation in imaginary time over small time steps. The method yields the solution of stationary problems. Program summary (4)Title of program: realtime1d.F Title of electronic file: realtime1d.tar.gz Catalogue identifier: Program summary URL: Program obtainable from: CPC Program Library, Queen's University of Belfast, N. 
Ireland. Distribution format: tar.gz. Computers: PC/Linux, workstation/UNIX. Maximum RAM memory: 2 GByte. Programming language used: Fortran 77. Typical running time: minutes on a medium PC. Unusual features: none. Nature of physical problem: this program is designed to solve the time-dependent Gross-Pitaevskii nonlinear partial differential equation in one space dimension with a harmonic trap; the Gross-Pitaevskii equation describes the properties of a dilute trapped Bose-Einstein condensate. Method of solution: the time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time; the discretized equation is then solved by propagation in real time over small time steps, yielding the solution of stationary and non-stationary problems.

The remaining program summaries share the same distribution details (Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland; Distribution format: tar.gz; Computers: PC/Linux, workstation/UNIX; Unusual features: none) and the same method of solution: the time-dependent Gross-Pitaevskii equation is solved by the split-step Crank-Nicolson method by discretizing in space and time, and the discretized equation is then solved by propagation over small time steps. Real-time propagation yields the solution of stationary and non-stationary problems; imaginary-time propagation yields the solution of stationary problems only.

Program summary (5): realtimecir.F (realtimecir.tar.gz). Fortran 77; 2 GByte; minutes on a medium PC. Solves the equation in two space dimensions with a circularly-symmetric trap; real-time propagation.

Program summary (6): realtimesph.F (realtimesph.tar.gz). Fortran 77; 2 GByte; minutes on a medium PC. Solves the equation in three space dimensions with a spherically-symmetric trap; real-time propagation.

Program summary (7): imagtimeaxial.F and imagtimeaxial.f90 (imagtimeaxial.tar.gz). Fortran 77 and Fortran 90; 2 GByte; few hours on a medium PC. Solves the equation in three space dimensions with an axially-symmetric trap; imaginary-time propagation.

Program summary (8): imagtime2d.F and imagtime2d.f90 (imagtime2d.tar.gz). Fortran 77 and Fortran 90; 2 GByte; few hours on a medium PC. Solves the equation in two space dimensions with an anisotropic trap; imaginary-time propagation.

Program summary (9): realtimeaxial.F and realtimeaxial.f90 (realtimeaxial.tar.gz). Fortran 77 and Fortran 90; 4 GByte; hours on a medium PC. Solves the equation in three space dimensions with an axially-symmetric trap; real-time propagation.

Program summary (10): realtime2d.F and realtime2d.f90 (realtime2d.tar.gz). Fortran 77 and Fortran 90; 4 GByte; hours on a medium PC. Solves the equation in two space dimensions with an anisotropic trap; real-time propagation.

Program summary (11): imagtime3d.F and imagtime3d.f90 (imagtime3d.tar.gz). Fortran 77 and Fortran 90; 4 GByte; few days on a medium PC. Solves the equation in three space dimensions with an anisotropic trap; imaginary-time propagation.

Program summary (12): realtime3d.F and realtime3d.f90 (realtime3d.tar.gz). Fortran 77 and Fortran 90; 8 GByte; days on a medium PC. Solves the equation in three space dimensions with an anisotropic trap; real-time propagation.
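
A minimal sketch of the split-step Crank-Nicolson scheme shared by all of these codes may help the reader; the following Python fragment treats the one-dimensional harmonic trap in dimensionless units and finds a ground state by imaginary-time propagation. The grid size, time step, and nonlinearity g are illustrative assumptions, not values taken from the published Fortran codes.

    import numpy as np
    from scipy.linalg import solve_banded

    # Dimensionless 1D GPE: i dpsi/dt = [-0.5 d2/dx2 + 0.5 x^2 + g |psi|^2] psi
    N, g, dt = 512, 10.0, 1.0e-3          # grid points, nonlinearity, time step (assumed)
    x = np.linspace(-10.0, 10.0, N)
    dx = x[1] - x[0]
    V = 0.5 * x**2                        # harmonic trap

    psi = np.exp(-x**2)                   # initial guess
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

    # Crank-Nicolson for the kinetic operator T = -0.5 d2/dx2:
    # (1 + dt/2 T) psi_new = (1 - dt/2 T) psi_old, a tridiagonal system.
    r = dt / (4.0 * dx**2)
    ab = np.zeros((3, N))                 # banded form of (1 + dt/2 T)
    ab[0, 1:] = -r                        # superdiagonal
    ab[1, :] = 1.0 + 2.0*r                # main diagonal
    ab[2, :-1] = -r                       # subdiagonal

    for step in range(5000):
        # half step of trap + nonlinearity (pointwise in imaginary time)
        psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi)**2))
        # Crank-Nicolson step for the kinetic term (tridiagonal solve)
        rhs = psi.copy()
        rhs[1:-1] = r*psi[:-2] + (1.0 - 2.0*r)*psi[1:-1] + r*psi[2:]
        psi = solve_banded((1, 1), ab, rhs)
        # second half step of trap + nonlinearity
        psi *= np.exp(-0.5 * dt * (V + g * np.abs(psi)**2))
        # imaginary-time propagation is non-unitary, so renormalize
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

    print("peak condensate density:", np.max(np.abs(psi)**2))

Real-time propagation has the same structure with dt replaced by i*dt (complex arithmetic throughout) and without the renormalization step, which is the essential difference between the imagtime and realtime programs listed above.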

  3. Statistics of baryon correlation functions in lattice QCD

    NASA Astrophysics Data System (ADS)

    Wagman, Michael L.; Savage, Martin J.; NPLQCD Collaboration

    2017-12-01

    A systematic analysis of the structure of single-baryon correlation functions calculated with lattice QCD is performed, with a particular focus on characterizing the structure of the noise associated with quantum fluctuations. The signal-to-noise problem in these correlation functions is shown, as long suspected, to result from a sign problem. The log-magnitude and complex phase are found to be approximately described by normal and wrapped normal distributions respectively. Properties of circular statistics are used to understand the emergence of a large time noise region where standard energy measurements are unreliable. Power-law tails in the distribution of baryon correlation functions, associated with stable distributions and "Lévy flights," are found to play a central role in their time evolution. A new method of analyzing correlation functions is considered for which the signal-to-noise ratio of energy measurements is constant, rather than exponentially degrading, with increasing source-sink separation time. This new method includes an additional systematic uncertainty that can be removed by performing an extrapolation, and the signal-to-noise problem reemerges in the statistics of this extrapolation. It is demonstrated that this new method allows accurate results for the nucleon mass to be extracted from the large-time noise region inaccessible to standard methods. The observations presented here are expected to apply to quantum Monte Carlo calculations more generally. Similar methods to those introduced here may lead to practical improvements in analysis of noisier systems.
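
    The log-magnitude/wrapped-phase decomposition described above can be explored directly on an ensemble of complex correlator samples. The following Python sketch uses synthetic stand-in data (no actual lattice correlators are given here) and estimates the wrapped-normal width from the mean resultant length R via the wrapped-normal relation R = exp(-sigma^2/2):

      import numpy as np

      rng = np.random.default_rng(0)
      # synthetic stand-in for complex correlator samples at one time slice
      C = np.exp(rng.normal(-5.0, 0.8, 10000) + 1j * rng.normal(0.3, 1.1, 10000))

      # log-magnitude: ordinary normal statistics
      mu_log, sigma_log = np.mean(np.log(np.abs(C))), np.std(np.log(np.abs(C)))

      # phase: circular statistics from the resultant of unit phasors
      z = np.mean(np.exp(1j * np.angle(C)))
      circ_mean = np.angle(z)                     # circular mean of the phase
      R = np.abs(z)                               # mean resultant length, in [0, 1]
      sigma_wrapped = np.sqrt(-2.0 * np.log(R))   # wrapped-normal width

      print(mu_log, sigma_log, circ_mean, sigma_wrapped)

    As the wrapped-normal width grows with source-sink separation, R decays exponentially toward zero, which is one way to see the onset of the large-time noise region discussed in the abstract.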

  4. A Comparative Study of Distribution System Parameter Estimation Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup

    2016-07-17

    In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems; therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
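
    State-vector augmentation is easy to illustrate outside the distribution-system setting. The toy Python sketch below (the scalar model, noise levels, and measurement set are invented for illustration and are not the paper's network model) appends an unknown constant parameter p to the state and lets an extended Kalman filter estimate it across a sequence of measurement snapshots:

      import numpy as np

      rng = np.random.default_rng(1)
      p_true, n = 2.5, 200                      # unknown parameter, snapshot count
      u_true = 1.0 + 0.1 * rng.standard_normal(n).cumsum()
      z1 = u_true + 0.02 * rng.standard_normal(n)           # direct state measurement
      z2 = p_true * u_true + 0.05 * rng.standard_normal(n)  # parameter-dependent measurement

      x = np.array([1.0, 1.0])                  # augmented state [u, p], p guessed wrong
      P = np.diag([1.0, 1.0])                   # state covariance
      Q = np.diag([0.01, 1e-6])                 # process noise: p nearly constant
      Rn = np.diag([0.02**2, 0.05**2])          # measurement noise

      for za, zb in zip(z1, z2):
          P = P + Q                             # predict (identity dynamics)
          H = np.array([[1.0, 0.0],
                        [x[1], x[0]]])          # Jacobian of h(x) = [u, u*p]
          S = H @ P @ H.T + Rn
          K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
          x = x + K @ np.array([za - x[0], zb - x[0]*x[1]])
          P = (np.eye(2) - K @ H) @ P

      print("estimated parameter:", x[1])       # approaches p_true

    The same augmentation pattern, with network equations in place of the toy measurement model, underlies the approach the paper favors.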

  5. A two-step method for developing a control rod program for boiling water reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1992-01-01

    This paper reports on a two-step method that is established for the generation of a long-term control rod program for boiling water reactors (BWRs). The new method assumes a time-variant target power distribution in core depletion. In the new method, the BWR control rod programming is divided into two steps. In step 1, a sequence of optimal, exposure-dependent Haling power distribution profiles is generated, utilizing the spectral shift concept. In step 2, a set of exposure-dependent control rod patterns is developed by using the Haling profiles generated at step 1 as a target. The new method is implemented in a computer program named OCTOPUS. The optimization procedure of OCTOPUS is based on the method of approximation programming, in which the SIMULATE-E code is used to determine the nucleonics characteristics of the reactor core state. In a test, the new method achieved a gain in cycle length over a time-invariant target Haling power distribution case because of a moderate application of spectral shift. No thermal limits of the core were violated. The gain in cycle length could be increased further by broadening the extent of the spectral shift.

  6. Influence of the weighing bar size to determine optimal time of biodiesel-glycerol separation by using the buoyancy weighing-bar method

    NASA Astrophysics Data System (ADS)

    Tambun, R.; Sibagariang, Y.; Manurung, J.

    2018-02-01

    The buoyancy weighing-bar method is a novel method for particle size distribution measurement. It can measure the particle size distributions of both settling and floating particles. In this study, the buoyancy weighing-bar method is applied to determine the optimal time of biodiesel-glycerol separation. The method can be applied to determine the separation time because biodiesel and glycerol have different densities. The influence of the weighing-bar diameter on the measurement is investigated experimentally. The weighing-bar diameters in this experiment are 8 mm, 10 mm, 15 mm and 20 mm, while a graduated cylinder (diameter: 65 mm) is used as the vessel. The samples used in this experiment are mixtures of 95 % biodiesel and 5 % glycerol. The data obtained by the buoyancy weighing-bar method are analyzed by gas chromatography to determine the purity of the biodiesel. Based on the data obtained, the buoyancy weighing-bar method can detect the separation time of biodiesel-glycerol with weighing-bar diameters of 8 mm, 10 mm, 15 mm and 20 mm, but the most accurate determination of the biodiesel-glycerol separation time is obtained with the 20 mm weighing bar. With the 20 mm weighing bar, a biodiesel purity of 97.97 % could be detected at 64 minutes.

  7. Buffered coscheduling for parallel programming and enhanced fault tolerance

    DOEpatents

    Petrini, Fabrizio [Los Alamos, NM; Feng, Wu-chun [Los Alamos, NM

    2006-01-31

    A computer implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval, so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.

  8. Integer lattice gas with Monte Carlo collision operator recovers the lattice Boltzmann method with Poisson-distributed fluctuations

    NASA Astrophysics Data System (ADS)

    Blommel, Thomas; Wagner, Alexander J.

    2018-02-01

    We examine a new kind of lattice gas that closely resembles modern lattice Boltzmann methods. This new kind of lattice gas, which we call a Monte Carlo lattice gas, has interesting properties that shed light on the origin of the multirelaxation time collision operator, and it derives the equilibrium distribution for an entropic lattice Boltzmann method. Furthermore, these lattice gas methods have Galilean invariant fluctuations given by Poisson statistics, giving further insight into the properties that we should expect for fluctuating lattice Boltzmann methods.

  9. Two-sided Topp-Leone Weibull distribution

    NASA Astrophysics Data System (ADS)

    Podeang, Krittaya; Bodhisuwan, Winai

    2017-11-01

    In this paper, we introduce a general class of lifetime distributions, called the two-sided Topp-Leone generated family of distributions. A special case of the new family is the two-sided Topp-Leone Weibull distribution, which uses the two-sided Topp-Leone distribution as a generator for the Weibull distribution. The two-sided Topp-Leone Weibull distribution exhibits several density shapes, such as decreasing, unimodal, and bimodal, which make it more flexible than the Weibull distribution. Its quantile function is presented, and parameter estimation by maximum likelihood is discussed. The proposed distribution is applied to a strength data set, a data set of remission times of bladder cancer patients, and a data set of times to failure of turbochargers. We compare the proposed distribution to the Topp-Leone generated Weibull distribution. In conclusion, the two-sided Topp-Leone Weibull distribution performs similarly to the Topp-Leone generated Weibull distribution on the first and second data sets, but provides a better fit on the third.
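
    The maximum likelihood step is generic and compact. Since the two-sided Topp-Leone Weibull density is not available in standard libraries, the Python sketch below fits the plain Weibull baseline by numerically maximizing the log-likelihood; fitting the proposed family would only require substituting its log-density:

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import weibull_min

      rng = np.random.default_rng(2)
      data = weibull_min.rvs(c=1.8, scale=2.0, size=300, random_state=rng)

      def neg_log_lik(params):
          shape, scale = params
          if shape <= 0 or scale <= 0:
              return np.inf                    # keep the optimizer in the valid region
          return -np.sum(weibull_min.logpdf(data, c=shape, scale=scale))

      res = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
      print("MLE shape and scale:", res.x)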

  10. Residence times and age distributions of spring waters at the Semmering catchment area, Eastern Austria, as inferred from tritium, CFCs and stable isotopes.

    PubMed

    Han, Liangfeng; Hacker, Peter; Gröning, Manfred

    2007-03-01

    The groundwater system in the mountainous area of Semmering, Austria, was studied by means of environmental tracers in several karst springs. The tracers used included stable isotopes ((18)O, (2)H), tritium ((3)H) and chlorofluorocarbons (CFCs). The tracers provided valuable information in regard to (1) the mean altitude of the spring catchment areas; (2) the residence time and age distribution of the spring waters; and (3) the interconnection of the springs to a sinkhole. The combination of the stable isotopic data and the topography/geology provided estimates of the mean altitudes of the catchment areas. Based on the stable isotopic data, the recharge temperature of the spring waters was estimated. The smoothing of precipitation's isotopic signal in spring discharge provided information on the minimum transit time of the spring waters. Due to the short observation time, (3)H data alone cannot be used to describe the mean residence time of the karst waters. CFCs, though useful in recognizing the co-existence of young (post-1993) water with old (CFC-free) water, could not be used to resolve age distribution models. It is shown in this article, however, that the combined use of tritium and CFCs can provide a better assessment of models to account for different groundwater age distributions. In Appendix A, a simplified method for collecting groundwater samples for the analysis of CFCs is described; the method considerably facilitates fieldwork. Test data are given for this sampling method in regard to potential contamination by atmospheric CFCs.

  11. Numerically exact full counting statistics of the nonequilibrium Anderson impurity model

    NASA Astrophysics Data System (ADS)

    Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; Cohen, Guy

    2018-03-01

    The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n -electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.

  12. Numerically exact full counting statistics of the nonequilibrium Anderson impurity model

    DOE PAGES

    Ridley, Michael; Singh, Viveka N.; Gull, Emanuel; ...

    2018-03-06

    The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n-electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.

  13. Numerically exact full counting statistics of the nonequilibrium Anderson impurity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ridley, Michael; Singh, Viveka N.; Gull, Emanuel

    The time-dependent full counting statistics of charge transport through an interacting quantum junction is evaluated from its generating function, controllably computed with the inchworm Monte Carlo method. Exact noninteracting results are reproduced; then, we continue to explore the effect of electron-electron interactions on the time-dependent charge cumulants, first-passage time distributions, and n-electron transfer distributions. We observe a crossover in the noise from Coulomb blockade to Kondo-dominated physics as the temperature is decreased. In addition, we uncover long-tailed spin distributions in the Kondo regime and analyze queuing behavior caused by correlations between single-electron transfer events.

  14. The error and bias of supplementing a short, arid climate, rainfall record with regional vs. global frequency analysis

    NASA Astrophysics Data System (ADS)

    Endreny, Theodore A.; Pashiardis, Stelios

    2007-02-01

    Robust and accurate estimates of rainfall frequencies are difficult to make with short, arid-climate rainfall records; however, new regional and global methods were used to supplement such a constrained 15-34 yr record in Cyprus. The impact of supplementing rainfall frequency analysis with the regional and global approaches was measured with relative bias and root mean square error (RMSE) values. The analysis considered 42 stations with 8 time intervals (5-360 min) in four regions delineated by proximity to sea and elevation. Regional statistical algorithms found that the sites passed discordancy tests of coefficient of variation, skewness and kurtosis, while heterogeneity tests revealed the regions were homogeneous to mildly heterogeneous. Rainfall depths were simulated in the regional analysis method 500 times, and then goodness-of-fit tests identified the best candidate distribution as the general extreme value (GEV) Type II. In the regional analysis, the method of L-moments was used to estimate location, shape, and scale parameters. In the global-based analysis, the distribution was a priori prescribed as GEV Type II, the shape parameter was a priori set to 0.15, and a time interval term was constructed to use one set of parameters for all time intervals. Relative RMSE values were approximately equal at 10% for the regional and global methods when regions were compared, but when time intervals were compared, the global method RMSE had a parabolic-shaped time interval trend. Relative bias values were also approximately equal for both methods when regions were compared, but again a parabolic-shaped time interval trend was found for the global method. The global method's relative RMSE and bias trended with time interval, which may be caused by fitting a single scale value for all time intervals.

  15. Calibrated tree priors for relaxed phylogenetics and divergence time estimation.

    PubMed

    Heled, Joseph; Drummond, Alexei J

    2012-01-01

    The use of fossil evidence to calibrate divergence time estimation has a long history. More recently, Bayesian Markov chain Monte Carlo has become the dominant method of divergence time estimation, and fossil evidence has been reinterpreted as the specification of prior distributions on the divergence times of calibration nodes. These so-called "soft calibrations" have become widely used, but the statistical properties of calibrated tree priors in a Bayesian setting have not been carefully investigated. Here, we clarify that calibration densities, such as those defined in BEAST 1.5, do not represent the marginal prior distribution of the calibration node. We illustrate this with a number of analytical results on small trees. We also describe an alternative construction for a calibrated Yule prior on trees that allows direct specification of the marginal prior distribution of the calibrated divergence time, with or without the restriction of monophyly. This method requires the computation of the Yule prior conditional on the height of the divergence being calibrated. Unfortunately, a practical solution for multiple calibrations remains elusive. Our results suggest that direct estimation of the prior induced by specifying multiple calibration densities should be a prerequisite of any divergence time dating analysis.

  16. Variable population exposure and distributed travel speeds in least-cost tsunami evacuation modelling

    NASA Astrophysics Data System (ADS)

    Fraser, S. A.; Wood, N. J.; Johnston, D. M.; Leonard, G. S.; Greening, P. D.; Rossetto, T.

    2014-06-01

    Evacuation of the population from a tsunami hazard zone is vital to reduce life-loss due to inundation. Geospatial least-cost distance modelling provides one approach to assessing tsunami evacuation potential. Previous models have generally used two static exposure scenarios and fixed travel speeds to represent population movement. Some analyses have assumed immediate evacuation departure time or assumed a common departure time for all exposed population. In this paper, a method is proposed to incorporate time-variable exposure, distributed travel speeds, and uncertain evacuation departure time into an existing anisotropic least-cost path distance framework. The model is demonstrated for a case study of local-source tsunami evacuation in Napier City, Hawke's Bay, New Zealand. There is significant diurnal variation in pedestrian evacuation potential at the suburb-level, although the total number of people unable to evacuate is stable across all scenarios. Whilst some fixed travel speeds can approximate a distributed speed approach, others may overestimate evacuation potential. The impact of evacuation departure time is a significant contributor to total evacuation time. This method improves least-cost modelling of evacuation dynamics for evacuation planning, casualty modelling, and development of emergency response training scenarios.

  17. Preserving Institutional Privacy in Distributed binary Logistic Regression.

    PubMed

    Wu, Yuan; Jiang, Xiaoqian; Ohno-Machado, Lucila

    2012-01-01

    Privacy is becoming a major concern when sharing biomedical data across institutions. Although methods for protecting the privacy of individual patients have been proposed, it is not clear how to protect institutional privacy, which is often a critical concern of data custodians. Built upon our previous work, Grid Binary LOgistic REgression (GLORE), we developed an Institutional Privacy-preserving Distributed binary Logistic Regression model (IPDLR) that considers both individual and institutional privacy for building a logistic regression model in a distributed manner. We tested our method using both simulated and clinical data, showing how it is possible to protect the privacy of individuals and of institutions using a distributed strategy.
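
    The core pattern of distributed logistic regression can be sketched generically: each institution computes aggregate gradient contributions on its own data and shares only those aggregates, never row-level records. The Python sketch below illustrates this pattern on synthetic data; it is not the GLORE/IPDLR protocol itself, which adds the institutional privacy protections described above:

      import numpy as np

      def site_gradient(X, y, beta):
          # each institution returns only an aggregate gradient on its own data
          p = 1.0 / (1.0 + np.exp(-X @ beta))
          return X.T @ (y - p)

      rng = np.random.default_rng(3)
      beta_true = np.array([0.5, -1.0, 2.0])
      sites = []                                # three institutions, private data slices
      for _ in range(3):
          X = np.column_stack([np.ones(500), rng.standard_normal((500, 2))])
          y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))
          sites.append((X, y))

      n_total = sum(len(y) for _, y in sites)
      beta = np.zeros(3)
      for it in range(5000):
          # the coordinator sums per-site aggregates; raw records are never pooled
          g = sum(site_gradient(X, y, beta) for X, y in sites)
          beta += g / n_total                   # gradient ascent on the joint likelihood

      print("estimated coefficients:", beta)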

  18. An automated model-based aim point distribution system for solar towers

    NASA Astrophysics Data System (ADS)

    Schwarzbözl, Peter; Rong, Amadeus; Macke, Ansgar; Säck, Jan-Peter; Ulmer, Steffen

    2016-05-01

    Distribution of heliostat aim points is a major task during central receiver operation, as the flux distribution produced by the heliostats varies continuously with time. Known methods for aim point distribution are mostly based on simple aim point patterns and focus on control strategies to meet local temperature and flux limits of the receiver. Lowering the peak flux on the receiver to avoid hot spots and maximizing thermal output are obviously competing targets that call for a comprehensive optimization process. This paper presents a model-based method for online aim point optimization that includes the current heliostat field mirror quality derived through an automated deflectometric measurement process.

  19. Analysis of Digital Communication Signals and Extraction of Parameters.

    DTIC Science & Technology

    1994-12-01

    Fast Fourier Transform (FFT). The correlation methods utilize modified time-frequency distributions, where one of these is based on the Wigner-Ville Distribution (WVD). Gaussian white noise is added to the signal to simulate various signal-to-noise ratios (SNRs).

  20. Enhancement of force patterns classification based on Gaussian distributions.

    PubMed

    Ertelt, Thomas; Solomonovs, Ilja; Gronwald, Thomas

    2018-01-23

    Description of the patterns of ground reaction force is a standard method in areas such as medicine, biomechanics and robotics. The fundamental parameter is the time course of the force, which is classified visually, in particular in the field of clinical diagnostics, where the knowledge and experience of the diagnostician is relevant for its assessment. For an objective and valid discrimination of ground reaction force patterns, a generic method, especially in the medical field, is absolutely necessary to describe the qualities of the time course. The aim of the presented method was to combine the approaches of two existing procedures, from the fields of machine learning and Gaussian approximation, in order to take advantage of both methods for the classification of ground reaction force patterns. The current limitations of both methods could be eliminated by an overarching method. Twenty-nine male athletes from different sports were examined. Each participant was given the task of performing a one-legged stopping maneuver on a force plate from the maximum possible starting speed. The individual time course of the ground reaction force of each subject was registered and approximated on the basis of eight Gaussian distributions. The descriptive coefficients were then classified using Bayesian regularized neural networks, with the different sports serving as the distinguishing feature. Although the athletes were all given the same task, each sport showed a different quality in the time course of ground reaction force, while within each sport the athletes were homogeneous. With an overall prediction (R = 0.938), all subjects/sports were classified correctly with 94.29% accuracy. The combination of the two methods, the mathematical description of the time course of ground reaction forces on the basis of Gaussian distributions and their classification by means of Bayesian regularized neural networks, seems an adequate and promising approach to discriminate ground reaction forces without any loss of information. Copyright © 2017 Elsevier Ltd. All rights reserved.
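
    The first stage of that pipeline, approximating a force-time curve by a sum of Gaussians, is easy to sketch. The Python code below fits a synthetic stand-in curve with two Gaussian components via nonlinear least squares (the study uses eight components on real force-plate data); the fitted coefficients are what would be passed on to the classifier:

      import numpy as np
      from scipy.optimize import curve_fit

      def gauss_sum(t, *p):
          # sum of Gaussians; p = (a1, mu1, s1, a2, mu2, s2, ...)
          y = np.zeros_like(t)
          for a, mu, s in zip(p[0::3], p[1::3], p[2::3]):
              y += a * np.exp(-0.5 * ((t - mu) / s) ** 2)
          return y

      # synthetic stand-in for a ground reaction force time course
      t = np.linspace(0.0, 1.0, 200)
      force = 800*np.exp(-0.5*((t - 0.15)/0.05)**2) + 500*np.exp(-0.5*((t - 0.6)/0.2)**2)
      force += 10.0 * np.random.default_rng(4).standard_normal(t.size)

      K = 2                                     # the paper uses eight components
      p0 = np.ravel([[force.max(), 0.1 + 0.4*k, 0.1] for k in range(K)])
      popt, _ = curve_fit(gauss_sum, t, force, p0=p0, maxfev=20000)
      print(popt.reshape(K, 3))                 # (amplitude, center, width) per Gaussian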

  1. Exact posterior computation in non-conjugate Gaussian location-scale parameters models

    NASA Astrophysics Data System (ADS)

    Andrade, J. A. A.; Rathie, P. N.

    2017-12-01

    In Bayesian analysis, the class of conjugate models allows one to obtain exact posterior distributions; however, this class is quite restrictive in the sense that it involves only a few distributions. In fact, most practical applications involve non-conjugate models, thus approximate methods, such as MCMC algorithms, are required. Although these methods can deal with quite complex structures, some practical problems can make their application quite time demanding: for example, when heavy-tailed distributions are used, convergence may be difficult and the Metropolis-Hastings algorithm can become very slow, in addition to the extra work inevitably required in choosing efficient candidate generator distributions. In this work, we draw attention to special functions as tools for Bayesian computation, and we propose an alternative method for obtaining the posterior distribution in Gaussian non-conjugate models in exact form. We use complex integration methods based on the H-function in order to obtain the posterior distribution and some of its posterior quantities in an explicit computable form. Two examples are provided in order to illustrate the theory.

  2. Height and the normal distribution: evidence from Italian military data.

    PubMed

    A'Hearn, Brian; Peracchi, Franco; Vecchi, Giovanni

    2009-02-01

    Researchers modeling historical heights have typically relied on the restrictive assumption of a normal distribution, only the mean of which is affected by age, income, nutrition, disease, and similar influences. To avoid these restrictive assumptions, we develop a new semiparametric approach in which covariates are allowed to affect the entire distribution without imposing any parametric shape. We apply our method to a new database of height distributions for Italian provinces, drawn from conscription records, of unprecedented length and geographical disaggregation. Our method allows us to standardize distributions to a single age and calculate moments of the distribution that are comparable through time. Our method also allows us to generate counterfactual distributions for a range of ages, from which we derive age-height profiles. These profiles reveal how the adolescent growth spurt (AGS) distorts the distribution of stature, and they document the earlier and earlier onset of the AGS as living conditions improved over the second half of the nineteenth century. Our new estimates of provincial mean height also reveal a previously unnoticed "regime switch" from regional convergence to divergence in this period.

  3. Staged fluidized bed

    DOEpatents

    Mallon, R.G.

    1983-05-13

    The invention relates to oil shale retorting and more particularly to staged fluidized bed oil shale retorting. Method and apparatus are disclosed for narrowing the distribution of residence times of any size particle and equalizing the residence times of large and small particles in fluidized beds. Particles are moved up one fluidized column and down a second fluidized column with the relative heights selected to equalize residence times of large and small particles. Additional pairs of columns are staged to narrow the distribution of residence times and provide complete processing of the material.

  4. The Cluster Variation Method: A Primer for Neuroscientists.

    PubMed

    Maren, Alianna J

    2016-09-30

    Effective Brain-Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found.

  5. The Cluster Variation Method: A Primer for Neuroscientists

    PubMed Central

    Maren, Alianna J.

    2016-01-01

    Effective Brain–Computer Interfaces (BCIs) require that the time-varying activation patterns of 2-D neural ensembles be modelled. The cluster variation method (CVM) offers a means for the characterization of 2-D local pattern distributions. This paper provides neuroscientists and BCI researchers with a CVM tutorial that will help them to understand how the CVM statistical thermodynamics formulation can model 2-D pattern distributions expressing structural and functional dynamics in the brain. The premise is that local-in-time free energy minimization works alongside neural connectivity adaptation, supporting the development and stabilization of consistent stimulus-specific responsive activation patterns. The equilibrium distribution of local patterns, or configuration variables, is defined in terms of a single interaction enthalpy parameter (h) for the case of an equiprobable distribution of bistate (neural/neural ensemble) units. Thus, either one enthalpy parameter (or two, for the case of non-equiprobable distribution) yields equilibrium configuration variable values. Modeling 2-D neural activation distribution patterns with the representational layer of a computational engine, we can thus correlate variational free energy minimization with specific configuration variable distributions. The CVM triplet configuration variables also map well to the notion of an M = 3 functional motif. This paper addresses the special case of an equiprobable unit distribution, for which an analytic solution can be found. PMID:27706022

  6. The harmonic impact of electric vehicle battery charging

    NASA Astrophysics Data System (ADS)

    Staats, Preston Trent

    The potential widespread introduction of the electric vehicle (EV) presents both opportunities and challenges to the power systems engineers who will be required to supply power to EV batteries. One of the challenges associated with EV battery charging comes from the potentially high harmonic currents associated with the conversion of ac power system voltages to dc EV battery voltages. Harmonic currents lead to increased losses in distribution circuits and reduced life expectancy of such power distribution components as capacitors and transformers. Harmonic current injections also cause harmonic voltages on power distribution networks. These distorted voltages can affect power system loads and specific standards exist regulating acceptable voltage distortion. This dissertation develops and presents the theory required to evaluate the electric vehicle battery charger as a harmonic distorting load and its possible harmonic impact on various aspects of power distribution systems. The work begins by developing a method for evaluating the net harmonic current injection of a large collection of EV battery chargers which accounts for variation in the start-time and initial battery state-of-charge between individual chargers. Next, this method is analyzed to evaluate the effect of input parameter variation on the net harmonic currents predicted by the model. We then turn to an evaluation of the impact of EV charger harmonic currents on power distribution systems, first evaluating the impact of these currents on a substation transformer and then on power distribution system harmonic voltages. The method presented accounts for the uncertainty in EV harmonic current injections by modeling the start-time and initial battery state-of-charge (SOC) of an individual EV battery charger as random variables. Thus, the net harmonic current, and distribution system harmonic voltages are formulated in a stochastic framework. Results indicate that considering variation in start-time and SOC leads to reduced estimates of harmonic current injection when compared to more traditional methods that do not account for variation. Evaluation of power distribution system harmonic voltages suggests that for any power distribution network there is a definite threshold penetration of EVs, below which the total harmonic distortion of voltage exceeds 5% at an insignificant number of buses. Thus, most existing distribution systems will probably be able to accommodate the early introduction of EV battery charging without widespread harmonic voltage problems.

  7. A Cascaded Approach for Correcting Ionospheric Contamination with Large Amplitude in HF Skywave Radars

    PubMed Central

    Wei, Yinsheng; Guo, Rujiang; Xu, Rongqing; Tang, Xiudong

    2014-01-01

    Ionospheric phase perturbation with large amplitude causes the broadened Bragg peaks of sea clutter to overlap each other, and traditional decontamination methods based on filtering the Bragg peaks perform poorly, which greatly limits the detection performance of HF skywave radars. In view of ionospheric phase perturbation with large amplitude, this paper proposes a cascaded approach based on an improved S-method to correct the ionospheric phase contamination. This approach consists of two correction steps. At the first step, a time-frequency distribution method based on the improved S-method is adopted, and an optimal detection method is designed to obtain a coarse estimate of the ionospheric modulation from the time-frequency distribution. At the second step, the phase gradient algorithm (PGA) is exploited to eliminate the residual contamination. Finally, measured data are used to verify the effectiveness of the method. Simulation results show that the time-frequency resolution of this method is high and is not affected by cross-term interference; ionospheric phase perturbation with large amplitude can be corrected at low signal-to-noise ratio (SNR); and such a cascaded correction method has a good effect. PMID:24578656

  8. Dynamic approach to description of entrance channel effects in angular distributions of fission fragments

    NASA Astrophysics Data System (ADS)

    Eremenko, D. O.; Drozdov, V. A.; Fotina, O. V.; Platonov, S. Yu.; Yuminov, O. A.

    2016-07-01

    Background: It is well known that the anomalous behavior of angular anisotropies of fission fragments at sub- and near-barrier energies is associated with a memory of conditions in the entrance channel of the heavy-ion reactions, particularly, deformations and spins of colliding nuclei that determine the initial distributions for the components of the total angular momentum over the symmetry axis of the fissioning system and the beam axis. Purpose: We develop a new dynamic approach, which allows the description of the memory effects in the fission fragment angular distributions and provides new information on fusion and fission dynamics. Methods: The approach is based on the dynamic model of the fission fragment angular distributions which takes into account stochastic aspects of nuclear fission and thermal fluctuations for the tilting mode that is characterized by the projection of the total angular momentum onto the symmetry axis of the fissioning system. Another basis of our approach is the quantum mechanical method to calculate the initial distributions over the components of the total angular momentum of the nuclear system immediately following complete fusion. Results: A method is suggested for calculating the initial distributions of the total angular momentum projection onto the symmetry axis for the nuclear systems formed in the reactions of complete fusion of deformed nuclei with spins. The angular distributions of fission fragments for the 16O+232Th, 12C+235,236,238U, and 13C+235U reactions have been analyzed within the dynamic approach over a range of sub- and above-barrier energies. The analysis allowed us to determine the relaxation time for the tilting mode and the fraction of fission events occurring in times not larger than the relaxation time for the tilting mode. Conclusions: It is shown that the memory effects play an important role in the formation of the angular distributions of fission fragments for the reactions induced by heavy ions. The approach developed for analysis of the effects is a suitable tool to get insight into the complete fusion-fission dynamics, in particular, to investigate the mechanism of the complete fusion and the fission time scale.

  9. Damage identification method for continuous girder bridges based on spatially-distributed long-gauge strain sensing under moving loads

    NASA Astrophysics Data System (ADS)

    Wu, Bitao; Wu, Gang; Yang, Caiqian; He, Yi

    2018-05-01

    A novel damage identification method for concrete continuous girder bridges, based on spatially-distributed long-gauge strain sensing, is presented in this paper. First, the variation regularity of the long-gauge strain influence line of continuous girder bridges, which changes with the location of vehicles on the bridge, is studied. According to this variation regularity, a calculation method for the distribution regularity of the area of the long-gauge strain history is investigated. Second, a numerical simulation of damage identification based on the distribution regularity of the area of the long-gauge strain history is conducted, and the results indicate that this method is effective for identifying damage and is not affected by the speed, axle number and weight of vehicles. Finally, a real bridge test on a highway is conducted, and the experimental results also show that this method is very effective for identifying damage in continuous girder bridges; the local element stiffness distribution regularity can be revealed at the same time. This identified information is useful for the maintenance of continuous girder bridges on highways.

  10. Optimal Tikhonov regularization for DEER spectroscopy

    NASA Astrophysics Data System (ADS)

    Edwards, Thomas H.; Stoll, Stefan

    2018-03-01

    Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α, and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
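
    Tikhonov inversion with an α chosen by generalized cross validation is compact enough to sketch. The Python code below sets up a synthetic smoothing-kernel problem as a stand-in for the DEER kernel (which is not reproduced here), uses a second-derivative regularization operator L, and scans α with the GCV score:

      import numpy as np

      def tikhonov(K, y, L, alpha):
          # solve min ||K x - y||^2 + alpha^2 ||L x||^2 as a stacked least squares
          A = np.vstack([K, alpha * L])
          b = np.concatenate([y, np.zeros(L.shape[0])])
          return np.linalg.lstsq(A, b, rcond=None)[0]

      def gcv_score(K, y, L, alpha):
          n = K.shape[0]
          # influence matrix H = K (K^T K + alpha^2 L^T L)^-1 K^T
          H = K @ np.linalg.solve(K.T @ K + alpha**2 * (L.T @ L), K.T)
          resid = (np.eye(n) - H) @ y
          return n * (resid @ resid) / np.trace(np.eye(n) - H) ** 2

      m = 80
      L2 = np.diff(np.eye(m), n=2, axis=0)       # second-derivative operator

      rng = np.random.default_rng(5)
      K = np.exp(-0.5 * ((np.arange(100)[:, None] - np.linspace(0, 99, m)) / 5.0)**2)
      x_true = np.exp(-0.5 * ((np.arange(m) - 40) / 6.0) ** 2)
      y = K @ x_true + 0.01 * rng.standard_normal(100)

      alphas = np.logspace(-3, 2, 30)
      best = min(alphas, key=lambda a: gcv_score(K, y, L2, a))
      x_rec = tikhonov(K, y, L2, best)           # recovered distance distribution
      print("alpha selected by GCV:", best)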

  11. Organization of the secure distributed computing based on multi-agent system

    NASA Astrophysics Data System (ADS)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Nowadays, the development of methods for distributed computing receives much attention, and one such method is the use of multi-agent systems. Distributed computing organized on conventional networked computers can experience security threats performed by computational processes. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with networked PCs used as computing nodes. The proposed multi-agent control system makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve large tasks by creating a distributed computing system. The agents on a computer network can configure the distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers connected to the network can be increased by connecting new computers to the system, which leads to an increase in overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computing. This organization of the distributed computing system reduces the problem solving time and increases the fault tolerance (vitality) of computing processes in a changing computing environment (dynamic change of the number of computers on the network). The developed multi-agent system also detects cases of falsification of the results of the distributed computation, which might otherwise lead to wrong decisions. In addition, the system checks and corrects wrong results.

  12. Estimating statistical uncertainty of Monte Carlo efficiency-gain in the context of a correlated sampling Monte Carlo code for brachytherapy treatment planning with non-normal dose distribution.

    PubMed

    Mukhopadhyay, Nitai D; Sampson, Andrew J; Deniz, Daniel; Alm Carlsson, Gudrun; Williamson, Jeffrey; Malusek, Alexandr

    2012-01-01

    Correlated sampling Monte Carlo methods can shorten computing times in brachytherapy treatment planning. Monte Carlo efficiency is typically estimated via efficiency gain, defined as the reduction in computing time by correlated sampling relative to conventional Monte Carlo methods when equal statistical uncertainties have been achieved. The determination of the efficiency gain uncertainty arising from random effects, however, is not a straightforward task, especially when the error distribution is non-normal. The purpose of this study is to evaluate the applicability of the F distribution and standardized uncertainty propagation methods (widely used in metrology to estimate uncertainty of physical measurements) for predicting confidence intervals about efficiency gain estimates derived from single Monte Carlo runs using fixed-collision correlated sampling in a simplified brachytherapy geometry. A bootstrap based algorithm was used to simulate the probability distribution of the efficiency gain estimates, and the shortest 95% confidence interval was estimated from this distribution. It was found that the corresponding relative uncertainty was as large as 37% for this particular problem. The uncertainty propagation framework predicted confidence intervals reasonably well; however, its main disadvantage was that uncertainties of input quantities had to be calculated in a separate run via a Monte Carlo method. The F distribution noticeably underestimated the confidence interval. These discrepancies were influenced by several photons with large statistical weights which made extremely large contributions to the scored absorbed dose difference. The mechanism of acquiring high statistical weights in the fixed-collision correlated sampling method was explained and a mitigation strategy was proposed. Copyright © 2011 Elsevier Ltd. All rights reserved.
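
    The bootstrap construction can be sketched in a few lines. The Python code below estimates an efficiency gain as a variance ratio of two synthetic heavy-tailed score samples and bootstraps a 95% interval; for simplicity it uses the percentile interval rather than the shortest interval of the paper, and it ignores the computing-time factor in the gain:

      import numpy as np

      rng = np.random.default_rng(6)
      # synthetic heavy-tailed per-history scores for the two estimators
      conv = rng.lognormal(mean=0.0, sigma=1.0, size=5000)
      corr = rng.lognormal(mean=0.0, sigma=0.3, size=5000)

      def gain(a, b):
          # efficiency gain as a ratio of sample variances
          return np.var(a, ddof=1) / np.var(b, ddof=1)

      point = gain(conv, corr)
      boot = np.empty(4000)
      for i in range(boot.size):
          ra = rng.choice(conv, size=conv.size, replace=True)
          rb = rng.choice(corr, size=corr.size, replace=True)
          boot[i] = gain(ra, rb)

      lo, hi = np.percentile(boot, [2.5, 97.5])   # percentile bootstrap interval
      print(f"gain = {point:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")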

  13. Estimating the extreme low-temperature event using nonparametric methods

    NASA Astrophysics Data System (ADS)

    D'Silva, Anisha

    This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
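
    A minimal version of the kernel-density step is easy to sketch. The Python code below uses synthetic winter temperatures, with a 120-day winter and a daily-probability convention that are illustrative assumptions rather than the thesis's exact definitions; it fits a Gaussian kernel density estimate and numerically inverts its CDF at the one-in-N daily probability:

      import numpy as np
      from scipy.stats import gaussian_kde

      rng = np.random.default_rng(7)
      temps = rng.normal(-5.0, 8.0, size=30 * 120)   # 30 winters x 120 days (synthetic)

      N = 20                                         # one-in-20-year event
      target = 1.0 / (N * 120)                       # daily non-exceedance probability

      kde = gaussian_kde(temps)                      # nonparametric density estimate
      grid = np.linspace(temps.min() - 20.0, temps.max(), 2000)
      cdf = np.cumsum(kde(grid)) * (grid[1] - grid[0])   # numerical CDF of the KDE

      threshold = grid[np.searchsorted(cdf, target)]
      print(f"one-in-{N} low temperature threshold: {threshold:.1f}")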

  14. Use of the ventricular propagated excitation model in the magnetocardiographic inverse problem for reconstruction of electrophysiological properties.

    PubMed

    Ohyu, Shigeharu; Okamoto, Yoshiwo; Kuriki, Shinya

    2002-06-01

    A novel magnetocardiographic inverse method for reconstructing the action potential amplitude (APA) and the activation time (AT) on the ventricular myocardium is proposed. This method is based on the propagated excitation model, in which the excitation is propagated through the ventricle with nonuniform height of the action potential. An assumption of a stepwise waveform for the transmembrane potential was introduced in the model. The spatial gradient of the transmembrane potential, which is defined by the APA and AT distributed in the ventricular wall, is used for the computation of a current source distribution. Based on this source model, the distributions of APA and AT are inversely reconstructed from the QRS interval of the magnetocardiogram (MCG) utilizing a maximum a posteriori approach. The proposed reconstruction method was tested through computer simulations. Stability of the method with respect to measurement noise was demonstrated. When the reference APA was provided as a uniform distribution, root-mean-square errors of the estimated APA were below 10 mV for MCG signal-to-noise ratios greater than or equal to 20 dB. Low-amplitude regions located at several sites in the reference APA distributions were correctly reproduced in the reconstructed APA distributions. The goal of our study is to develop a method for detecting myocardial ischemia through the depression of reconstructed APA distributions.

  15. Comment on RamaRao et al. [1995] and LaVenue et al. [1995]

    USGS Publications Warehouse

    Cooley, Richard L.; Hill, Mary C.

    2000-01-01

    A method for stochastic modeling of groundwater flow systems using a combination of pilot point parameterization and conditional simulation was presented by RamaRao et al. [1995] and LaVenue et al. [1995]. (We will collectively term these two papers RLMM and term the method developed in RLMM the CS method here.) RLMM (pp. 478-479) state that the CS method is intended to provide a frequency distribution of possible alternative spatial transmissivity T distributions that (1) are statistically similar to the observed T distribution, (2) are equally likely given the calibration data, and (3) closely reproduce the measured pressures. The frequency distribution of transmissivities is then used to form frequency distributions of derived functions, such as travel times, which are summarized in the form of uncertainty measures, such as (RLMM, p. 512) "confidence (or tolerance) intervals," on the actual values, which in the case of travel times, are values that could occur sometime in the future (RLMM, p. 513). Cooley [2000] analyzes the RLMM method using linearization and bootstrap theory and concludes that the method can yield accurate uncertainty estimates, but only under some limited circumstances. In this comment we use Cooley's analysis to critique the method. We also identify and discuss some statements made by RLMM about model calibration and their methodology that appear to be misleading. It is unusual to comment on a paper so long after publication. Subsequent work has expanded the method of RLMM, and the method was used advantageously in the testing documented by Zimmerman et al. [1998], so it is clear the method has significant strengths. We go back to the 1995 papers for this comment, however, because they display most clearly the methodological difficulties with which we are concerned.

  16. Real-time distributed fiber microphone based on phase-OTDR.

    PubMed

    Franciscangelis, Carolina; Margulis, Walter; Kjellberg, Leif; Soderquist, Ingemar; Fruett, Fabiano

    2016-12-26

    The use of an optical fiber as a real-time distributed microphone is demonstrated employing a phase-OTDR with direct detection. The method comprises a sample-and-hold circuit capable of both tuning the receiver to an arbitrary section of the fiber considered of interest and recovering in real time the detected acoustic wave. The system allows listening to the sound of a sinusoidal disturbance with variable frequency, music and human voice with ~60 cm of spatial resolution through a 300 m long optical fiber.

  17. Distributed traffic signal control using fuzzy logic

    NASA Technical Reports Server (NTRS)

    Chiu, Stephen

    1992-01-01

    We present a distributed approach to traffic signal control, where the signal timing parameters at a given intersection are adjusted as functions of the local traffic condition and of the signal timing parameters at adjacent intersections. Thus, the signal timing parameters evolve dynamically using only local information to improve traffic flow. This distributed approach provides for a fault-tolerant, highly responsive traffic management system. The signal timing at an intersection is defined by three parameters: cycle time, phase split, and offset. We use fuzzy decision rules to adjust these three parameters based only on local information. The amount of change in the timing parameters during each cycle is limited to a small fraction of the current parameters to ensure smooth transition. We show the effectiveness of this method through simulation of the traffic flow in a network of controlled intersections.

  18. Estimation of Parameters from Discrete Random Nonstationary Time Series

    NASA Astrophysics Data System (ADS)

    Takayasu, H.; Nakamura, T.

    For the analysis of nonstationary stochastic time series we introduce a formulation to estimate the underlying time-dependent parameters. This method is designed for random events with small numbers that are out of the applicability range of the normal distribution. The method is demonstrated for numerical data generated by a known system, and applied to time series of traffic accidents, batting average of a baseball player and sales volume of home electronics.

  19. Spectral decomposition of seismic data with reassigned smoothed pseudo Wigner-Ville distribution

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoyang; Liu, Tianyou

    2009-07-01

    Seismic signals are nonstationary mainly due to absorption and attenuation of seismic energy in strata. For spectral decomposition of seismic data, the conventional method using the short-time Fourier transform (STFT) limits temporal and spectral resolution by a predefined window length. The continuous-wavelet transform (CWT) uses dilation and translation of a wavelet to produce a time-scale map; however, the wavelets utilized should be orthogonal in order to obtain a satisfactory resolution. The less commonly applied Wigner-Ville distribution (WVD), though superior in energy concentration, suffers from cross-term interference (CTI) when signals are multi-component. To reduce the impact of CTI, the Cohen class of distributions uses a kernel function as a low-pass filter; nevertheless, this also weakens the energy concentration of the auto-terms. In this paper, we employ the smoothed pseudo Wigner-Ville distribution (SPWVD) with a Gaussian kernel function to reduce CTI in the time and frequency domains, and then reassign the values of the SPWVD (the reassigned SPWVD, RSPWVD) according to the center of gravity of the considered energy region, so that the concentration of the distribution is maintained. We apply the method to a multi-component synthetic seismic record and compare the result with STFT and CWT spectra. Two field examples reveal that the RSPWVD can potentially be applied to detect low-frequency shadows caused by hydrocarbons and to delineate the spatial distribution of anomalous geological bodies more precisely.
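
    The core SPWVD computation can be sketched as follows. This is a generic, minimal implementation assuming Gaussian smoothing windows in both time and lag; the window lengths, FFT size, and test signal are illustrative assumptions, and the reassignment step is omitted. It is not the authors' code.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from scipy.signal.windows import gaussian

    def spwvd(x, nfft=256, tw=15, fw=63):
        """Smoothed pseudo Wigner-Ville distribution (frequency x time).

        tw: length of the Gaussian time-smoothing window g
        fw: length of the Gaussian lag window h (frequency smoothing)
        """
        z = hilbert(np.asarray(x, dtype=float))       # analytic signal
        N = len(z)
        gm, hm = tw // 2, fw // 2
        g = gaussian(tw, tw / 6.0)
        g /= g.sum()                                  # time-smoothing window
        h = gaussian(fw, fw / 6.0)                    # lag window
        zp = np.concatenate([np.zeros(gm + hm), z, np.zeros(gm + hm)])
        W = np.zeros((nfft, N))
        for n in range(N):
            c = n + gm + hm                           # position of time n in zp
            K = np.zeros(nfft, dtype=complex)
            for k, tau in enumerate(range(-hm, hm + 1)):
                # local autocorrelation at lag tau, smoothed over time by g
                seg = zp[c - gm + tau:c + gm + 1 + tau] * \
                      np.conj(zp[c - gm - tau:c + gm + 1 - tau])
                K[tau % nfft] = h[k] * np.dot(g, seg)
            # FFT over the lag variable gives the frequency axis
            W[:, n] = np.fft.fft(K).real
        return W

    # two tones: cross-terms between them are suppressed by the smoothing
    t = np.arange(512) / 512.0
    sig = np.sin(2 * np.pi * 60 * t) + np.sin(2 * np.pi * 150 * t)
    print(spwvd(sig).shape)
    ```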

  20. Spherical Harmonic Analysis of Particle Velocity Distribution Function: Comparison of Moments and Anisotropies using Cluster Data

    NASA Technical Reports Server (NTRS)

    Gurgiolo, Chris; Vinas, Adolfo F.

    2009-01-01

    This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high-angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.

  1. Numerical simulation code for self-gravitating Bose-Einstein condensates

    NASA Astrophysics Data System (ADS)

    Madarassy, Enikő J. M.; Toth, Viktor T.

    2013-04-01

    We completed the development of simulation code that is designed to study the behavior of a conjectured dark matter galactic halo that is in the form of a Bose-Einstein condensate (BEC). The BEC is described by the Gross-Pitaevskii equation, which can be solved numerically using the Crank-Nicolson method. The gravitational potential, in turn, is described by Poisson's equation, which can be solved using the relaxation method. Our code combines these two methods to study the time evolution of a self-gravitating BEC. The inefficiency of the relaxation method is balanced by the fact that in subsequent time iterations, previously computed values of the gravitational field serve as very good initial estimates. The code is robust (as evidenced by its stability on coarse grids) and efficient enough to simulate the evolution of a system over the course of 10⁹ years using a finer (100×100×100) spatial grid, in less than a day of processor time on a contemporary desktop computer.
    Catalogue identifier: AEOR_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOR_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 5248
    No. of bytes in distributed program, including test data, etc.: 715402
    Distribution format: tar.gz
    Programming language: C++ or FORTRAN
    Computer: PCs or workstations
    Operating system: Linux or Windows
    Classification: 1.5
    Nature of problem: Simulation of a self-gravitating Bose-Einstein condensate by simultaneous solution of the Gross-Pitaevskii and Poisson equations in three dimensions.
    Solution method: The Gross-Pitaevskii equation is solved numerically using the Crank-Nicolson method; Poisson's equation is solved using the relaxation method. The time evolution of the system is governed by the Gross-Pitaevskii equation; the solution of Poisson's equation at each time step is used as an initial estimate for the next time step, which dramatically increases the efficiency of the relaxation method.
    Running time: Depends on the chosen size of the problem. On a typical personal computer, a 100×100×100 grid can be solved with a time span of 10 Gyr in approx. a day of running time.
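
    A minimal one-dimensional sketch of the Crank-Nicolson step for the Gross-Pitaevskii equation is shown below (the actual code is 3D and also solves Poisson's equation by relaxation, which is omitted here). The units (hbar = m = 1), the harmonic trap, and all parameter values are illustrative assumptions; freezing the nonlinear term at the current time level is a common semi-implicit simplification, not necessarily the authors' exact scheme.

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    def gpe_cn_step(psi, V, dx, dt, g):
        """One Crank-Nicolson step of the 1D Gross-Pitaevskii equation.

        The nonlinearity g|psi|^2 is frozen at the current time level.
        """
        n = len(psi)
        Veff = V + g * np.abs(psi) ** 2
        off = -1.0 / (2 * dx**2)                 # off-diagonal of the Hamiltonian
        diag = 1.0 / dx**2 + Veff                # its diagonal
        # (I + i dt/2 H) psi_new = (I - i dt/2 H) psi_old
        b = psi - 0.5j * dt * (diag * psi)
        b[:-1] -= 0.5j * dt * off * psi[1:]
        b[1:] -= 0.5j * dt * off * psi[:-1]
        ab = np.zeros((3, n), dtype=complex)     # banded form of the left matrix
        ab[0, 1:] = 0.5j * dt * off              # superdiagonal
        ab[1, :] = 1.0 + 0.5j * dt * diag        # diagonal
        ab[2, :-1] = 0.5j * dt * off             # subdiagonal
        return solve_banded((1, 1), ab, b)

    # a Gaussian blob in a harmonic trap, evolved for a few steps
    x = np.linspace(-10, 10, 512); dx = x[1] - x[0]
    psi = np.exp(-x**2).astype(complex)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
    for _ in range(100):
        psi = gpe_cn_step(psi, 0.5 * x**2, dx, dt=1e-3, g=1.0)
    print(np.sum(np.abs(psi)**2) * dx)           # norm stays ~1 (unitary scheme)
    ```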

  2. Determining the multi-scale hedge ratios of stock index futures using the lower partial moments method

    NASA Astrophysics Data System (ADS)

    Dai, Jun; Zhou, Haigang; Zhao, Shaoquan

    2017-01-01

    This paper considers a multi-scale futures hedge strategy that minimizes lower partial moments (LPM). To do this, wavelet analysis is adopted to decompose time series data into different components. Next, different parametric estimation methods with known distributions are applied to calculate the LPM of hedged portfolios, which is the key to determining multi-scale hedge ratios over different time scales. These parametric methods are then compared with the prevailing nonparametric kernel metric method. Empirical results indicate that in the China Securities Index 300 (CSI 300) index futures and spot markets, hedge ratios and hedge efficiency estimated by the nonparametric kernel metric method are inferior to those estimated by parametric hedging models based on the distributional features of the series. In addition, if minimum LPM is selected as the hedge target, the hedging period, degree of risk aversion, and target return each affect the multi-scale hedge ratios and hedge efficiency.
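
    The lower partial moment itself is straightforward to compute; below is a minimal sketch of a minimum-LPM hedge ratio search on one series. The wavelet step (which would produce one such series per time scale), the grid search, and all parameter values are illustrative assumptions, not the paper's estimation procedure.

    ```python
    import numpy as np

    def lpm(returns, target=0.0, order=2):
        """Lower partial moment: mean of (target - r)^order over shortfalls only."""
        shortfall = np.maximum(target - np.asarray(returns), 0.0)
        return np.mean(shortfall ** order)

    def min_lpm_hedge_ratio(spot, futures, target=0.0, order=2):
        """Grid-search the hedge ratio h minimizing the LPM of spot - h * futures."""
        grid = np.linspace(0.0, 2.0, 401)
        risks = [lpm(spot - h * futures, target, order) for h in grid]
        return grid[int(np.argmin(risks))]

    # toy usage with simulated correlated returns
    rng = np.random.default_rng(1)
    f = rng.normal(0, 0.01, 2000)
    s = 0.9 * f + rng.normal(0, 0.004, 2000)
    print(min_lpm_hedge_ratio(s, f))   # close to 0.9
    ```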

  3. An optimal control strategy for hybrid actuator systems: Application to an artificial muscle with electric motor assist.

    PubMed

    Ishihara, Koji; Morimoto, Jun

    2018-03-01

    Humans use multiple muscles to generate joint movements such as an elbow motion. With multiple lightweight and compliant actuators, joint movements can also be efficiently generated. Similarly, robots can use multiple actuators to efficiently generate a one degree of freedom movement. For this movement, the desired joint torque must be properly distributed to each actuator. One approach to this torque distribution problem is an optimal control method. However, solving the optimal control problem at each control time step has not been deemed practical due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivides the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and applied the proposed method to our pneumatic-electric hybrid actuator system. Our method derived a torque distribution strategy for the hybrid actuator by overcoming the difficulty of solving real-time optimal control problems. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  4. Multistep Lattice-Voxel method utilizing lattice function for Monte-Carlo treatment planning with pixel based voxel model.

    PubMed

    Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K

    2011-12-01

    Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX allows building a precise voxel model consisting of pixel-based voxel cells at a scale of 0.4×0.4×2.0 mm³ in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, such miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by applying the lattice function repeatedly. To verify the performance of calculations with this modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method reduced calculation time substantially while preserving the high accuracy of the dose estimation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. Estimation of time-delayed mutual information and bias for irregularly and sparsely sampled time-series

    PubMed Central

    Albers, D. J.; Hripcsak, George

    2012-01-01

    A method to estimate the time-dependent correlation via an empirical bias estimate of the time-delayed mutual information for a time-series is proposed. In particular, the bias of the time-delayed mutual information is shown to often be equivalent to the mutual information between two distributions of points from the same system separated by infinite time. Thus intuitively, estimation of the bias is reduced to estimation of the mutual information between distributions of data points separated by large time intervals. The proposed bias estimation techniques are shown to work for Lorenz equations data and glucose time series data of three patients from the Columbia University Medical Center database. PMID:22536009
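
    A minimal sketch of the underlying idea follows, assuming a simple histogram estimator of mutual information: the MI between points separated by a very large delay approximates the bias floor described above. The AR(1) test series, bin count, and lags are illustrative assumptions, not the authors' estimator.

    ```python
    import numpy as np

    def delayed_mutual_information(x, lag, bins=16):
        """Histogram estimate of I(x_t ; x_{t+lag}) in nats."""
        a, b = x[:-lag], x[lag:]
        pxy, _, _ = np.histogram2d(a, b, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)       # marginal of x_t
        py = pxy.sum(axis=0, keepdims=True)       # marginal of x_{t+lag}
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # stationary AR(1) series: dependence decays, so a huge lag isolates the bias
    rng = np.random.default_rng(2)
    e = rng.standard_normal(6000)
    x = np.empty(6000); x[0] = e[0]
    for t in range(1, 6000):
        x[t] = 0.95 * x[t - 1] + e[t]

    print(delayed_mutual_information(x, lag=10))     # genuine correlation + bias
    print(delayed_mutual_information(x, lag=4000))   # ~bias floor of the estimator
    ```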

  6. Distributed Processing of Projections of Large Datasets: A Preliminary Study

    USGS Publications Warehouse

    Maddox, Brian G.

    2004-01-01

    Modern information needs have resulted in very large amounts of data being used in geographic information systems. Problems arise, however, when trying to project these data with reasonable speed and accuracy. Current single-threaded methods can suffer from one of two problems: fast projection with poor accuracy, or accurate projection with long processing time. A possible solution may be to combine accurate interpolation methods and distributed processing algorithms to quickly and accurately convert digital geospatial data between coordinate systems. Modern technology has made it possible to construct systems, such as Beowulf clusters, at low cost and thereby provide access to supercomputer-class performance. Combining these techniques may result in the ability to use large amounts of geographic data in time-critical situations.

  7. Early efforts in modeling the incubation period of infectious diseases with an acute course of illness.

    PubMed

    Nishiura, Hiroshi

    2007-05-11

    The incubation period of infectious diseases, the time from infection with a microorganism to onset of disease, is directly relevant to prevention and control. Since explicit models of the incubation period enhance our understanding of the spread of disease, previous classic studies were revisited, focusing on the modeling methods employed and paying particular attention to relatively unknown historical efforts. The earliest study on the incubation period of pandemic influenza was published in 1919, providing estimates of the incubation period of Spanish flu using the daily incidence on ships departing from several ports in Australia. Although the study explicitly dealt with an unknown time of exposure, the assumed periods of exposure, which had an equal probability of infection, were too long and thus likely resulted in slight underestimates of the incubation period. After the suggestion that the incubation period follows a lognormal distribution, Japanese epidemiologists extended this assumption to estimates of the time of exposure during a point source outbreak. Although the reason why the incubation period of acute infectious diseases tends to show a right-skewed distribution has been explored several times, the validity of the lognormal assumption is yet to be fully clarified. At present, various different distributions are assumed, and the lack of validity of the lognormal assumption is particularly apparent in the case of slowly progressing diseases. The present paper indicates that (1) analysis using well-defined short periods of exposure with appropriate statistical methods is critical when the exact time of exposure is unknown, and (2) when assuming a specific distribution for the incubation period, comparisons using different distributions are needed in addition to estimations using different datasets, analyses of the determinants of the incubation period, and an understanding of the underlying disease mechanisms.

  8. An Improved Time-Frequency Analysis Method in Interference Detection for GNSS Receivers

    PubMed Central

    Sun, Kewen; Jin, Tian; Yang, Dongkai

    2015-01-01

    In this paper, an improved joint time-frequency (TF) analysis method based on a reassigned smoothed pseudo Wigner–Ville distribution (RSPWVD) is proposed for interference detection in Global Navigation Satellite System (GNSS) receivers. In the RSPWVD, a two-dimensional low-pass smoothing function is introduced to eliminate the cross-terms present in the quadratic TF distribution, and at the same time, the reassignment method is adopted to improve the TF concentration properties of the auto-terms of the signal components. The proposed interference detection method is evaluated by experiments on GPS L1 signals in disturbing scenarios and compared to state-of-the-art interference detection approaches. The analysis results show that the proposed technique effectively overcomes the cross-terms problem while preserving good TF localization properties, enhancing the interference detection performance of GNSS receivers, particularly in jamming environments. PMID:25905704

  9. Comparison of time-frequency distribution techniques for analysis of spinal somatosensory evoked potential.

    PubMed

    Hu, Y; Luk, K D; Lu, W W; Holmes, A; Leong, J C

    2001-05-01

    Spinal somatosensory evoked potential (SSEP) has been employed to monitor the integrity of the spinal cord during surgery. To detect both temporal and spectral changes in SSEP waveforms, an investigation of the application of time-frequency analysis (TFA) techniques was conducted. SSEP signals from 30 scoliosis patients were analysed using different techniques: short-time Fourier transform (STFT), Wigner-Ville distribution (WVD), Choi-Williams distribution (CWD), cone-shaped distribution (CSD) and adaptive spectrogram (ADS). The time-frequency distributions (TFD) computed using these methods were assessed and compared with each other. WVD, ADS, CSD and CWD showed better resolution than STFT. Comparing normalised peak widths, CSD showed the sharpest peak width (0.13 ± 0.1) in the frequency dimension, and a mean peak width of 0.70 ± 0.12 in the time dimension. Both WVD and CWD produced cross-term interference, distorting the TFA distribution, but this was not seen with CSD and ADS. CSD appeared to give a lower mean peak power bias (10.3% ± 6.2%) than ADS (41.8% ± 19.6%). Application of the CSD algorithm showed both good resolution and accurate spectrograms, and is therefore recommended as the most appropriate TFA technique for the analysis of SSEP signals.

  10. Fast determination of the spatially distributed photon fluence for light dose evaluation of PDT

    NASA Astrophysics Data System (ADS)

    Zhao, Kuanxin; Chen, Weiting; Li, Tongxin; Yan, Panpan; Qin, Zhuanping; Zhao, Huijuan

    2018-02-01

    Photodynamic therapy (PDT) has shown the advantages of noninvasiveness and high efficiency in the treatment of early-stage skin cancer. Rapid and accurate determination of the spatially distributed photon fluence in turbid tissue is essential for the dosimetry evaluation of PDT. It is generally known that photon fluence can be accurately obtained by Monte Carlo (MC) methods, but too much time is consumed, especially for complex light source modes or online real-time dosimetry evaluation of PDT. In this work, a method to rapidly calculate the spatially distributed photon fluence in a turbid medium is proposed, implementing classical perturbation and iteration theory on mesh Monte Carlo (MMC). In the proposed method, the photon fluence is obtained by superposing a perturbed and iterative solution, caused by the defects in the turbid medium, onto an unperturbed solution for the background medium, so that repetitive MMC simulations can be avoided. To validate the method, a simulation of a non-melanoma skin cancer model is carried out. The simulation results show that the photon fluence can be obtained quickly and correctly by the perturbation algorithm.

  11. On the efficacy of procedures to normalize Ex-Gaussian distributions

    PubMed Central

    Marmolejo-Ramos, Fernando; Cousineau, Denis; Benites, Luis; Maehara, Rocío

    2015-01-01

    Reaction time (RT) is one of the most common types of measure used in experimental psychology. Its distribution is not normal (Gaussian) but resembles a convolution of normal and exponential distributions (Ex-Gaussian). One of the major assumptions in parametric tests (such as ANOVAs) is that variables are normally distributed; hence, it is acknowledged by many that the normality assumption is not met for RT data. This paper presents different procedures to normalize data sampled from an Ex-Gaussian distribution in such a way that they are suitable for parametric tests based on the normality assumption. Using simulation studies, various outlier elimination and transformation procedures were tested against the level of normality they provide. The results suggest that the transformation methods are better than elimination methods at normalizing positively skewed data, and that the more skewed the distribution, the more effective the transformation methods are at normalizing such data. Specifically, the transformation with parameter lambda = -1 leads to the best results. PMID:25709588
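
    A minimal sketch of the winning transformation is shown below, assuming (our assumption, not stated in the abstract) that lambda = -1 refers to the Box-Cox family, where it is essentially a reciprocal transform. The Ex-Gaussian parameters are arbitrary.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    # Ex-Gaussian RTs: Gaussian(mu, sigma) plus an independent Exponential(tau)
    rt = rng.normal(350.0, 25.0, 20000) + rng.exponential(150.0, 20000)

    # Box-Cox with lambda = -1: (x^-1 - 1)/(-1), a monotone reciprocal-type map
    transformed = stats.boxcox(rt, lmbda=-1.0)

    # right skew should shrink markedly (and may slightly overshoot to negative)
    print(stats.skew(rt), stats.skew(transformed))
    ```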

  12. Real-time determination of the efficacy of residual disinfection to limit wastewater contamination in a water distribution system using filtration-based luminescence.

    PubMed

    Lee, Jiyoung; Deininger, Rolf A

    2010-05-01

    Water distribution systems can be vulnerable to microbial contamination through cross-connections, wastewater backflow, the intrusion of soiled water after a loss of pressure resulting from an electricity blackout or natural disaster, or intentional contamination of the system in a bioterrorism event. The most urgent matter a water treatment utility would face in this situation is detecting the presence and extent of a contamination event in real time, so that immediate action can be taken to mitigate the problem. The currently approved microbiological detection methods are culture-based plate count methods, which require 1 to 7 days of incubation, far too long to be useful for the protection of public health. This study was designed to simulate wastewater intrusion in a water distribution system. The objectives were two-fold: (1) real-time detection of water contamination, and (2) investigation of the ability of drinking water systems to suppress the contamination with secondary disinfectant residuals (chlorine and chloramine). Drinking water contamination resulting from a wastewater addition was determined by a filtration-based luminescence assay. The contamination was detected by the luminescence method within 5 minutes. The signal amplification attributed to wastewater contamination was clear: a 102-fold signal increase. After 1 hour, chlorinated water could inactivate 98.8% of the bacterial contaminant, while chloraminated water inactivated 77.2%.

  13. A Comparison of Pseudo-Maximum Likelihood and Asymptotically Distribution-Free Dynamic Factor Analysis Parameter Estimation in Fitting Covariance Structure Models to Block-Toeplitz Matrices Representing Single-Subject Multivariate Time-Series.

    ERIC Educational Resources Information Center

    Molenaar, Peter C. M.; Nesselroade, John R.

    1998-01-01

    Pseudo-Maximum Likelihood (p-ML) and Asymptotically Distribution Free (ADF) estimation methods for estimating dynamic factor model parameters within a covariance structure framework were compared through a Monte Carlo simulation. Both methods appear to give consistent model parameter estimates, but only ADF gives standard errors and chi-square…

  14. Numerical Investigation of the Ability of Salt Tracers to Represent the Residence Time Distribution of Fluidized Catalytic Cracking Particles

    DOE PAGES

    Lu, Liqiang; Gao, Xi; Li, Tingwen; ...

    2017-11-02

    For a long time, salt tracers have been used to measure the residence time distribution (RTD) of fluidized catalytic cracking (FCC) particles. However, due to limitations in experimental measurements and simulation methods, the ability of salt tracers to faithfully represent RTDs has never been directly investigated. Our current simulation results, obtained using coarse-grained computational fluid dynamics coupled with the discrete element method (CFD-DEM) with filtered drag models, show that the residence time of salt tracers with the same terminal velocity as FCC particles is slightly larger than that of the FCC particles. This research also demonstrates the ability of filtered drag models to predict the correct RTD curve for FCC particles, while the homogeneous drag model may only be used in the dilute riser flow of Geldart type B particles. The RTD of large-scale reactors can then be efficiently investigated with our proposed numerical method as well as with the old-fashioned salt tracer technology.

  15. Statistical analysis of water-quality data containing multiple detection limits II: S-language software for nonparametric distribution modeling and hypothesis testing

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2007-01-01

    Analysis of low concentrations of trace contaminants in environmental media often results in left-censored data that are below some limit of analytical precision. Interpretation of values becomes complicated when there are multiple detection limits in the data, perhaps as a result of changing analytical precision over time. Parametric and semi-parametric methods, such as maximum likelihood estimation and robust regression on order statistics, can be employed to model distributions of multiply censored data and provide estimates of summary statistics. However, these methods are based on assumptions about the underlying distribution of the data. Nonparametric methods provide an alternative that does not require such assumptions. A standard nonparametric method for estimating summary statistics of multiply censored data is the Kaplan-Meier (K-M) method. This method has seen widespread usage in the medical sciences within a general framework termed "survival analysis", where it is employed with right-censored time-to-failure data. However, K-M methods are equally valid for the left-censored data common in the geosciences. Our S-language software provides an analytical framework based on K-M methods that is tailored to the needs of the earth and environmental sciences community. This includes routines for the generation of empirical cumulative distribution functions, prediction or exceedance probabilities, and related confidence limits computation. Additionally, our software contains K-M-based routines for nonparametric hypothesis testing among an unlimited number of grouping variables. A primary characteristic of K-M methods is that they do not perform extrapolation or interpolation. Thus, these routines cannot be used to model statistics beyond the observed data range or when linear interpolation is desired. For such applications, the aforementioned parametric and semi-parametric methods must be used.
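
    The standard trick for applying K-M to left-censored concentrations is to flip them into right-censored "survival times". A minimal hand-rolled sketch follows (not the authors' S-language routines); the toy data are invented.

    ```python
    import numpy as np

    def km_left_censored(values, censored):
        """Kaplan-Meier CDF estimate for left-censored (nondetect) data.

        values:   concentrations, or detection limits where censored is True
        censored: True marks a nondetect reported as "< detection limit"
        """
        values = np.asarray(values, float)
        censored = np.asarray(censored, bool)
        M = values.max() + 1.0
        t = M - values                        # flip: left-censored -> right-censored
        order = np.lexsort((censored, t))     # at ties, events before censorings
        t, event = t[order], ~censored[order]
        n = len(t)
        surv, cdf = 1.0, []
        for i in range(n):
            if event[i]:
                surv *= 1.0 - 1.0 / (n - i)   # K-M factor at each detected value
                cdf.append((M - t[i], surv))  # flip back; S_flip estimates F(conc)
        return cdf[::-1]                      # ascending concentration

    print(km_left_censored([1.0, 2.0, 0.5, 3.0, 0.5],
                           [False, False, True, False, True]))
    ```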

  16. A Compressed Sensing Based Method for Reducing the Sampling Time of A High Resolution Pressure Sensor Array System

    PubMed Central

    Sun, Chenglu; Li, Wei; Chen, Wei

    2017-01-01

    For extracting the pressure distribution image and respiratory waveform unobtrusively and comfortably, we proposed a smart mat which utilizes a flexible pressure sensor array, printed electrodes, and a novel soft seven-layer structure to monitor this physiological information. However, obtaining a high-resolution pressure distribution and a more accurate respiratory waveform requires more time to acquire the signals of all the pressure sensors embedded in the smart mat. To reduce the sampling time while keeping the same resolution and accuracy, a novel method based on compressed sensing (CS) theory was proposed. By utilizing the CS-based method, the sampling time can be reduced by 40% by acquiring only about one-third of the original sampling points. Several experiments were then carried out to validate the performance of the CS-based method. While less than one-third of the original sampling points were measured, the correlation coefficient between the reconstructed respiratory waveform and the original waveform reached 0.9078, and the accuracy of the respiratory rate (RR) extracted from the reconstructed waveform reached 95.54%. The experimental results demonstrated that the novel method fits the high-resolution smart mat system and is a viable option for reducing the sampling time of the pressure sensor array. PMID:28796188
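
    A minimal sketch of the general idea follows, using orthogonal matching pursuit and a DCT sparsity basis. The abstract does not specify the sensing matrix, basis, or solver used for the mat, so everything here (including the synthetic sparse signal) is an illustrative assumption.

    ```python
    import numpy as np
    from scipy.fft import idct
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(4)
    n = 256
    coefs = np.zeros(n)
    coefs[[5, 12, 40]] = [1.0, 0.7, 0.4]          # signal 3-sparse in the DCT domain
    signal = idct(coefs, norm='ortho')            # stand-in for one sensor's waveform

    m = n // 3                                    # keep roughly one third of the samples
    rows = np.sort(rng.choice(n, m, replace=False))
    y = signal[rows]                              # the reduced set of measurements

    Psi = idct(np.eye(n), axis=0, norm='ortho')   # columns = inverse-DCT basis vectors
    A = Psi[rows, :]                              # sensing matrix: selected rows

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10,
                                    fit_intercept=False).fit(A, y)
    recon = Psi @ omp.coef_

    print(np.max(np.abs(recon - signal)))         # near zero: full signal recovered
    ```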

  17. Benefits of polidocanol endovenous microfoam (Varithena®) compared with physician-compounded foams

    PubMed Central

    Carugo, Dario; Ankrett, Dyan N; Zhao, Xuefeng; Zhang, Xunli; Hill, Martyn; O’Byrne, Vincent; Hoad, James; Arif, Mehreen; Wright, David DI

    2015-01-01

    Objective: To compare foam bubble size and bubble size distribution, stability, and degradation rate of commercially available polidocanol endovenous microfoam (Varithena®) and physician-compounded foams using a number of laboratory tests.
    Methods: Foam properties of polidocanol endovenous microfoam and physician-compounded foams were measured and compared using a glass-plate method and a Sympatec QICPIC image analysis method to measure bubble size and bubble size distribution, Turbiscan™ LAB for foam half-time and drainage, and a novel biomimetic vein model to measure foam stability. Physician-compounded foams composed of polidocanol and room air, CO2, or mixtures of oxygen and carbon dioxide (O2:CO2) were generated by different methods.
    Results: Polidocanol endovenous microfoam was found to have a narrow bubble size distribution with no large (>500 µm) bubbles. Physician-compounded foams made with the Tessari method had broader bubble size distribution and large bubbles, which have an impact on foam stability. Polidocanol endovenous microfoam had a lower degradation rate than any physician-compounded foams, including foams made using room air (p < 0.035). The same result was obtained at different liquid to gas ratios (1:4 and 1:7) for physician-compounded foams. In all tests performed, CO2 foams were the least stable and different O2:CO2 mixtures had intermediate performance. In the biomimetic vein model, polidocanol endovenous microfoam had the slowest degradation rate and longest calculated dwell time, which represents the length of time the foam is in contact with the vein, almost twice that of physician-compounded foams using room air and eight times better than physician-compounded foams prepared using equivalent gas mixes.
    Conclusion: Bubble size, bubble size distribution and stability of various sclerosing foam formulations show that polidocanol endovenous microfoam results in better overall performance compared with physician-compounded foams. Polidocanol endovenous microfoam offers better stability and cohesive properties in a biomimetic vein model compared to physician-compounded foams. Polidocanol endovenous microfoam, which is indicated in the United States for treatment of great saphenous vein system incompetence, provides clinicians with a consistent product with enhanced handling properties. PMID:26036246

  18. Two Person Zero-Sum Semi-Markov Games with Unknown Holding Times Distribution on One Side: A Discounted Payoff Criterion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minjarez-Sosa, J. Adolfo, E-mail: aminjare@gauss.mat.uson.mx; Luque-Vasquez, Fernando

    This paper deals with two person zero-sum semi-Markov games with a possibly unbounded payoff function, under a discounted payoff criterion. Assuming that the distribution of the holding times H is unknown for one of the players, we combine suitable methods of statistical estimation of H with control procedures to construct an asymptotically discount optimal pair of strategies.

  19. [Three-dimensional stress analysis of periodontal ligament of mandible incisors fixed bridge abutments under dynamic loads by finite element method].

    PubMed

    Ma, Da; Tang, Liang; Pan, Yan-Huan

    2007-12-01

    A three-dimensional finite element method was used to analyze the stress and strain distributions of the periodontal ligament of abutments under dynamic loads. Finite element analysis was performed on the model under dynamic loads in vertical and oblique directions, and the stress and strain distributions and stress-time curves were analyzed to study the biomechanical behavior of the periodontal ligament of the abutments. The stress and strain distributions of the periodontal ligament under dynamic load were the same as under static load, but the maximum stress and strain decreased markedly, by 60%-75%. The periodontal ligament showed time-dependent mechanical behavior, and some residual stress remained in the ligament after one mastication period. The stress-free time under oblique load was shorter than under vertical load. In summary, the maximum stress and strain decrease markedly under dynamic loads; the periodontal ligament shows time-dependent mechanical behavior during one mastication cycle; and some residual stress is left after one mastication period, its level being related to the magnitude and direction of the loads. The direction of the applied load is an important factor affecting the stress distribution and the accumulation and release of stress in the abutment periodontal ligament.

  20. Conceptual framework for model-based analysis of residence time distribution in twin-screw granulation.

    PubMed

    Kumar, Ashish; Vercruysse, Jurgen; Vanhoorne, Valérie; Toiviainen, Maunu; Panouillot, Pierre-Emmanuel; Juuti, Mikko; Vervaet, Chris; Remon, Jean Paul; Gernaey, Krist V; De Beer, Thomas; Nopens, Ingmar

    2015-04-25

    Twin-screw granulation is a promising continuous alternative to traditional batchwise wet granulation processes. The twin-screw granulator (TSG) screws consist of transport and kneading element modules, and granulation is therefore governed to a large extent by the residence time distribution within each module, where different granulation rate processes dominate. Currently, experimental data are used to determine the residence time distributions. In this study, a conceptual model based on classical chemical engineering methods is proposed to better understand and simulate the residence time distribution in a TSG. The experimental data were compared with the most suitable proposed conceptual model to estimate the model parameters and to analyse and predict the effects of changes in the number of kneading discs and their stagger angle, screw speed, and powder feed rate on residence time. The study established that the kneading block in the screw configuration acts as a plug-flow zone inside the granulator. Furthermore, it was found that a balance between the throughput force and conveying rate is required to obtain good axial mixing inside the twin-screw granulator. Although the granulation behaviour is different for other excipients, the experimental data collection and modelling methods applied in this study are generic and can be adapted to other excipients. Copyright © 2015 Elsevier B.V. All rights reserved.
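
    A generic sketch of the kind of compartment-model fit described above: a plug-flow delay (the kneading block) in series with a cascade of stirred tanks, fitted to a measured residence time distribution. The specific model structure, the synthetic data, and the starting values are illustrative assumptions, not the paper's calibrated model.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import gammaln

    def rtd_model(t, tau_p, tau_m, n):
        """Plug-flow delay tau_p followed by n stirred tanks (total mean tau_m)."""
        ts = np.maximum(t - tau_p, 1e-12)
        theta = tau_m / n                       # mean residence time per tank
        logE = (n - 1) * np.log(ts) - ts / theta - n * np.log(theta) - gammaln(n)
        E = np.exp(logE)
        E[t <= tau_p] = 0.0                     # nothing exits before the delay
        return E

    # hypothetical tracer-response data (replace with a measured E(t) curve)
    t = np.linspace(0, 60, 200)
    true = rtd_model(t, 5.0, 12.0, 4.2)
    data = true + np.random.default_rng(5).normal(0, 0.002, t.size)

    popt, _ = curve_fit(rtd_model, t, data, p0=[3.0, 10.0, 3.0],
                        bounds=([0, 0.1, 1], [30, 60, 50]))
    print(popt)   # recovered delay, mean mixing time, tank number (non-integer ok)
    ```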

  1. Biased random key genetic algorithm with insertion and gender selection for capacitated vehicle routing problem with time windows

    NASA Astrophysics Data System (ADS)

    Rochman, Auliya Noor; Prasetyo, Hari; Nugroho, Munajat Tri

    2017-06-01

    The Vehicle Routing Problem (VRP) often arises when manufacturers need to distribute their products to customers/outlets. The distribution process is typically restricted by the capacity of the vehicle and the working hours at the distributor. This type of VRP is also known as the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). A Biased Random Key Genetic Algorithm (BRKGA) was designed and coded in MATLAB to solve a CVRPTW case of soft drink distribution. The standard BRKGA was then modified by applying chromosome insertion into the initial population and defining chromosome gender for parents undergoing the crossover operation. The performance of the resulting algorithms was compared to a heuristic procedure for the soft drink distribution case. Two findings emerge: (1) the total distribution cost of BRKGA with insertion (BRKGA-I) yields a cost saving of 39% compared to the total cost of the heuristic method; (2) BRKGA with gender selection (BRKGA-GS) also improves on the heuristic method, although it tends to yield worse results than the standard BRKGA.
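
    The distinguishing piece of a BRKGA for routing is the decoder that maps a chromosome of random keys in [0, 1) to routes; the genetic operators themselves are problem-independent. Below is a minimal capacity-only decoder sketch; time-window feasibility checks are omitted and all data are invented, so this is not the authors' MATLAB decoder.

    ```python
    import numpy as np

    def decode(keys, demands, capacity):
        """Decode a random-key chromosome into capacitated routes.

        Customers are visited in order of ascending key; a new route starts
        whenever adding the next customer would exceed the vehicle capacity.
        """
        order = np.argsort(keys)                  # random keys -> visiting sequence
        routes, current, load = [], [], 0.0
        for c in order:
            if load + demands[c] > capacity:      # would also test time windows here
                routes.append(current)
                current, load = [], 0.0
            current.append(int(c))
            load += demands[c]
        if current:
            routes.append(current)
        return routes

    rng = np.random.default_rng(6)
    print(decode(rng.random(10), rng.uniform(1, 5, 10), capacity=12.0))
    ```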

  2. Solving Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) using BRKGA with local search

    NASA Astrophysics Data System (ADS)

    Prasetyo, H.; Alfatsani, M. A.; Fauza, G.

    2018-05-01

    The main issue in the vehicle routing problem (VRP) is finding the shortest routes for product distribution from the depot to outlets to minimize the total distribution cost. The Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) is a variant of the VRP that accommodates vehicle capacity and the distribution period. Since the CCVRPTW is NP-hard, it requires an efficient and effective algorithm. This study aimed to develop a Biased Random Key Genetic Algorithm (BRKGA) combined with local search to solve the CCVRPTW, with the algorithm coded in MATLAB. Using numerical tests, optimal algorithm parameters were set, and the algorithm was compared with a heuristic method and the standard BRKGA on a case study of soft drink distribution. Results showed that BRKGA combined with local search yielded a lower total distribution cost than the heuristic method. Moreover, the developed algorithm successfully improved on the performance of the standard BRKGA.

  3. Multigrid contact detection method

    NASA Astrophysics Data System (ADS)

    He, Kejing; Dong, Shoubin; Zhou, Zhaoyao

    2007-03-01

    Contact detection is a general problem in many physical simulations. This work presents an O(N) multigrid method for general contact detection problems (MGCD), integrating the multigrid idea with contact detection. Both the time complexity and the memory consumption of the MGCD are O(N). Unlike other methods, whose efficiency is influenced strongly by the object size distribution, the performance of the MGCD is insensitive to the object size distribution. We compare the MGCD with the no binary search (NBS) method and the multilevel boxing method in three dimensions for both time complexity and memory consumption. For objects of similar size, the MGCD is as good as the NBS method, and both outperform the multilevel boxing method regarding memory consumption. For objects of diverse sizes, the MGCD outperforms both the NBS method and the multilevel boxing method. We use the MGCD to solve the contact detection problem for a granular simulation system based on the discrete element method. From this granular simulation, we obtain the packing density of monosize packing and of binary packing with a size ratio of 10. The packing density for monosize particles is 0.636. For binary packing with a size ratio of 10, when the number of small particles is 300 times the number of big particles, a maximal packing density of 0.824 is achieved.

  4. Load sharing in distributed real-time systems with state-change broadcasts

    NASA Technical Reports Server (NTRS)

    Shin, Kang G.; Chang, Yi-Chieh

    1989-01-01

    A decentralized dynamic load-sharing (LS) method based on state-change broadcasts is proposed for a distributed real-time system. Whenever the state of a node changes from underloaded to fully loaded and vice versa, the node broadcasts this change to a set of nodes, called a buddy set, in the system. The performance of the method is evaluated with both analytic modeling and simulation. It is modeled first by an embedded Markov chain for which numerical solutions are derived. The model solutions are then used to calculate the distribution of queue lengths at the nodes and the probability of meeting task deadlines. The analytical results show that buddy sets of 10 nodes outperform those of less than 10 nodes, and the incremental benefit gained from increasing the buddy set size beyond 15 nodes is insignificant. These and other analytical results are verified by simulation. The proposed LS method is shown to meet task deadlines with a very high probability.

  5. Distributed phase birefringence measurements based on polarization correlation in phase-sensitive optical time-domain reflectometers.

    PubMed

    Soto, Marcelo A; Lu, Xin; Martins, Hugo F; Gonzalez-Herraez, Miguel; Thévenaz, Luc

    2015-09-21

    In this paper a technique to measure the distributed birefringence profile along optical fibers is proposed and experimentally validated. The method is based on the spectral correlation between two sets of orthogonally-polarized measurements acquired using a phase-sensitive optical time-domain reflectometer (ϕOTDR). The correlation between the two measured spectra gives a resonance (correlation) peak at a frequency detuning that is proportional to the local refractive index difference between the two orthogonal polarization axes of the fiber. In this way the method enables local phase birefringence measurements at any position along optical fibers, so that any longitudinal fluctuation can be precisely evaluated with metric spatial resolution. The method has been experimentally validated by measuring fibers with low and high birefringence, such as standard single-mode fibers as well as conventional polarization-maintaining fibers. The technique has potential applications in the characterization of optical fibers for telecommunications as well as in distributed optical fiber sensing.

  6. Reevaluation of air surveillance station siting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, K.; Jannik, T.

    2016-07-06

    DOE Technical Standard HDBK-1216-2015 (DOE 2015) recommends evaluating air-monitoring station placement using the analytical method developed by Waite. The technique utilizes wind rose and population distribution data to determine a weighting factor for each directional sector surrounding a nuclear facility. Based on the available resources (number of stations) and a scaling factor, this weighting factor is used to determine the number of stations recommended for each sector considered. An assessment utilizing this method was performed in 2003 to evaluate the effectiveness of the existing SRS air-monitoring program, and the resulting recommended distribution of air-monitoring stations was compared to that of the existing site perimeter surveillance program. The assessment demonstrated that the distribution of air-monitoring stations at the time generally agreed with the results obtained using the Waite method; however, new stations were established at the time in Barnwell and in Williston in order to meet the requirements of DOE guidance document EH-0173T.
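
    A minimal sketch of a Waite-type sector allocation follows, assuming (our simplification) that the weighting factor is proportional to the product of wind-direction frequency and sector population, with stations handed out by largest remainder; the actual handbook procedure includes a scaling factor and further considerations, and the numbers below are invented.

    ```python
    import numpy as np

    def waite_allocation(wind_freq, population, n_stations):
        """Allocate monitoring stations to compass sectors by a Waite-type weight.

        wind_freq:  fraction of time the wind blows toward each sector
        population: population residing in each sector
        """
        w = np.asarray(wind_freq, float) * np.asarray(population, float)
        w = w / w.sum()                                # normalized weighting factor
        alloc = np.floor(w * n_stations).astype(int)
        # hand remaining stations to the largest fractional remainders
        remainders = w * n_stations - alloc
        for i in np.argsort(-remainders)[: n_stations - alloc.sum()]:
            alloc[i] += 1
        return alloc

    print(waite_allocation([0.05, 0.20, 0.10, 0.15], [1000, 200, 5000, 800], 12))
    ```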

  7. A cross-correlation-based estimate of the galaxy luminosity function

    NASA Astrophysics Data System (ADS)

    van Daalen, Marcel P.; White, Martin

    2018-06-01

    We extend existing methods for using cross-correlations to derive redshift distributions for photometric galaxies, without using photometric redshifts. The model presented in this paper simultaneously yields highly accurate and unbiased redshift distributions and, for the first time, redshift-dependent luminosity functions, using only clustering information and the apparent magnitudes of the galaxies as input. In contrast to many existing techniques for recovering unbiased redshift distributions, the output of our method is not degenerate with the galaxy bias b(z), which is achieved by modelling the shape of the luminosity bias. We successfully apply our method to a mock galaxy survey and discuss improvements to be made before applying our model to real data.

  8. Estimating occupancy and predicting numbers of gray wolf packs in Montana using hunter surveys

    USGS Publications Warehouse

    Rich, Lindsey N.; Russell, Robin E.; Glenn, Elizabeth M.; Mitchell, Michael S.; Gude, Justin A.; Podruzny, Kevin M.; Sime, Carolyn A.; Laudon, Kent; Ausband, David E.; Nichols, James D.

    2013-01-01

    Reliable knowledge of the status and trend of carnivore populations is critical to their conservation and management. Methods for monitoring carnivores, however, are challenging to conduct across large spatial scales. In the Northern Rocky Mountains, wildlife managers need a time- and cost-efficient method for monitoring gray wolf (Canis lupus) populations. Montana Fish, Wildlife and Parks (MFWP) conducts annual telephone surveys of >50,000 deer and elk hunters. We explored how survey data on hunters' sightings of wolves could be used to estimate the occupancy and distribution of wolf packs and predict their abundance in Montana for 2007–2009. We assessed model utility by comparing our predictions to MFWP minimum known number of wolf packs. We minimized false positive detections by identifying a patch as occupied if 2–25 wolves were detected by ≥3 hunters. Overall, estimates of the occupancy and distribution of wolf packs were generally consistent with known distributions. Our predictions of the total area occupied increased from 2007 to 2009 and predicted numbers of wolf packs were approximately 1.34–1.46 times the MFWP minimum counts for each year of the survey. Our results indicate that multi-season occupancy models based on public sightings can be used to monitor populations and changes in the spatial distribution of territorial carnivores across large areas where alternative methods may be limited by personnel, time, accessibility, and budget constraints.

  9. q-Gaussian distributions and multiplicative stochastic processes for analysis of multiple financial time series

    NASA Astrophysics Data System (ADS)

    Sato, Aki-Hiro

    2010-12-01

    This study considers q-Gaussian distributions and stochastic differential equations with both multiplicative and additive noises. In the M-dimensional case, a q-Gaussian distribution can be theoretically derived as the stationary probability distribution of a multiplicative stochastic differential equation with mutually independent multiplicative and additive noises. Based on this stochastic differential equation, a method to evaluate a default probability under a given risk buffer is proposed.
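
    A minimal one-dimensional Euler-Maruyama sketch of such an SDE is shown below; the drift and noise parameters are arbitrary assumptions, chosen so the stationary distribution has finite variance but heavy (q-Gaussian-like) tails rather than matching any model in the paper.

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    # dx = -gamma*x dt + sm*x dW1 + sa dW2  (independent Wiener processes)
    rng = np.random.default_rng(7)
    gamma, sm, sa, dt, nsteps = 1.0, 0.8, 0.5, 1e-3, 200_000
    x, xs = 0.0, np.empty(nsteps)
    for i in range(nsteps):
        dw1, dw2 = rng.normal(0.0, np.sqrt(dt), 2)
        x += -gamma * x * dt + sm * x * dw1 + sa * dw2
        xs[i] = x

    # excess kurtosis > 0: fatter tails than a Gaussian, as expected for q > 1
    print(kurtosis(xs[nsteps // 10:]))    # discard the initial transient
    ```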

  10. Probability evolution method for exit location distribution

    NASA Astrophysics Data System (ADS)

    Zhu, Jinjie; Chen, Zhen; Liu, Xianbin

    2018-03-01

    The exit problem in the framework of large deviation theory has been a hot topic in the past few decades. The most probable escape path in the weak-noise limit is characterized by the Freidlin-Wentzell action functional. However, noise in real physical systems cannot be arbitrarily small, and noise of finite strength may induce nontrivial phenomena, such as noise-induced shift and noise-induced saddle-point avoidance. Traditional Monte Carlo simulation of noise-induced escape takes exponentially long as the noise approaches zero, with the majority of the time wasted on uninteresting wandering around the attractors. In this paper, a new method is proposed to decrease the escape simulation time by an exponentially large factor by introducing a series of interfaces and applying reinjection on them. This method can be used to calculate the exit location distribution. It is verified on two classical examples and compared with theoretical predictions. The results show that the method performs well for weak noise, while it may induce certain deviations for large noise. Finally, some possible ways to improve the method are discussed.

  11. Analysis of Streamline Separation at Infinity Using Time-Discrete Markov Chains.

    PubMed

    Reich, W; Scheuermann, G

    2012-12-01

    Existing methods for analyzing the separation of streamlines are often restricted to a finite time or a local area. In our paper we introduce a new method that complements them by allowing an infinite-time evaluation of steady planar vector fields. Our algorithm unifies combinatorial and probabilistic methods and introduces the concept of separation in time-discrete Markov chains. We compute particle distributions instead of the streamlines of single particles. We encode the flow into a map and then into a transition matrix for each time direction. Finally, we compare the results of our grid-independent algorithm to the popular finite-time Lyapunov exponents and discuss the discrepancies.
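
    A minimal sketch of the construction: one advection step of a steady planar field is encoded as a row-stochastic transition matrix over grid cells (with bilinear mass deposition), and particle distributions are then evolved by repeated matrix application. The grid size, time step, deposition scheme, and rotation field are illustrative assumptions, not the authors' algorithm.

    ```python
    import numpy as np

    def transition_matrix(vx, vy, h=1.0, dt=0.5):
        """Transition matrix of a time-discrete Markov chain from a planar field.

        Each grid cell is a state; a particle started at the cell center is
        advected one step and its mass deposited bilinearly onto nearby cells.
        """
        ny, nx = vx.shape
        P = np.zeros((ny * nx, ny * nx))
        for j in range(ny):
            for i in range(nx):
                x = np.clip(i + dt * vx[j, i] / h, 0, nx - 1)
                y = np.clip(j + dt * vy[j, i] / h, 0, ny - 1)
                i0, j0 = int(x), int(y)
                i1, j1 = min(i0 + 1, nx - 1), min(j0 + 1, ny - 1)
                fx, fy = x - i0, y - j0
                for ii, jj, w in [(i0, j0, (1 - fx) * (1 - fy)),
                                  (i1, j0, fx * (1 - fy)),
                                  (i0, j1, (1 - fx) * fy),
                                  (i1, j1, fx * fy)]:
                    P[j * nx + i, jj * nx + ii] += w
        return P

    # rigid rotation about the grid center; iterate a point mass and watch it spread
    n = 20
    Y, X = np.mgrid[0:n, 0:n] - (n - 1) / 2
    P = transition_matrix(-Y, X, dt=0.05)
    p = np.zeros(n * n); p[(n // 2) * n + n // 2] = 1.0
    for _ in range(200):
        p = p @ P
    print(p.max())   # the mass has diffused over many cells
    ```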

  12. Distributed Optimization of Multi-Agent Systems: Framework, Local Optimizer, and Applications

    NASA Astrophysics Data System (ADS)

    Zu, Yue

    A convex optimization problem can be solved in a centralized or distributed manner. Compared with centralized methods based on a single-agent system, distributed algorithms rely on multi-agent systems with information exchanged among connected neighbors, which greatly improves system fault tolerance: a task within a multi-agent system can be completed despite partial agent failures. By problem decomposition, a large-scale problem can be divided into a set of small-scale sub-problems that can be solved in sequence or in parallel, so the computational complexity is greatly reduced by distributed algorithms in multi-agent systems. Moreover, distributed algorithms allow data to be collected and stored in a distributed fashion, which overcomes the bandwidth limitations of multicast. Distributed algorithms have been applied to a variety of real-world problems, and our research focuses on the framework and local optimizer design in practical engineering applications. First, we propose a multi-sensor, multi-agent scheme for spatial motion estimation of a rigid body, improving estimation accuracy and convergence speed. Second, we develop a cyber-physical system with distributed computation devices to optimize in-building evacuation paths when a hazard occurs; the proposed Bellman-Ford Dual-Subgradient path planning method relieves congestion in corridors and exit areas. Third, highway traffic flow is managed by adjusting speed limits to minimize fuel consumption and travel time. The optimal control strategy is designed through both centralized and distributed algorithms based on a convex problem formulation, and a hybrid control scheme is presented for minimizing travel time on a highway network. Compared with the uncontrolled case or a conventional highway traffic control strategy, the proposed hybrid control strategy greatly reduces the total travel time on the test highway network.

  13. [Computerized monitoring for integrated cervical screening. Rationale, methods and indicators of participation].

    PubMed

    Bucchi, L; Pierri, C; Caprara, L; Cortecchia, S; De Lillo, M; Bondi, A

    2003-02-01

    This paper presents a computerised system for the monitoring of integrated cervical screening, i.e. the integration of spontaneous Pap smear practice into organised screening. The general characteristics of the system are described, including background and rationale (integrated cervical screening in European countries, the impact of integration on monitoring, the decentralised organization of screening, and levels of monitoring) and general methods (definitions, sections, software description, and setting of application), together with indicators of participation: distribution by time interval since the previous Pap smear; distribution by screening sector (organised screening centres vs public and private clinical settings); distribution by time interval between the last two Pap smears; and movement of women between the two screening sectors. The paper also reports the results of applying these indicators to the general database of the Pathology Department of the Imola Health District in northern Italy.

  14. Assessment of air velocity sensors for use in animal production facilities

    USDA-ARS?s Scientific Manuscript database

    Ventilation is an integral part of thermal environment control in animal production facilities. Accurately measuring the air velocity distribution within these facilities is cumbersome using the traverse method and a distributed velocity measurement system would reduce the time necessary to perform ...

  15. The joint time-frequency spectrogram structure of heptanes boilover noise

    NASA Astrophysics Data System (ADS)

    Xu, Qiang

    2006-04-01

    An experiment was conducted to study the noise characteristics of the boilover phenomenon. Boilover occurs in the combustion of a liquid fuel floating on water; it causes a sharp increase in burning rate and external radiation, and the resulting explosive burning of the fuel poses a potential safety hazard. Combustion noise accompanies the development of the fire and displays different characteristics in each typical period, and these characteristics can be used to predict the start time of boilover. The acoustic signal of the boilover process during the combustion of a heptanes-water mixture was obtained in a set of experiments. Joint time-frequency analysis (JTFA) methods were applied in the treatment of the noise data. Several JTFA algorithms were used in the evaluation, including the Gabor transform, adaptive spectrogram, cone-shaped distribution, Choi-Williams distribution, Wigner-Ville distribution, and short-time Fourier transform with different windows (rectangular, Blackman, Hamming, and Hanning). Time-frequency distribution patterns of the combustion noise were obtained and compared with those of jet flow and the bursting of small plastic bubbles.

  16. Concurrent changes in aggregation and swelling of coal particles in solvents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishioka, M.

    1995-12-31

    A new method of coal swelling has been developed under the condition of low coal concentrations with continuous mixing of coal and solvent. The change in particle size distributions, measured by a laser scattering procedure, was used for the evaluation of coal swelling. Particle size distributions in good and poor solvents were nearly equal, but changed reversibly in good solvents from time to time. The effects of solubles and coal concentrations on the distributions were small. It was concluded that aggregated coal particles disaggregate in good solvents, and that an increase in the particle size distribution due to swelling in good solvents is compensated by a decrease in particle size due to disaggregation. The behavior of coal particles in solvents is therefore controlled by aggregation in addition to coal swelling. This implies that the increase in particle size due to coal swelling in actual processes is not as large as expected from the results obtained with conventional coal swelling methods.

  17. Inhalation of expiratory droplets in aircraft cabins.

    PubMed

    Gupta, J K; Lin, C-H; Chen, Q

    2011-08-01

    Airliner cabins have high occupant density and long exposure times, so the risk of airborne infection transmission could be high if one or more passengers are infected with an airborne infectious disease. The droplets exhaled by an infected passenger may contain infectious agents. This study developed a method to predict the amount of expiratory droplets inhaled by the passengers in an airliner cabin for any flight duration. The spatial and temporal distribution of expiratory droplets for the first 3 min after exhalation from the index passenger was obtained using computational fluid dynamics simulations; a perfectly mixed model was used beyond 3 min after exhalation. For multiple exhalations, the droplet concentration in a zone can be obtained by adding the droplet concentrations for all exhalations up to the current time, each with a time shift, via the superposition method. These methods were used to determine the amount of droplets inhaled by susceptible passengers over a 4-h flight under three common scenarios. The method, if coupled with information on the viability and the amount of infectious agent in the droplets, can aid in evaluating infection risk. The distribution of the infectious agents contained in the expiratory droplets of an infected occupant in an indoor environment is transient and non-uniform, so the risk of infection can vary with time and space. The investigation developed methods to predict the spatial and temporal distribution of expiratory droplets, and the inhalation of these droplets, in an aircraft cabin. The methods can be used in other indoor environments to assess the relative risk of infection in different zones, so that suitable measures to control the spread of infection can be adopted and appropriate treatment can be implemented for zones identified as high-risk. © 2011 John Wiley & Sons A/S.
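
    A minimal sketch of the superposition step is shown below, assuming a tabulated single-exhalation concentration response for a zone (which the study derives from CFD for the first minutes and a perfectly mixed model afterwards). The response curve and breathing schedule here are invented for illustration.

    ```python
    import numpy as np

    def zone_concentration(t, exhale_times, single_puff, dt):
        """Droplet concentration at time t as a superposition of exhalations.

        single_puff[k] is the concentration k*dt seconds after one exhalation.
        Each past exhalation contributes its response, shifted to its start time.
        """
        c = 0.0
        for t0 in exhale_times:
            k = int(round((t - t0) / dt))
            if 0 <= k < len(single_puff):
                c += single_puff[k]
        return c

    # toy single-exhalation response: fast rise, slow well-mixed decay
    dt = 1.0
    tk = np.arange(0, 600, dt)
    puff = (1 - np.exp(-tk / 5.0)) * np.exp(-tk / 180.0)

    breaths = np.arange(0, 300, 4.0)        # one exhalation every 4 s for 5 min
    print(zone_concentration(310.0, breaths, puff, dt))
    ```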

  18. Methods for Combining Payload Parameter Variations with Input Environment

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.; Straayer, J. W.

    1975-01-01

    Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the methods are also presented.
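
    A minimal sketch of the extreme-value step, assuming per-mission load maxima follow a Gumbel (Type I extreme value) distribution and the design limit load is taken as a chosen percentile of the fitted law; the simulated loads and the 99% level are illustrative assumptions, not the report's criteria.

    ```python
    import numpy as np
    from scipy.stats import gumbel_r

    # per-mission maxima of a simulated load history (hypothetical units):
    # 500 missions, each with 600 load samples
    rng = np.random.default_rng(8)
    mission_maxima = rng.normal(100, 10, (500, 600)).max(axis=1)

    # maxima of many samples tend toward a Gumbel law; fit and take a percentile
    loc, scale = gumbel_r.fit(mission_maxima)
    design_limit_load = gumbel_r.ppf(0.99, loc, scale)   # 99th-percentile limit load
    print(loc, scale, design_limit_load)
    ```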

  19. An objective method to determine the probability distribution of the minimum apparent age of a sample of radio-isotopic dates

    NASA Astrophysics Data System (ADS)

    Ickert, R. B.; Mundil, R.

    2012-12-01

    Dateable minerals (especially zircon U-Pb) that crystallized at high temperatures but have been redeposited pose both unique opportunities and challenges for geochronology. Although they have the potential to provide useful information on the depositional age of their host rocks, their relationship to the host is not always well constrained. For example, primary volcanic deposits will often have a lag time (time between eruption and deposition) that is smaller than can be resolved using radiometric techniques, and the age of eruption and of deposition will be coincident within uncertainty. Alternatively, ordinary clastic sedimentary rocks will usually have a long and variable lag time, even for the youngest minerals. Intermediate cases, for example moderately reworked volcanogenic material, will have a short but unknown lag time. A compounding problem with U-Pb zircon is that the residence time of crystals in their host magma chamber (time between crystallization and eruption) can be long and variable, even within the products of a single eruption. In cases where the lag and/or residence time is suspected to be large relative to the precision of the date, a common objective is to determine the minimum age of a sample of dates, in order to constrain the maximum age of the deposition of the host rock. However, neither the extraction of that age nor the assignment of a meaningful uncertainty is straightforward. A number of ad hoc techniques have been employed in the literature, which may be appropriate for particular data sets or specific problems, but may yield biased or misleading results. Ludwig (2012) has developed an objective, statistically justified method for the determination of the distribution of the minimum age, but it has not been widely adopted. Here we extend this algorithm with a bootstrap (which can show the effect - if any - of the sampling distribution itself). This method has a number of desirable characteristics: it can incorporate all data points while being resistant to outliers, it utilizes the measurement uncertainties, and it does not require the assumption that any given cluster of data represents a single geological event. In brief, the technique generates a synthetic distribution from the input data by resampling with replacement (a bootstrap). Each resample is a random selection from a Gaussian distribution defined by the mean and uncertainty of the data point. For this distribution, the minimum value is calculated. This procedure is repeated many times (>1000) and a distribution of minimum values is generated, from which a confidence interval can be constructed. We demonstrate the application of this technique using natural and synthetic datasets, show its advantages and limitations, and relate it to other methods. We emphasize that this estimate remains strictly a minimum age - as with any other estimate that does not explicitly incorporate lag or residence time, it will not reflect a depositional age if the lag/residence time is larger than the uncertainty of the estimate. We recommend that this or similar techniques be considered by geochronologists. Ludwig, K.R., 2012. Isoplot 3.75, A geochronological toolkit for Microsoft Excel; Berkeley Geochronology Center Special Publication no. 5
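
    The resampling scheme described above lends itself to a compact implementation. The sketch below is a minimal illustration under our own assumptions, not the authors' code: it assumes the dates and their 1-sigma uncertainties are available as arrays, and the example values are hypothetical.

```python
import numpy as np

def bootstrap_minimum_age(ages, sigmas, n_boot=5000, seed=0):
    """Estimate the distribution of the minimum age of a set of dates.

    Each bootstrap replicate resamples the dates with replacement, then
    perturbs each resampled date by a Gaussian draw scaled by its own
    measurement uncertainty; the minimum of the replicate is recorded.
    """
    rng = np.random.default_rng(seed)
    ages = np.asarray(ages, dtype=float)
    sigmas = np.asarray(sigmas, dtype=float)
    n = ages.size
    minima = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)             # resample with replacement
        sample = rng.normal(ages[idx], sigmas[idx])  # Gaussian per data point
        minima[i] = sample.min()
    return minima

# Hypothetical dates (Ma) and 1-sigma uncertainties; 95% confidence interval
ages = [252.1, 252.3, 252.2, 252.9, 253.4]
sigmas = [0.2, 0.15, 0.25, 0.3, 0.2]
minima = bootstrap_minimum_age(ages, sigmas)
lo, hi = np.percentile(minima, [2.5, 97.5])
print(f"minimum age: {minima.mean():.2f} Ma, 95% CI [{lo:.2f}, {hi:.2f}]")
```

    The width of the resulting confidence interval reflects both the scatter among the youngest dates and their individual measurement uncertainties.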

  20. A High-Throughput Approach for Identification of Nontuberculous Mycobacteria in Drinking Water Reveals Relationship between Water Age and Mycobacterium avium

    PubMed Central

    Haig, Sarah-Jane; Kotlarz, Nadine; LiPuma, John J.

    2018-01-01

    Nontuberculous mycobacteria (NTM) frequently detected in drinking water (DW) include species associated with human infections, as well as species rarely linked to disease. Methods for improving the recovery of NTM DNA and for high-throughput identification of NTM are needed for risk assessment of NTM infection through DW exposure. In this study, different methods of recovering bacterial DNA from DW were compared, revealing that a phenol-chloroform DNA extraction method yielded two to four times as much total DNA and eight times as much NTM DNA as two commercial DNA extraction kits. This method, combined with high-throughput, single-molecule real-time sequencing of NTM rpoB genes, allowed the identification of NTM to the species, subspecies, and (in some cases) strain levels. This approach was applied to DW samples collected from 15 households serviced by a chloraminated distribution system, with homes located in areas representing short (<24 h) and long (>24 h) distribution system residence times. Multivariate statistical analysis revealed that greater water age (i.e., combined distribution system residence time and home plumbing stagnation time) was associated with a greater relative abundance of Mycobacterium avium subsp. avium, one of the most prevalent NTM causing infections in humans. DW from homes closer to the treatment plant (with a shorter water age) contained more diverse NTM species, including Mycobacterium abscessus and Mycobacterium chelonae. Overall, our approach allows NTM identification to the species and subspecies levels and can be used in future studies to assess the risk of waterborne infection by providing insight into the similarity between environmental and infection-associated NTM. PMID:29440575

  1. A comparison of two worlds: How does Bayes hold up to the status quo for the analysis of clinical trials?

    PubMed Central

    Pressman, Alice R.; Avins, Andrew L.; Hubbard, Alan; Satariano, William A.

    2014-01-01

    Background There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. Methods We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien–Fleming and Lan–DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus, for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. Results No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Conclusions Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for the analysis of clinical trials. Though findings from this small sample did not show that either method consistently outperforms the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. PMID:21453792
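
    To make the Bayesian side of the comparison concrete, the sketch below illustrates a conjugate monitoring rule in its simplest form: a Beta-binomial posterior for a binary endpoint, checked at interim looks, with stopping when the posterior probability of benefit crosses a threshold. This is an illustration of the general idea, not the analyses performed in the paper; the interim counts and the 0.99 threshold are hypothetical.

```python
import numpy as np

def posterior_prob_better(s_t, n_t, s_c, n_c, n_draws=100_000, seed=0):
    """P(p_treatment > p_control | data) under independent Beta(1, 1) priors,
    approximated by Monte Carlo draws from the two conjugate posteriors."""
    rng = np.random.default_rng(seed)
    p_t = rng.beta(1 + s_t, 1 + n_t - s_t, n_draws)
    p_c = rng.beta(1 + s_c, 1 + n_c - s_c, n_draws)
    return float(np.mean(p_t > p_c))

# Hypothetical interim looks at 50, 100, and 150 patients per arm:
# (successes_treatment, n_treatment, successes_control, n_control)
looks = [(18, 50, 10, 50), (35, 100, 22, 100), (55, 150, 33, 150)]
for i, (s_t, n_t, s_c, n_c) in enumerate(looks, 1):
    prob = posterior_prob_better(s_t, n_t, s_c, n_c)
    print(f"look {i}: P(treatment better) = {prob:.3f}")
    if prob > 0.99:   # threshold would be tuned to control Type I error
        print("stop the trial early for efficacy")
        break
```

    In practice, as the abstract notes, the stopping threshold would be calibrated (e.g., against simulated null samples) so that the experiment-wise false positive rate stays near its conventional level.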

  2. High-resolution, submicron particle size distribution analysis using gravitational-sweep sedimentation.

    PubMed Central

    Mächtle, W

    1999-01-01

    Sedimentation velocity is a powerful tool for the analysis of complex solutions of macromolecules. However, sample turbidity imposes an upper limit to the size of molecular complexes currently amenable to such analysis. Furthermore, the breadth of the particle size distribution, combined with possible variations in the density of different particles, makes it difficult to analyze extremely complex mixtures. These same problems are faced in the polymer industry, where dispersions of latices, pigments, lacquers, and emulsions must be characterized. There is a rich history of methods developed for the polymer industry finding use in the biochemical sciences. Two such methods are presented. These use analytical ultracentrifugation to determine the density and size distributions of submicron-sized particles. Both methods rely on Stokes' equations to estimate particle size and density, whereas turbidity, corrected using Mie's theory, provides the concentration measurement. The first method uses the sedimentation time in dispersion media of different densities to evaluate the particle density and size distribution. This method works provided the sample is chemically homogeneous. The second method splices together data gathered at different sample concentrations, thus permitting the high-resolution determination of the size distribution of particle diameters ranging from 10 to 3000 nm. By increasing the rotor speed exponentially from 0 to 40,000 rpm over a 1-h period, size distributions may be measured for extremely broadly distributed dispersions. Presented here is a short history of particle size distribution analysis using the ultracentrifuge, along with a description of the newest experimental methods. Several applications of the methods are provided that demonstrate the breadth of their utility, including extensions to samples containing nonspherical and chromophoric particles. PMID:9916040
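
    For orientation, Stokes' law is the relation both methods invert: a particle of diameter d and density rho_p settling through a fluid of density rho_f and viscosity eta under acceleration a moves at v = d^2 (rho_p - rho_f) a / (18 eta), with the centrifugal acceleration omega^2 r playing the role of a in the ultracentrifuge. The sketch below inverts this for the diameter; all numerical values are illustrative, not taken from the study.

```python
import numpy as np

def stokes_diameter(v, rho_p, rho_f, eta, accel):
    """Invert Stokes' law, v = d**2 * (rho_p - rho_f) * a / (18 * eta),
    for the particle diameter d given a measured settling velocity v."""
    return np.sqrt(18.0 * eta * v / ((rho_p - rho_f) * accel))

# Illustrative values: a latex particle in water inside an ultracentrifuge
omega = 2 * np.pi * 40000 / 60        # 40,000 rpm in rad/s
r = 0.065                             # radial position in the cell, m
accel = omega**2 * r                  # centrifugal acceleration, m/s^2
eta = 1.0e-3                          # viscosity of water, Pa s
rho_p, rho_f = 1050.0, 998.0          # particle and fluid density, kg/m^3
v = 1.0e-6                            # measured sedimentation velocity, m/s
d = stokes_diameter(v, rho_p, rho_f, eta, accel)
print(f"diameter = {d * 1e9:.0f} nm")
```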

  3. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    PubMed Central

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task due to the limitations of existing methods. In this paper an energy-based wavelet de-noising method was proposed. It removes noise by comparing the energy distribution of a series with a background energy distribution established from Monte Carlo tests. Unlike the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which the WTD method cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is easier to operate. The results also indicate the influences of three key factors (wavelet choice, decomposition level choice, and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the series' deterministic sub-signal with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by the proposed method or WTD, but such a series behaves as pure noise without autocorrelation, so de-noising is no longer needed. PMID:25360533
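
    The energy-comparison idea can be sketched compactly with PyWavelets, under assumptions of our own (a db4 wavelet, four decomposition levels, and a 95th-percentile white-noise background): decompose the normalized series, compute the energy of each detail level, and flag levels whose energy exceeds the Monte Carlo background. This illustrates the principle, not the authors' implementation.

```python
import numpy as np
import pywt

def energy_by_level(x, wavelet='db4', level=4):
    """Energy of the detail coefficients at each decomposition level,
    ordered from the coarsest (level 4) to the finest (level 1)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    return np.array([np.sum(c**2) for c in coeffs[1:]])  # skip approximation

def noise_background(n, n_mc=1000, wavelet='db4', level=4, seed=0):
    """Monte Carlo background: 95th percentile of the level energies of
    unit-variance white noise of the same length."""
    rng = np.random.default_rng(seed)
    energies = np.array([energy_by_level(rng.standard_normal(n), wavelet, level)
                         for _ in range(n_mc)])
    return np.percentile(energies, 95, axis=0)

# A 48-cycle sine (which falls in detail level 3) buried in white noise
n = 512
t = np.arange(n)
x = np.sin(2 * np.pi * 48 * t / n) + 0.3 * np.random.default_rng(1).standard_normal(n)
x = (x - x.mean()) / x.std()              # normalize before comparing energies
flags = energy_by_level(x) > noise_background(n)
print("signal detected at levels [4, 3, 2, 1]:", flags)
```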

  4. Imaging bio-distribution of a topically applied dermatological cream on minipig skin using fluorescence lifetime imaging microscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Alex, Aneesh; Chaney, Eric J.; Criley, Jennifer M.; Spillman, Darold R.; Hutchison, Phaedra B.; Li, Joanne; Marjanovic, Marina; Frey, Steve; Cook, Steven; Boppart, Stephen A.; Arp, Zane A.

    2017-02-01

    Currently there is a lack of in vivo techniques to evaluate the spatial bio-distribution of dermal drugs over time without the need to take multiple serial biopsies. To address this gap, we investigated the use of multi-photon optical imaging methods to non-invasively track drug distribution on miniature pig (Species: Sus scrofa, Strain: Göttingen) skin in vivo. Minipig skin is the standard comparative research model for human skin, being anatomically and functionally similar. We employed fluorescence lifetime imaging microscopy (FLIM) to visualize the spatial distribution and residency time of a topically applied experimental dermatological cream. This was made possible by the endogenous fluorescent optical properties of the experimental drug (fluorescence lifetime > 3000 ps). Two different drug formulations were applied on 2 minipigs for 7 consecutive days, with the control creams applied on the contralateral side, followed by 7 days of post-application monitoring using a multi-modal optical imaging system (MPTflex-CARS, JenLab, Germany). FLIM images were obtained from the treated regions 24 hr post-application from day 1 to day 14, allowing non-invasive visualization of cellular and sub-cellular features associated with different dermal layers to a depth of 200 µm. Five punch biopsies per animal were obtained from the corresponding treated regions between days 8 and 14 for bioanalytical analysis and comparison with the results obtained using FLIM. In conclusion, utilization of non-invasive optical biopsy methods for dermal drug evaluation can provide true longitudinal monitoring of drug spatial distribution, remove sampling limitations, and be more time-efficient compared to traditional methods.

  5. Single-shot distributed Brillouin optical time domain analyzer.

    PubMed

    Fang, Jian; Xu, Pengbai; Dong, Yongkang; Shieh, William

    2017-06-26

    We demonstrate a novel single-shot distributed Brillouin optical time domain analyzer (SS-BOTDA). In our method, a dual-polarization probe with orthogonal frequency-division multiplexing (OFDM) modulation is used to acquire the distributed Brillouin gain spectra, and coherent detection is used to drastically enhance the signal-to-noise ratio (SNR). Distributed temperature sensing is demonstrated over a 1.08 km standard single-mode fiber (SSMF) with 20.48 m spatial resolution and 0.59 °C temperature accuracy. Neither frequency scanning, nor polarization scrambling, nor averaging is required in our scheme. All the data are obtained through a single-shot measurement, indicating that the sensing speed is limited only by the length of the fiber.

  6. Methods for developing time-series climate surfaces to drive topographically distributed energy- and water-balance models

    USGS Publications Warehouse

    Susong, D.; Marks, D.; Garen, D.

    1999-01-01

    Topographically distributed energy- and water-balance models can accurately simulate both the development and melting of a seasonal snowcover in mountain basins. To do this they require time-series climate surfaces of air temperature, humidity, wind speed, precipitation, and solar and thermal radiation. If data are available, these parameters can be adequately estimated at time steps of one to three hours. Unfortunately, climate monitoring in mountain basins is very limited, and the full range of elevations and exposures that affect climate conditions, snow deposition, and melt is seldom sampled. Detailed time-series climate surfaces have been successfully developed using limited data and relatively simple methods. We present a synopsis of the tools and methods used to combine limited data with simple corrections for topographic controls to generate high temporal resolution time-series images of these climate parameters. Methods used include simulations, elevational gradients, and detrended kriging. The generated climate surfaces are evaluated at points and spatially to determine if they are reasonable approximations of actual conditions. Recommendations are made for the addition of critical parameters and measurement sites into routine monitoring systems in mountain basins.

  7. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.

    PubMed

    Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang

    2017-01-01

    Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection for noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and examines the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.

  8. Flight Dynamics and Control of a Morphing UAV: Bio inspired by Natural Fliers

    DTIC Science & Technology

    2017-02-17

    Approved for public release; distribution unlimited. The Tornado Vortex Lattice Method (VLM) was used for aerodynamic prediction... Tornado is a Vortex Lattice Method software programmed in MATLAB; it was selected due to its fast solving time and ability to be controlled through... custom MATLAB scripts. Tornado VLM models the wing as a thin sheet of discrete vortices and computes the pressure and force distributions around the

  9. New Tools for Comparing Beliefs about the Timing of Recurrent Events with Climate Time Series Datasets

    NASA Astrophysics Data System (ADS)

    Stiller-Reeve, Mathew; Stephenson, David; Spengler, Thomas

    2017-04-01

    For climate services to be relevant and informative for users, scientific data definitions need to match users' perceptions or beliefs. This study proposes and tests novel yet simple methods to compare beliefs about the timing of recurrent climatic events with empirical evidence from multiple historical time series. The methods are tested by applying them to the onset date of the monsoon in Bangladesh, where several scientific monsoon definitions can be applied, yielding different results for monsoon onset dates. It is a challenge to know which monsoon definition compares best with people's beliefs. Time series from eight different scientific monsoon definitions in six regions are compared with respondent beliefs from a previously completed survey concerning the monsoon onset. Beliefs about the timing of the monsoon onset are represented probabilistically for each respondent by constructing a probability mass function (PMF) from elicited responses about the earliest, normal, and latest dates for the event. A three-parameter circular modified triangular distribution (CMTD) is used to allow for the possibility (albeit small) of the onset at any time of the year. These distributions are then compared to the historical time series using two approaches: likelihood scores, and the mean and standard deviation of time series of dates simulated from each belief distribution. The proposed methods provide the basis for further iterative discussion with decision-makers in the development of eventual climate services. This study uses Jessore, Bangladesh, as an example and finds that a rainfall definition, applying a 10 mm day-1 threshold to NCEP-NCAR reanalysis (Reanalysis-1) data, best matches the survey respondents' beliefs about monsoon onset.
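
    As a rough illustration of how elicited earliest/normal/latest responses can be turned into a belief distribution and scored against data, the sketch below builds a discrete triangular PMF over day-of-year and computes the log-likelihood of a series of observed onset dates. It simplifies the circular modified triangular distribution to a non-wrapping triangle with a small uniform floor, and all dates are hypothetical.

```python
import numpy as np

def belief_pmf(earliest, normal, latest, floor=1e-4, days=365):
    """Discrete triangular PMF over day-of-year for a respondent's belief,
    with a small uniform floor so every day keeps nonzero mass (a
    simplification of the circular modified triangular distribution)."""
    d = np.arange(1, days + 1, dtype=float)
    pmf = np.zeros(days)
    rise = (d >= earliest) & (d <= normal)
    fall = (d > normal) & (d <= latest)
    pmf[rise] = (d[rise] - earliest) / max(normal - earliest, 1)
    pmf[fall] = (latest - d[fall]) / max(latest - normal, 1)
    pmf += floor
    return pmf / pmf.sum()

# Likelihood score of observed onset days under one respondent's belief
pmf = belief_pmf(earliest=130, normal=152, latest=175)
observed_onsets = [148, 155, 160, 151, 170]   # hypothetical onset day-of-year series
score = np.sum(np.log(pmf[np.array(observed_onsets) - 1]))
print(f"log-likelihood of onset series under this belief: {score:.2f}")
```

    Comparing such scores across candidate scientific definitions (each producing its own onset-date series) indicates which definition best matches a respondent's belief.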

  10. Matrix-assisted laser desorption/ionization mass spectrometry for the evaluation of the C-terminal lysine distribution of a recombinant monoclonal antibody.

    PubMed

    Lazar, Alexandru C; Kloczewiak, Marek A; Mazsaroff, Istvan

    2004-01-01

    Recombinant monoclonal antibodies produced using mammalian cell lines contain multiple chemical modifications. One specific modification resides on the C-terminus of the heavy chain. Enzymes inside the cell can cleave the C-terminal lysine from the heavy-chain molecules, and variants with and without C-terminal lysine can be produced. In order to fully characterize the protein, there is a need for analytical methods that are able to account for the different product variants. Conventional analytical methods used for the measurement of the distribution of the two different variants are based on chemical or enzymatic degradation of the protein followed by chromatographic separation of the degradation products. Chromatographic separations with gradient elution have long run times, and analyses of multiple samples are time-consuming. This paper reports development of a novel method for the determination of the relative amounts of the two C-terminal heavy-chain variants based on matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOFMS) measurements of the cyanogen bromide degraded recombinant monoclonal antibody products. The distribution of the variants is determined from the MALDI-TOF mass spectra by measuring the peak areas of the two C-terminal peptides. The assay was used for the assessment of the C-terminal lysine distribution in different development lots. The method was able to differentiate between the products obtained using the same cell line as well as between products obtained from different cell lines. Copyright 2004 John Wiley & Sons, Ltd.

  11. Stochastic nature of series of waiting times.

    PubMed

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H; Salehi, E; Behjat, E; Qorbani, M; Nezhad, M Khazaei; Zirak, M; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M Reza Rahimi

    2013-06-01

    Although fluctuations in the waiting time series have been studied for a long time, some important issues such as its long-range memory and its stochastic features in the presence of nonstationarity have so far remained unstudied. Here we find that the "waiting times" series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2 < H < 1.

  12. Obtaining source current density related to irregularly structured electromagnetic target field inside human body using hybrid inverse/FDTD method.

    PubMed

    Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang

    2017-01-01

    The inverse method is inherently suitable for calculating the distribution of source current density related to an irregularly structured electromagnetic target field. However, the present form of the inverse method cannot calculate complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method that can calculate the complex field-tissue interactions for the inverse design of a source current density related to an irregularly structured electromagnetic target field is proposed. A Huygens' equivalent surface is established as a bridge to combine the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method by considering the complex field-tissue interactions within the human body model. The magnetic field obtained on the Huygens' equivalent surface is then regarded as the target. The current density on the designated source surface is derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.

  13. Distributed Load Shedding over Directed Communication Networks with Time Delays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Tao; Wu, Di

    When generation is insufficient to support all loads under emergencies, effective and efficient load shedding needs to be deployed in order to maintain the supply-demand balance. This paper presents a distributed load shedding algorithm, which makes efficient decisions based on discovered global information. In the global information discovery process, each load communicates only with its neighboring loads via directed communication links, possibly with arbitrarily large but bounded time-varying communication delays. We propose a novel distributed information discovery algorithm based on ratio consensus. Simulation results are used to validate the proposed method.
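
    Ratio consensus is the core primitive named above. The sketch below is a minimal, delay-free illustration (the paper's contribution is precisely the handling of bounded time-varying delays, which this sketch omits): each node iterates two linear updates with column-stochastic weights, and the ratio of the two states converges at every node to the ratio of the initial global sums. The network and load values are hypothetical.

```python
import numpy as np

def ratio_consensus(y0, z0, out_neighbors, n_iter=200):
    """Delay-free ratio consensus on a directed graph.

    Each node j splits its y_j and z_j equally among itself and its
    out-neighbors (column-stochastic weights); the ratio y_i / z_i at
    every node converges to sum(y0) / sum(z0), the global quantity."""
    n = len(y0)
    P = np.zeros((n, n))
    for j, outs in enumerate(out_neighbors):
        share = 1.0 / (len(outs) + 1)            # keep a share for itself
        P[j, j] = share
        for i in outs:
            P[i, j] = share
    y, z = np.array(y0, float), np.array(z0, float)
    for _ in range(n_iter):
        y, z = P @ y, P @ z
    return y / z

# Four loads on a directed ring discover the average sheddable load
sheddable = [10.0, 4.0, 7.0, 3.0]               # y: local sheddable load (MW)
ones = [1.0, 1.0, 1.0, 1.0]                     # z: counts the nodes
ring = [[1], [2], [3], [0]]                     # node j sends to node j+1
print(ratio_consensus(sheddable, ones, ring))   # -> ~[6, 6, 6, 6], the average
```

    Once every load knows the global ratio (here, the network-wide average of sheddable load), each can compute its own proportional shedding decision locally.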

  14. Pilot-in-the-Loop CFD Method Development

    DTIC Science & Technology

    2015-02-01

    expensive alternatives [1]. The ALM represents the blades as a set of segments along each blade axis, and the ADM represents the entire rotor as... Figure 4 (time-averaged vertical velocity distributions on the downwash and rotor disk plane for hybrid and loose coupling cases with fine and coarse grid refinement levels) shows the time-averaged distributions of vertical velocities on both the downwash and rotor disk

  15. Data distribution method of workflow in the cloud environment

    NASA Astrophysics Data System (ADS)

    Wang, Yong; Wu, Junjuan; Wang, Ying

    2017-08-01

    Cloud computing provides workflow applications with high-efficiency computation and large storage capacity, but it also brings challenges for the protection of trade secrets and other private data, because handling private data increases the data transmission time. This paper presents a new data allocation algorithm based on the degree of collaborative data damage, improving the existing allocation strategy in which security depends on keeping confidential data in the private cloud and the remaining computation in the public cloud. In the initial stage, a static allocation method partitions only the non-confidential data; in the operational phase, as data continue to be generated, the data distribution scheme is adjusted dynamically. The experimental results show that the improved method is effective in reducing the data transmission time.

  16. Probability distribution functions for unit hydrographs with optimization using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Ghorbani, Mohammad Ali; Singh, Vijay P.; Sivakumar, Bellie; H. Kashani, Mahsa; Atre, Atul Arvind; Asadi, Hakimeh

    2017-05-01

    A unit hydrograph (UH) of a watershed may be viewed as the unit pulse response function of a linear system. In recent years, the use of probability distribution functions (pdfs) for determining a UH has received much attention. In this study, a nonlinear optimization model is developed to transmute a UH into a pdf. The potential of six popular pdfs, namely the two-parameter gamma, two-parameter Gumbel, two-parameter log-normal, two-parameter normal, three-parameter Pearson, and two-parameter Weibull distributions, is tested on data from the Lighvan catchment in Iran. The probability distribution parameters are determined using the nonlinear least squares optimization method in two ways: (1) optimization by programming in Mathematica; and (2) optimization by applying a genetic algorithm. The results are compared with those obtained by the traditional linear least squares method. The results show comparable capability and performance of the two nonlinear methods. The gamma and Pearson distributions are the most successful models in preserving the rising and recession limbs of the unit hydrographs. The log-normal distribution has a high ability in predicting both the peak flow and the time to peak of the unit hydrograph. The nonlinear optimization method does not outperform the linear least squares method in determining the UH (especially for excess rainfall of one pulse), but is comparable.
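
    As an illustration of the pdf-as-UH idea, the sketch below fits a two-parameter gamma pdf to hypothetical unit hydrograph ordinates with SciPy's nonlinear least squares; it is a minimal stand-in for the optimization models in the study, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import gamma

def gamma_uh(t, shape, scale):
    """Two-parameter gamma pdf used as the unit hydrograph shape
    (unit area under the curve, as a UH requires)."""
    return gamma.pdf(t, a=shape, scale=scale)

# Hypothetical hourly UH ordinates (1/h) for a small catchment
t = np.arange(1, 13, dtype=float)
uh = np.array([0.02, 0.10, 0.19, 0.21, 0.17, 0.12,
               0.08, 0.05, 0.03, 0.02, 0.01, 0.005])

params, _ = curve_fit(gamma_uh, t, uh, p0=[2.0, 2.0], bounds=(0.1, 20.0))
shape, scale = params
print(f"shape = {shape:.2f}, scale = {scale:.2f} h")
print(f"fitted time to peak ~ {(shape - 1) * scale:.2f} h")  # mode of the gamma pdf
```

    The same fitting loop can be repeated for the other candidate distributions, with the best family chosen by the residual of the fit to the rising limb, peak, and recession limb.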

  17. Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions

    PubMed Central

    Marinelli, Fabrizio; Faraldo-Gómez, José D.

    2015-01-01

    We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org), and can be therefore readily used with multiple MD engines. PMID:26083917

  18. An efficient distribution method for nonlinear transport problems in stochastic porous media

    NASA Astrophysics Data System (ADS)

    Ibrahima, F.; Tchelepi, H.; Meyer, D. W.

    2015-12-01

    Because geophysical data are inexorably sparse and incomplete, stochastic treatments of simulated responses are convenient to explore possible scenarios and assess risks in subsurface problems. In particular, understanding how uncertainties propagate in porous media with nonlinear two-phase flow is essential, yet challenging, in reservoir simulation and hydrology. We give a computationally efficient and numerically accurate method to estimate the one-point probability density function (PDF) and cumulative distribution function (CDF) of the water saturation for the stochastic Buckley-Leverett problem when the probability distributions of the permeability and porosity fields are available. The method draws inspiration from the streamline approach and expresses the distributions of interest essentially in terms of an analytically derived mapping and the distribution of the time of flight. In a large class of applications the latter can be estimated at low computational cost (even via conventional Monte Carlo). Once the water saturation distribution is determined, any one-point statistics thereof can be obtained, especially its average and standard deviation. Moreover, information that is rarely available in other approaches yet crucial, such as the probability of rare events and saturation quantiles (e.g., P10, P50 and P90), can be derived from the method. We provide various examples and comparisons with Monte Carlo simulations to illustrate the performance of the method.

  19. Temporal Methods to Detect Content-Based Anomalies in Social Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skryzalin, Jacek; Field, Jr., Richard; Fisher, Andrew N.

    Here, we develop a method for time-dependent topic tracking and meme trending in social media. Our objective is to identify time periods whose content differs significantly from normal, and we utilize two techniques to do so. The first is an information-theoretic analysis of the distributions of terms emitted during different periods of time. In the second, we cluster documents from each time period and analyze the tightness of each clustering. We also discuss a method of combining the scores created by each technique, and we provide ample empirical analysis of our methodology on various Twitter datasets.
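
    A minimal version of the first (information-theoretic) technique can be sketched as follows, assuming tokenized documents grouped by time period: each period's smoothed term distribution is scored by its KL divergence from the pooled distribution of all other periods. The scoring details and the toy data are our own assumptions, not the authors' implementation.

```python
import numpy as np
from collections import Counter

def kl_divergence(p, q):
    """KL(p || q) for two probability vectors over the same vocabulary."""
    return float(np.sum(p * np.log(p / q)))

def period_scores(periods, alpha=1.0):
    """Score each period by the divergence of its term distribution from
    the pooled distribution of all other periods (add-alpha smoothing)."""
    vocab = sorted({w for docs in periods for w in docs})
    counters = [Counter(docs) for docs in periods]
    counts = np.array([[c[w] for w in vocab] for c in counters], dtype=float)
    scores = []
    for i in range(len(periods)):
        p = counts[i] + alpha
        q = counts.sum(axis=0) - counts[i] + alpha  # all other periods
        scores.append(kl_divergence(p / p.sum(), q / q.sum()))
    return scores

# Toy example: the third period's vocabulary shifts noticeably
periods = [["storm", "rain", "wind"] * 5,
           ["storm", "rain", "cloud"] * 5,
           ["election", "vote", "poll"] * 5]
print(period_scores(periods))  # the highest score flags the anomalous period
```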

  20. A Method for Obtaining High Frequency, Global, IR-Based Convective Cloud Tops for Studies of the TTL

    NASA Technical Reports Server (NTRS)

    Pfister, Leonhard; Ueyama, Rei; Jensen, Eric; Schoeberl, Mark

    2017-01-01

    Models of varying complexity that simulate water vapor and clouds in the Tropical Tropopause Layer (TTL) show that including convection directly is essential to properly simulating the water vapor and cloud distribution. In boreal winter, for example, simulations without convection yield a water vapor distribution that is too uniform with longitude, as well as minimal cloud distributions. Two things are important for convective simulations. First, it is important to get the convective cloud top potential temperature correct, since unrealistically high values (reaching above the cold point tropopause too frequently) will cause excessive hydration of the stratosphere. Second, one must capture the time variation as well, since hydration by convection depends on the local relative humidity (temperature), which has substantial variation on synoptic time scales in the TTL. This paper describes a method for obtaining high frequency (3-hourly) global convective cloud top distributions which can be used in trajectory models. The method uses rainfall thresholds, standard IR brightness temperatures, meteorological temperature analyses, and physically realistic and documented IR brightness temperature corrections to derive cloud top altitudes and potential temperatures. The cloud top altitudes compare well with combined CLOUDSAT and CALIPSO data, both in time-averaged overall vertical and horizontal distributions and in individual cases (correlations of .65-.7). An important finding is that there is significant uncertainty (nearly .5 km) in evaluating the statistical distribution of convective cloud tops even using lidar. Deep convection whose tops are in regions of high relative humidity (such as much of the TTL) will cause clouds to form above the actual convection. It is often difficult to distinguish these clouds from the actual convective cloud due to the uncertainties of evaluating ice water content from lidar measurements. Comparison with models shows that calculated cloud top altitudes are generally higher than those calculated by global analyses (e.g., MERRA). Interannual variability in the distribution of convective cloud top altitudes is also investigated.

  1. Time-varying higher order spectra

    NASA Astrophysics Data System (ADS)

    Boashash, Boualem; O'Shea, Peter

    1991-12-01

    A general solution for the problem of time-frequency signal representation of nonlinear FM signals is provided, based on a generalization of the Wigner-Ville distribution. The Wigner-Ville distribution (WVD) is a second order time-frequency representation. That is, it is able to give ideal energy concentration for quadratic phase signals, and its ensemble average is a second order time-varying spectrum. The same holds for Cohen's class of time-frequency distributions, which are smoothed versions of the WVD. The WVD may be extended so as to achieve ideal energy concentration for higher order phase laws, and such that the expectation is a time-varying higher order spectrum. The usefulness of these generalized Wigner-Ville distributions (GWVD) is twofold. First, because they achieve ideal energy concentration for polynomial phase signals, they may be used for optimal instantaneous frequency estimation. Second, they are useful for discriminating between nonstationary processes of differing higher order moments. In the same way that the WVD is generalized, we generalize Cohen's class of TFDs by defining a class of generalized time-frequency distributions (GTFDs) obtained by a two dimensional smoothing of the GWVD. Another result derived from this approach is a method based on higher order spectra which allows the separation of cross-terms and auto-terms in the WVD.
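
    For reference, the second-order member of this family, the discrete (pseudo-)Wigner-Ville distribution, can be computed by Fourier-transforming the instantaneous autocorrelation over the lag variable; the GWVD generalizes this kernel to higher-order products of the signal. The sketch below is a textbook-style illustration, not the paper's implementation.

```python
import numpy as np

def wigner_ville(x):
    """Discrete pseudo-Wigner-Ville distribution of an analytic signal x.

    For each time n, the instantaneous autocorrelation
    r[m] = x[n+m] * conj(x[n-m]) is formed and Fourier transformed over
    the lag m. Higher-order generalizations (the GWVD) replace this
    second-order kernel with higher-order products of the signal."""
    n_samp = len(x)
    wvd = np.zeros((n_samp, n_samp))
    for n in range(n_samp):
        lmax = min(n, n_samp - 1 - n)
        r = np.zeros(n_samp, dtype=complex)
        for m in range(-lmax, lmax + 1):
            r[m % n_samp] = x[n + m] * np.conj(x[n - m])
        wvd[:, n] = np.real(np.fft.fft(r))  # real-valued for analytic signals
    return wvd  # rows: frequency bins, columns: time

# A linear FM (chirp) signal concentrates along its instantaneous frequency
t = np.arange(256)
x = np.exp(1j * 2 * np.pi * (0.05 * t + 0.0005 * t**2))  # analytic chirp
tfd = wigner_ville(x)
# The lag kernel doubles frequency, so bin k maps to k / (2 * 256) of the
# sample rate; the peak at t = 128 sits near f_inst = 0.05 + 0.001 * 128.
print("peak bin at t=128:", np.argmax(tfd[:128, 128]))  # ~91 -> f ~ 0.178
```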

  2. A semi-parametric within-subject mixture approach to the analyses of responses and response times.

    PubMed

    Molenaar, Dylan; Bolsinova, Maria; Vermunt, Jeroen K

    2018-05-01

    In item response theory, modelling the item response times in addition to the item responses may improve the detection of possible between- and within-subject differences in the process that resulted in the responses. For instance, if respondents rely on rapid guessing on some items but not on all, the joint distribution of the responses and response times will be a multivariate within-subject mixture distribution. Suitable parametric methods to detect these within-subject differences have been proposed. In these approaches, a distribution needs to be assumed for the within-class response times. In this paper, it is demonstrated that these parametric within-subject approaches may produce false positives and biased parameter estimates if the assumption concerning the response time distribution is violated. A semi-parametric approach is proposed which resorts to categorized response times. This approach is shown to produce hardly any false positives or parameter bias. In addition, the semi-parametric approach results in approximately the same power as the parametric approach. © 2017 The British Psychological Society.

  3. Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals

    PubMed Central

    Zhao, Ziyue; Liu, Congfeng

    2014-01-01

    In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. Firstly, array signal model for multicomponent chirp signals is presented and then array processing is applied in time-frequency analysis to mitigate cross-terms. According to the results of the array processing, Hough transform is performed and the estimation of time-frequency signature is obtained. Subsequently, subspace method for DOA estimation based on STFD matrix is achieved. Simulation results demonstrate the validity of the proposed method. PMID:27382610

  4. Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals.

    PubMed

    Zhao, Ziyue; Liu, Congfeng

    2014-01-01

    In the study of the joint estimation of time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. Firstly, array signal model for multicomponent chirp signals is presented and then array processing is applied in time-frequency analysis to mitigate cross-terms. According to the results of the array processing, Hough transform is performed and the estimation of time-frequency signature is obtained. Subsequently, subspace method for DOA estimation based on STFD matrix is achieved. Simulation results demonstrate the validity of the proposed method.

  5. Using Residence Time Distributions (RTDs) to Address the Traceability of Raw Materials in Continuous Pharmaceutical Manufacturing.

    PubMed

    Engisch, William; Muzzio, Fernando

    Continuous processing in pharmaceutical manufacturing is a relatively new approach that has generated significant attention. While it has been used for decades in other industries, showing significant advantages, the pharmaceutical industry has been slow in its adoption of continuous processing, primarily due to regulatory uncertainty. This paper aims to help address these concerns by introducing methods for batch definition, raw material traceability, and sensor frequency determination. All of the methods are based on established engineering and mathematical principles, especially the residence time distribution (RTD). This paper introduces a risk-based approach to address content uniformity challenges of continuous manufacturing. All of the detailed methods are discussed using a direct compaction manufacturing line as the main example, but the techniques can easily be applied to other continuous manufacturing methods such as wet and dry granulation, hot melt extrusion, capsule filling, etc.
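
    The RTD machinery underlying these methods is straightforward to compute from a pulse tracer experiment: normalize the outlet response to obtain E(t), then take its moments. The sketch below is a minimal illustration with a hypothetical tracer curve; the mapping from RTD spread to a traceability window at the end is a simplified reading of the approach, not the paper's exact batch definition.

```python
import numpy as np

def rtd_from_pulse(t, c):
    """Normalize a pulse tracer response c(t) into the residence time
    distribution E(t); return E, the mean residence time (first moment),
    and the variance of residence times (trapezoidal integration)."""
    area = np.trapz(c, t)
    E = c / area                        # E(t) integrates to 1
    t_mean = np.trapz(t * E, t)
    var = np.trapz((t - t_mean) ** 2 * E, t)
    return E, t_mean, var

# Hypothetical tracer concentrations measured at a blender outlet
t = np.linspace(0.0, 60.0, 121)                 # time, s
c = np.exp(-0.5 * ((t - 20.0) / 6.0) ** 2)      # pulse response, arbitrary units
E, t_mean, var = rtd_from_pulse(t, c)
std = np.sqrt(var)
print(f"mean residence time = {t_mean:.1f} s, spread (std) = {std:.1f} s")
# A raw-material disturbance entering at time t0 is then expected at the
# outlet over roughly [t0 + t_mean - 3*std, t0 + t_mean + 3*std].
```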

  6. Fast radiative transfer models for retrieval of cloud properties in the back-scattering region: application to DSCOVR-EPIC sensor

    NASA Astrophysics Data System (ADS)

    Molina Garcia, Victor; Sasi, Sruthy; Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego

    2017-04-01

    In this work, the requirements for the retrieval of cloud properties in the back-scattering region are described, and their application to the measurements taken by the Earth Polychromatic Imaging Camera (EPIC) on board the Deep Space Climate Observatory (DSCOVR) is shown. Various radiative transfer models and their linearizations are implemented, and their advantages and issues are analyzed. As radiative transfer calculations in the back-scattering region are computationally time-consuming, several acceleration techniques are also studied. The radiative transfer models analyzed include the exact Discrete Ordinate method with Matrix Exponential (DOME), the Matrix Operator method with Matrix Exponential (MOME), and the approximate asymptotic and equivalent Lambertian cloud models. To reduce the computational cost of the line-by-line (LBL) calculations, the k-distribution method, the Principal Component Analysis (PCA) and a combination of the k-distribution method plus PCA are used. The linearized radiative transfer models for retrieval of cloud properties include the Linearized Discrete Ordinate method with Matrix Exponential (LDOME), the Linearized Matrix Operator method with Matrix Exponential (LMOME) and the Forward-Adjoint Discrete Ordinate method with Matrix Exponential (FADOME). These models were applied to the EPIC oxygen-A band absorption channel at 764 nm. It is shown that the approximate asymptotic and equivalent Lambertian cloud models give inaccurate results, so an offline processor for the retrieval of cloud properties in the back-scattering region requires the use of exact models such as DOME and MOME, which behave similarly. The combination of the k-distribution method plus PCA presents similar accuracy to the LBL calculations, but it is up to 360 times faster, and the relative errors for the computed radiances are less than 1.5% compared to the results when the exact phase function is used. Finally, the linearized models studied show similar behavior, with relative errors less than 1% for the radiance derivatives, but FADOME is 2 times faster than LDOME and 2.5 times faster than LMOME.

  7. The RATIO method for time-resolved Laue crystallography

    PubMed Central

    Coppens, Philip; Pitak, Mateusz; Gembicky, Milan; Messerschmidt, Marc; Scheins, Stephan; Benedict, Jason; Adachi, Shin-ichi; Sato, Tokushi; Nozawa, Shunsuke; Ichiyanagi, Kohei; Chollet, Matthieu; Koshihara, Shin-ya

    2009-01-01

    A RATIO method for analysis of intensity changes in time-resolved pump–probe Laue diffraction experiments is described. The method eliminates the need for scaling the data with a wavelength curve representing the spectral distribution of the source and removes the effect of possible anisotropic absorption. It does not require relative scaling of series of frames and removes errors due to all but very short term fluctuations in the synchrotron beam. PMID:19240334

  8. Delineating Facies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics with Level Set Transformation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hammond, Glenn Edward; Song, Xuehang; Ye, Ming

    A new approach is developed to delineate the spatial distribution of discrete facies (geological units that have unique distributions of hydraulic, physical, and/or chemical properties) conditioned not only on direct data (measurements directly related to facies properties, e.g., grain size distribution obtained from borehole samples) but also on indirect data (observations indirectly related to facies distribution, e.g., hydraulic head and tracer concentration). Our method integrates for the first time ensemble data assimilation with traditional transition probability-based geostatistics. The concept of level set is introduced to build shape parameterization that allows transformation between discrete facies indicators and continuous random variables. The spatial structure of different facies is simulated by indicator models using conditioning points selected adaptively during the iterative process of data assimilation. To evaluate the new method, a two-dimensional semi-synthetic example is designed to estimate the spatial distribution and permeability of two distinct facies from transient head data induced by pumping tests. The example demonstrates that our new method adequately captures the spatial pattern of facies distribution by imposing spatial continuity through conditioning points. The new method also reproduces the overall response in hydraulic head field with better accuracy compared to data assimilation with no constraints on spatial continuity on facies.

  9. Stochastic nature of series of waiting times

    NASA Astrophysics Data System (ADS)

    Anvari, Mehrnaz; Aghamohammadi, Cina; Dashti-Naserabadi, H.; Salehi, E.; Behjat, E.; Qorbani, M.; Khazaei Nezhad, M.; Zirak, M.; Hadjihosseini, Ali; Peinke, Joachim; Tabar, M. Reza Rahimi

    2013-06-01

    Although fluctuations in the waiting time series have been studied for a long time, some important issues such as its long-range memory and its stochastic features in the presence of nonstationarity have so far remained unstudied. Here we find that the “waiting times” series for a given increment level have long-range correlations with Hurst exponents belonging to the interval 1/2 < H < 1.

  10. Individual cell lag time distributions of Cronobacter (Enterobacter sakazakii) and impact of pooling samples on its detection in powdered infant formula.

    PubMed

    Miled, Rabeb Bennour; Guillier, Laurent; Neves, Sandra; Augustin, Jean-Christophe; Colin, Pierre; Besse, Nathalie Gnanou

    2011-06-01

    Cells of six strains of Cronobacter were subjected to dry stress and stored for 2.5 months at ambient temperature. The individual cell lag time distributions of recovered cells were characterized at 25 °C and 37 °C in non-selective broth. The individual cell lag times were deduced from the times taken by cultures from individual cells to reach an optical density threshold. In parallel, growth curves for each strain at high contamination levels were determined in the same growth conditions. In general, the extreme value type II distribution with a shape parameter fixed to 5 (EVIIb) was the most effective at describing the 12 observed distributions of individual cell lag times. Recently, a model for characterizing individual cell lag time distribution from population growth parameters was developed for other food-borne pathogenic bacteria such as Listeria monocytogenes. We confirmed this model's applicability to Cronobacter by comparing the mean and the standard deviation of individual cell lag times to populational lag times observed with high initial concentration experiments. We also validated the model in realistic conditions by studying growth in powdered infant formula decimally diluted in Buffered Peptone Water, which represents the first enrichment step of the standard detection method for Cronobacter. Individual lag times and the pooling of samples significantly affect detection performances. Copyright © 2010 Elsevier Ltd. All rights reserved.

  11. Time-dependent landslide probability mapping

    USGS Publications Warehouse

    Campbell, Russell H.; Bernknopf, Richard L.

    1993-01-01

    Case studies where the time of failure is known for rainfall-triggered debris flows can be used to estimate the parameters of a hazard model in which the probability of failure is a function of time. As an example, a time-dependent function for the conditional probability of a soil slip is estimated from independent variables representing hillside morphology, approximations of material properties, and the duration and rate of rainfall. If probabilities are calculated in a GIS (geographic information system) environment, the spatial distribution of the result for any given hour can be displayed on a map. Although the probability levels in this example are uncalibrated, the method offers a potential for evaluating different physical models and different earth-science variables by comparing the map distribution of predicted probabilities with inventory maps for different areas and different storms. If linked with spatial and temporal socio-economic variables, this method could be used for short-term risk assessment.

  12. Monte Carlo Method for Determining Earthquake Recurrence Parameters from Short Paleoseismic Catalogs: Example Calculations for California

    USGS Publications Warehouse

    Parsons, Tom

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques [e.g., Ellsworth et al., 1999]. In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means [e.g., NIST/SEMATECH, 2006]. For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.

  13. Monte Carlo method for determining earthquake recurrence parameters from short paleoseismic catalogs: Example calculations for California

    USGS Publications Warehouse

    Parsons, T.

    2008-01-01

    Paleoearthquake observations often lack enough events at a given site to directly define a probability density function (PDF) for earthquake recurrence. Sites with fewer than 10-15 intervals do not provide enough information to reliably determine the shape of the PDF using standard maximum-likelihood techniques (e.g., Ellsworth et al., 1999). In this paper I present a method that attempts to fit wide ranges of distribution parameters to short paleoseismic series. From repeated Monte Carlo draws, it becomes possible to quantitatively estimate most likely recurrence PDF parameters, and a ranked distribution of parameters is returned that can be used to assess uncertainties in hazard calculations. In tests on short synthetic earthquake series, the method gives results that cluster around the mean of the input distribution, whereas maximum likelihood methods return the sample means (e.g., NIST/SEMATECH, 2006). For short series (fewer than 10 intervals), sample means tend to reflect the median of an asymmetric recurrence distribution, possibly leading to an overestimate of the hazard should they be used in probability calculations. Therefore a Monte Carlo approach may be useful for assessing recurrence from limited paleoearthquake records. Further, the degree of functional dependence among parameters like mean recurrence interval and coefficient of variation can be established. The method is described for use with time-independent and time-dependent PDFs, and results from 19 paleoseismic sequences on strike-slip faults throughout the state of California are given.
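
    A minimal version of the Monte Carlo idea can be sketched as follows: draw candidate recurrence parameters from broad uniform ranges, score each candidate by the likelihood of the observed intervals, and keep the ranked list. The lognormal recurrence family, the parameter ranges, and the interval series below are illustrative assumptions, not the paper's data or exact procedure.

```python
import numpy as np
from scipy.stats import lognorm

def rank_recurrence_params(intervals, n_trials=20000, seed=0):
    """Draw candidate (mean recurrence, coefficient of variation) pairs from
    broad uniform ranges, score each by the log-likelihood of the observed
    paleoseismic intervals under a lognormal recurrence PDF, and return the
    candidates ranked by likelihood."""
    rng = np.random.default_rng(seed)
    mu = rng.uniform(50, 500, n_trials)      # mean recurrence interval, years
    cov = rng.uniform(0.2, 1.5, n_trials)    # coefficient of variation
    # lognormal shape/scale from the mean and CoV of the distribution
    sigma = np.sqrt(np.log(1 + cov**2))
    scale = mu / np.sqrt(1 + cov**2)
    logL = np.zeros(n_trials)
    for x in intervals:
        logL += lognorm.logpdf(x, s=sigma, scale=scale)
    order = np.argsort(logL)[::-1]
    return mu[order], cov[order], logL[order]

# Six hypothetical inter-event times (years) from a short paleoseismic record
intervals = [95.0, 210.0, 140.0, 180.0, 120.0, 260.0]
mu, cov, logL = rank_recurrence_params(intervals)
print(f"best-fit mean recurrence ~ {mu[0]:.0f} yr, CoV ~ {cov[0]:.2f}")
print(f"top 5% of trials span mean recurrence "
      f"{mu[:1000].min():.0f}-{mu[:1000].max():.0f} yr")
```

    The spread of the top-ranked candidates, rather than a single best fit, is what carries forward into hazard calculations as parameter uncertainty.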

  14. Real-time visualization and quantification of retrograde cardioplegia delivery using near infrared fluorescent imaging.

    PubMed

    Rangaraj, Aravind T; Ghanta, Ravi K; Umakanthan, Ramanan; Soltesz, Edward G; Laurence, Rita G; Fox, John; Cohn, Lawrence H; Bolman, R M; Frangioni, John V; Chen, Frederick Y

    2008-01-01

    Homogeneous delivery of cardioplegia is essential for myocardial protection during cardiac surgery. Presently, there exist no established methods to quantitatively assess cardioplegia distribution intraoperatively and determine when retrograde cardioplegia is required. In this study, we evaluate the feasibility of near infrared (NIR) imaging for real-time visualization of cardioplegia distribution in a porcine model. A portable, intraoperative, real-time NIR imaging system was utilized. NIR fluorescent cardioplegia solution was developed by incorporating indocyanine green (ICG) into crystalloid cardioplegia solution. Real-time NIR imaging was performed while the fluorescent cardioplegia solution was infused via the retrograde route in five ex vivo normal porcine hearts and in five ex vivo porcine hearts status post left anterior descending (LAD) coronary artery ligation. Horizontal cross-sections of the hearts were obtained at proximal, middle, and distal LAD levels. Videodensitometry was performed to quantify distribution of fluorophore content. The progressive distribution of cardioplegia was clearly visualized with NIR imaging. Complete visualization of retrograde distribution occurred within 4 minutes of infusion. Videodensitometry revealed that retrograde cardioplegia primarily distributed to the left ventricle (LV) and anterior septum. In hearts with LAD ligation, antegrade cardioplegia did not distribute to the anterior LV. This deficiency was compensated for with retrograde cardioplegia supplementation. Incorporation of ICG into cardioplegia allows real-time visualization of cardioplegia delivery via NIR imaging. This technology may prove useful in guiding intraoperative decisions pertaining to when retrograde cardioplegia is mandated.

  15. A physically based catchment partitioning method for hydrological analysis

    NASA Astrophysics Data System (ADS)

    Menduni, Giovanni; Riboni, Vittoria

    2000-07-01

    We propose a partitioning method for the topographic surface, which is particularly suitable for hydrological distributed modelling and shallow-landslide distributed modelling. The model provides variable mesh size and appears to be a natural evolution of contour-based digital terrain models. The proposed method allows the drainage network to be derived from the contour lines. The single channels are calculated via a search for the steepest downslope lines. Then, for each network node, the contributing area is determined by means of a search for both steepest upslope and downslope lines. This leads to the basin being partitioned into physically based finite elements delimited by irregular polygons. In particular, the distributed computation of local geomorphological parameters (i.e. aspect, average slope and elevation, main stream length, concentration time, etc.) can be performed easily for each single element. The contributing area system, together with the information on the distribution of geomorphological parameters provide a useful tool for distributed hydrological modelling and simulation of environmental processes such as erosion, sediment transport and shallow landslides.

  16. Double density dynamics: realizing a joint distribution of a physical system and a parameter system

    NASA Astrophysics Data System (ADS)

    Fukuda, Ikuo; Moritsugu, Kei

    2015-11-01

    To perform a variety of types of molecular dynamics simulations, we created a deterministic method termed 'double density dynamics' (DDD), which realizes an arbitrary distribution for both the physical variables and their associated parameters simultaneously. Specifically, we constructed an ordinary differential equation that has an invariant density corresponding to a joint distribution of the physical system and the parameter system. A generalized density function leads to a physical system that evolves in a nonequilibrium environment described by superstatistics. The joint distribution density of the physical system and the parameter system appears as the Radon-Nikodym derivative of a distribution created by a scaled long-time average, generated from the flow of the differential equation under an ergodic assumption. The general mathematical framework is fully discussed to establish the theoretical foundations of our method, and a numerical example of a 1D harmonic oscillator is provided to validate the method as applied to the temperature parameters.

  17. Response of spatial vegetation distribution in China to climate changes since the Last Glacial Maximum (LGM)

    PubMed Central

    Wang, Siyang; Xu, Xiaoting; Shrestha, Nawal; Zimmermann, Niklaus E.; Tang, Zhiyao; Wang, Zhiheng

    2017-01-01

    Analyzing how climate change affects vegetation distribution is one of the central issues of global change ecology as this has important implications for the carbon budget of terrestrial vegetation. Mapping vegetation distribution under historical climate scenarios is essential for understanding the response of vegetation distribution to future climatic changes. The reconstructions of palaeovegetation based on pollen data provide a useful method to understand the relationship between climate and vegetation distribution. However, this method is limited in time and space. Here, using species distribution model (SDM) approaches, we explored the climatic determinants of contemporary vegetation distribution and reconstructed the distribution of Chinese vegetation during the Last Glacial Maximum (LGM, 18,000 14C yr BP) and Middle-Holocene (MH, 6000 14C yr BP). The dynamics of vegetation distribution since the LGM reconstructed by SDMs were largely consistent with those based on pollen data, suggesting that the SDM approach is a useful tool for studying historical vegetation dynamics and their response to climate change across time and space. Comparison between the modeled contemporary potential natural vegetation distribution and the observed contemporary distribution suggests that temperate deciduous forests, subtropical evergreen broadleaf forests, temperate deciduous shrublands and temperate steppe have low range filling and are strongly influenced by human activities. In general, the Tibetan Plateau, North and Northeast China, and the areas near 30°N in Central and Southeast China appeared to have experienced the highest turnover in vegetation due to climate change from the LGM to the present. PMID:28426780

  18. Variability of the occurrence frequency of solar flares as a function of peak hard X-ray rate

    NASA Technical Reports Server (NTRS)

    Bai, T.

    1993-01-01

    We study the occurrence frequency of solar flares as a function of the hard X-ray peak count rate, using observations of the Solar Maximum Mission. The size distributions are well represented by power-law distributions with negative indices. As a better alternative to the conventional method, we devise a maximum likelihood method of determining the power-law index of the size distribution. We find that the power-law index of the size distribution changes with time and with the phase of the 154-day periodicity. The size distribution is steeper during the maximum years of solar cycle 21 (1980 and 1981) than during the declining phase (1982-1984). The size distribution, however, is flatter during the maximum phase of the 154-day periodicity than during the minimum phase. The implications of these findings are discussed.
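
    For reference, the standard continuous-case maximum-likelihood estimator of a power-law index has a closed form. The Python sketch below shows that estimator and its asymptotic error; it is a generic illustration, since the paper's estimator for binned peak count rates may differ in detail.

```python
import numpy as np

def powerlaw_index_mle(x, xmin):
    """Maximum-likelihood index for p(x) proportional to x**(-alpha), x >= xmin.
    Standard continuous-case estimator; the paper's version for binned hard
    X-ray peak count rates may differ.
    """
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    n = x.size
    alpha = 1.0 + n / np.log(x / xmin).sum()
    stderr = (alpha - 1.0) / np.sqrt(n)   # asymptotic standard error
    return alpha, stderr

# Example: synthetic power-law sample with true alpha = 1.8 (inverse-CDF draw).
rng = np.random.default_rng(1)
sample = (1.0 - rng.uniform(size=5000)) ** (-1.0 / 0.8)
alpha, stderr = powerlaw_index_mle(sample, xmin=1.0)
print(f"alpha = {alpha:.3f} +/- {stderr:.3f}")
```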

  19. Spatiotemporal reconstruction of list-mode PET data.

    PubMed

    Nichols, Thomas E; Qi, Jinyi; Asma, Evren; Leahy, Richard M

    2002-04-01

    We describe a method for computing a continuous time estimate of tracer density using list-mode positron emission tomography data. The rate function in each voxel is modeled as an inhomogeneous Poisson process whose rate function can be represented using a cubic B-spline basis. The rate functions are estimated by maximizing the likelihood of the arrival times of detected photon pairs over the control vertices of the spline, modified by quadratic spatial and temporal smoothness penalties and a penalty term to enforce nonnegativity. Randoms rate functions are estimated by assuming independence between the spatial and temporal randoms distributions. Similarly, scatter rate functions are estimated by assuming spatiotemporal independence and that the temporal distribution of the scatter is proportional to the temporal distribution of the trues. A quantitative evaluation was performed using simulated data and the method is also demonstrated in a human study using 11C-raclopride.
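
    The core of the estimation is the log-likelihood of an inhomogeneous Poisson process whose rate is a cubic B-spline. A minimal per-voxel sketch in Python follows; the knot vector, coefficients, and event times are illustrative, and the published method additionally applies the spatial/temporal smoothness and nonnegativity penalties described above.

```python
import numpy as np
from scipy.interpolate import BSpline

def poisson_spline_loglik(coef, knots, arrival_times, t0, t1, degree=3):
    """Log-likelihood of an inhomogeneous Poisson process with a cubic
    B-spline rate: sum_i log(rate(t_i)) minus the integral of the rate
    over [t0, t1]. A single-voxel sketch without the penalty terms.
    """
    rate = BSpline(knots, coef, degree, extrapolate=False)
    lam = rate(arrival_times)
    if np.any(lam <= 0):
        return -np.inf                      # rates must stay positive
    # Integrate the spline rate exactly over the scan interval.
    return np.log(lam).sum() - rate.integrate(t0, t1)

# Example: 6 control vertices on a clamped cubic knot vector over [0, 60] s.
knots = np.concatenate(([0.0] * 4, [20.0, 40.0], [60.0] * 4))
coef = np.array([5.0, 8.0, 12.0, 10.0, 6.0, 4.0])   # counts per second
events = np.sort(np.random.default_rng(2).uniform(0, 60, 400))
print(poisson_spline_loglik(coef, knots, events, 0.0, 60.0))
```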

  20. Physics-driven Spatiotemporal Regularization for High-dimensional Predictive Modeling: A Novel Approach to Solve the Inverse ECG Problem

    NASA Astrophysics Data System (ADS)

    Yao, Bing; Yang, Hui

    2016-12-01

    This paper presents a novel physics-driven spatiotemporal regularization (STRE) method for high-dimensional predictive modeling in complex healthcare systems. The model not only captures the physics-based interrelationship between time-varying explanatory and response variables that are distributed in space, but also incorporates spatial and temporal regularization to improve prediction performance. The STRE model is implemented to predict the time-varying distribution of electric potentials on the heart surface based on electrocardiogram (ECG) data from a distributed sensor network placed on the body surface. The model performance is evaluated and validated in both a simulated two-sphere geometry and a realistic torso-heart geometry. Experimental results show that the STRE model significantly outperforms regularization models that are widely used in current practice, such as Tikhonov zero-order, Tikhonov first-order and L1 first-order regularization methods.
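
    As a point of comparison, the zero-order Tikhonov baseline mentioned above has a one-line closed-form solution. The following sketch uses a synthetic ill-conditioned forward matrix standing in for the real torso-heart transfer matrix; all values are illustrative.

```python
import numpy as np

def tikhonov_zero_order(A, b, lam):
    """Zero-order Tikhonov solution x = argmin ||Ax - b||^2 + lam^2 ||x||^2,
    one of the baselines the STRE model is compared against, solved via the
    regularized normal equations.
    """
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Example: recover heart-surface potentials x from body-surface readings b
# through an ill-conditioned forward matrix A (synthetic stand-in).
rng = np.random.default_rng(3)
A = rng.normal(size=(120, 80)) @ np.diag(1.0 / (1.0 + np.arange(80)))
x_true = np.sin(np.linspace(0, 3 * np.pi, 80))
b = A @ x_true + 0.01 * rng.normal(size=120)
x_hat = tikhonov_zero_order(A, b, lam=0.05)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```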

  1. Dynamic ADMM for Real-Time Optimal Power Flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Zhang, Yijian; Hong, Mingyi

    This paper considers distribution networks featuring distributed energy resources (DERs), and develops a dynamic optimization method to maximize given operational objectives in real time while adhering to relevant network constraints. The design of the dynamic algorithm is based on a suitable linearization of the AC power flow equations, and it leverages the so-called alternating direction method of multipliers (ADMM). The steps of the ADMM, however, are suitably modified to accommodate appropriate measurements from the distribution network and the DERs. With the aid of these measurements, the resultant algorithm can enforce given operational constraints in spite of inaccuracies in the representation of the AC power flows, and it avoids ubiquitous metering to gather the state of noncontrollable resources. Optimality and convergence of the proposed algorithm are established in terms of tracking of the solution of a convex surrogate of the AC optimal power flow problem.
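
    For readers unfamiliar with ADMM, the sketch below shows the generic x-update / z-update / dual-update pattern on a box-constrained least-squares problem, loosely mirroring the enforcement of operational limits on DER setpoints. It is a textbook ADMM illustration with synthetic data, not the measurement-modified algorithm of the paper.

```python
import numpy as np

def admm_box_constrained_ls(A, b, lo, hi, rho=1.0, iters=200):
    """Textbook ADMM for min ||Ax - b||^2 subject to lo <= x <= hi, via the
    split x = z with g(z) the indicator of the box. Illustrates the update
    pattern only; it is not the paper's algorithm.
    """
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    # Factor the (fixed) x-update system once.
    L = np.linalg.cholesky(2.0 * AtA + rho * np.eye(n))
    for _ in range(iters):
        rhs = 2.0 * Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))   # x-update
        z = np.clip(x + u, lo, hi)                          # z-update (projection)
        u = u + x - z                                       # dual update
    return z

# Example: a tiny synthetic setpoint problem with per-unit box limits.
rng = np.random.default_rng(4)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
print(np.round(admm_box_constrained_ls(A, b, lo=-0.5, hi=0.5), 3))
```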

  2. Dynamic ADMM for Real-Time Optimal Power Flow: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Zhang, Yijian; Hong, Mingyi

    This paper considers distribution networks featuring distributed energy resources (DERs), and develops a dynamic optimization method to maximize given operational objectives in real time while adhering to relevant network constraints. The design of the dynamic algorithm is based on suitable linearizations of the AC power flow equations, and it leverages the so-called alternating direction method of multipliers (ADMM). The steps of the ADMM, however, are suitably modified to accommodate appropriate measurements from the distribution network and the DERs. With the aid of these measurements, the resultant algorithm can enforce given operational constraints in spite of inaccuracies in the representation of the AC power flows, and it avoids ubiquitous metering to gather the state of non-controllable resources. Optimality and convergence of the proposed algorithm are established in terms of tracking of the solution of a convex surrogate of the AC optimal power flow problem.

  3. Importance-sampling computation of statistical properties of coupled oscillators

    NASA Astrophysics Data System (ADS)

    Gupta, Shamik; Leitão, Jorge C.; Altmann, Eduardo G.

    2017-07-01

    We introduce and implement an importance-sampling Monte Carlo algorithm to study systems of globally coupled oscillators. Our computational method efficiently obtains estimates of the tails of the distribution of various measures of dynamical trajectories corresponding to states occurring with (exponentially) small probabilities. We demonstrate the general validity of our results by applying the method to two contrasting cases: the driven-dissipative Kuramoto model, a paradigm in the study of spontaneous synchronization; and the conservative Hamiltonian mean-field model, a prototypical system of long-range interactions. We present results for the distribution of the finite-time Lyapunov exponent and a time-averaged order parameter. Most notably, our results show that the distributions exhibit a vanishing standard deviation but a skewness that increases in magnitude with the number of oscillators, implying that nontrivial asymmetries and states yielding rare or atypical values of the observables persist even for a large number of oscillators.

  4. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    PubMed

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows an inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
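
    The generative assumption in the model is easy to simulate: draw each sample's variance from an inverse gamma distribution and the signal from a zero-mean Gaussian with that variance. The Python sketch below does exactly that with arbitrary illustrative parameters; the resulting marginal is heavy-tailed (a scaled Student-t), consistent with rectified EMG recordings.

```python
import numpy as np

def simulate_emg(n, alpha=3.0, beta=2.0, seed=0):
    """EMG-like signal under the paper's generative assumptions: per-sample
    variance from an inverse gamma distribution, signal zero-mean Gaussian
    given that variance. Parameter values are arbitrary illustrations.
    """
    rng = np.random.default_rng(seed)
    # If v ~ InvGamma(alpha, beta), then 1/v ~ Gamma(alpha, scale=1/beta).
    variance = 1.0 / rng.gamma(shape=alpha, scale=1.0 / beta, size=n)
    return rng.normal(0.0, np.sqrt(variance)), variance

signal, variance = simulate_emg(100000)
# The marginal is a scaled Student-t, so its tails are heavier than Gaussian:
# the excess kurtosis is positive (about 3 for alpha = 3).
m2, m4 = (signal**2).mean(), (signal**4).mean()
print("excess kurtosis:", m4 / m2**2 - 3.0)
```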

  5. A comparison of two worlds: How does Bayes hold up to the status quo for the analysis of clinical trials?

    PubMed

    Pressman, Alice R; Avins, Andrew L; Hubbard, Alan; Satariano, William A

    2011-07-01

    There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien-Fleming and Lan-DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus, for the data-generating distribution represented by these trials, we were able to compare the relative performance of these techniques. No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times depended on the choice of spending function and prior distribution. Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for the analysis of clinical trials. Though findings from this small sample did not show either method consistently outperforming the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. Copyright © 2011 Elsevier Inc. All rights reserved.

  6. Deuteron spin-lattice relaxation in the presence of an activation energy distribution: application to methanols in zeolite NaX.

    PubMed

    Stoch, G; Ylinen, E E; Birczynski, A; Lalowicz, Z T; Góra-Marek, K; Punkkinen, M

    2013-02-01

    A new method is introduced for analyzing deuteron spin-lattice relaxation in molecular systems with a broad distribution of activation energies and correlation times. In such samples the magnetization recovery is strongly non-exponential but can be fitted quite accurately by three exponentials. The considered system may consist of molecular groups with different mobility. For each group a Gaussian distribution of the activation energy is introduced. Every subsystem is characterized by three parameters: the mean activation energy E0, the distribution width σ, and the pre-exponential factor τ0 of the Arrhenius equation defining the correlation time; with these, the relaxation rate is calculated for every part of the distribution. Experiment-based limiting values allow the rates to be grouped into three classes. For each class the relaxation rate and weight are calculated and compared with experiment. The parameters E0, σ and τ0 are determined iteratively by repeating the whole cycle many times. The temperature dependence of the deuteron relaxation was observed in three samples containing CD3OH (200% and 100% loading) and CD3OD (200%) in NaX zeolite and analyzed by the described method between 20 K and 170 K. The obtained parameters, equal for all three samples, characterize the methyl and hydroxyl mobilities of the methanol molecules at two different locations. Copyright © 2012 Elsevier Inc. All rights reserved.
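
    The central computation, averaging a relaxation rate over a Gaussian distribution of activation energies with Arrhenius correlation times, can be sketched as follows. The BPP-type spectral density and the coupling constant C are generic stand-ins rather than the paper's exact expressions, and all numerical values are illustrative.

```python
import numpy as np

KB = 8.617e-5   # Boltzmann constant in eV/K

def mean_relaxation_rate(T, omega, E0, sigma, tau0, C=1e9, n_grid=2001):
    """Relaxation rate 1/T1 averaged over a Gaussian distribution of
    activation energies, with Arrhenius correlation times
    tau = tau0 * exp(E / (kB * T)) and a generic BPP-type spectral density.
    C and the BPP form are illustrative stand-ins.
    """
    E = np.linspace(E0 - 5.0 * sigma, E0 + 5.0 * sigma, n_grid)
    w = np.exp(-0.5 * ((E - E0) / sigma) ** 2)
    w /= w.sum()                                  # normalized Gaussian weights
    tau = tau0 * np.exp(E / (KB * T))
    J = lambda freq: tau / (1.0 + (freq * tau) ** 2)
    rates = C * (J(omega) + 4.0 * J(2.0 * omega))
    return float((w * rates).sum())

# Example: rate versus temperature for E0 = 0.12 eV, sigma = 0.02 eV,
# tau0 = 1e-13 s, at a deuteron frequency of 46 MHz (all values illustrative).
for T in (40.0, 80.0, 120.0, 160.0):
    r = mean_relaxation_rate(T, 2 * np.pi * 46e6, 0.12, 0.02, 1e-13)
    print(f"{T:5.0f} K   1/T1 = {r:.4g} 1/s")
```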

  7. Properties and determinants of codon decoding time distributions

    PubMed Central

    2014-01-01

    Background: Codon decoding time is a fundamental property of mRNA translation believed to affect the abundance, function, and properties of proteins. Recently, a novel experimental technology, ribosome profiling, was developed to measure the density, and thus the speed, of ribosomes at codon resolution. Specifically, this method is based on next-generation sequencing, which theoretically can provide footprint counts that correspond to the probability of observing a ribosome at each position of each transcript. Results: In this study, we report for the first time various novel properties of the distribution of codon footprint counts in five organisms, based on a large-scale analysis of ribosomal profiling data. We show that codons have distinctive footprint count distributions. These tend to be preserved along the inner part of the ORF but differ at the 5' and 3' ends of the ORF, suggesting that the translation-elongation stage actually includes three biophysical sub-steps. In addition, we study various basic properties of the codon footprint count distributions and show that some of them correlate with the abundance of the tRNA molecule types recognizing them. Conclusions: Our approach emphasizes the advantages of analyzing ribosome profiling and similar types of data via a comparative genomic codon-distribution-centric view. Thus, our methods can be used in future studies related to translation and even transcription elongation. PMID:25572668

  8. The new PARIOTM device for determining continuous particle-size distributions of soils and sediments.

    NASA Astrophysics Data System (ADS)

    Miller, Alina; Pertassek, Thomas; Steins, Andreas; Durner, Wolfgang; Göttlein, Axel; Petrik, Wolfgang; von Unold, Georg

    2017-04-01

    The particle-size distribution (PSD) is a key property of soils. The reference method for determining the PSD is based on gravitational sedimentation of particles in an initially homogeneous suspension. Traditional methods measure manually either (i) the uplift of a floating body in the suspension at different times (hydrometer method) or (ii) the mass of solids in extracted suspension aliquots at predefined sampling depths and times (pipette method). Both methods disturb the sedimentation process and provide only discrete points of the PSD. Durner et al. (2017) recently developed a new automated method to determine particle-size distributions of soils and sediments from gravitational sedimentation (Durner, W., S.C. Iden, and G. von Unold: The integral suspension pressure method (ISP) for precise particle-size analysis by gravitational sedimentation, Water Resources Research, doi:10.1002/2016WR019830, 2017). The so-called integral suspension pressure (ISP) method estimates continuous PSDs from sedimentation experiments by recording the temporal evolution of the suspension pressure at a certain measurement depth in a sedimentation cylinder. It requires no manual interaction after the start, and thus no specialized training of laboratory personnel, and it avoids any disturbance of the sedimentation process. The technology required to perform these experiments was developed by the UMS company, Munich, and is now available as an instrument called PARIO, traded by the METER Group. In this poster, the basic functioning of PARIO is shown and key components and parameters of the technology are explained.
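
    All three sedimentation methods (hydrometer, pipette, ISP) rest on Stokes' law, which links settling depth and time to an equivalent spherical diameter. A minimal sketch, with textbook defaults for quartz particles settling in water at 20 C:

```python
import numpy as np

def stokes_diameter(depth_m, time_s, rho_particle=2650.0, rho_fluid=1000.0,
                    viscosity=1.002e-3, g=9.81):
    """Equivalent spherical diameter (in metres) of a particle that settles
    a given depth in a given time under Stokes' law. Default densities and
    viscosity are textbook values for quartz in water at 20 C.
    """
    v = depth_m / time_s                        # settling velocity
    return np.sqrt(18.0 * viscosity * v / ((rho_particle - rho_fluid) * g))

# Example: the largest particle still suspended above 20 cm depth after 4 h.
d = stokes_diameter(0.20, 4 * 3600)
print(f"{d * 1e6:.1f} micrometres")
```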

  9. Determining prescription durations based on the parametric waiting time distribution.

    PubMed

    Støvring, Henrik; Pottegård, Anton; Hallas, Jesper

    2016-12-01

    The purpose of the study is to develop a method for estimating the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd.
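
    Once the Log-Normal IAD parameters have been estimated from the WTD, the 80% rule reduces to a percentile lookup. A minimal sketch, with illustrative (not fitted) log-scale parameters:

```python
import numpy as np
from scipy import stats

def prescription_duration(mu, sigma, coverage=0.80):
    """Duration of a single prescription, defined as in the paper: the time
    within which a given fraction (default 80%) of current users will have
    redeemed their next prescription, i.e. a percentile of the inter-arrival
    density. Assumes a Log-Normal IAD with parameters (mu, sigma) on the
    log scale.
    """
    return stats.lognorm.ppf(coverage, s=sigma, scale=np.exp(mu))

# Example: a median refill gap of 80 days and log-scale spread 0.55 give a
# duration of about 127 days.
print(round(prescription_duration(mu=np.log(80.0), sigma=0.55), 1), "days")
```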

  10. Advances in measuring techniques for turbine cooling test rigs

    NASA Technical Reports Server (NTRS)

    Pollack, F. G.

    1972-01-01

    Surface temperature distribution measurements for turbine vanes and blades were obtained by measuring the infrared energy emitted by the airfoil. The IR distribution can be related to temperature distribution by suitable calibration methods and the data presented in the form of isotherm maps. Both IR photographic and real time electro-optical methods are being investigated. The methods can be adapted to rotating as well as stationary targets, and both methods can utilize computer processing. Pressure measurements on rotating components are made with a rotating system incorporating 10 miniature transducers. A mercury wetted slip ring assembly was used to supply excitation power and as a signal transfer device. The system was successfully tested up to speeds of 9000 rpm and is now being adapted to measure rotating blade airflow quantities in a spin rig and a research engine.

  11. Airborne agent concentration analysis

    DOEpatents

    Gelbard, Fred

    2004-02-03

    A method and system for inferring airborne contaminant concentrations in rooms without contaminant sensors, based on data collected by contaminant sensors in other rooms of a building, using known airflow interconnectivity data. The method solves a least squares problem that minimizes the difference between measured and predicted contaminant sensor concentrations with respect to an unknown contaminant release time. Solutions are constrained to provide non-negative initial contaminant concentrations in all rooms. The method can also be used to identify a near-optimal distribution of sensors within the building when the number of available sensors is less than the total number of rooms. This is achieved by requiring a system-sensor matrix that is non-singular, and by selecting the distribution that yields the lowest condition number of all the distributions considered. The method can predict one or more contaminant initial release points from the collected data.
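
    The non-negativity-constrained least-squares step can be sketched with a standard NNLS solver. The transport matrix below is synthetic and stands in for the airflow-interconnectivity model; the patent additionally searches over the unknown release time.

```python
import numpy as np
from scipy.optimize import nnls

# Sensor readings stacked over several sample times give the rows of the
# transport matrix M (synthetic here); the patent builds M from the rooms'
# airflow interconnectivity.
rng = np.random.default_rng(5)
M = rng.uniform(0.0, 1.0, size=(8, 4))        # 2 sensors x 4 sample times, 4 rooms
c_true = np.array([0.0, 2.0, 0.0, 0.5])       # initial concentration per room
y = M @ c_true                                # noiseless sensor readings

# Non-negative least squares enforces physically meaningful concentrations.
c_hat, residual = nnls(M, y)
print(np.round(c_hat, 3), residual)
```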

  12. Demonstration of fundamental statistics by studying timing of electronics signals in a physics-based laboratory

    NASA Astrophysics Data System (ADS)

    Beach, Shaun E.; Semkow, Thomas M.; Remling, David J.; Bradt, Clayton J.

    2017-07-01

    We have developed accessible methods to demonstrate fundamental statistics in several phenomena, in the context of teaching electronic signal processing in a physics-based college-level curriculum. A relationship between the exponential time-interval distribution and Poisson counting distribution for a Markov process with constant rate is derived in a novel way and demonstrated using nuclear counting. Negative binomial statistics is demonstrated as a model for overdispersion and justified by the effect of electronic noise in nuclear counting. The statistics of digital packets on a computer network are shown to be compatible with the fractal-point stochastic process leading to a power-law as well as generalized inverse Gaussian density distributions of time intervals between packets.
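
    The exponential/Poisson relationship at the heart of the demonstration is easy to reproduce numerically. A short sketch:

```python
import numpy as np

rng = np.random.default_rng(6)
rate, t_total = 5.0, 2000.0     # mean event rate (1/s) and observation time (s)

# Constant-rate Markov process: exponential waiting times between events.
waits = rng.exponential(1.0 / rate, size=int(rate * t_total * 1.2))
arrivals = np.cumsum(waits)
arrivals = arrivals[arrivals < t_total]

# Counting events in 1-second bins should then give a Poisson distribution,
# for which the mean equals the variance (both near 5 here).
counts, _ = np.histogram(arrivals, bins=np.arange(0.0, t_total + 1.0))
print("mean:", counts.mean(), "variance:", counts.var())
```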

  13. Beyond Hosting Capacity: Using Shortest Path Methods to Minimize Upgrade Cost Pathways: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gensollen, Nicolas; Horowitz, Kelsey A; Palmintier, Bryan S

    We present in this paper a graph-based forward-looking algorithm applied to distribution planning in the context of distributed PV penetration. We study the target hosting capacity (THC) problem, where the objective is to find the cheapest sequence of system upgrades to reach a predefined hosting capacity target value. We show in this paper that commonly used short-term cost minimization approaches usually lead to suboptimal solutions. By comparing our method against such myopic techniques on real distribution systems, we show that our algorithm is able to reduce the overall integration costs by looking ahead to future decisions. Because hosting capacity is hard to compute, this problem requires efficient methods to search the space. We demonstrate here that heuristics using domain-specific knowledge can be used to improve the algorithm's performance so that real distribution systems can be studied.

  14. Detection of the Dinozoans Pfiesteria piscicida and P. shumwayae: a review of detection methods and geographic distribution.

    PubMed

    Rublee, Parke A; Remington, David L; Schaefer, Eric F; Marshall, Michael M

    2005-01-01

    Molecular methods, including conventional PCR, real-time PCR, denaturing gradient gel electrophoresis, fluorescent fragment detection PCR, and fluorescent in situ hybridization, have all been developed for use in identifying and studying the distribution of the toxic dinoflagellates Pfiesteria piscicida and P. shumwayae. Application of the methods has demonstrated a worldwide distribution of both species and provided insight into their environmental tolerance range and temporal changes in distribution. Genetic variability among geographic locations generally appears low in rDNA genes, and detection of the organisms in ballast water is consistent with rapid dispersal or high gene flow among populations, but additional sequence data are needed to verify this hypothesis. The rapid development and application of these tools serves as a model for study of other microbial taxa and provides a basis for future development of tools that can simultaneously detect multiple targets.

  15. [Study of inversion and classification of particle size distribution under dependent model algorithm].

    PubMed

    Sun, Xiao-Gang; Tang, Hong; Yuan, Gui-Bin

    2008-05-01

    For the total light scattering particle sizing technique, an inversion and classification method based on the dependent model algorithm was proposed. The measured particle system was inverted simultaneously with different particle distribution functions whose mathematical models were known in advance, and then classified according to the inversion errors. Simulation experiments showed that it is feasible to use the inversion errors to determine the particle size distribution. The particle size distribution function was obtained accurately at only three wavelengths in the visible light range using the genetic algorithm, and the inversion results were steady and reliable, which reduces the number of required wavelengths to the greatest extent and increases the flexibility in selecting the light source. The single-peak distribution inversion error was less than 5% and the bimodal distribution inversion error was less than 10% when 5% stochastic noise was added to the transmission extinction measurements at two wavelengths. The running time of this method was less than 2 s. The method has the advantages of simplicity, rapidity, and suitability for on-line particle size measurement.

  16. A simplified implementation of edge detection in MATLAB is faster and more sensitive than fast fourier transform for actin fiber alignment quantification.

    PubMed

    Kemeny, Steven Frank; Clyne, Alisa Morss

    2011-04-01

    Fiber alignment plays a critical role in the structure and function of cells and tissues. While fiber alignment quantification is important to experimental analysis and several different methods for quantifying fiber alignment exist, many studies focus on qualitative rather than quantitative analysis, perhaps due to the complexity of current fiber alignment methods. Speed and sensitivity were compared between edge detection and the fast Fourier transform (FFT) for measuring actin fiber alignment in cells exposed to shear stress. While edge detection using matrix multiplication was consistently more sensitive than FFT, image processing time was significantly longer. However, when MATLAB functions were used to implement edge detection, MATLAB's efficient element-by-element calculations and fast filtering techniques reduced computation cost 100-fold compared to the matrix-multiplication edge detection method. The new computation time was comparable to the FFT method, and MATLAB edge detection produced well-distributed fiber angle distributions that statistically distinguished aligned and unaligned fibers in half as many sample images. When the FFT sensitivity was improved by dividing images into smaller subsections, processing time grew larger than the time required for MATLAB edge detection. Implementation of edge detection in MATLAB is simpler, faster, and more sensitive than FFT for fiber alignment quantification.

  17. iGLASS: An Improvement to the GLASS Method for Estimating Species Trees from Gene Trees

    PubMed Central

    Rosenberg, Noah A.

    2012-01-01

    Several methods have been designed to infer species trees from gene trees while taking into account gene tree/species tree discordance. Although some of these methods provide consistent species tree topology estimates under a standard model, most either do not estimate branch lengths or are computationally slow. An exception, the GLASS method of Mossel and Roch, is consistent for the species tree topology, estimates branch lengths, and is computationally fast. However, GLASS systematically overestimates divergence times, leading to biased estimates of species tree branch lengths. By assuming a multispecies coalescent model in which multiple lineages are sampled from each of two taxa at L independent loci, we derive the distribution of the waiting time until the first interspecific coalescence occurs between the two taxa, considering all loci and measuring from the divergence time. We then use the mean of this distribution to derive a correction to the GLASS estimator of pairwise divergence times. We show that our improved estimator, which we call iGLASS, consistently estimates the divergence time between a pair of taxa as the number of loci approaches infinity, and that it is an unbiased estimator of divergence times when one lineage is sampled per taxon. We also show that many commonly used clustering methods can be combined with the iGLASS estimator of pairwise divergence times to produce a consistent estimator of the species tree topology. Through simulations, we show that iGLASS can greatly reduce the bias and mean squared error in obtaining estimates of divergence times in a species tree. PMID:22216756

  18. Robust low-frequency spread-spectrum navigation system

    DOEpatents

    Smith, Stephen F [Loudon, TN; Moore, James A [Powell, TN

    2012-01-03

    Methods and apparatus are described for a navigation system. A process includes providing a plurality of transmitters distributed throughout a desired coverage area; locking the plurality of transmitters to a common timing reference; transmitting a signal from each of the plurality of transmitters. An apparatus includes a plurality of transmitters distributed throughout a desired coverage area; wherein each of the plurality of transmitters comprises a packet generator; and wherein the plurality of transmitters are locked to a common timing reference.

  19. Robust low-frequency spread-spectrum navigation system

    DOEpatents

    Smith, Stephen F [Loudon, TN; Moore, James A [Powell, TN

    2011-01-25

    Methods and apparatus are described for a navigation system. A process includes providing a plurality of transmitters distributed throughout a desired coverage area; locking the plurality of transmitters to a common timing reference; transmitting a signal from each of the plurality of transmitters. An apparatus includes a plurality of transmitters distributed throughout a desired coverage area; wherein each of the plurality of transmitters comprises a packet generator; and wherein the plurality of transmitters are locked to a common timing reference.

  20. Robust low-frequency spread-spectrum navigation system

    DOEpatents

    Smith, Stephen F; Moore, James A

    2012-10-30

    Methods and apparatus are described for a navigation system. A process includes providing a plurality of transmitters distributed throughout a desired coverage area; locking the plurality of transmitters to a common timing reference; transmitting a signal from each of the plurality of transmitters. An apparatus includes a plurality of transmitters distributed throughout a desired coverage area; wherein each of the plurality of transmitters comprises a packet generator; and wherein the plurality of transmitters are locked to a common timing reference.

  1. Robust low-frequency spread-spectrum navigation system

    DOEpatents

    Smith, Stephen F [Loudon, TN; Moore, James A [Powell, TN

    2009-12-01

    Methods and apparatus are described for a navigation system. A process includes providing a plurality of transmitters distributed throughout a desired coverage area; locking the plurality of transmitters to a common timing reference; transmitting a signal from each of the plurality of transmitters. An apparatus includes a plurality of transmitters distributed throughout a desired coverage area; wherein each of the plurality of transmitters comprises a packet generator; and wherein the plurality of transmitters are locked to a common timing reference.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theuws, P.G.A.; Beijerinck, H.C.W.; Schram, D.C.

    Velocity analysis of the molecular beam is done with a time-of-flight method. The measured velocity distribution of the fast neutral atoms is described by the sum of two Maxwell-Boltzmann distributions with temperatures on the order of 0.25 and 1 eV, respectively. This bimodal distribution is attributed to an overpopulation of the high-energy tail of the ion velocity distribution. The measured intensities of the fast neutrals vary between 5 × 10^14 and 7 × 10^15 molecules s^-1 sr^-1.

  3. Incremental isometric embedding of high-dimensional data using connected neighborhood graphs.

    PubMed

    Zhao, Dongfang; Yang, Li

    2009-01-01

    Most nonlinear data embedding methods use bottom-up approaches for capturing the underlying structure of data distributed on a manifold in high dimensional space. These methods often share the first step, which defines the neighbor points of every data point by building a connected neighborhood graph so that all data points can be embedded in a single coordinate system. These methods are required to work incrementally for dimensionality reduction in many applications. Because the input data stream may be under-sampled or skewed from time to time, building a connected neighborhood graph is crucial to the success of incremental data embedding using these methods. This paper presents algorithms for updating k-edge-connected and k-connected neighborhood graphs after a new data point is added or an old data point is deleted. It further utilizes a simple algorithm for updating all-pair shortest distances on the neighborhood graph. Together with incremental classical multidimensional scaling using iterative subspace approximation, this paper devises an incremental version of Isomap with enhancements to deal with under-sampled or unevenly distributed data. Experiments on both synthetic and real-world data sets show that the algorithm is efficient and maintains low dimensional configurations of high dimensional data under various data distributions.

  4. A Hybrid Method for Accelerated Simulation of Coulomb Collisions in a Plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caflisch, R; Wang, C; Dimarco, G

    2007-10-09

    If the collisional time scale for Coulomb collisions is comparable to the characteristic time scales for a plasma, then simulation of Coulomb collisions may be important for computation of kinetic plasma dynamics. This can be a computational bottleneck because of the large number of simulated particles and collisions (or phase-space resolution requirements in continuum algorithms), as well as the wide range of collision rates over the velocity distribution function. This paper considers Monte Carlo simulation of Coulomb collisions using the binary collision models of Takizuka & Abe and Nanbu. It presents a hybrid method for accelerating the computation of Coulomb collisions. The hybrid method represents the velocity distribution function as a combination of a thermal component (a Maxwellian distribution) and a kinetic component (a set of discrete particles). Collisions between particles from the thermal component preserve the Maxwellian; collisions between particles from the kinetic component are performed using the method of Takizuka & Abe or Nanbu. Collisions between the kinetic and thermal components are performed by sampling a particle from the thermal component and selecting a particle from the kinetic component. Particles are also transferred between the two components according to thermalization and dethermalization probabilities, which are functions of phase space.

  5. Communication: Time- and space-sliced velocity map electron imaging

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Fan, Lin; Winney, Alexander H.; Li, Wen

    2014-12-01

    We develop a new method to achieve slice electron imaging using a conventional velocity map imaging apparatus with two additional components: a fast frame complementary metal-oxide semiconductor camera and a high-speed digitizer. The setup was previously shown to be capable of 3D detection and coincidence measurements of ions. Here, we show that when this method is applied to electron imaging, a time slice of 32 ps and a spatial slice of less than 1 mm thick can be achieved. Each slice directly extracts 3D velocity distributions of electrons and provides electron velocity distributions that are impossible or difficult to obtain with a standard 2D imaging electron detector.

  6. Measurement of snow depth distribution in the Kamikochi-Azusa river basin using an airborne laser scanning

    NASA Astrophysics Data System (ADS)

    Suzuki, K.; Sasaki, A.

    2013-12-01

    In the Japanese Alps region, large amounts of precipitation in the form of snow constitute a more important water resource than rain. During the winter, precipitation that is deposited as snowfall accumulates in the river basins, and it forms natural dams known as 'white dams.' A quantitative understanding of snow depth distribution in these mountainous areas is important not only for evaluating water resource volume, but also for understanding the effects of snow in terms of its impact on landforms and its effect on the distribution of vegetation. However, it is not easy to perform a quantitative evaluation of snow depth distribution in mountainous areas. Several methods have been proposed for clarifying snow depth distribution. The most widely used of these is a method of inserting a sounding rod into the snow to measure its depth at each geographic position. Another method is to dig a trench in the snow and then perform an observational measurement of the side of the trench. These methods enable accurate measurement of the snow depth; however, when the snow is several meters deep, the methods may be limited by the measuring capacity of the equipment, or by the time restrictions of the survey. For these reasons, wide area measurement of the spatial distribution of snow is very difficult, and it is not suitable for investigating snow depth distribution in river basins. There is a method of using ultrasonics or radar to measure the depth of snow and to make observations of snow depth at certain positions. This method offers high measurement precision and high time resolution at the observation points. However, for observations in areas of very deep snow, it becomes technically difficult to install the equipment, and it is difficult to make a large number of installations to cover a wide area. There are also methods of indirectly measuring snow depth. One of these is to use aerial photographs taken when there is no snow cover and when there is snow cover, draw contour lines, and then use the difference between them to clarify the snow depth. This method allows researchers to grasp the snow depth over a wide area, but it needs to be made more precise if it is to incorporate high-precision information on equivalent elevation points on the snow surface. In recent years, a measurement technology has been developed that uses laser scanners mounted on aircraft. This method enables researchers to obtain ground surface coordinate data with high precision over a wide area from the air. Using such a scanner to measure the ground surface during snow coverage and during no snow coverage, and then finding the differences between the surface elevations, has made it possible to ascertain snow depth with high precision. Airborne laser measurement enables high-precision measurements over a wide area and in a short amount of time, and measurements can be made regardless of geographical factors such as sloping ground. As such, it enables measurement of snow depth distribution over a wide area without having to worry about the undulations of the land. In this study, airborne laser scanning was carried out on the snow surface in the upstream region of the Kamikochi-Azusa River in Japan on March 29, 2012, in order to clarify the snow depth distribution.

  7. An equation-free probabilistic steady-state approximation: dynamic application to the stochastic simulation of biochemical reaction networks.

    PubMed

    Salis, Howard; Kaznessis, Yiannis N

    2005-12-01

    Stochastic chemical kinetics more accurately describes the dynamics of "small" chemical systems, such as biological cells. Many real systems contain dynamical stiffness, which causes the exact stochastic simulation algorithm or other kinetic Monte Carlo methods to spend the majority of their time executing frequently occurring reaction events. Previous methods have successfully applied a type of probabilistic steady-state approximation by deriving an evolution equation, such as the chemical master equation, for the relaxed fast dynamics and using the solution of that equation to determine the slow dynamics. However, because the solution of the chemical master equation is limited to small, carefully selected, or linear reaction networks, an alternate equation-free method would be highly useful. We present a probabilistic steady-state approximation that separates the time scales of an arbitrary reaction network, detects the convergence of a marginal distribution to a quasi-steady-state, directly samples the underlying distribution, and uses those samples to accurately predict the state of the system, including the effects of the slow dynamics, at future times. The numerical method produces an accurate solution of both the fast and slow reaction dynamics while, for stiff systems, reducing the computational time by orders of magnitude. The developed theory makes no approximations on the shape or form of the underlying steady-state distribution and only assumes that it is ergodic. We demonstrate the accuracy and efficiency of the method using multiple interesting examples, including a highly nonlinear protein-protein interaction network. The developed theory may be applied to any type of kinetic Monte Carlo simulation to more efficiently simulate dynamically stiff systems, including existing exact, approximate, or hybrid stochastic simulation techniques.
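
    For context, the exact method whose cost motivates the approximation is Gillespie's direct SSA, shown below in minimal form on a small stiff-ish network; the rate constants and network are illustrative only.

```python
import numpy as np

def gillespie_ssa(x0, stoich, propensities, t_end, seed=0):
    """Gillespie's direct stochastic simulation algorithm (SSA), the exact
    method whose cost on stiff networks motivates the approximation above.
    `stoich` has shape (n_reactions, n_species); `propensities(x)` returns
    the reaction rates at state x.
    """
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    while t < t_end:
        a = propensities(x)
        a0 = a.sum()
        if a0 <= 0.0:
            break                               # no reaction can fire
        t += rng.exponential(1.0 / a0)          # time to the next reaction
        j = rng.choice(len(a), p=a / a0)        # which reaction fires
        x += stoich[j]
    return x

# Example: fast reversible dimerization plus slow degradation (illustrative
# rate constants). Species order: [monomer, dimer].
stoich = np.array([[-2, 1], [2, -1], [0, -1]])
rates = lambda x: np.array([1.0 * x[0] * (x[0] - 1), 20.0 * x[1], 0.5 * x[1]])
print(gillespie_ssa([100, 0], stoich, rates, t_end=0.5))
```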

  8. Optimization and evaluation of asymmetric flow field-flow fractionation of silver nanoparticles.

    PubMed

    Loeschner, Katrin; Navratilova, Jana; Legros, Samuel; Wagner, Stephan; Grombe, Ringo; Snell, James; von der Kammer, Frank; Larsen, Erik H

    2013-01-11

    Asymmetric flow field-flow fractionation (AF(4)) in combination with on-line optical detection and mass spectrometry is one of the most promising methods for separation and quantification of nanoparticles (NPs) in complex matrices, including food. However, to obtain meaningful results, especially regarding the NP size distribution, a number of parameters influencing the separation need to be optimized. This paper describes the development of a separation method for polyvinylpyrrolidone-stabilized silver nanoparticles (AgNPs) in aqueous suspension. Carrier liquid composition, membrane material, cross flow rate and spacer height were shown to have a significant influence on the recoveries and retention times of the nanoparticles. Focus time and focus flow rate were optimized with regard to minimizing elution of AgNPs in the void volume. The developed method was successfully tested for injected masses of AgNPs from 0.2 to 5.0 μg. The on-line combination of AF(4) with detection methods including ICP-MS, light absorbance and light scattering was helpful because each detector provided a different type of information about the eluting NP fraction. Differences in the time-resolved appearance of the signals obtained by the three detection methods were explained based on the physical origin of each signal. Two different approaches for converting retention times of AgNPs to the corresponding sizes and size distributions were tested and compared, namely size calibration with polystyrene nanoparticles (PSNPs) and size calculations based on AF(4) theory. Fraction collection followed by transmission electron microscopy was performed to confirm the obtained size distributions and to obtain further information on AgNP shape. Characteristics of the absorbance spectra were used to confirm the presence of non-spherical AgNPs. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Abe, Sumiyoshi

    2014-11-01

    The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.

  10. An improved sampling method of complex network

    NASA Astrophysics Data System (ADS)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling subnets is an important topic in complex network research. Sampling methods influence the structure and characteristics of the resulting subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover local structure at the same time. The experiments indicate that this novel sampling method preserves the similarity between the sampled subnet and the original network in degree distribution, connectivity rate and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is insufficient.

  11. Temporal acceleration of spatially distributed kinetic Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Abhijit; Vlachos, Dionisios G.

    The computational intensity of kinetic Monte Carlo (KMC) simulation is a major impediment to simulating large length and time scales. In recent work, an approximate method for KMC simulation of spatially uniform systems, termed the binomial τ-leap method, was introduced [A. Chatterjee, D.G. Vlachos, M.A. Katsoulakis, Binomial distribution based τ-leap accelerated stochastic simulation, J. Chem. Phys. 122 (2005) 024112], where molecular bundles instead of individual processes are executed over coarse-grained time increments. This temporal coarse-graining can lead to significant computational savings, but its generalization to spatial lattice KMC simulation had not been realized. Here we extend the binomial τ-leap method to lattice KMC simulations by combining it with spatially adaptive coarse-graining. Absolute stability and computational speed-up analyses for spatial systems, along with simulations, provide insights into the conditions under which accuracy and substantial acceleration of the new spatio-temporal coarse-graining method are ensured. Model systems demonstrate that the r-time increment criterion of Chatterjee et al. obeys the absolute stability limit for values of r up to near 1.
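
    The essence of the binomial τ-leap is that each reaction channel fires a Binomial(K, p) number of times per leap, with mean a_j·τ and with K capped by the available reactant populations so that populations cannot go negative. The sketch below shows one such leap step; sequential channel updating and the cap rule here are simplifications of the published scheme.

```python
import numpy as np

def binomial_tau_leap_step(x, stoich, propensities, tau, rng):
    """One binomial tau-leap update: each reaction channel fires a
    Binomial(K, p) number of times with mean a_j * tau, where K caps the
    firings at what the current reactant populations allow, so populations
    cannot go negative. A simplified sketch of the published scheme.
    """
    x = np.array(x, dtype=float)
    for j in range(stoich.shape[0]):
        a = propensities(x)[j]
        consumed = -stoich[j].clip(max=0)       # molecules consumed per firing
        if consumed.any():
            K = int((x[consumed > 0] / consumed[consumed > 0]).min())
        else:
            K = max(int(np.ceil(a * tau * 10)), 1)   # loose cap, pure production
        if K <= 0 or a <= 0.0:
            continue
        p = min(a * tau / K, 1.0)
        x += rng.binomial(K, p) * stoich[j]
    return x

# Example: isomerization A <-> B with illustrative rates; populations should
# settle near the 1:2 ratio A:B implied by the rate constants.
rng = np.random.default_rng(7)
stoich = np.array([[-1, 1], [1, -1]])
rates = lambda x: np.array([1.0 * x[0], 0.5 * x[1]])
x = np.array([1000.0, 0.0])
for _ in range(200):
    x = binomial_tau_leap_step(x, stoich, rates, tau=0.01, rng=rng)
print(x)
```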

  12. A program for performing exact quantum dynamics calculations using cylindrical polar coordinates: A nanotube application

    NASA Astrophysics Data System (ADS)

    Skouteris, Dimitris; Gervasi, Osvaldo; Laganà, Antonio

    2009-03-01

    A program that uses the time-dependent wavepacket method to study the motion of structureless particles in a force field of quasi-cylindrical symmetry is presented here. The program utilises cylindrical polar coordinates to express the wavepacket, which is subsequently propagated using a Chebyshev expansion of the Schrödinger propagator. Time-dependent exit flux as well as energy-dependent S matrix elements can be obtained for all states of the particle (describing its angular momentum component along the nanotube axis and the excitation of the radial degree of freedom in the cylinder). The program has been used to study the motion of an H atom across a carbon nanotube.
    Program summary:
    Program title: CYLWAVE
    Catalogue identifier: AECL_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECL_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 3673
    No. of bytes in distributed program, including test data, etc.: 35 237
    Distribution format: tar.gz
    Programming language: Fortran 77
    Computer: RISC workstations
    Operating system: UNIX
    RAM: 120 MBytes
    Classification: 16.7, 16.10
    External routines: SUNSOFT performance library (not essential), TFFT2D.F (Temperton Fast Fourier Transform), BESSJ.F (from Numerical Recipes, for the calculation of Bessel functions) (included in the distribution file)
    Nature of problem: Time evolution of the state of a structureless particle in a quasi-cylindrical potential.
    Solution method: Time-dependent wavepacket propagation.
    Running time: 50000 secs. The test run supplied with the distribution takes about 10 minutes to complete.

  13. Quantification of the Barkhausen noise method for the evaluation of time-dependent degradation

    NASA Astrophysics Data System (ADS)

    Kim, Dong-Won; Kwon, Dongil

    2003-02-01

    The Barkhausen noise (BN) method has long been applied to measure the bulk magnetic properties of magnetic materials. Recently, this important nondestructive testing (NDT) method has been applied to evaluate microstructure, stress distribution, fatigue, creep and fracture characteristics. Until now, however, the BN method has been used only qualitatively, evaluating the variation of BN with variations in material properties; for this reason, few such NDT methods have been applied in industrial plants and laboratories. The present investigation studied the coercive force and BN while varying the microstructure of ultrafine-grained steels and SA508 Cl.3 steels, according to the second heat-treatment condition with rolling for the ultrafine-grained steels and simulated time-dependent degradation for the SA508 Cl.3 steels. An attempt was also made to quantify BN from the relationship between the velocity of magnetic domain walls and the retarding force, using the coercive force of the domain wall movement. The microstructure variation was analyzed according to time-dependent degradation. Fracture toughness was evaluated quantitatively by measuring the BN through two intermediary parameters: grain size and the distribution of nonmagnetic particles. From these measurements, the variation of microstructure and fracture toughness can be evaluated directly by the BN method as an accurate in situ NDT method.

  14. Lamb Waves Decomposition and Mode Identification Using Matching Pursuit Method

    DTIC Science & Technology

    2009-01-01

    Lamb wave signals are commonly analyzed with time-frequency representations such as the short-time Fourier transform (STFT), the wavelet transform, the Wigner-Ville distribution (WVD), and matching pursuit (MP) decomposition. The WVD, however, suffers from severe interferences, called cross-terms, which appear in regions of the time-frequency plane between the true signal components. In this work, MP decomposition using a chirplet dictionary was applied to a simulated S0-mode Lamb wave.

  15. Statistical analysis of hydrodynamic cavitation events

    NASA Astrophysics Data System (ADS)

    Gimenez, G.; Sommer, R.

    1980-10-01

    The frequency (number of events per unit time) of pressure pulses produced by hydrodynamic cavitation bubble collapses is investigated using statistical methods. The results indicate that this frequency is distributed according to a normal law whose parameters do not evolve in time.

  16. Toward a quantitative account of pitch distribution in spontaneous narrative: Method and validation

    PubMed Central

    Matteson, Samuel E.; Streit Olness, Gloria; Caplow, Nancy J.

    2013-01-01

    Pitch is well-known both to animate human discourse and to convey meaning in communication. The study of the statistical population distributions of pitch in discourse will undoubtedly benefit from methodological improvements. The current investigation examines a method that parameterizes pitch in discourse as the musical pitch interval H measured in units of cents and that disaggregates the sequence of peak word-pitches using tools employed in time-series analysis and digital signal processing. The investigators test the proposed methodology by applying it to distributions in pitch interval of the peak word-pitch (collectively called the discourse gamut) that occur in simulated and actual spontaneous emotive narratives obtained from 17 middle-aged African-American adults. The analysis, in rigorous tests, not only faithfully reproduced simulated distributions embedded in realistic time series that drift and include pitch breaks; it also revealed that the empirical distributions exhibit a common hidden structure when normalized to a slowly varying mode (called the gamut root) of their respective probability density functions. Quantitative differences between narratives reveal the speakers' relative propensity for the use of pitch levels corresponding to elevated degrees of a discourse gamut (the "e-la") superimposed upon a continuum that conforms systematically to an asymmetric Laplace distribution. PMID:23654400
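
    The pitch-interval parameterization used here is the standard cents formula, H = 1200 log2(f/f_ref). A two-line sketch, with a hypothetical 110 Hz gamut root:

```python
import numpy as np

def cents(f_hz, f_ref_hz):
    """Musical pitch interval in cents between a frequency and a reference:
    1200 * log2(f / f_ref), the parameterization of H used above.
    """
    return 1200.0 * np.log2(f_hz / f_ref_hz)

# Example: peak word-pitches expressed relative to a hypothetical 110 Hz
# gamut root; 1200 cents is one octave.
peaks = np.array([112.0, 131.0, 165.0, 124.0])
print(np.round(cents(peaks, 110.0), 1))
```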

  17. The orbital PDF: general inference of the gravitational potential from steady-state tracers

    NASA Astrophysics Data System (ADS)

    Han, Jiaxin; Wang, Wenting; Cole, Shaun; Frenk, Carlos S.

    2016-02-01

    We develop two general methods to infer the gravitational potential of a system using steady-state tracers, i.e. tracers with a time-independent phase-space distribution. Combined with the phase-space continuity equation, the time independence implies a universal orbital probability density function (oPDF) dP(λ|orbit) ∝ dt, where λ is the coordinate of the particle along the orbit. The oPDF is equivalent to Jeans theorem, and is the key physical ingredient behind most dynamical modelling of steady-state tracers. In the case of a spherical potential, we develop a likelihood estimator that fits analytical potentials to the system and a non-parametric method (`phase-mark') that reconstructs the potential profile, both assuming only the oPDF. The methods involve no extra assumptions about the tracer distribution function and can be applied to tracers with any arbitrary distribution of orbits, with possible extension to non-spherical potentials. The methods are tested on Monte Carlo samples of steady-state tracers in dark matter haloes to show that they are unbiased as well as efficient. A fully documented C/PYTHON code implementing our method is freely available at a GitHub repository linked from http://icc.dur.ac.uk/data/#oPDF.

  18. Frequency-domain multiscale quantum mechanics/electromagnetics simulation method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Lingyi; Yin, Zhenyu; Yam, ChiYung, E-mail: yamcy@yangtze.hku.hk, E-mail: ghc@everest.hku.hk

    A frequency-domain quantum mechanics and electromagnetics (QM/EM) method is developed. Compared with the time-domain QM/EM method [Meng et al., J. Chem. Theory Comput. 8, 1190–1199 (2012)], the newly developed frequency-domain QM/EM method can effectively capture the dynamic properties of electronic devices over a broader range of operating frequencies. The system is divided into QM and EM regions and solved in a self-consistent manner via updating the boundary conditions at the QM and EM interface. The calculated potential distributions and current densities at the interface are taken as the boundary conditions for the QM and EM calculations, respectively, which facilitates the information exchange between the QM and EM calculations and ensures that the potential, charge, and current distributions are continuous across the QM/EM interface. Via Fourier transformation, the dynamic admittance calculated from the time-domain and frequency-domain QM/EM methods is compared for a carbon nanotube based molecular device.

  19. The use of spectral methods in bidomain studies.

    PubMed

    Trayanova, N; Pilkington, T

    1992-01-01

    A Fourier transform method is developed for solving the bidomain coupled differential equations governing the intracellular and extracellular potentials on a finite sheet of cardiac cells undergoing stimulation. The spectral formulation converts the system of differential equations into a "diagonal" system of algebraic equations. Solving the algebraic equations directly and taking the inverse transform of the potentials proved numerically less expensive than solving the coupled differential equations by means of traditional numerical techniques, such as finite differences; the comparison between the computer execution times showed that the Fourier transform method was about 40 times faster than the finite difference method. By application of the Fourier transform method, transmembrane potential distributions in the two-dimensional myocardial slice were calculated. For a tissue characterized by a ratio of the intra- to extracellular conductivities that is different in all principal directions, the transmembrane potential distribution exhibits a rather complicated geometrical pattern. The influence of the different anisotropy ratios, the finite tissue size, and the stimuli configuration on the pattern of membrane polarization is investigated.

  20. Degradation data analysis based on a generalized Wiener process subject to measurement error

    NASA Astrophysics Data System (ADS)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Model parameters can then be estimated by maximum likelihood estimation (MLE). The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study demonstrates the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach yields reasonable results with enhanced inference precision.

  1. Computational system identification of continuous-time nonlinear systems using approximate Bayesian computation

    NASA Astrophysics Data System (ADS)

    Krishnanathan, Kirubhakaran; Anderson, Sean R.; Billings, Stephen A.; Kadirkamanathan, Visakan

    2016-11-01

    In this paper, we derive a system identification framework for continuous-time nonlinear systems, for the first time using a simulation-focused computational Bayesian approach. Simulation approaches to nonlinear system identification have been shown to outperform regression methods under certain conditions, such as non-persistently exciting inputs and fast-sampling. We use the approximate Bayesian computation (ABC) algorithm to perform simulation-based inference of model parameters. The framework has the following main advantages: (1) parameter distributions are intrinsically generated, giving the user a clear description of uncertainty, (2) the simulation approach avoids the difficult problem of estimating signal derivatives as is common with other continuous-time methods, and (3) as noted above, the simulation approach improves identification under conditions of non-persistently exciting inputs and fast-sampling. Term selection is performed by judging parameter significance using parameter distributions that are intrinsically generated as part of the ABC procedure. The results from a numerical example demonstrate that the method performs well in noisy scenarios, especially in comparison to competing techniques that rely on signal derivative estimation.
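
    To make the inference loop concrete, the sketch below applies plain ABC rejection to a toy continuous-time system simulated by Euler integration. The model, prior ranges, and tolerance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, u, dt=0.01):
    # Euler simulation of a toy system dx/dt = -a*x + b*x**3 + u(t).
    a, b = theta
    x = np.zeros(len(u))
    for k in range(len(u) - 1):
        x[k + 1] = x[k] + dt * (-a * x[k] + b * x[k] ** 3 + u[k])
    return x

t = np.arange(0.0, 5.0, 0.01)
u = np.sin(2.0 * t)
y_obs = simulate((1.5, -0.1), u) + 0.01 * rng.standard_normal(len(t))

# ABC rejection: keep prior draws whose simulated output lies close to the
# data; the accepted draws approximate the posterior without any likelihood.
accepted = []
for _ in range(5000):
    theta = rng.uniform([0.5, -1.0], [3.0, 0.0])   # uniform prior (assumed)
    rmse = np.sqrt(np.mean((simulate(theta, u) - y_obs) ** 2))
    if rmse < 0.02:                                # tolerance epsilon
        accepted.append(theta)

post = np.array(accepted)
if len(post):
    print(len(post), "accepted; posterior mean:", post.mean(axis=0))
```

    Sequential variants (ABC-SMC) shrink the tolerance gradually and are far more sample-efficient; the marginal spread of the accepted draws is what supports term selection by parameter significance.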

  2. An online detection system for aggregate sizes and shapes based on digital image processing

    NASA Astrophysics Data System (ADS)

    Yang, Jianhong; Chen, Sijia

    2017-02-01

    Traditional aggregate size measuring methods are time-consuming, labor-intensive, and do not deliver online measurements. A new online detection system for determining aggregate size and shape, based on a digital camera with a charge-coupled device and subsequent digital image processing, has been developed to overcome these problems. The system captures images of aggregates while falling and flat-lying; from these data, the particle size and shape distribution can be obtained in real time. We calibrate the method using standard globules. Our experiments show that the maximum particle size distribution error was only 3 wt%, while the maximum particle shape distribution error was only 2 wt% for data derived from falling aggregates, which exhibited good dispersion. In contrast, the data for flat-lying aggregates had a maximum particle size distribution error of 12 wt% and a maximum particle shape distribution error of 10 wt%; their accuracy was clearly lower than for falling aggregates. However, flat-lying measurements performed well for single-graded aggregates and did not require a dispersion device. Our system is low-cost and easy to install. It achieves online detection of aggregate size and shape with good reliability, and it has great potential for aggregate quality assurance.

  3. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy.

    PubMed

    Shizgal, Bernie D

    2018-05-01

    This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as an empirical fitting function that describes well the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].
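
    For reference, one standard parameterization of the isotropic Kappa velocity distribution (the normalization below is the commonly quoted one and should be checked against the paper's conventions) reduces to a Maxwellian as κ → ∞, which is easy to verify numerically:

```python
import numpy as np
from scipy.special import gamma

def kappa_dist(v, kappa, theta, n=1.0):
    """Isotropic Kappa velocity distribution (one standard parameterization):
    f(v) = n (pi kappa theta^2)^(-3/2) Gamma(kappa+1)/Gamma(kappa-1/2)
           * (1 + v^2/(kappa theta^2))^(-(kappa+1))."""
    norm = n / (np.pi * kappa * theta**2) ** 1.5
    norm *= gamma(kappa + 1.0) / gamma(kappa - 0.5)
    return norm * (1.0 + v**2 / (kappa * theta**2)) ** (-(kappa + 1.0))

def maxwellian(v, theta, n=1.0):
    return n / (np.pi * theta**2) ** 1.5 * np.exp(-(v / theta) ** 2)

v = np.linspace(0.0, 5.0, 6)
print(kappa_dist(v, kappa=3.0, theta=1.0))    # enhanced high-speed tail
print(kappa_dist(v, kappa=100.0, theta=1.0))  # approaches the Maxwellian
print(maxwellian(v, theta=1.0))
```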

  4. Kappa and other nonequilibrium distributions from the Fokker-Planck equation and the relationship to Tsallis entropy

    NASA Astrophysics Data System (ADS)

    Shizgal, Bernie D.

    2018-05-01

    This paper considers two nonequilibrium model systems described by linear Fokker-Planck equations for the time-dependent velocity distribution functions that yield steady state Kappa distributions for specific system parameters. The first system describes the time evolution of a charged test particle in a constant temperature heat bath of a second charged particle. The time dependence of the distribution function of the test particle is given by a Fokker-Planck equation with drift and diffusion coefficients for Coulomb collisions as well as a diffusion coefficient for wave-particle interactions. A second system involves the Fokker-Planck equation for electrons dilutely dispersed in a constant temperature heat bath of atoms or ions and subject to an external time-independent uniform electric field. The momentum transfer cross section for collisions between the two components is assumed to be a power law in reduced speed. The time-dependent Fokker-Planck equations for both model systems are solved with a numerical finite difference method and the approach to equilibrium is rationalized with the Kullback-Leibler relative entropy. For particular choices of the system parameters for both models, the steady distribution is found to be a Kappa distribution. Kappa distributions were introduced as an empirical fitting function that describes well the nonequilibrium features of the distribution functions of electrons and ions in space science as measured by satellite instruments. The calculation of the Kappa distribution from the Fokker-Planck equations provides a direct physically based dynamical approach in contrast to the nonextensive entropy formalism by Tsallis [J. Stat. Phys. 53, 479 (1988), 10.1007/BF01016429].

  5. Benefits of polidocanol endovenous microfoam (Varithena®) compared with physician-compounded foams.

    PubMed

    Carugo, Dario; Ankrett, Dyan N; Zhao, Xuefeng; Zhang, Xunli; Hill, Martyn; O'Byrne, Vincent; Hoad, James; Arif, Mehreen; Wright, David D I; Lewis, Andrew L

    2016-05-01

    To compare foam bubble size and bubble size distribution, stability, and degradation rate of commercially available polidocanol endovenous microfoam (Varithena®) and physician-compounded foams using a number of laboratory tests. Foam properties of polidocanol endovenous microfoam and physician-compounded foams were measured and compared using a glass-plate method and a Sympatec QICPIC image analysis method to measure bubble size and bubble size distribution; Turbiscan™ LAB for foam half-time and drainage; and a novel biomimetic vein model to measure foam stability. Physician-compounded foams composed of polidocanol and room air, CO2, or mixtures of oxygen and carbon dioxide (O2:CO2) were generated by different methods. Polidocanol endovenous microfoam was found to have a narrow bubble size distribution with no large (>500 µm) bubbles. Physician-compounded foams made with the Tessari method had broader bubble size distributions and large bubbles, which have an impact on foam stability. Polidocanol endovenous microfoam had a lower degradation rate than any physician-compounded foam, including foams made using room air (p < 0.035). The same result was obtained at different liquid to gas ratios (1:4 and 1:7) for physician-compounded foams. In all tests performed, CO2 foams were the least stable and different O2:CO2 mixtures had intermediate performance. In the biomimetic vein model, polidocanol endovenous microfoam had the slowest degradation rate and the longest calculated dwell time, which represents the length of time the foam is in contact with the vein: almost twice that of physician-compounded foams using room air and eight times that of physician-compounded foams prepared using equivalent gas mixes. Bubble size, bubble size distribution and stability of the various sclerosing foam formulations show that polidocanol endovenous microfoam gives better overall performance than physician-compounded foams. Polidocanol endovenous microfoam offers better stability and cohesive properties in a biomimetic vein model compared to physician-compounded foams. Polidocanol endovenous microfoam, which is indicated in the United States for treatment of great saphenous vein system incompetence, provides clinicians with a consistent product with enhanced handling properties. © The Author(s) 2015.

  6. Land processes lead to surprising patterns in atmospheric residence time

    NASA Astrophysics Data System (ADS)

    van der Ent, R.; Tuinenburg, O.

    2017-12-01

    Our research using atmospheric moisture tracking methods shows that the global average atmospheric residence time of evaporation is 8-10 days. This residence time appears to be Gamma distributed, with a higher probability of shorter-than-average residence times and a long tail; as a consequence, the median residence time is around 5 days. In some places in the world there seems to be little chance for a moisture particle to precipitate again in the first few hours or days after evaporation, which is reflected by a Gamma distribution having a shape parameter below 1. In this study we present global maps of this parameter using different datasets (GLDAS and ERA-Interim). The shape parameter is as such also a measure of the land-atmosphere coupling strength along the path of the atmospheric water particle. We also find that the different evaporation components (canopy interception, soil evaporation and transpiration) appear to have different residence time distributions, and that there is a daily cycle in the residence time distribution over land which is not present over the oceans. We will show which of the evaporation components is mainly responsible for this daily pattern and thus exhibits the largest daily cycle of land-atmosphere coupling strength.
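
    The quoted mean/median gap follows directly from the Gamma assumption. A quick check (with an assumed mean of 9 days; the shape values are illustrative) shows how a shape parameter below 1 pushes the median well under the mean and concentrates probability at short residence times:

```python
from scipy.stats import gamma

mean_days = 9.0
for k in (0.6, 0.8, 1.0, 1.5):           # shape parameter of the Gamma
    scale = mean_days / k                # gamma mean = shape * scale
    med = gamma.ppf(0.5, k, scale=scale)
    p_2d = gamma.cdf(2.0, k, scale=scale)   # P(residence < 2 days)
    print(f"shape={k}: median={med:.1f} d, P(<2 d)={p_2d:.2f}")
```

    With shape 0.6-0.8 the median falls near 5 days for a 9-day mean, matching the abstract's figures.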

  7. Blind source separation based on time-frequency morphological characteristics for rigid acoustic scattering by underwater objects

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Li, Xiukun

    2016-06-01

    Separation of the components of rigid acoustic scattering by underwater objects is essential in obtaining the structural characteristics of such objects. To overcome the problem of rigid structures appearing to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with time-frequency distribution is deduced. Using a morphological filter, the different characteristics observed in a Wigner-Ville Distribution (WVD) for single auto-terms and cross-terms allow the distribution to be simplified to remove cross-term interference. By selecting the time and frequency points of the auto-term signal, the accuracy of BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude and the time delay parameter, in order to analyze the feasibility of this new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used to separate the rigid scattering structure of underwater objects.

  8. Tracking Subpixel Targets with Critically Sampled Optical Sensors

    DTIC Science & Technology

    2012-09-01

    [32]. The Viterbi algorithm is a dynamic programming method for calculating the MAP in O(tn²) time. The most common use of this algorithm is in the...method to detect subpixel point targets using the sensor's PSF as an identifying characteristic. Using matched filtering theory, a measure is defined to...ocean surface beneath the cloud will have a different distribution. While the basic methods will adapt to changes in cloud cover over time, it is also

  9. A parallel algorithm for the two-dimensional time fractional diffusion equation with implicit difference method.

    PubMed

    Gong, Chunye; Bao, Weimin; Tang, Guojian; Jiang, Yuewen; Liu, Jie

    2014-01-01

    It is very time-consuming to solve fractional differential equations. The computational complexity of solving the two-dimensional time-fractional diffusion equation (2D-TFDE) with an iterative implicit finite difference method is O(M_x M_y N^2). In this paper, we present a parallel algorithm for the 2D-TFDE and give an in-depth discussion of this algorithm. A task distribution model and a data layout with virtual boundaries are designed for this parallel algorithm. The experimental results show that the parallel algorithm agrees well with the exact solution. The parallel algorithm on a single Intel Xeon X5540 CPU runs 3.16-4.17 times faster than the serial algorithm on a single CPU core. The parallel efficiency of 81 processes is up to 88.24% compared with 9 processes on a distributed-memory cluster system. We believe that parallel computing will become a basic tool for computationally intensive fractional applications in the near future.
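
    The N^2 factor in the complexity comes from the memory term of the fractional derivative: each new time level convolves the full solution history. A minimal sketch of the Grünwald-Letnikov weights and history sum (a standard building block of such schemes, not the authors' algorithm) makes this explicit:

```python
import numpy as np
from scipy.special import gamma as G

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k * binom(alpha, k),
    computed by the stable recurrence w_k = w_{k-1} * (1 - (alpha+1)/k)."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return w

def gl_derivative(y, alpha, h):
    """Approximate D^alpha y at every grid point; the sum over the full
    history is what makes time-fractional schemes O(N^2) in time."""
    w = gl_weights(alpha, len(y) - 1)
    return np.array([np.dot(w[:n + 1], y[n::-1])
                     for n in range(len(y))]) / h**alpha

# Check against a known result: D^alpha t = t^(1-alpha) / Gamma(2-alpha).
h, alpha = 0.001, 0.5
t = np.arange(0.0, 1.0 + h, h)
num = gl_derivative(t, alpha, h)
print(num[-1], t[-1] ** (1 - alpha) / G(2 - alpha))  # both ~1.128
```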

  10. Structural health monitoring of cylindrical bodies under impulsive hydrodynamic loading by distributed FBG strain measurements

    NASA Astrophysics Data System (ADS)

    Fanelli, Pierluigi; Biscarini, Chiara; Jannelli, Elio; Ubertini, Filippo; Ubertini, Stefano

    2017-02-01

    Various mechanical, ocean, aerospace and civil engineering problems involve solid bodies impacting the water surface, often resulting in complex coupled dynamics characterized by impulsive loading conditions, high-amplitude vibrations and large local deformations. Monitoring such problems, for purposes such as remaining-fatigue-life estimation and real-time damage detection, is a technical and scientific challenge of primary concern in this context. Open issues include the need to develop distributed sensing systems able to operate at very high acquisition frequencies, with high resolution and very low noise, for studying rapidly varying strain fields; the scientific challenges mostly relate to defining appropriate signal processing and modeling tools that enable the extraction of useful information from distributed sensing signals. Building on previous work by some of the authors, we propose an enhanced method for real-time deformed-shape reconstruction using distributed FBG strain measurements in curved bodies subjected to impulsive loading, and we establish a new framework for applying this method for structural health monitoring purposes, as the main focus of the work. Experiments carried out on a cylinder impacting the water at various speeds show improved displacement-reconstruction performance of the enhanced method compared to its previous version. A numerical study then considers the same physical problem with different delamination damages affecting the body, and the potential for detecting, localizing and quantifying this damage using the reconstruction algorithm is thoroughly investigated. Overall, the results presented in the paper show the potential of distributed FBG strain measurements for real-time structural health monitoring of curved bodies under impulsive hydrodynamic loading, defining damage-sensitive features in terms of strain or displacement reconstruction errors at selected locations along the structure.

  11. Sequential data assimilation for a distributed hydrologic model considering different time scale of internal processes

    NASA Astrophysics Data System (ADS)

    Noh, S.; Tachikawa, Y.; Shiiba, M.; Kim, S.

    2011-12-01

    Applications of sequential data assimilation methods have been increasing in hydrology as a way to reduce uncertainty in model predictions. In a distributed hydrologic model there are many types of state variables, and the variables interact with each other on different time scales. However, frameworks to deal with the delayed response originating from the different time scales of hydrologic processes have not been thoroughly addressed in hydrologic data assimilation. In this study, we propose a lagged filtering scheme to account for the lagged response of internal states in a distributed hydrologic model, using two filtering schemes: particle filtering (PF) and ensemble Kalman filtering (EnKF). The EnKF is a widely used sub-optimal filter that allows efficient computation with a limited number of ensemble members, but it is based on a Gaussian approximation. PF can be an alternative, in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles without any assumptions about the nature of the distributions involved. For PF, an advanced particle regularization scheme is implemented to preserve the diversity of the particle system; for EnKF, the ensemble square root filter (EnSRF) is implemented. Each filtering method is parallelized and implemented on a high-performance computing system. A distributed hydrologic model, the water and energy transfer processes (WEP) model, is applied to the Katsura River catchment, Japan, to demonstrate the applicability of the proposed approaches. Forecast results from PF and EnKF are compared and analyzed in terms of prediction accuracy and probabilistic adequacy, and discussion focuses on the prospects and limitations of each data assimilation method.
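
    A bare-bones stochastic EnKF analysis step (perturbed observations, sample covariance from the ensemble) illustrates the update both filters revolve around; the state dimensions and observation operator below are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step with perturbed observations.
    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                      # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=n_ens).T         # perturbed observations
    return X + K @ (Y - H @ X)

# Toy example: 3 states, observe the first one, 50 members.
X = rng.normal(0.0, 1.0, size=(3, 50)) + np.array([[5.0], [0.0], [-2.0]])
H = np.array([[1.0, 0.0, 0.0]])
R = np.array([[0.1]])
Xa = enkf_update(X, np.array([4.2]), H, R)
print(X.mean(axis=1), "->", Xa.mean(axis=1))
```

    A particle filter replaces this linear-Gaussian update with importance weighting and resampling, avoiding the Gaussian assumption at the cost of needing more members.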

  12. Nondestructive Integrity Evaluation of PC Pile Using Wigner-Ville Distribution Method

    NASA Astrophysics Data System (ADS)

    Ni, Sheng-Huoo; Lo, Kuo-Feng; Huang, Yan-Hong

    Nondestructive evaluation (NDE) techniques have been used for years to provide quality control of the construction of both drilled shafts and driven concrete piles. The test trace is typically made up of transient pulses reflected from structural features of the pile or changes in its surrounding environment, and is often analyzed in conjunction with the spectral response, mobility curve, arrival time, etc. The Wigner-Ville Distribution is a numerical signal-processing tool in the time-frequency domain that can enhance signal characteristics for better resolution both easily and quickly. In this study, five single pre-cast concrete piles were tested and evaluated by both the sonic echo method and the Wigner-Ville distribution (WVD). Furthermore, two difficult nondestructive evaluation problems are discussed and solved: the first is a pile with a slight defect, whose necking area percentage is less than 10%, and the other is a pile with multiple defects. The results show that the WVD not only recognizes the defect characteristics easily, but also locates the defects more clearly than the traditional pile integrity testing method.

  13. Method for oil pipeline leak detection based on distributed fiber optic technology

    NASA Astrophysics Data System (ADS)

    Chen, Huabo; Tu, Yaqing; Luo, Ting

    1998-08-01

    Pipeline leak detection has remained a difficult problem. Traditional leak detection methods suffer from high rates of false alarms or missed detections and poor location-estimation capability. To address these problems, a method for oil pipeline leak detection based on a distributed optical fiber sensor with a special coating is presented. The fiber's coating interacts with hydrocarbon molecules in oil, which alters the refractive index of the coating and thereby modifies the light-guiding properties of the fiber, so the pipeline leak location can be determined by optical time-domain reflectometry (OTDR). An oil pipeline leak detection system is designed based on this principle. The system offers real-time operation, simultaneous multi-point detection and high location accuracy. Finally, some factors that may influence detection are analyzed and preliminary improvements are given.

  14. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661

  15. Ocean Surface Topography Data Products and Tools

    NASA Technical Reports Server (NTRS)

    Case, Kelley E.; Bingham, Andrew W.; Berwin, Robert W.; Rigor, Eric M.; Raskin, Robert G.

    2004-01-01

    The Physical Oceanography Distributed Active Archive Center (PO.DAAC), NASA's primary data center for archiving and distributing oceanographic data, is supporting the Jason and TOPEX/Poseidon satellite tandem missions by providing a variety of data products, tools, and distribution methods to the wider scientific and general community. PO.DAAC has developed several new data products for sea level residual measurements, providing a long-term climate data record from 1992 to the present. These products provide compatible measurements of sea level residuals for the entire time series, including the tandem TOPEX/Poseidon and Jason missions. Several data distribution tools are available from NASA PO.DAAC. The Near-Real-Time Image Distribution Server (NEREIDS) provides quick-look browse images and binary data files, and the PO.DAAC Ocean ESIP Tool (POET) provides interactive, on-line data subsetting and visualization for several altimetry data products.

  16. An asymptotic theory for cross-correlation between auto-correlated sequences and its application on neuroimaging data.

    PubMed

    Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng

    2018-04-20

    Functional connectivity is among the most important tools to study the brain. The correlation coefficient between time series of different brain areas is the most popular measure of functional connectivity. In practical use, the correlation coefficient assumes the data to be temporally independent; however, brain time series can manifest significant temporal auto-correlation. A widely applicable method is proposed for correcting for temporal auto-correlation. We considered two types of time series models: (1) the auto-regressive-moving-average model, and (2) the nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient. These two types of models are among the most commonly used in neuroscience studies, and we show that their respective asymptotic distributions share a unified expression. We have verified the validity of our method and shown that it exhibits sufficient statistical power for detecting true correlations in numerical experiments. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. Our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlations in numerical experiments where existing methods measuring association (linear and nonlinear) fail. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity. Empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
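
    To make the issue concrete, a common first-order remedy scales the sample size by the lag-1 autocorrelations of the two series (a Bartlett-type correction). This is a simpler device than the asymptotic theory derived in the paper, but it shows why naive Pearson tests over-reject on auto-correlated data; the names and AR(1) example are illustrative.

```python
import numpy as np
from scipy import stats

def lag1_autocorr(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def corrected_corr_test(x, y):
    """Pearson correlation tested with an effective sample size corrected
    for lag-1 autocorrelation (a common first-order fix, not the paper's
    asymptotic theory)."""
    n = len(x)
    r = stats.pearsonr(x, y)[0]
    r1, r2 = lag1_autocorr(x), lag1_autocorr(y)
    n_eff = max(3.0, n * (1 - r1 * r2) / (1 + r1 * r2))
    t = r * np.sqrt((n_eff - 2) / (1 - r ** 2))
    p = 2 * stats.t.sf(abs(t), df=n_eff - 2)
    return r, n_eff, p

# Two independent AR(1) series: the naive test over-rejects, the corrected
# test does not.
rng = np.random.default_rng(2)
def ar1(phi, n):
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + rng.standard_normal()
    return x

x, y = ar1(0.9, 500), ar1(0.9, 500)
print(stats.pearsonr(x, y))       # naive p-value
print(corrected_corr_test(x, y))  # autocorrelation-corrected p-value
```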

  17. Arrival time distributions of electrons in air showers with primary energies above 10^18 eV observed at 900 m above sea level

    NASA Technical Reports Server (NTRS)

    Kakimoto, F.; Tsuchimoto, I.; Enoki, T.; Suga, K.; Nishi, K.

    1985-01-01

    Detection of air showers with primary energies above 10^19 eV with sufficient statistics is extremely important for astrophysical questions related to the Greisen cut-off and the origin of such high-energy cosmic rays. Recently, a method was proposed to observe such giant air showers by measuring the arrival time distributions of air-shower particles at large core distances with a mini array. Experiments to measure the arrival time distributions of muons were started in 1981, and those of electrons in early 1983, in the Akeno air-shower array (930 g/cm² atmospheric depth, 900 m above sea level). During the time of observation, the detection area of the Akeno array was expanded from 1 sq km to sq km in 1982 and to 20 sq km in 1984. Now the arrival time distributions of electrons and muons can be measured for showers with primary energies above 10^19 eV at large core distances.

  18. Size distributions of micro-bubbles generated by a pressurized dissolution method

    NASA Astrophysics Data System (ADS)

    Taya, C.; Maeda, Y.; Hosokawa, S.; Tomiyama, A.; Ito, Y.

    2012-03-01

    The size of micro-bubbles is widely distributed in the range of one to several hundred micrometers and depends on the generation method, flow conditions and elapsed time after bubble generation. Although the size distribution of micro-bubbles should be taken into account to improve accuracy in numerical simulations of flows with micro-bubbles, the variety of size distributions makes it difficult to introduce them into simulations. On the other hand, several models such as the Rosin-Rammler equation and the Nukiyama-Tanasawa equation have been proposed to represent the size distribution of particles or droplets, but their applicability to the size distribution of micro-bubbles has not been examined yet. In this study, we therefore measure the size distribution of micro-bubbles generated by a pressurized dissolution method by using phase Doppler anemometry (PDA), and investigate the applicability of the available models to the measured size distributions. The experimental apparatus consists of a pressurized tank in which air is dissolved in liquid under high pressure, a decompression nozzle in which micro-bubbles are generated due to the pressure reduction, a rectangular duct and an upper tank. Experiments are conducted for several liquid volumetric fluxes in the decompression nozzle, with measurements carried out in the downstream region of the decompression nozzle and in the upper tank. The experimental results indicate that (1) the Nukiyama-Tanasawa equation represents the size distribution of micro-bubbles generated by the pressurized dissolution method well, whereas the Rosin-Rammler equation fails in the representation, (2) the size distribution of micro-bubbles can be evaluated by using the Nukiyama-Tanasawa equation without individual bubble diameters, when the mean bubble diameter and the skewness of the bubble distribution are given, and (3) an evaluation method of visibility based on the bubble size distribution and bubble number density is proposed, and the evaluated visibility agrees well with the visibility measured in the upper tank.

  19. A study of finite mixture model: Bayesian approach on financial time series data

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-07-01

    Recently, statisticians have emphasized fitting finite mixture models using Bayesian methods. A finite mixture model is a mixture of distributions used to model a statistical distribution, and the Bayesian method is a statistical method used to fit such mixture models. The Bayesian method is widely used because its asymptotic properties provide remarkable results; it also shows a consistency characteristic, meaning that the parameter estimates are close to the predictive distributions. In the present paper, the number of components for the mixture model is chosen using the Bayesian Information Criterion; identifying the number of components is important because a wrong choice may lead to invalid results. The Bayesian method is then utilized to fit the k-component mixture model in order to explore the relationship between rubber prices and stock market prices for Malaysia, Thailand, the Philippines and Indonesia. The results show a negative effect between rubber prices and stock market prices for all selected countries.
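
    A minimal sketch of BIC-based selection of the number of mixture components, using maximum-likelihood Gaussian mixtures from scikit-learn as a stand-in (the paper fits the mixtures in a Bayesian way, but the selection logic is the same); the two-component data are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Synthetic two-component data standing in for the price series in the paper.
data = np.concatenate([rng.normal(0.0, 1.0, 300),
                       rng.normal(4.0, 0.5, 200)]).reshape(-1, 1)

# Select the number of components with BIC, then report the chosen model.
bics = []
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, random_state=0).fit(data)
    bics.append(gm.bic(data))
best_k = int(np.argmin(bics)) + 1
print("BIC per k:", np.round(bics, 1), "-> chosen k =", best_k)
```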

  20. Application of change-point problem to the detection of plant patches.

    PubMed

    López, I; Gámez, M; Garay, J; Standovár, T; Varga, Z

    2010-03-01

    In ecology, if the considered area or space is large, the spatial distribution of individuals of a given plant species is never homogeneous; plants form distinct patches. Homogeneity change in space or in time (in particular, the related change-point problem) is an important research subject in mathematical statistics. For a given data system along a straight line, two areas are considered, where the data of each area come from different discrete distributions with unknown parameters. A method is presented for estimating the distribution change-point between the two areas, based on the maximum likelihood method, and an estimate is given for the distributions separated by the obtained change-point. Furthermore, based on an adaptation of the well-known bootstrap resampling, a method for estimating the so-called change-interval is also given. The latter approach is very general, since it applies not only to the maximum-likelihood estimation of the change-point but can also be used starting from any other change-point estimator known in the ecological literature. The proposed model is validated against typical ecological situations, providing at the same time a verification of the applied algorithms.
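
    As an illustration of the maximum-likelihood idea on line-transect counts, the sketch below scans all candidate boundaries between two Poisson segments and keeps the one maximizing the profile log-likelihood; the Poisson choice and the data are assumptions for the example, not the authors' exact estimator.

```python
import numpy as np

def poisson_changepoint(counts):
    """Maximum-likelihood change-point for counts from two Poisson
    distributions with unknown means. Returns the split index and the
    two segment means."""
    c = np.asarray(counts, dtype=float)
    n = len(c)
    best, best_ll = None, -np.inf
    for tau in range(1, n):                 # candidate boundary positions
        l1, l2 = c[:tau].mean(), c[tau:].mean()
        # Poisson log-likelihood up to terms independent of tau.
        ll = -tau * l1 - (n - tau) * l2
        if l1 > 0:
            ll += c[:tau].sum() * np.log(l1)
        if l2 > 0:
            ll += c[tau:].sum() * np.log(l2)
        if ll > best_ll:
            best, best_ll = tau, ll
    return best, c[:best].mean(), c[best:].mean()

rng = np.random.default_rng(4)
quadrats = np.concatenate([rng.poisson(2.0, 40), rng.poisson(8.0, 60)])
print(poisson_changepoint(quadrats))  # boundary recovered near index 40
```

    The paper's change-interval can then be approximated by re-estimating the change-point on bootstrap resamples of the two segments.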

  1. A review of recent advances in the spherical harmonics expansion method for semiconductor device simulation.

    PubMed

    Rupp, K; Jungemann, C; Hong, S-M; Bina, M; Grasser, T; Jüngel, A

    The Boltzmann transport equation is commonly considered to be the best semi-classical description of carrier transport in semiconductors, providing precise information about the distribution of carriers with respect to time (one dimension), location (three dimensions), and momentum (three dimensions). However, numerical solutions for the seven-dimensional carrier distribution functions are very demanding. The most common solution approach is the stochastic Monte Carlo method, because the gigabytes of memory required by deterministic direct solution approaches have not been available until recently. As a remedy, the higher accuracy provided by solutions of the Boltzmann transport equation is often exchanged for lower computational expense by using simpler models based on macroscopic quantities such as carrier density and mean carrier velocity. Recent developments for the deterministic spherical harmonics expansion method have reduced the computational cost of solving the Boltzmann transport equation, enabling the computation of carrier distribution functions even for spatially three-dimensional device simulations within minutes to hours. We summarize recent progress for the spherical harmonics expansion method and show that small currents, reasonable execution times, and rare events such as low-frequency noise, which are all hard or even impossible to simulate with the established Monte Carlo method, can be handled in a straightforward manner. The applicability of the method for important practical applications is demonstrated for noise simulation, small-signal analysis, hot-carrier degradation, and avalanche breakdown.

  2. Estimated Accuracy of Three Common Trajectory Statistical Methods

    NASA Technical Reports Server (NTRS)

    Kabashnikov, Vitaliy P.; Chaikovsky, Anatoli P.; Kucsera, Tom L.; Metelskaya, Natalia S.

    2011-01-01

    Three well-known trajectory statistical methods (TSMs), namely concentration field (CF), concentration weighted trajectory (CWT), and potential source contribution function (PSCF) methods were tested using known sources and artificially generated data sets to determine the ability of TSMs to reproduce spatial distribution of the sources. In the works by other authors, the accuracy of the trajectory statistical methods was estimated for particular species and at specified receptor locations. We have obtained a more general statistical estimation of the accuracy of source reconstruction and have found optimum conditions to reconstruct source distributions of atmospheric trace substances. Only virtual pollutants of the primary type were considered. In real world experiments, TSMs are intended for application to a priori unknown sources. Therefore, the accuracy of TSMs has to be tested with all possible spatial distributions of sources. An ensemble of geographical distributions of virtual sources was generated. Spearman's rank-order correlation coefficient between spatial distributions of the known virtual and the reconstructed sources was taken to be a quantitative measure of the accuracy. Statistical estimates of the mean correlation coefficient and a range of the most probable values of correlation coefficients were obtained. All the TSMs that were considered here showed similar close results. The maximum of the ratio of the mean correlation to the width of the correlation interval containing the most probable correlation values determines the optimum conditions for reconstruction. An optimal geographical domain roughly coincides with the area supplying most of the substance to the receptor. The optimal domain's size is dependent on the substance decay time. Under optimum reconstruction conditions, the mean correlation coefficients can reach 0.70-0.75. The boundaries of the interval with the most probable correlation values are 0.6-0.9 for the decay time of 240 h and 0.5-0.95 for the decay time of 12 h. The best results of source reconstruction can be expected for the trace substances with a decay time on the order of several days. Although the methods considered in this paper do not guarantee high accuracy they are computationally simple and fast. Using the TSMs in optimum conditions and taking into account the range of uncertainties, one can obtain a first hint on potential source areas.

  3. Effect of Soccer Foot Pressure on Pressure Distributions

    ERIC Educational Resources Information Center

    Uzun, Ahmet; Aydos, Latif; Kaya, Metin; Pekel, H. Ahmet; Kanatli, Ulunay

    2018-01-01

    The aim of this research is to investigate the effect of long-term soccer playing on the sole-pressure distributions of professional soccer players. Despite the increasing prevalence of new methods for footpad pressure measurement developed in recent years, our knowledge about the pressure distribution of the footplate is still insufficient,…

  4. Influence of item distribution pattern and abundance on efficiency of benthic core sampling

    USGS Publications Warehouse

    Behney, Adam C.; O'Shaughnessy, Ryan; Eichholz, Michael W.; Stafford, Joshua D.

    2014-01-01

    Core sampling is a commonly used method to estimate benthic item density, but little information exists about the factors influencing the accuracy and time-efficiency of this method. We simulated core sampling in a Geographic Information System framework by generating points (benthic items) and polygons (core samplers) to assess how sample size (number of core samples), core sampler size (cm²), distribution of benthic items, and item density affected the bias and precision of density estimates, the detection probability of items, and the time costs. When items were distributed randomly rather than clumped, bias decreased and precision increased with increasing sample size, and increased slightly with increasing core sampler size. Bias and precision were only affected by benthic item density at very low values (500–1,000 items/m²). Detection probability (the probability of capturing ≥ 1 item in a core sample if it is available for sampling) was substantially greater when items were distributed randomly as opposed to clumped. Taking more small-diameter core samples was always more time-efficient than taking fewer large-diameter samples. We are unable to present a single, optimal sample size, but provide information for researchers and managers to derive optimal sample sizes dependent on their research goals and environmental conditions.
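
    For randomly (Poisson) distributed items the per-core detection probability has a closed form, P(≥1 item) = 1 − exp(−λA), which a quick Monte Carlo check reproduces; the densities and sampler areas below are illustrative, and clumped distributions would lower these probabilities.

```python
import numpy as np

rng = np.random.default_rng(5)

def detection_prob(density, core_area_cm2, n_trials=10000):
    """Monte Carlo probability that a core sample captures >= 1 item when
    items are randomly (Poisson) distributed; density in items/m^2."""
    A = core_area_cm2 / 1e4                  # core area in m^2
    hits = rng.poisson(density * A, n_trials) > 0
    return hits.mean()

for density in (500, 1000, 5000):
    for area in (25, 50, 100):               # core sampler sizes in cm^2
        p_mc = detection_prob(density, area)
        p_th = 1 - np.exp(-density * area / 1e4)
        print(f"{density} items/m^2, {area} cm^2: "
              f"MC={p_mc:.3f}, theory={p_th:.3f}")
```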

  5. Effects of random tooth profile errors on the dynamic behaviors of planetary gears

    NASA Astrophysics Data System (ADS)

    Xun, Chao; Long, Xinhua; Hua, Hongxing

    2018-02-01

    In this paper, a nonlinear random model is built to describe the dynamics of planetary gear trains (PGTs), in which the time-varying mesh stiffness, tooth profile modification (TPM), tooth contact loss, and random tooth profile errors are considered. A stochastic method based on the method of multiple scales (MMS) is extended to analyze the statistical properties of the dynamic performance of PGTs. With the proposed multiple-scales-based stochastic method, the distributions of the dynamic transmission errors (DTEs) are investigated, and the lower and upper bounds are determined based on the 3σ principle. The Monte Carlo method is employed to verify the proposed method. Results indicate that the proposed method can determine the distribution of the DTE of PGTs with high efficiency and provides a link between the manufacturing precision and the dynamic response. In addition, the effects of tooth profile modification on the distributions of vibration amplitudes and on the probability of tooth contact loss with different manufacturing tooth profile errors are studied. The results show that the manufacturing precision affects the distribution of dynamic transmission errors dramatically and that appropriate TPMs help to decrease both the nominal value and the deviation of the vibration amplitudes.

  6. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    PubMed

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to distributions based on homogeneous Poisson processes, and also on non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.
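
    The width effect can be reproduced with a small Monte Carlo: generating surrogate spike-train pairs from a renewal process with gamma-distributed inter-spike intervals (shape 1 is Poisson; larger shapes mimic refractoriness) already changes the spread of the coincidence-count null distribution. This is a simplified renewal stand-in for the non-renewal processes studied in the paper; the rates and coincidence window are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

def spike_train(rate, T, shape=1.0):
    """Spike times from a renewal process with gamma-distributed ISIs;
    shape=1 is Poisson, shape>1 is more regular (refractory-like)."""
    isi = rng.gamma(shape, 1.0 / (rate * shape), size=int(3 * rate * T))
    t = np.cumsum(isi)
    return t[t < T]

def coincidences(a, b, window=0.005):
    """Number of spikes in train a with a spike of b within +/- window."""
    idx = np.searchsorted(b, a)
    left = np.abs(a - b[np.clip(idx - 1, 0, len(b) - 1)])
    right = np.abs(a - b[np.clip(idx, 0, len(b) - 1)])
    return int(np.sum(np.minimum(left, right) <= window))

def null_counts(shape, n_rep=2000, rate=20.0, T=10.0):
    return [coincidences(spike_train(rate, T, shape),
                         spike_train(rate, T, shape)) for _ in range(n_rep)]

pois = null_counts(shape=1.0)
regu = null_counts(shape=4.0)
print("Poisson null:   mean %.1f sd %.2f" % (np.mean(pois), np.std(pois)))
print("gamma-ISI null: mean %.1f sd %.2f" % (np.mean(regu), np.std(regu)))
```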

  7. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events

    PubMed Central

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

    Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to distributions based on homogeneous Poisson processes, and also on non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate. PMID:28066225

  8. A method for computing ion energy distributions for multifrequency capacitive discharges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Alan C. F.; Lieberman, M. A.; Verboncoeur, J. P.

    2007-03-01

    The ion energy distribution (IED) at a surface is an important parameter for processing in multiple radio frequency driven capacitive discharges. An analytical model is developed for the IED in a low pressure discharge based on a linear transfer function that relates the time-varying sheath voltage to the time-varying ion energy response at the surface. This model is in good agreement with particle-in-cell simulations over a wide range of single, dual, and triple frequency driven capacitive discharge excitations.

  9. Distributed Optimization System

    DOEpatents

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time-dependent sources, time-independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, or a multi-processor computer.

  10. Reconstructing missing information on precipitation datasets: impact of tails on adopted statistical distributions.

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Beckie, Roger Daniel

    2014-05-01

    Missing data in hydrological time-series databases are ubiquitous in practical applications, yet it is of fundamental importance to make educated decisions in problems requiring exhaustive time-series knowledge. This includes precipitation datasets, since recording or human failures can produce gaps in these time series. For some applications directly involving the ratio between precipitation and some other quantity, lack of complete information can result in poor understanding of the basic physical and chemical dynamics of precipitated water. For instance, the ratio between precipitation (recharge) and outflow rates at a discharge point of an aquifer (e.g. rivers, pumping wells, lysimeters) can be used to obtain aquifer parameters and thus to constrain model-based predictions. We tested a suite of methodologies to reconstruct missing information in rainfall datasets, with the goal of obtaining a suitable and versatile method to reduce the errors caused by the lack of data in specific time windows. Our analyses included both a classical chronological pairing approach between rainfall stations and a probability-based approach, which accounts for the probability of exceedance of rain depths measured at two or multiple stations. Our analyses showed that it is not clear a priori which method performs best; rather, the selection should be based on the specific statistical properties of the rainfall dataset. In this presentation, our emphasis is on the effects of a few typical parametric distributions used to model the behavior of rainfall. Specifically, we analyzed the role of distributional "tails", which exert an important control on the occurrence of extreme rainfall events; the latter strongly affect several hydrological applications, including recharge-discharge relationships. The heavy-tailed distributions we considered were the parametric Log-Normal, Generalized Pareto, Generalized Extreme Value and Gamma distributions. The methods were first tested on synthetic examples, to have complete control over variables such as the minimum amount of data required to obtain reliable statistical distributions from the selected parametric functions. We then applied the methodology to precipitation datasets collected in the Vancouver area and at a mining site in Peru.
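
    A compact way to carry out the distributional comparison is to fit each candidate family by maximum likelihood and compare information criteria and upper quantiles, since the tail behaviour is what matters for extremes. The synthetic sample below stands in for real wet-day rain depths; nothing here reproduces the authors' datasets.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Synthetic stand-in for daily rain depths on wet days (mm).
rain = stats.genpareto.rvs(0.2, scale=8.0, size=2000, random_state=rng)

candidates = {
    "lognormal": stats.lognorm,
    "gamma": stats.gamma,
    "GPD": stats.genpareto,
    "GEV": stats.genextreme,
}
for name, dist in candidates.items():
    # Pin the location at zero for the strictly positive families.
    params = (dist.fit(rain, floc=0) if name in ("lognormal", "gamma")
              else dist.fit(rain))
    ll = dist.logpdf(rain, *params).sum()
    aic = 2 * len(params) - 2 * ll
    # The upper tail drives extreme-event behaviour: compare 99.9% quantiles.
    q999 = dist.ppf(0.999, *params)
    print(f"{name:9s} AIC={aic:9.1f}  q99.9={q999:8.1f}")
```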

  11. GPS FOM Chimney Analysis using Generalized Extreme Value Distribution

    NASA Technical Reports Server (NTRS)

    Ott, Rick; Frisbee, Joe; Saha, Kanan

    2004-01-01

    Many a time, an objective of a statistical analysis is to estimate a limit value, such as a 3-sigma 95%-confidence upper limit, from a data sample. The Generalized Extreme Value (GEV) distribution method can be profitably employed in many situations for such an estimate. It is well known that, according to the Central Limit Theorem, the mean value of a large data set is normally distributed irrespective of the distribution of the data from which the mean is derived. In a somewhat similar fashion, it is observed that the extreme value of a data set often has a distribution that can be formulated with a generalized distribution. In space shuttle entry with 3-string GPS navigation, the Figure Of Merit (FOM) value gives a measure of GPS navigated-state accuracy. A GPS navigated state with a FOM of 6 or higher is deemed unacceptable and is said to form a FOM chimney: a period of time during which the FOM value stays higher than 5. A longer period of FOM 6 or higher causes the navigated state to accumulate more error for lack of state updates. For an acceptable landing it is imperative that the state error remain low; hence, during low-altitude entry, GPS data with FOM greater than 5 must not last more than 138 seconds. To test GPS performance, many entry test cases were simulated at the Avionics Development Laboratory. Only high-value FOM chimneys are consequential, so the extreme value statistical technique is applied to analyze them. The maximum likelihood method is used to determine the parameters that characterize the GEV distribution, and the limit value statistics are then estimated.
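
    A minimal version of the final step, a maximum-likelihood GEV fit and a limit value read off the fitted quantile function, using a synthetic stand-in for the per-case maximum chimney durations (the parameters and sample are assumptions, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# Synthetic stand-in for per-run maximum chimney durations in seconds.
durations = stats.genextreme.rvs(-0.1, loc=60.0, scale=15.0,
                                 size=200, random_state=rng)

# Maximum-likelihood GEV fit, then a limit value from the fitted quantiles.
shape, loc, scale = stats.genextreme.fit(durations)
limit_99 = stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)
print(f"GEV fit: shape={shape:.2f} loc={loc:.1f} scale={scale:.1f}")
print(f"99th-percentile chimney duration: {limit_99:.0f} s (cf. 138 s limit)")
```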

  12. Optical stabilization for time transfer infrastructure

    NASA Astrophysics Data System (ADS)

    Vojtech, Josef; Altmann, Michal; Skoda, Pavel; Horvath, Tomas; Slapak, Martin; Smotlacha, Vladimir; Havlis, Ondrej; Munster, Petr; Radil, Jan; Kundrat, Jan; Altmannova, Lada; Velc, Radek; Hula, Miloslav; Vohnout, Rudolf

    2017-08-01

    In this paper, we propose and present verification of all-optical methods for stabilization of the end-to-end delay of an optical fiber link. These methods are verified for deployment within an infrastructure for accurate time and stable frequency distribution, based on sharing fibers with a research and educational network carrying live data traffic. The methods range from path-length control, through temperature conditioning, to transmit-wavelength control. Attention is given to achieving continuous control over a relatively broad range of delays. We summarize design rules for delay stabilization based on the character and total amount of delay jitter.

  13. Temporal evolution of age data under transient pumping conditions

    NASA Astrophysics Data System (ADS)

    Leray, S.; de Dreuzy, J.-R.; Aquilina, L.; Vergnaud-Ayraud, V.; Labasque, T.; Bour, O.; Le Borgne, T.

    2014-04-01

    While most age data derived from tracers have been analyzed under steady-state flow conditions, we determine their temporal evolution when pumping begins. Our study is based on a model made up of a shallowly dipping aquifer overlain by a less permeable aquitard, characteristic of the crystalline aquifer of Plœmeur (Brittany, France). Under a pseudo-transient flow assumption (an instantaneous shift between two steady-state flow fields), we solve the transport equation with a backward particle-tracking method and determine the temporal evolution of the concentrations of CFC-11, CFC-12, CFC-113 and SF6 at the pumping well. Apparent ages evolve because of modifications of the flow pattern and because of the non-linear evolution of the tracers' atmospheric concentrations. To identify the respective roles of these two causes, we perform two successive analyses: we first convolute residence time distributions initially arising at different times at the same sampling time, and we secondly convolute one residence time distribution at various sampling times. We show that flow pattern modifications control the apparent-age evolution in the first pumping year, when the residence time distribution is modified from a piston-like distribution to a much broader one. In the first pumping year, the apparent-age evolution contains transient information that can be used to better constrain hydrogeological systems and slightly compensate for the small number of tracers. Later, the residence time distribution hardly evolves and apparent ages evolve only because of the tracer atmospheric concentrations. In this phase, apparent-age time series do not reflect any evolution in the flow pattern.

  14. Multiple co-clustering based on nonparametric mixture models with heterogeneous marginal distributions

    PubMed Central

    Yoshimoto, Junichiro; Shimizu, Yu; Okada, Go; Takamura, Masahiro; Okamoto, Yasumasa; Yamawaki, Shigeto; Doya, Kenji

    2017-01-01

    We propose a novel method for multiple clustering, which is useful for analysis of high-dimensional data containing heterogeneous types of features. Our method is based on nonparametric Bayesian mixture models in which features are automatically partitioned (into views) for each clustering solution. This feature partition works as feature selection for a particular clustering solution, which screens out irrelevant features. To make our method applicable to high-dimensional data, a co-clustering structure is newly introduced for each view. Further, the outstanding novelty of our method is that we simultaneously model different distribution families, such as Gaussian, Poisson, and multinomial distributions in each cluster block, which widens areas of application to real data. We apply the proposed method to synthetic and real data, and show that our method outperforms other multiple clustering methods both in recovering true cluster structures and in computation time. Finally, we apply our method to a depression dataset with no true cluster structure available, from which useful inferences are drawn about possible clustering structures of the data. PMID:29049392

  15. The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations

    PubMed Central

    Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka

    2011-01-01

    Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007

  16. Estimation Accuracy on Execution Time of Run-Time Tasks in a Heterogeneous Distributed Environment.

    PubMed

    Liu, Qi; Cai, Weidong; Jin, Dandan; Shen, Jian; Fu, Zhangjie; Liu, Xiaodong; Linge, Nigel

    2016-08-30

    Distributed computing has achieved tremendous development since cloud computing was proposed in 2006, and has played a vital role in promoting the rapid growth of data collecting and analysis models, e.g., the Internet of Things, Cyber-Physical Systems, Big Data Analytics, etc. Hadoop has become a data convergence platform for sensor networks. As one of its core components, MapReduce facilitates allocating, processing and mining of collected large-scale data, where speculative execution strategies help solve straggler problems. However, there is still no efficient solution for accurate estimation of the execution time of run-time tasks, which can affect task allocation and distribution in MapReduce. In this paper, task execution data have been collected and employed for the estimation. A two-phase regression (TPR) method is proposed to predict the finishing time of each task accurately. Detailed data for each task were collected and analyzed in a detailed report. According to the results, the prediction accuracy of concurrent tasks' execution time can be improved, in particular for some regular jobs.
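
    The abstract does not spell out the TPR formulation, so the fragment below is only a generic two-segment (piecewise-linear) regression in that spirit: it searches for the breakpoint with the least total squared error and extrapolates the second phase to predict the finishing time. All task data are synthetic.

        import numpy as np

        def two_phase_fit(x, y):
            """Fit two linear segments, choosing the breakpoint with least total SSE."""
            best = None
            for i in range(3, len(x) - 3):                 # candidate breakpoints
                p1 = np.polyfit(x[:i], y[:i], 1)
                p2 = np.polyfit(x[i:], y[i:], 1)
                sse = (np.sum((np.polyval(p1, x[:i]) - y[:i]) ** 2)
                       + np.sum((np.polyval(p2, x[i:]) - y[i:]) ** 2))
                if best is None or sse < best[0]:
                    best = (sse, x[i], p1, p2)
            return best[1:]                                # breakpoint and both fits

        # Synthetic task profile: time grows slowly early, faster late.
        progress = np.linspace(0.0, 1.0, 40)
        time_s = np.where(progress < 0.6, progress * 50, 30 + (progress - 0.6) * 120)
        bp, p1, p2 = two_phase_fit(progress, time_s)
        finish = np.polyval(p2, 1.0)                       # extrapolate phase 2 to 100%
        print(f"breakpoint at progress={bp:.2f}, estimated finish time={finish:.1f} s")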

  17. Review of Vibration-Based Helicopters Health and Usage Monitoring Methods

    DTIC Science & Technology

    2001-04-05

    FM4, NA4, NA4*, NB4 and NB4* (Polyshchuk et al., 1998). The Wigner-Ville distribution (WVD) is a joint time-frequency signal analysis. The WVD is one...signal processing methodologies that are of relevance to vibration-based damage detection (e.g., Wavelet Transform and Wigner-Ville distribution) will be...operation cost, reduce maintenance flights, and increase flight safety. Key Words: HUMS; Wavelet Transform; Wigner-Ville distribution; O&S; Machinery
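
    Since the Wigner-Ville distribution is central to the snippet above, here is a textbook-style sketch of a discrete WVD built from the analytic signal; it is illustrative only and is not code from the cited review.

        import numpy as np
        from scipy.signal import hilbert

        def wigner_ville(x):
            """Discrete Wigner-Ville distribution of a real signal."""
            z = hilbert(x)                                 # analytic signal
            n = len(z)
            wvd = np.zeros((n, n))
            for t in range(n):
                kmax = min(t, n - 1 - t)                   # largest symmetric lag
                k = np.arange(-kmax, kmax + 1)
                kernel = np.zeros(n, dtype=complex)
                kernel[k % n] = z[t + k] * np.conj(z[t - k])
                wvd[:, t] = np.fft.fft(kernel).real        # frequency along rows
            return wvd

        # A linear chirp appears as a tilted ridge in the time-frequency plane.
        t = np.linspace(0, 1, 256, endpoint=False)
        tfr = wigner_ville(np.cos(2 * np.pi * (20 * t + 40 * t ** 2)))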

  18. A general method for calculating three-dimensional compressible laminar and turbulent boundary layers on arbitrary wings

    NASA Technical Reports Server (NTRS)

    Cebeci, T.; Kaups, K.; Ramsey, J. A.

    1977-01-01

    The method described utilizes a nonorthogonal coordinate system for boundary-layer calculations. It includes a geometry program that represents the wing analytically, and a velocity program that computes the external velocity components from a given experimental pressure distribution when the external velocity distribution is not computed theoretically. The boundary layer method is general, however, and can also be used with a theoretically computed external velocity distribution. Several test cases were computed by this method and the results were checked against other numerical calculations and against experiments when available. A typical computation time (CPU) on an IBM 370/165 computer for one surface of a wing, which roughly consists of 30 spanwise stations and 25 streamwise stations with 30 points across the boundary layer, is less than 30 seconds for an incompressible flow and a little more for a compressible flow.

  19. System and Method for Monitoring Distributed Asset Data

    NASA Technical Reports Server (NTRS)

    Gorinevsky, Dimitry (Inventor)

    2015-01-01

    A computer-based monitoring system, and monitoring method implemented in computer software, for detecting, estimating, and reporting the condition states, their changes, and anomalies for many assets. The assets are of the same type, are operated over a period of time, and are outfitted with data collection systems. The proposed monitoring method accounts for the variability of working conditions for each asset by using a regression model that characterizes asset performance. The assets are of the same type but not identical; the proposed monitoring method accounts for asset-to-asset variability, and it also accounts for drifts and trends in the asset condition and data. The proposed monitoring system can perform distributed processing of massive amounts of historical data without discarding any useful information, in settings where moving all the asset data into one central computing system might be infeasible. The overall processing includes distributed preprocessing of data records from each asset to produce compressed data.

  20. The sequentially discounting autoregressive (SDAR) method for on-line automatic seismic event detecting on long term observation

    NASA Astrophysics Data System (ADS)

    Wang, L.; Toshioka, T.; Nakajima, T.; Narita, A.; Xue, Z.

    2017-12-01

    In recent years, more and more Carbon Capture and Storage (CCS) studies focus on seismicity monitoring. For the safety management of geological CO2 storage at Tomakomai, Hokkaido, Japan, an Advanced Traffic Light System (ATLS) combining different seismic information (magnitudes, phases, distributions, etc.) is proposed for injection control. The primary task for ATLS is seismic event detection in a long-term sustained time series record. Because the time-varying Signal to Noise Ratio (SNR) of a long-term record and the uneven energy distributions of seismic event waveforms increase the difficulty of automatic seismic detection, in this work an improved probabilistic autoregressive (AR) method for automatic seismic event detection is applied. This algorithm, called sequentially discounting AR learning (SDAR), can identify effective seismic events in the time series through Change Point Detection (CPD) on the seismic record. In this method, an anomalous signal (a seismic event) is treated as a change point in the time series (the seismic record): the statistical model of the signal in the neighborhood of the event point changes because of the seismic event occurrence. This means the SDAR aims to find the statistical irregularities of the record through CPD. SDAR has three advantages. 1. Anti-noise ability: SDAR does not use waveform attributes (such as amplitude, energy, polarization) for signal detection, and is therefore an appropriate technique for low-SNR data. 2. Real-time estimation: when new data appear in the record, the probability distribution models can be automatically updated by SDAR for on-line processing. 3. Discounting property: SDAR introduces a discounting parameter that reduces the influence of older data on the current statistics, which makes SDAR a robust algorithm for non-stationary signal processing. With these three advantages, the SDAR method can handle non-stationary, time-varying long-term series and achieve real-time monitoring. Finally, we employ SDAR on a synthetic model and on Tomakomai Ocean Bottom Cable (OBC) baseline data to demonstrate the feasibility and advantages of our method.
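
    A compact sketch of the SDAR idea follows. It is a hedged reconstruction (discounted AR(k) sufficient statistics with a Gaussian negative log-likelihood as the anomaly score), not the authors' implementation.

        import numpy as np
        from scipy.linalg import toeplitz

        class SDAR:
            """Sequentially discounting AR(k) learning: each call scores a new
            sample by its negative log-likelihood under the discounted model."""
            def __init__(self, k=4, r=0.02):
                self.k, self.r = k, r
                self.mu = 0.0
                self.c = np.zeros(k + 1)        # discounted autocovariances, lags 0..k
                self.sigma2 = 1.0

            def update(self, x, history):
                """history holds the k samples preceding x, oldest first."""
                r, k = self.r, self.k
                self.mu = (1 - r) * self.mu + r * x
                lagged = np.asarray(history)[::-1]          # x_{t-1}, ..., x_{t-k}
                for j in range(k + 1):
                    prev = x if j == 0 else lagged[j - 1]
                    self.c[j] = (1 - r) * self.c[j] + r * (x - self.mu) * (prev - self.mu)
                # Discounted Yule-Walker solve for the AR coefficients.
                w = np.linalg.solve(toeplitz(self.c[:k]) + 1e-8 * np.eye(k), self.c[1:])
                err = x - (self.mu + w @ (lagged - self.mu))
                self.sigma2 = (1 - r) * self.sigma2 + r * err * err
                return 0.5 * (np.log(2 * np.pi * self.sigma2) + err * err / self.sigma2)

        rng = np.random.default_rng(1)
        x = rng.normal(size=2000)
        x[1200:1210] += 6.0                                 # injected "event"
        model = SDAR()
        scores = [model.update(x[t], x[t - 4:t]) for t in range(4, len(x))]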

  1. Two dimensional distribution measurement of electric current generated in a polymer electrolyte fuel cell using 49 NMR surface coils.

    PubMed

    Ogawa, Kuniyasu; Sasaki, Tatsuyoshi; Yoneda, Shigeki; Tsujinaka, Kumiko; Asai, Ritsuko

    2018-05-17

    In order to increase the current density generated in a PEFC (polymer electrolyte fuel cell), a method for measuring the spatial distribution of both the current and the water content of the MEA (membrane electrode assembly) is necessary. Based on the frequency shifts of NMR (nuclear magnetic resonance) signals acquired from the water contained in the MEA using 49 NMR coils in a 7 × 7 arrangement inserted in the PEFC, a method for measuring the two-dimensional spatial distribution of electric current generated in a unit cell with a power generation area of 140 mm × 160 mm was devised. We also developed an inverse analysis method to determine the two-dimensional electric current distribution that can be applied to actual PEFC connections. Two analytical techniques, namely coarse graining of segments and stepwise search, were used to shorten the calculation time required for inverse analysis of the electric current map. Using this method and techniques, spatial distributions of electric current and water content in the MEA were obtained when the PEFC generated electric power at 100 A. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. Influence of the weighing bar position in vessel on measurement of cement’s particle size distribution by using the buoyancy weighing-bar method

    NASA Astrophysics Data System (ADS)

    Tambun, R.; Sihombing, R. O.; Simanjuntak, A.; Hanum, F.

    2018-02-01

    The buoyancy weighing-bar method is a new, simple and cost-effective method to determine the particle size distribution of both settling and floating particles. In this method, the density change in a suspension due to particle migration is measured via the buoyancy acting on a weighing bar hung in the suspension, and the particle size distribution is then calculated using the length of the bar and the time-course change in its mass. The apparatus consists of a weighing bar and an analytical balance with a hook for under-floor weighing; the weighing bar is used to detect the density change in the suspension. In this study we investigate the influence of the position of the weighing bar in the vessel on settling particle size distribution measurements of cement by the buoyancy weighing-bar method. The vessel used in this experiment is a graduated cylinder with a diameter of 65 mm, and the weighing bar is placed either at the center or off-center of the vessel. The diameter of the weighing bar is 10 mm, and kerosene is used as the dispersion liquid. The results show that the position of the weighing bar in the vessel has no significant effect on the determination of the cement's particle size distribution by the buoyancy weighing-bar method, and the results obtained are comparable to those measured by the settling balance method.
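
    Settling methods of this kind ultimately rest on Stokes' law, which converts a settling time over a known depth into a particle diameter. The short sketch below illustrates that conversion; the fluid and particle properties are assumptions for illustration, not the paper's values.

        import numpy as np

        mu = 1.64e-3        # kerosene viscosity, Pa*s (assumed)
        rho_f = 810.0       # kerosene density, kg/m^3 (assumed)
        rho_p = 3150.0      # cement particle density, kg/m^3 (assumed)
        g = 9.81            # m/s^2
        h = 0.20            # settling depth to the weighing bar, m (assumed)

        # Stokes' law: settling speed v = d^2 (rho_p - rho_f) g / (18 mu), with v = h/t.
        t = np.array([60.0, 300.0, 1800.0])                  # settling times, s
        d = np.sqrt(18.0 * mu * h / ((rho_p - rho_f) * g * t))
        print(np.round(d * 1e6, 1), "micrometres")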

  3. Droplet size and velocity distributions for spray modelling

    NASA Astrophysics Data System (ADS)

    Jones, D. P.; Watkins, A. P.

    2012-01-01

    Methods for constructing droplet size distributions and droplet velocity profiles are examined as a basis for the Eulerian spray model proposed in Beck and Watkins (2002, 2003) [5,6]. Within the spray model, both distributions must be calculated at every control volume at every time-step where the spray is present, and valid distributions must be guaranteed. Results show that the Maximum Entropy formalism combined with the Gamma distribution satisfies these conditions for the droplet size distributions. Approximating the droplet velocity profile is shown to be considerably more difficult because it does not have compact support; an exponential model with a constrained exponent offers plausible profiles.

  4. Census Methods for Caribbean Land Birds

    Treesearch

    Joseph M. Wunderle

    1994-01-01

    Different census methods can be used to survey the distribution of a terrestrial bird species or to monitor population changes. The appropriate method depends on whether the objective is simply to document the presence of a species or to quantify its relative abundance, population density, population trends over time, habitat use, survivorship, or the physical...

  5. Bayesian analysis of multimodal data and brain imaging

    NASA Astrophysics Data System (ADS)

    Assadi, Amir H.; Eghbalnia, Hamid; Backonja, Miroslav; Wakai, Ronald T.; Rutecki, Paul; Haughton, Victor

    2000-06-01

    It is often the case that information about a process can be obtained using a variety of methods, each employed because of specific advantages over the competing alternatives. An example in medical neuro-imaging is the choice between fMRI and MEG, where fMRI provides high spatial resolution in comparison to the superior temporal resolution of MEG. The combination of data from varying modes provides the opportunity to infer results that may not be possible by means of any one mode alone. We discuss a Bayesian and learning-theoretic framework for enhanced feature extraction that is particularly suited to multi-modal investigations of massive data sets from multiple experiments. In the following Bayesian approach, acquired knowledge (information) regarding various aspects of the process is directly incorporated into the formulation. This information can come from a variety of sources; in our case, it represents statistical information obtained from other modes of data collection. The information is used to train a learning machine to estimate a probability distribution, which is used in turn by a second machine as a prior, in order to produce a more refined estimation of the distribution of events. The computational demand of the algorithm is handled by a distributed parallel implementation on a cluster of workstations that can be scaled to address real-time needs if required. We provide a simulation of these methods on a set of synthetically generated MEG and EEG data and show how spatial and temporal resolutions improve by using prior distributions. Applied to fMRI signals, the method permits construction of the probability distribution of the non-linear hemodynamics of the human brain (real data); these computational results are in agreement with biologically based measurements of other labs, as reported to us by researchers from the UK. We also provide a preliminary analysis involving multi-electrode cortical recordings that accompany behavioral data in pain experiments on freely moving mice subjected to moderate heat delivered by an electric bulb. Summary of new or breakthrough ideas: (1) a new method to estimate the probability distribution for measurement of the nonlinear hemodynamics of the brain from multi-modal neuronal data (to our knowledge, the first time such an idea has been tried); (2) a breakthrough improvement of the time resolution of fMRI signals using (1) above.

  6. Measurement of baseline and orientation between distributed aerospace platforms.

    PubMed

    Wang, Wen-Qin

    2013-01-01

    Distributed platforms play an important role in aerospace remote sensing, radar navigation, and wireless communication applications. However, besides the requirement of highly accurate time and frequency synchronization for coherent signal processing, the baseline between the transmitting and receiving platforms and their orientation toward each other during data recording must be measured in real time. In this paper, we propose an improved pulsed duplex microwave ranging approach, which allows determining the spatial baseline and orientation between distributed aerospace platforms by the proposed high-precision time-interval estimation method. This approach is novel in the sense that it cancels the effect of oscillator frequency synchronization errors due to the separate oscillators used in the platforms. Several performance specifications are also discussed. The effectiveness of the approach is verified by simulation results.

  7. A feasibility study of altered spatial distribution of losses induced by eddy currents in body composition analysis

    PubMed Central

    2010-01-01

    Background Tomographic imaging has revealed that the body mass index does not give a reliable state of overall fitness. However, high measurement costs make the tomographic imaging unsuitable for large scale studies or repeated individual use. This paper reports an experimental investigation of a new electromagnetic method and its feasibility for assessing body composition. The method is called body electrical loss analysis (BELA). Methods The BELA method uses a high-Q parallel resonant circuit to produce a time-varying magnetic field. The Q of the resonator changes when the sample is placed in its coil. This is caused by induced eddy currents in the sample. The new idea in the BELA method is the altered spatial distribution of the electrical losses generated by these currents. The distribution of losses is varied using different excitation frequencies. The feasibility of the method was tested using simplified phantoms. Two of these phantoms were rough estimations of human torso. One had fat in the middle of its volume and saline solution in the outer shell volume. The other had reversed conductivity distributions. The phantoms were placed in the resonator and the change in the losses was measured. Five different excitation frequencies from 100 kHz to 200 kHz were used. Results The rate of loss as a function of frequency was observed to be approximately three times larger for a phantom with fat in the middle of its volume than for one with fat in its outer shell volume. Conclusions At higher frequencies the major signal contribution can be shifted toward outer shell volume. This enables probing the conductivity distribution of the subject by weighting outer structural components. The authors expect that the loss changing rate over frequency can be a potential index for body composition analysis. PMID:21047441

  8. Optimal joule heating of the subsurface

    DOEpatents

    Berryman, James G.; Daily, William D.

    1994-01-01

    A method for simultaneously heating the subsurface and imaging the effects of the heating. This method combines the use of tomographic imaging (electrical resistance tomography, or ERT) to image the electrical resistivity distribution underground with joule heating by electrical currents injected in the ground. A potential distribution is established on a series of buried electrodes, resulting in energy deposition underground that is a function of the resistivity and the injection current density. Measurement of the voltages and currents also permits a tomographic reconstruction of the resistivity distribution. Using this tomographic information, the current injection pattern on the driving electrodes can be adjusted to change the current density distribution and thus optimize the heating. As the heating changes conditions, the applied current pattern can be repeatedly adjusted (based on updated resistivity tomographs) to effect real-time control of the heating.

  9. An Estimation Method of Waiting Time for Health Service at Hospital by Using a Portable RFID and Robust Estimation

    NASA Astrophysics Data System (ADS)

    Ishigaki, Tsukasa; Yamamoto, Yoshinobu; Nakamura, Yoshiyuki; Akamatsu, Motoyuki

    Patients who receive a health service from a doctor often have to wait a long time at many hospitals. According to patient questionnaires, long waiting time is the worst factor in patients' dissatisfaction with hospital service. The present paper describes a method for estimating the waiting time for each patient without an electronic medical chart system. The method applies a portable RFID system to data acquisition, and uses robust estimation of the probability distribution of the doctor's service and examination times, for highly accurate waiting-time estimation. We carried out data acquisition at a real hospital and verified the efficiency of the proposed method. The proposed system can be widely used for data acquisition in various fields such as marketing services, entertainment, or human behavior measurement.

  10. Direct simulation of groundwater age

    USGS Publications Warehouse

    Goode, Daniel J.

    1996-01-01

    A new method is proposed to simulate groundwater age directly, by use of an advection-dispersion transport equation with a distributed zero-order source of unit (1) strength, corresponding to the rate of aging. The dependent variable in the governing equation is the mean age, a mass-weighted average age. The governing equation is derived from residence-time-distribution concepts for the case of steady flow. For the more general case of transient flow, a transient governing equation for age is derived from mass-conservation principles applied to conceptual “age mass.” The age mass is the product of the water mass and its age, and age mass is assumed to be conserved during mixing. Boundary conditions include zero age-mass flux across all no-flow and inflow boundaries and no dispersive age-mass flux across outflow boundaries. For transient-flow conditions, the initial distribution of age must be known. The solution of the governing transport equation yields the spatial distribution of the mean groundwater age and includes diffusion, dispersion, mixing, and exchange processes that typically are considered only through tracer-specific solute transport simulation. Traditional methods have relied on advective transport to predict point values of groundwater travel time and age. The proposed method retains the simplicity and tracer-independence of advection-only models, but incorporates the effects of dispersion and mixing on volume-averaged age. Example simulations of age in two idealized regional aquifer systems, one homogeneous and the other layered, demonstrate the agreement between the proposed method and traditional particle-tracking approaches and illustrate use of the proposed method to determine the effects of diffusion, dispersion, and mixing on groundwater age.
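
    To make the governing idea concrete, the fragment below solves the steady one-dimensional form of the age equation, v dA/dx - D d2A/dx2 = 1, with zero age at the inflow and no dispersive age flux at the outflow. Grid and parameters are illustrative only.

        import numpy as np

        L, n = 1000.0, 200            # domain length (m), number of grid intervals
        v, D = 0.5, 5.0               # velocity (m/d), dispersion coefficient (m^2/d)
        dx = L / n

        M = np.zeros((n + 1, n + 1))
        b = np.ones(n + 1)            # distributed unit source: water ages at rate 1
        M[0, 0], b[0] = 1.0, 0.0      # inflow boundary: age zero
        for i in range(1, n):
            # Upwind advection plus central dispersion for v*dA/dx - D*d2A/dx2 = 1.
            M[i, i - 1] = -v / dx - D / dx**2
            M[i, i] = v / dx + 2 * D / dx**2
            M[i, i + 1] = -D / dx**2
        M[n, n - 1], M[n, n], b[n] = -1.0, 1.0, 0.0   # outflow: dA/dx = 0
        A = np.linalg.solve(M, b)
        print(f"mean age at outflow ~ {A[-1]:.0f} d (advective estimate {L / v:.0f} d)")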

  11. Application of a Threshold Method to Airborne-Spaceborne Attenuating-Wavelength Radars for the Estimation of Space-Time Rain-Rate Statistics.

    NASA Astrophysics Data System (ADS)

    Meneghini, Robert

    1998-09-01

    A method is proposed for estimating the area-average rain-rate distribution from attenuating-wavelength spaceborne or airborne radar data. Because highly attenuated radar returns yield unreliable estimates of the rain rate, these are eliminated by means of a proxy variable, Q, derived from the apparent radar reflectivity factors and a power law relating the attenuation coefficient and the reflectivity factor. In determining the probability distribution function of areawide rain rates, the elimination of attenuated measurements at high rain rates and the loss of data at light rain rates, because of low signal-to-noise ratios, leads to truncation of the distribution at the low and high ends. To estimate it over all rain rates, a lognormal distribution is assumed, the parameters of which are obtained from a nonlinear least squares fit to the truncated distribution. Implementation of this type of threshold method depends on the method used in estimating the high-resolution rain-rate estimates (e.g., either the standard Z-R or the Hitschfeld-Bordan estimate) and on the type of rain-rate estimate (either point or path averaged). To test the method, measured drop size distributions are used to characterize the rain along the radar beam. Comparisons with the standard single-threshold method or with the sample mean, taken over the high-resolution estimates, show that the present method usually provides more accurate determinations of the area-averaged rain rate if the values of the threshold parameter, QT, are chosen in the range from 0.2 to 0.4.
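
    A hedged sketch of the fitting step: a lognormal is fitted by nonlinear least squares to a doubly truncated sample, and the area-average rain rate is then read off the fitted parameters. The synthetic rates and truncation limits stand in for the radar-derived estimates.

        import numpy as np
        from scipy.optimize import curve_fit

        rng = np.random.default_rng(2)
        rain = rng.lognormal(mean=1.0, sigma=0.8, size=5000)   # "true" rain rates (mm/h)
        kept = rain[(rain > 1.0) & (rain < 20.0)]              # both tails unobserved

        hist, edges = np.histogram(kept, bins=30)
        mids = 0.5 * (edges[:-1] + edges[1:])
        widths = np.diff(edges)

        def lognorm_counts(x, mu, sigma, n_total):
            # Expected bin counts for n_total samples under a lognormal(mu, sigma).
            pdf = (np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2))
                   / (x * sigma * np.sqrt(2 * np.pi)))
            return n_total * pdf * widths

        (mu, sigma, n_total), _ = curve_fit(lognorm_counts, mids, hist,
                                            p0=[0.5, 1.0, len(kept)])
        print(f"area-average rain rate ~ {np.exp(mu + sigma ** 2 / 2):.2f} mm/h")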

  12. Phase walk analysis of leptokurtic time series.

    PubMed

    Schreiber, Korbinian; Modest, Heike I; Räth, Christoph

    2018-06-01

    The Fourier phase information plays a key role in the quantified description of nonlinear data. We present a novel tool for time series analysis that identifies nonlinearities by sensitively detecting correlations among the Fourier phases. The method, called phase walk analysis, is based on well established measures from random walk analysis, which are here applied to the unwrapped Fourier phases of time series. We provide an analytical description of its functionality and demonstrate its capabilities on systematically controlled leptokurtic noise. Hereby, we investigate the properties of leptokurtic time series and their influence on the Fourier phases. The phase walk analysis is applied to measured and simulated intermittent time series whose probability density distributions are approximated by power laws. We use the day-to-day returns of the Dow Jones industrial average, a synthetic time series with tailored nonlinearities mimicking the power-law behavior of the Dow Jones, and the acceleration of the wind at an Atlantic offshore site. Testing for nonlinearities by means of surrogates shows that the new method yields strong significances for nonlinear behavior. Due to the drastically decreased computing time compared to embedding-space methods, the number of surrogate realizations can be increased by orders of magnitude; thereby the probability distribution of the test statistics can be derived and parameterized very accurately, which allows for much more precise tests of nonlinearities.

  13. Tsunami simulation method initiated from waveforms observed by ocean bottom pressure sensors for real-time tsunami forecast; Applied for 2011 Tohoku Tsunami

    NASA Astrophysics Data System (ADS)

    Tanioka, Yuichiro

    2017-04-01

    After the tsunami disaster caused by the 2011 Tohoku-oki great earthquake, improvement of tsunami forecasting has been an urgent issue in Japan. The National Institute of Disaster Prevention is installing a cable network system for earthquake and tsunami observation (S-NET) on the ocean bottom along the Japan and Kurile trenches. This cable system includes 125 pressure sensors (tsunami meters) separated by 30 km. Along the Nankai trough, JAMSTEC has already installed and operates cable network systems of seismometers and pressure sensors (DONET and DONET2). These are the densest observation network systems on top of the source areas of great underthrust earthquakes in the world. Real-time tsunami forecasting has depended on the estimation of earthquake parameters such as the epicenter, depth, and magnitude. Recently, a forecast method has been developed that estimates the tsunami source from tsunami waveforms observed at ocean bottom pressure sensors. However, when many pressure sensors separated by 30 km sit on top of the source area, we do not need to estimate the tsunami source or earthquake source to compute the tsunami; instead, we can initiate a tsunami simulation directly from those dense tsunami observations. Differences in observed tsunami heights over a time interval at ocean bottom pressure sensors separated by 30 km were used to estimate the tsunami height distribution at a particular time, and in our new method a tsunami numerical simulation is initiated from that estimated height distribution. In this paper, the above method is improved and applied to the tsunami generated by the 2011 Tohoku-oki great earthquake. The tsunami source model of the 2011 Tohoku-oki earthquake estimated from observed tsunami waveforms, coseismic deformation observed by GPS, and ocean bottom sensors by Gusman et al. (2012) is used in this study. The ocean surface deformation is computed from the source model and used as the initial condition of the tsunami simulation. By assuming that this computed tsunami is a real tsunami observed at ocean bottom sensors, a new tsunami simulation is carried out using the above method. Stations in the assumed distribution (each separated by 15 min of arc, about 30 km) observe tsunami waveforms that were actually computed from the source model. Tsunami height distributions are estimated with the above method at 40, 80, and 120 seconds after the origin time of the earthquake. The near-field tsunami inundation forecast method (Gusman et al. 2014) was used to estimate the tsunami inundation along the Sanriku coast. The results show that the observed tsunami inundation is well explained by the estimated inundation, and that it takes about 10 minutes from the origin time of the earthquake to estimate the tsunami inundation. The new method developed in this paper is thus very effective for real-time tsunami forecasting.

  14. Using of the geoinformation technologies for estimation of heavy metals distribution in the soils of urban ecosystems (on example of the city of Kaliningrad, Russia)

    NASA Astrophysics Data System (ADS)

    Chupakhina, Nataliia; Skrypnik, Lubov; Maslennikov, Pavel; Belov, Nicolai; Feduraev, Pavel; Chupakhina, Galina

    2017-04-01

    Urbanization can be described as a global socio-economic process accompanied by a profound change of the anthropogenic environment and the replacement of natural ecosystems by urban ones. Heavy metals occupy an important place among the different types of urban environmental pollutants. Since they do not undergo physico-chemical or biological degradation, they can accumulate in the surface soil layer for a long time, remaining available to plant roots and actively involved in migration processes via trophic pathways. Studying the accumulation of heavy metals in soils, the most important component of urban ecosystems, allows us to obtain a reliable estimate of the intensity of technogenic processes and to trace the major migration flows of these toxicants in the urbanized area. Geographic information systems (GIS) are a useful tool for the collection, analysis, processing, synthesis and management of spatially distributed and other types of data, providing two-way communication between cartographic objects and databases. The aim of this study was to investigate the possibility of using GIS technologies for estimating the distribution of heavy metals in the soil of the city of Kaliningrad. A Kaliningrad land region of 18.4 sq. km was investigated. Locations for the collection of samples were determined based on an analysis of the anthropogenic loading of the streets of Kaliningrad; the total number of locations was 57. The selected locations were marked with squares of 1.5 km by 1.5 km. Within each square, 7-9 soil samples were collected using the "envelope" method, and each sample was collected three times. The abundances of heavy metals (strontium, lead, zinc, copper, nickel, chromium, arsenic) in the soil were determined using the X-ray fluorescence method (Spectroscan Max, NPO Spektron, Saint-Petersburg, Russia). Each sample was purified to remove roots, large rocks, glass, etc., before placing it in the cell of the spectrometer. Purified samples of 10-50 g were dried to constant mass at (105 ± 2) °C and then ground in an agate mortar to reduce the particle size to ≤71 microns. The analysis was performed simultaneously on two parts of each sample. For calculation of the surface distribution of the heavy metals we used the GIS package QuantumGIS 2.8. The coordinates of the sampling points were first marked on the raster substrate and then exported to a vector layer in the Shapefile format. To calculate the surface distribution in raster format we used the topotorastr module, which includes different interpolation methods. For visualization we used two methods: inverse distance weighting (IDW) and the natural neighbor method. The results obtained with the IDW method appeared not to be representative, so we subsequently used only the natural neighbor method. Using the built-in tools of QuantumGIS, the heavy metal abundances were divided into 9 classes and raster surfaces were obtained. We then built contour plots of the distribution of the studied heavy metals. The described approach revealed two areas in which the lead abundance exceeded the maximum permitted norms by a factor of 4.5-5.4, arsenic by 9.5-11, zinc by 7.2-9.6 and nickel by 5.2-6.75.

  15. Bayesian functional integral method for inferring continuous data from discrete measurements.

    PubMed

    Heuett, William J; Miller, Bernard V; Racette, Susan B; Holloszy, John O; Chow, Carson C; Periwal, Vipul

    2012-02-08

    Inference of the insulin secretion rate (ISR) from C-peptide measurements as a quantification of pancreatic β-cell function is clinically important in diseases related to reduced insulin sensitivity and insulin action. ISR derived from C-peptide concentration is an example of nonparametric Bayesian model selection where a proposed ISR time-course is considered to be a "model". Inferring the value of an inaccessible continuous variable from discrete observable data is often problematic in biology and medicine, because it is a priori unclear how robust the inference is to the deletion of data points, and, as a closely related question, how much smoothness or continuity the data actually support. Predictions weighted by the posterior distribution can be cast as functional integrals, as used in statistical field theory. Functional integrals are generally difficult to evaluate, especially for nonanalytic constraints such as positivity of the estimated parameters. We propose a computationally tractable method that uses the exact solution of an associated likelihood function as a prior probability distribution for a Markov-chain Monte Carlo evaluation of the posterior for the full model. As a concrete application of our method, we calculate the ISR from actual clinical C-peptide measurements in human subjects with varying degrees of insulin sensitivity. Our method demonstrates the feasibility of functional integral Bayesian model selection as a practical method for such data-driven inference, allowing the data to determine the smoothing timescale and the width of the prior probability distribution on the space of models. In particular, our model comparison method determines the discrete time-step for interpolation of the unobservable continuous variable that is supported by the data; attempts to go to finer discrete time-steps lead to less likely models. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  16. Improving the efficiency of configurational-bias Monte Carlo: A density-guided method for generating bending angle trials for linear and branched molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sepehri, Aliasghar; Loeffler, Troy D.; Chen, Bin, E-mail: binchen@lsu.edu

    2014-08-21

    A new method has been developed to generate bending angle trials to improve the acceptance rate and the speed of configurational-bias Monte Carlo. Whereas traditionally the trial geometries are generated from a uniform distribution, in this method we attempt to use the exact probability density function so that each geometry generated is likely to be accepted. In actual practice, due to the complexity of this probability density function, a numerical representation of this distribution function would be required. This numerical table can be generated a priori from the distribution function. This method has been tested on a united-atom model of alkanes including propane, 2-methylpropane, and 2,2-dimethylpropane, which are good representatives of both linear and branched molecules. It has been shown from these test cases that reasonable approximations can be made, especially for the highly branched molecules, to reduce drastically the dimensionality and correspondingly the amount of tabulated data that needs to be stored. Despite these approximations, the dependencies between the various geometrical variables can still be well considered, as evident from a nearly perfect acceptance rate. For all cases, the bending angles were shown to be sampled correctly by this method, with an acceptance rate of at least 96% for 2,2-dimethylpropane to more than 99% for propane. Since only one trial is required for each bending angle (instead of the thousands of trials required by the conventional algorithm), this method can dramatically reduce the simulation time. The profiling results of our Monte Carlo simulation code show that trial generation, which used to be the most time consuming process, is no longer the time dominating component of the simulation.
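
    The tabulation idea can be illustrated with a minimal inverse-CDF sampler: build the cumulative distribution of a bending-angle density on a grid once, then draw trials by table lookup, so that essentially every generated angle is Boltzmann-distributed. The harmonic bending parameters below are generic stand-ins, not the paper's force field.

        import numpy as np

        kT = 2.48                                   # kJ/mol at about 298 K
        k_bend, theta0 = 520.0, np.deg2rad(114.0)   # assumed bending constants

        theta = np.linspace(1e-3, np.pi - 1e-3, 2000)
        # Boltzmann weight times the sin(theta) Jacobian of the bending angle.
        pdf = np.exp(-0.5 * k_bend * (theta - theta0) ** 2 / kT) * np.sin(theta)
        cdf = np.cumsum(pdf)
        cdf /= cdf[-1]

        def draw_angles(n, rng=np.random.default_rng()):
            """Inverse-CDF sampling from the tabulated distribution."""
            return np.interp(rng.random(n), cdf, theta)

        samples = draw_angles(100000)
        print(np.rad2deg(samples.mean()))           # sits near the equilibrium angle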

  17. Characterizing 3D grain size distributions from 2D sections in mylonites using a modified version of the Saltykov method

    NASA Astrophysics Data System (ADS)

    Lopez-Sanchez, Marco; Llana-Fúnez, Sergio

    2016-04-01

    The understanding of creep behaviour in rocks requires knowledge of the 3D grain size distributions (GSD) that result from dynamic recrystallization processes during deformation. The methods to estimate the 3D grain size distribution directly (serial sectioning, synchrotron or X-ray-based tomography) are expensive, time-consuming and, in most cases and at best, challenging. This means that in practice grain size distributions are mostly derived from 2D sections. Although there are a number of methods in the literature to derive the actual 3D grain size distributions from 2D sections, the most popular in highly deformed rocks is the so-called Saltykov method. It has though two major drawbacks: the method assumes no interaction between grains, which is not true in the case of recrystallised mylonites; and it uses histograms to describe distributions, which limits the quantification of the GSD. The first aim of this contribution is to test whether the interaction between grains in mylonites, i.e. random grain packing, significantly affects the GSDs estimated by the Saltykov method. We test this using the random resampling technique in a large data set (n = 12298). The full data set is built from several parallel thin sections that cut a completely dynamically recrystallized quartz aggregate in a rock sample from a Variscan shear zone in NW Spain. The results prove that the Saltykov method is reliable as long as the number of grains is large (n > 1000). Assuming that a lognormal distribution is an optimal approximation for the GSD in a completely dynamically recrystallized rock, we introduce an additional step to the Saltykov method, which allows estimating a continuous probability distribution function of the 3D grain size population. The additional step takes the midpoints of the classes obtained by the Saltykov method and fits a lognormal distribution with a trust region using a non-linear least squares algorithm, as sketched below. The new protocol is named the two-step method. The conclusion of this work is that both the Saltykov and the two-step methods are accurate and simple enough to be useful in practice in rocks, alloys or ceramics with near-equant grains and expected lognormal distributions. The Saltykov method is particularly suitable for estimating the volumes of particular grain fractions, while the two-step method is suited to quantifying the full GSD (mean and standard deviation in log grain size). The two-step method is implemented in a free, open-source and easy-to-handle script (see http://marcoalopez.github.io/GrainSizeTools/).
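
    A hedged sketch of the second step: a lognormal PDF is fitted to the class midpoints produced by the Saltykov unfolding. The midpoint and frequency arrays below are placeholders; in practice they would come from the Saltykov output (e.g., via the GrainSizeTools script referenced above).

        import numpy as np
        from scipy.optimize import curve_fit

        mids = np.array([5., 15., 25., 35., 45., 55., 65., 75., 85., 95.])  # microns
        freqs = np.array([0.004, 0.021, 0.030, 0.022, 0.012, 0.006, 0.003,
                          0.0015, 0.0007, 0.0003])                          # density

        def lognorm_pdf(x, mu, sigma):
            return (np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2))
                    / (x * sigma * np.sqrt(2 * np.pi)))

        (mu, sigma), _ = curve_fit(lognorm_pdf, mids, freqs, p0=[np.log(30.0), 0.5])
        print(f"geometric mean = {np.exp(mu):.1f} microns, log-sd = {sigma:.2f}")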

  18. Nonmechanistic forecasts of seasonal influenza with iterative one-week-ahead distributions.

    PubMed

    Brooks, Logan C; Farrow, David C; Hyun, Sangwon; Tibshirani, Ryan J; Rosenfeld, Roni

    2018-06-15

    Accurate and reliable forecasts of seasonal epidemics of infectious disease can assist in the design of countermeasures and increase public awareness and preparedness. This article describes two main contributions we made recently toward this goal: a novel approach to probabilistic modeling of surveillance time series based on "delta densities", and an optimization scheme for combining output from multiple forecasting methods into an adaptively weighted ensemble. Delta densities describe the probability distribution of the change between one observation and the next, conditioned on available data; chaining together nonparametric estimates of these distributions yields a model for an entire trajectory. Corresponding distributional forecasts cover more observed events than alternatives that treat the whole season as a unit, and improve upon multiple evaluation metrics when extracting key targets of interest to public health officials. Adaptively weighted ensembles integrate the results of multiple forecasting methods, such as delta density, using weights that can change from situation to situation. We treat selection of optimal weightings across forecasting methods as a separate estimation task, and describe an estimation procedure based on optimizing cross-validation performance. We consider some details of the data generation process, including data revisions and holiday effects, both in the construction of these forecasting methods and when performing retrospective evaluation. The delta density method and an adaptively weighted ensemble of other forecasting methods each improve significantly on the next best ensemble component when applied separately, and achieve even better cross-validated performance when used in conjunction. We submitted real-time forecasts based on these contributions as part of CDC's 2015/2016 FluSight Collaborative Comparison. Among the fourteen submissions that season, this system was ranked by CDC as the most accurate.

  19. Portraits of Principal Practice: Time Allocation and School Principal Work

    ERIC Educational Resources Information Center

    Sebastian, James; Camburn, Eric M.; Spillane, James P.

    2018-01-01

    Purpose: The purpose of this study was to examine how school principals in urban settings distributed their time working on critical school functions. We also examined who principals worked with and how their time allocation patterns varied by school contextual characteristics. Research Method/Approach: The study was conducted in an urban school…

  20. On-Line Water Quality Parameters as Indicators of Distribution System Contamination

    EPA Science Inventory

    At a time when the safety and security of services we have typically taken for granted are under question, a real-time or near real-time method of monitoring changes in water quality parameters could provide a critical line of defense in protecting public health. This study was u...

  1. An observationally centred method to quantify local climate change as a distribution

    NASA Astrophysics Data System (ADS)

    Stainforth, David; Chapman, Sandra; Watkins, Nicholas

    2013-04-01

    For planning and adaptation, guidance on trends in local climate is needed at the specific thresholds relevant to particular impact or policy endeavours. This requires quantifying trends at specific quantiles in distributions of variables such as daily temperature or precipitation. These non-normal distributions vary both geographically and in time, and the trends in the relevant quantiles may not simply follow the trend in the distribution mean. We present a method[1] for analysing local climatic timeseries data to assess which quantiles of the local climatic distribution show the greatest and most robust trends. We demonstrate this approach using E-OBS gridded data[2] timeseries of local daily temperature from specific locations across Europe over the last 60 years. Our method extracts the changing cumulative distribution function over time and uses a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of the sensitivity of different quantiles of the distributions to changing climate. Geographical location and temperature are treated as independent variables; we thus obtain as outputs how the trend or sensitivity varies with temperature (or occurrence likelihood) and with geographical location. These sensitivities are found to vary geographically across Europe, as one would expect given the different influences on local climate between, say, Western Scotland and central Italy. We find as an output many regionally consistent patterns of response of potential value in adaptation planning. We discuss methods to quantify the robustness of these observed sensitivities and their statistical likelihood. This also quantifies the level of detail needed from climate models if they are to be used as tools to assess climate change impact. [1] S C Chapman, D A Stainforth, N W Watkins, 2013, On Estimating Local Long Term Climate Trends, Phil. Trans. R. Soc. A, in press [2] Haylock, M.R., N. Hofstra, A.M.G. Klein Tank, E.J. Klok, P.D. Jones and M. New. 2008: A European daily high-resolution gridded dataset of surface temperature and precipitation. J. Geophys. Res (Atmospheres), 113, D20119, doi:10.1029/2008JD10201
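
    The quantile-by-quantile view can be illustrated very simply: compare empirical quantiles between an early and a late period and report the change per quantile. The synthetic "decades" below stand in for the E-OBS series.

        import numpy as np

        rng = np.random.default_rng(3)
        early = rng.normal(10.0, 6.0, size=365 * 10)   # first decade of daily temps
        late = rng.normal(10.8, 6.6, size=365 * 10)    # later decade: warmer, wider

        q = np.linspace(0.05, 0.95, 19)
        sensitivity = np.quantile(late, q) - np.quantile(early, q)
        for qi, s in zip(q[::6], sensitivity[::6]):
            print(f"quantile {qi:.2f}: change {s:+.2f} K")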

  2. Finite time synchronization of memristor-based Cohen-Grossberg neural networks with mixed delays.

    PubMed

    Chen, Chuan; Li, Lixiang; Peng, Haipeng; Yang, Yixian

    2017-01-01

    Finite time synchronization, which means synchronization can be achieved in a settling time, is desirable in some practical applications. However, most of the published results on finite time synchronization don't include delays or only include discrete delays. In view of the fact that distributed delays inevitably exist in neural networks, this paper aims to investigate the finite time synchronization of memristor-based Cohen-Grossberg neural networks (MCGNNs) with both discrete delay and distributed delay (mixed delays). By means of a simple feedback controller and novel finite time synchronization analysis methods, several new criteria are derived to ensure the finite time synchronization of MCGNNs with mixed delays. The obtained criteria are very concise and easy to verify. Numerical simulations are presented to demonstrate the effectiveness of our theoretical results.

  3. Exact short-time height distribution for the flat Kardar-Parisi-Zhang interface

    NASA Astrophysics Data System (ADS)

    Smith, Naftali R.; Meerson, Baruch

    2018-05-01

    We determine the exact short-time distribution -ln P_f(H, t) = S_f(H)/√t of the one-point height H = h(x = 0, t) of an evolving 1+1 Kardar-Parisi-Zhang (KPZ) interface for a flat initial condition. This is achieved by combining (i) the optimal fluctuation method, (ii) a time-reversal symmetry of the KPZ equation in 1+1 dimension, and (iii) the recently determined exact short-time height distribution -ln P_st(H, t) = S_st(H)/√t of the stationary interface. In studying the large-deviation function S_st(H) of the latter, one encounters two branches: an analytic one and a nonanalytic one. The analytic branch is nonphysical beyond a critical value of H where a second-order dynamical phase transition occurs. Here we show that, remarkably, it is the analytic branch of S_st(H) which determines the large-deviation function S_f(H) of the flat interface via the simple mapping S_f(H) = 2^{-3/2} S_st(2H).

  The use of social media and mobile device applications to disseminate natural hazard information by Natural Resources Canada

    NASA Astrophysics Data System (ADS)

    Bird, A. L.; Ulmi, M.; Majewski, C.; Hayek, K.; Edwards, W.; McCormack, D. A.; Cole, R. T.; de Paor, D. R.

    2011-12-01

    Public expectation of near-instant and reliable information is constantly rising. Such expectation puts increasing demands on organizations charged with providing the public with information on hazard events in near-real time, while ensuring the quality and accuracy of the content. Natural Resources Canada (NRCan) has responded by augmenting existing methods for earthquake information distribution with new and varied methods for relaying natural hazards information. We profile tools now employed operationally by NRCan to distribute earthquake information to emergency measures organizations, news media and the public. Also presented is an example of a smartphone application that includes several tools for natural hazard preparedness and response, supplemented with automated real-time alerts.

  4. Continuous preparation of carbon-nanotube-supported platinum catalysts in a flow reactor directly heated by electric current

    PubMed Central

    dos Santos, Antonio Rodolfo; Kunz, Ulrich; Turek, Thomas

    2011-01-01

    In this contribution we present for the first time a continuous process for the production of highly active Pt catalysts supported on carbon nanotubes, using an electrically heated tubular reactor. The synthesized catalysts show a high degree of dispersion and narrow distributions of cluster sizes. In comparison to catalysts synthesized by the conventional oil-bath method, a significantly higher electrocatalytic activity was reached, which can be attributed to the higher metal loading and the smaller and more uniformly distributed Pt particles on the carbon support. Our approach introduces a simple, time-saving and cost-efficient method for fuel cell catalyst preparation in a flow reactor which could be used at large scale. PMID:22043252

  5. Distributed Factorization Computation on Multiple Volunteered Mobile Resource to Break RSA Key

    NASA Astrophysics Data System (ADS)

    Jaya, I.; Hardi, S. M.; Tarigan, J. T.; Zamzami, E. M.; Sihombing, P.

    2017-01-01

    Like other asymmetric encryption schemes, RSA can be cracked by a series of mathematical calculations: the private key used to decrypt the message can be computed from the public key. However, finding the private key may require a massive amount of calculation. In this paper, we propose a method to perform distributed computing to calculate RSA's private key. The proposed method uses multiple volunteered mobile devices to contribute during the calculation process. Our objective is to demonstrate how the use of volunteer computing on mobile devices may be a feasible option to reduce the time required to break a weak RSA encryption, and to observe the behavior and running time of the application on mobile devices.
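
    As a toy illustration of the scheme, the sketch below splits the factor search space into ranges and hands each range to a worker; a local process pool stands in for the volunteered mobile devices. Trial division only breaks toy-sized moduli, so the point is the distribution pattern, not a practical attack.

        from concurrent.futures import ProcessPoolExecutor

        N = 100160063                                # toy modulus: 10007 * 10009

        def scan(bounds):
            lo, hi = bounds
            for d in range(lo | 1, hi, 2):           # odd candidates only
                if N % d == 0:
                    return d
            return None

        if __name__ == "__main__":
            step = 2500
            ranges = [(lo, lo + step) for lo in range(3, 10100, step)]
            with ProcessPoolExecutor() as pool:      # stand-in for mobile workers
                for factor in pool.map(scan, ranges):
                    if factor:
                        print(f"p = {factor}, q = {N // factor}")
                        break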

  6. Pressure Distribution in Nonuniform Two-Dimensional Flow

    NASA Technical Reports Server (NTRS)

    Schwabe, M.

    1943-01-01

    In an attempt to follow the time rate of change of the processes in turbulent flows by quantitative measurements, the measurement of the pressure is often beset with insuperable difficulties, because the speeds, and hence the pressures to be measured, are often very small. On the other hand, the measurement of very small pressures requires considerable time, so that following periodically varying processes is as good as impossible. In order to obviate these difficulties a method, suggested by Prof. Prandtl, has been developed by which the pressure distribution is determined simply from the photographic flow picture. This method is described and demonstrated on a worked-out example. It was found that quantitatively very satisfactory results can be achieved.

  7. The forecasting of menstruation based on a state-space modeling of basal body temperature time series.

    PubMed

    Fukaya, Keiichi; Kawamori, Ai; Osada, Yutaka; Kitazawa, Masumi; Ishiguro, Makio

    2017-09-20

    Women's basal body temperature (BBT) shows a periodic pattern associated with the menstrual cycle. Although this fact suggests that daily BBT time series can be useful for estimating the underlying phase state as well as for predicting the length of the current menstrual cycle, little attention has been paid to modeling BBT time series. In this study, we propose a state-space model that involves the menstrual phase as a latent state variable to explain the daily fluctuation of BBT and the menstrual cycle length. Conditional distributions of the phase are obtained by using sequential Bayesian filtering techniques. A predictive distribution of the next menstruation day can be derived from this conditional distribution and the model, leading to a novel statistical framework that provides a sequentially updated prediction of the upcoming menstruation day. We applied this framework to a real data set of women's BBT and menstruation days and compared the prediction accuracy of the proposed method with that of previous methods, showing that the proposed method generally provides better predictions. Because BBT can be obtained with relatively little cost and effort, the proposed method can be useful for women's health management. Potential extensions of this framework as the basis for modeling and predicting events associated with menstrual cycles are discussed. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
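
    A minimal grid-based filter in the spirit of the model described above: a latent phase in [0, 1) advances each day with noise, BBT is higher in the luteal half of the cycle, and the belief over phase is updated by Bayes' rule. Every parameter here is an assumption for illustration.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        n = 280
        grid = np.linspace(0.0, 1.0, n, endpoint=False)    # menstrual phase in [0, 1)

        def bbt_mean(phase):
            # Assumed biphasic profile: higher BBT in the luteal half of the cycle.
            return 36.3 + 0.4 * (phase > 0.5)

        def step(belief, bbt_obs, advance=1.0 / 28, phase_sd=0.02, obs_sd=0.15):
            # Predict: shift probability mass forward one day, with phase diffusion.
            pred = np.roll(belief, int(round(advance * n)))
            pred = gaussian_filter1d(pred, sigma=phase_sd * n, mode="wrap")
            # Update: weight by the Gaussian likelihood of today's temperature.
            post = pred * np.exp(-0.5 * ((bbt_obs - bbt_mean(grid)) / obs_sd) ** 2)
            return post / post.sum()

        belief = np.full(n, 1.0 / n)                       # flat prior over phase
        for temp in [36.2, 36.3, 36.25, 36.7, 36.75, 36.8]:
            belief = step(belief, temp)
        # Probability that the cycle wraps (menstruation) within the next 3 days:
        print(belief[grid > 1 - 3.0 / 28].sum())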

  8. Method for adding nodes to a quantum key distribution system

    DOEpatents

    Grice, Warren P

    2015-02-24

    An improved quantum key distribution (QKD) system and method are provided. The system and method introduce new clients at intermediate points along a quantum channel, where any two clients can establish a secret key without the need for a secret meeting between the clients. The new clients perform operations on photons as they pass through nodes in the quantum channel, and participate in a non-secret protocol that is amended to include the new clients. The system and method significantly increase the number of clients that can be supported by a conventional QKD system, with only a modest increase in cost. The system and method are compatible with a variety of QKD schemes, including polarization, time-bin, continuous variable and entanglement QKD.

  9. Methods for combining payload parameter variations with input environment. [calculating design limit loads compatible with probabilistic structural design criteria

    NASA Technical Reports Server (NTRS)

    Merchant, D. H.

    1976-01-01

    Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.

  10. Statistical characteristics of surrogate data based on geophysical measurements

    NASA Astrophysics Data System (ADS)

    Venema, V.; Bachner, S.; Rust, H. W.; Simmer, C.

    2006-09-01

    In this study, the statistical properties of a range of measurements are compared with those of their surrogate time series. Seven different records are studied, amongst others, historical time series of mean daily temperature, daily rain sums and runoff from two rivers, and cloud measurements. Seven different algorithms are used to generate the surrogate time series. The best-known method is the iterative amplitude adjusted Fourier transform (IAAFT) algorithm, which is able to reproduce the measured distribution as well as the power spectrum. Using this setup, the measurements and their surrogates are compared with respect to their power spectrum, increment distribution, structure functions, annual percentiles and return values. It is found that the surrogates that reproduce the power spectrum and the distribution of the measurements are able to closely match the increment distributions and the structure functions of the measurements, but this often does not hold for surrogates that only mimic the power spectrum of the measurement. However, even the best performing surrogates do not have asymmetric increment distributions, i.e., they cannot reproduce nonlinear dynamical processes that are asymmetric in time. Furthermore, we have found deviations of the structure functions on small scales.
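
    The IAAFT algorithm itself fits in a few lines. A minimal sketch, assuming the standard formulation: alternately restore the measured Fourier amplitudes and the measured value distribution until the surrogate settles; the test series and iteration count are illustrative.

        import numpy as np

        def iaaft(x, n_iter=200, seed=0):
            rng = np.random.default_rng(seed)
            amplitudes = np.abs(np.fft.rfft(x))     # target power spectrum
            sorted_x = np.sort(x)                   # target amplitude distribution
            y = rng.permutation(x)                  # random-shuffle start
            for _ in range(n_iter):
                spec = np.fft.rfft(y)
                spec = amplitudes * np.exp(1j*np.angle(spec))  # impose spectrum
                y = np.fft.irfft(spec, n=len(x))
                ranks = y.argsort().argsort()
                y = sorted_x[ranks]                 # impose distribution
            return y

        x = np.cumsum(np.random.default_rng(1).standard_normal(1024))  # toy record
        surrogate = iaaft(x)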

  11. Population density approach for discrete mRNA distributions in generalized switching models for stochastic gene expression.

    PubMed

    Stinchcombe, Adam R; Peskin, Charles S; Tranchina, Daniel

    2012-06-01

    We present a generalization of a population density approach for modeling and analysis of stochastic gene expression. In the model, the gene of interest fluctuates stochastically between an inactive state, in which transcription cannot occur, and an active state, in which discrete transcription events occur; and the individual mRNA molecules are degraded stochastically in an independent manner. This sort of model, in its simplest form with exponential dwell times, has been used to explain experimental estimates of the discrete distribution of random mRNA copy number. In our generalization, the random dwell times in the inactive and active states, T_{0} and T_{1}, respectively, are independent random variables drawn from any specified distributions. Consequently, the probability per unit time of switching out of a state depends on the time since entering that state. Our method exploits a connection between the fully discrete random process and a related continuous process. We present numerical methods for computing steady-state mRNA distributions and an analytical derivation of the mRNA autocovariance function. We find that empirical estimates of the steady-state mRNA probability mass function from Monte Carlo simulations of laboratory data do not allow one to distinguish between underlying models with exponential and nonexponential dwell times in some relevant parameter regimes. However, in these parameter regimes and where the autocovariance function has negative lobes, the autocovariance function disambiguates the two types of models. Our results strongly suggest that temporal data beyond the autocovariance function are required in general to characterize gene switching.
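
    A Monte Carlo sketch of the generalized switching model under assumed gamma dwell-time distributions: transcription is a Poisson process during active dwells and each transcript decays independently, so the copy number at a late time can be sampled exactly from birth times and exponential lifetimes. All parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        k, delta, T_end = 5.0, 0.1, 2000.0   # transcription rate, decay rate, horizon

        def mrna_counts(n_cells=5000):
            counts = np.empty(n_cells, dtype=int)
            for c in range(n_cells):
                t, births = 0.0, []
                while t < T_end:
                    t += rng.gamma(2.0, 5.0)        # inactive dwell T0 (assumed gamma)
                    t_on = min(t, T_end)
                    t += rng.gamma(2.0, 10.0)       # active dwell T1 (assumed gamma)
                    t_off = min(t, T_end)
                    n = rng.poisson(k*(t_off - t_on))       # transcription events
                    births.extend(rng.uniform(t_on, t_off, n))
                births = np.asarray(births)
                lifetimes = rng.exponential(1.0/delta, births.size)
                counts[c] = int(np.sum(births + lifetimes > T_end))  # survivors
            return counts

        counts = mrna_counts()
        pmf = np.bincount(counts)/counts.size    # empirical steady-state pmf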

  12. Joint distribution of temperature and precipitation in the Mediterranean, using the Copula method

    NASA Astrophysics Data System (ADS)

    Lazoglou, Georgia; Anagnostopoulou, Christina

    2018-03-01

    This study analyses the temperature and precipitation dependence among stations in the Mediterranean. The first station group is located in the eastern Mediterranean (EM) and includes two stations, Athens and Thessaloniki, while the western (WM) group includes Malaga and Barcelona. The data were organized into two time periods, a hot-dry period and a cold-wet one, each composed of 5 months. The analysis is based on a relatively new statistical technique in climatology: the Copula method. Firstly, the calculation of the Kendall tau correlation index showed that temperatures among stations are dependent during both time periods, whereas precipitation presents dependency only between the stations located in the EM or WM and only during the cold-wet period. Accordingly, the marginal distributions were calculated for each studied station, as they are further used by the copula method. Finally, several copula families, both Archimedean and Elliptical, were tested in order to choose the most appropriate one to model the relation of the studied data sets. Consequently, this study succeeds in modeling the dependence of the main climate parameters (temperature and precipitation) with the Copula method. The Frank copula was identified as the best family to describe the joint distribution of temperature for the majority of station groups. For precipitation, the best copula families are BB1 and Survival Gumbel. Using the probability distribution diagrams, the probability of a combination of temperature and precipitation values between stations is estimated.
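
    One concrete piece of the workflow, sketched under standard definitions: estimate Kendall's tau from paired station data and calibrate the Frank copula parameter by inverting the tau(theta) relation through the first Debye function (valid here for positive dependence). The toy data stand in for paired station series.

        import numpy as np
        from scipy.integrate import quad
        from scipy.optimize import brentq
        from scipy.stats import kendalltau

        def debye1(theta):
            return quad(lambda t: t/np.expm1(t), 0.0, theta)[0]/theta

        def frank_tau(theta):
            # Kendall's tau implied by a Frank copula with parameter theta > 0
            return 1.0 - 4.0/theta*(1.0 - debye1(theta))

        def fit_frank(x, y):
            tau, _ = kendalltau(x, y)
            return brentq(lambda th: frank_tau(th) - tau, 1e-6, 100.0)

        rng = np.random.default_rng(0)
        x = rng.normal(size=300)
        y = 0.6*x + 0.8*rng.normal(size=300)    # toy positively dependent pair
        print(fit_frank(x, y))                  # calibrated Frank parameter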

  13. Chapter two: Phenomenology of tsunamis II: scaling, event statistics, and inter-event triggering

    USGS Publications Warehouse

    Geist, Eric L.

    2012-01-01

    Observations related to tsunami catalogs are reviewed and described in a phenomenological framework. An examination of scaling relationships between earthquake size (as expressed by scalar seismic moment and mean slip) and tsunami size (as expressed by mean and maximum local run-up and maximum far-field amplitude) indicates that scaling is significant at the 95% confidence level, although there is uncertainty in how well earthquake size can predict tsunami size (R² ≈ 0.4-0.6). In examining tsunami event statistics, current methods used to estimate the size distribution of earthquakes and landslides and the inter-event time distribution of earthquakes are first reviewed. These methods are adapted to estimate the size and inter-event distribution of tsunamis at a particular recording station. Using a modified Pareto size distribution, the best-fit power-law exponents of tsunamis recorded at nine Pacific tide-gauge stations exhibit marked variation, in contrast to the approximately constant power-law exponent for inter-plate thrust earthquakes. With regard to the inter-event time distribution, significant temporal clustering of tsunami sources is demonstrated. For tsunami sources occurring in close proximity to other sources in both space and time, a physical triggering mechanism, such as static stress transfer, is a likely cause for the anomalous clustering. Mechanisms of earthquake-to-earthquake and earthquake-to-landslide triggering are reviewed. Finally, a modification of statistical branching models developed for earthquake triggering is introduced to describe triggering among tsunami sources.
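
    For the size-distribution step, a standard continuous power-law maximum-likelihood estimator (a Hill-type estimator) can serve as a stand-in sketch; the chapter's modified Pareto form adds a taper that is not reproduced here, and the synthetic amplitudes are illustrative.

        import numpy as np

        def powerlaw_mle(sizes, x_min):
            # MLE for a continuous power law p(x) ~ x^(-alpha), x >= x_min
            x = np.asarray(sizes, dtype=float)
            x = x[x >= x_min]
            alpha = 1.0 + x.size/np.sum(np.log(x/x_min))
            stderr = (alpha - 1.0)/np.sqrt(x.size)
            return alpha, stderr

        rng = np.random.default_rng(0)
        sizes = 0.05*(1.0 + rng.pareto(0.8, 5000))   # true exponent alpha = 1.8
        print(powerlaw_mle(sizes, 0.05))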

  14. The simulation of a two-dimensional (2D) transport problem in a rectangular region with Lattice Boltzmann method with two-relaxation-time

    NASA Astrophysics Data System (ADS)

    Sugiyanto, S.; Hardyanto, W.; Marwoto, P.

    2018-03-01

    Transport phenomena are found in many problems in engineering and industrial sectors. We analyzed a Lattice Boltzmann method with Two-Relaxation-Time (LTRT) collision operators for simulating pollutant movement through a medium as a two-dimensional (2D) transport problem in a rectangular region model. This model consists of a 2D rectangular region of length (x) 54 and width (y) 27 with an isotropic homogeneous medium. Initially, the concentration is zero throughout the region of interest. A concentration of 1 is maintained at 9 < y < 18, whereas a concentration of zero is maintained at 0 < y < 9 and 18 < y < 27. A specific discharge (Darcy velocity) of 1.006 is assumed. A diffusion coefficient of 0.8333 is distributed uniformly with a uniform porosity of 0.35. A computer program is written in MATLAB to compute the concentration of pollutant at any specified place and time. The program shows that the LTRT solution with quadratic equilibrium distribution functions (EDFs) and relaxation time τa=1.0 is in good agreement with other numerical solution methods such as 3DLEWASTE (Hybrid Three-dimensional Lagrangian-Eulerian Finite Element Model of Waste Transport Through Saturated-Unsaturated Media) obtained by Yeh and 3DFEMWATER-LHS (Three-dimensional Finite Element Model of Water Flow Through Saturated-Unsaturated Media with Latin Hypercube Sampling) obtained by Hardyanto.
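
    A compact sketch of a D2Q5 two-relaxation-time collision-and-streaming loop for 2-D advection-diffusion is given below. It uses a periodic toy domain with a point release rather than the paper's fixed-concentration boundaries, linear rather than quadratic equilibria, and illustrative parameter values; the antisymmetric relaxation time carries the diffusivity and the symmetric one is set through the "magic" combination Lambda = 1/4.

        import numpy as np

        nx, ny, cs2 = 54, 27, 1.0/3.0
        w = np.array([1/3, 1/6, 1/6, 1/6, 1/6])          # D2Q5 weights
        cx = np.array([0, 1, -1, 0, 0])
        cy = np.array([0, 0, 0, 1, -1])
        opp = np.array([0, 2, 1, 4, 3])                  # opposite directions
        D, ux, uy = 0.05, 0.1, 0.0                       # diffusivity, velocity
        tau_a = D/cs2 + 0.5                              # antisymmetric relaxation
        om_a = 1.0/tau_a
        om_s = 1.0/(0.25/(tau_a - 0.5) + 0.5)            # from Lambda = 1/4

        C = np.zeros((nx, ny)); C[nx//4, ny//2] = 1.0    # point release
        f = w[:, None, None]*C[None, :, :]

        def feq(C):
            cu = (cx[:, None, None]*ux + cy[:, None, None]*uy)/cs2
            return w[:, None, None]*C[None, :, :]*(1.0 + cu)

        for step in range(500):
            C = f.sum(axis=0)
            fe = feq(C)
            fp, fep = 0.5*(f + f[opp]), 0.5*(fe + fe[opp])   # symmetric parts
            fm, fem = 0.5*(f - f[opp]), 0.5*(fe - fe[opp])   # antisymmetric parts
            f = f - om_s*(fp - fep) - om_a*(fm - fem)        # TRT collision
            for i in range(5):                               # periodic streaming
                f[i] = np.roll(np.roll(f[i], cx[i], axis=0), cy[i], axis=1)

        print(f.sum(axis=0).sum())   # total mass is conserved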

  15. Wavelet-based functional linear mixed models: an application to measurement error-corrected distributed lag models.

    PubMed

    Malloy, Elizabeth J; Morris, Jeffrey S; Adar, Sara D; Suh, Helen; Gold, Diane R; Coull, Brent A

    2010-07-01

    Frequently, exposure data are measured over time on a grid of discrete values that collectively define a functional observation. In many applications, researchers are interested in using these measurements as covariates to predict a scalar response in a regression setting, with interest focusing on the most biologically relevant time window of exposure. One example is in panel studies of the health effects of particulate matter (PM), where particle levels are measured over time. In such studies, there are many more values of the functional data than observations in the data set so that regularization of the corresponding functional regression coefficient is necessary for estimation. Additional issues in this setting are the possibility of exposure measurement error and the need to incorporate additional potential confounders, such as meteorological or co-pollutant measures, that themselves may have effects that vary over time. To accommodate all these features, we develop wavelet-based linear mixed distributed lag models that incorporate repeated measures of functional data as covariates into a linear mixed model. A Bayesian approach to model fitting uses wavelet shrinkage to regularize functional coefficients. We show that, as long as the exposure error induces fine-scale variability in the functional exposure profile and the distributed lag function representing the exposure effect varies smoothly in time, the model corrects for the exposure measurement error without further adjustment. Both these conditions are likely to hold in the environmental applications we consider. We examine properties of the method using simulations and apply the method to data from a study examining the association between PM, measured as hourly averages for 1-7 days, and markers of acute systemic inflammation. We use the method to fully control for the effects of confounding by other time-varying predictors, such as temperature and co-pollutants.
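
    As a rough sketch of the regression idea (not the authors' Bayesian mixed model), each exposure curve can be expressed in a wavelet coefficient basis and a penalized linear model fitted there, so that shrinkage of fine-scale coefficients regularizes the estimated distributed lag function. The ridge penalty below stands in for wavelet shrinkage, PyWavelets is assumed available, and all data are synthetic.

        import numpy as np
        import pywt   # PyWavelets, assumed available

        rng = np.random.default_rng(0)
        n, T = 200, 128                                   # subjects, exposure grid
        X = rng.standard_normal((n, T)).cumsum(axis=1)    # toy exposure profiles
        true_lag = np.exp(-np.arange(T)/10.0)             # smooth lag function
        y = X @ true_lag/T + 0.1*rng.standard_normal(n)   # scalar responses

        def to_coeffs(row):
            # flatten the discrete wavelet transform of one exposure curve
            return pywt.coeffs_to_array(pywt.wavedec(row, "db4", level=4))[0]

        W = np.apply_along_axis(to_coeffs, 1, X)          # design in wavelet space
        lam = 5.0                                         # penalty ~ shrinkage
        beta = np.linalg.solve(W.T @ W + lam*np.eye(W.shape[1]), W.T @ y)

        # map coefficient estimates back to a lag function on the original grid
        _, slices = pywt.coeffs_to_array(pywt.wavedec(X[0], "db4", level=4))
        lag_hat = pywt.waverec(pywt.array_to_coeffs(beta, slices, "wavedec"), "db4")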

  16. Googling Service Boundaries for Endovascular Clot Retrieval Hub Hospitals in a Metropolitan Setting: Proof-of-Concept Study.

    PubMed

    Phan, Thanh G; Beare, Richard; Chen, Jian; Clissold, Benjamin; Ly, John; Singhal, Shaloo; Ma, Henry; Srikanth, Velandai

    2017-05-01

    There is great interest in how endovascular clot retrieval hubs provide services to a population. We applied a computational method to objectively generate service boundaries for such endovascular clot retrieval hubs, defined by traveling time to the hub. Stroke incidence data were merged with population census data to estimate the number of strokes in metropolitan Melbourne, Australia. Traveling times from randomly generated addresses to 4 endovascular clot retrieval-capable hubs (Royal Melbourne Hospital [RMH], Monash Medical Center [MMC], Alfred Hospital [ALF], and Austin Hospital [AUS]) were estimated using the Google Maps application program interface. Boundary maps were generated based on traveling time at various times of day for combinations of hubs. In a 2-hub model, catchment was best distributed when RMH was paired with MMC (model 1a, RMH 1765 km² and MMC 1164 km²) or with AUS (model 1c, RMH 1244 km² and AUS 1685 km²), with no statistical difference between models (P=0.20). Catchment was poorly distributed when RMH was paired with ALF (model 1b, RMH 2252 km² and ALF 676 km²), significantly different from both models 1a and 1c (both P<0.05). Model 1a had the greatest proportion of patients arriving within the ideal time of 30 minutes, followed by model 1c (P<0.001). In a 3-hub model, the combination of RMH, MMC, and AUS was superior to that of RMH, MMC, and ALF in catchment distribution and travel time. The method was also successfully applied to the city of Adelaide, demonstrating wider applicability. We provide proof of concept for a novel computational method to objectively designate service boundaries for endovascular clot retrieval hubs. © 2017 American Heart Association, Inc.
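
    The assignment logic reduces to a few lines once a routing backend is available. In the sketch below, travel_time is a hypothetical stand-in for the Google Maps API calls used in the study, the hub coordinates and addresses are synthetic, and only the 30-minute threshold follows the abstract.

        import numpy as np

        rng = np.random.default_rng(0)
        hubs = {"RMH": (0.0, 0.0), "MMC": (18.0, -12.0), "AUS": (8.0, 10.0)}
        addresses = rng.uniform(-25, 25, size=(5000, 2))   # synthetic metro area

        def travel_time(a, b):
            # hypothetical routing call: straight-line distance at 40 km/h, in min
            return np.hypot(*(np.asarray(a) - np.asarray(b)))/40.0*60.0

        names = list(hubs)
        tt = np.array([[travel_time(addr, hubs[h]) for h in names]
                       for addr in addresses])
        nearest = tt.argmin(axis=1)                        # quickest hub per address
        for j, h in enumerate(names):
            share = np.mean(nearest == j)
            within30 = np.mean(tt[nearest == j, j] <= 30.0)
            print(f"{h}: catchment share {share:.2f}, within 30 min {within30:.2f}")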

  17. [Rank distributions in community ecology from the statistical viewpoint].

    PubMed

    Maksimov, V N

    2004-01-01

    Traditional statistical methods for defining empirical distribution functions of species abundance (population, biomass, production, etc.) in a community are applicable to processing the multivariate data contained in these quantitative community indices. In particular, evaluation of the moments of the distribution suffices to summarize the data contained in a list of species and their abundances. At the same time, the species should be ranked in the list in ascending rather than descending order of population, and the distribution models should be analyzed on the basis of data on abundant species only.

  18. Planning of distributed generation in distribution network based on improved particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Li, Jinze; Qu, Zhi; He, Xiaoyang; Jin, Xiaoming; Li, Tie; Wang, Mingkai; Han, Qiu; Gao, Ziji; Jiang, Feng

    2018-02-01

    Large-scale integration of distributed generation can relieve current environmental pressure while increasing the complexity and uncertainty of the overall distribution system. Rational planning of distributed generation can effectively improve the system voltage level. To this end, the specific impact on distribution network power quality caused by the integration of typical distributed generation was analyzed and, by improving the learning factors and the inertia weight, an improved particle swarm optimization algorithm (IPSO) was proposed that solves distributed generation planning for the distribution network while improving the local and global search performance of the algorithm. Results show that the proposed method can substantially reduce system network loss and improve the economic performance of system operation with distributed generation.

  19. Expected Utility Distributions for Flexible, Contingent Execution

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Washington, Richard

    2000-01-01

    This paper presents a method for using expected utility distributions in the execution of flexible, contingent plans. A utility distribution maps the possible start times of an action to the expected utility of the plan suffix starting with that action. The contingent plan encodes a tree of possible courses of action and includes flexible temporal constraints and resource constraints. When execution reaches a branch point, the eligible option with the highest expected utility at that point in time is selected. The utility distributions make this selection sensitive to the runtime context, yet still efficient. Our approach uses predictions of action duration uncertainty as well as expectations of resource usage and availability to determine when an action can execute and with what probability. Execution windows and probabilities inevitably change as execution proceeds, but such changes do not invalidate the cached utility distributions, thus, dynamic updating of utility information is minimized.

  1. Modeling stream fish distributions using interval-censored detection times.

    PubMed

    Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro

    2016-08-01

    Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
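
    The interval-censored likelihood at the heart of the detection model is simple to write down: a detection in the interval (a, b] contributes exp(-lambda*a) - exp(-lambda*b), and a survey of length T with no detection contributes exp(-lambda*T). A minimal sketch with the occupancy hierarchy and covariates omitted, on synthetic detection times:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def neg_loglik(lam, intervals, censored_T):
            a, b = intervals[:, 0], intervals[:, 1]
            ll = np.sum(np.log(np.exp(-lam*a) - np.exp(-lam*b)))  # detections
            ll += -lam*np.sum(censored_T)                         # non-detections
            return -ll

        rng = np.random.default_rng(0)
        t = rng.exponential(1/0.8, 200)          # true detection rate 0.8
        grid = np.arange(0, 5.5, 0.5)            # recording intervals, survey T = 5
        det = t[t < 5.0]
        intervals = np.column_stack([grid[np.digitize(det, grid) - 1],
                                     grid[np.digitize(det, grid)]])
        censored_T = np.full(np.sum(t >= 5.0), 5.0)
        res = minimize_scalar(neg_loglik, bounds=(1e-4, 10.0),
                              args=(intervals, censored_T), method="bounded")
        print(res.x)   # MLE of the detection rate despite interval-censoring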

  2. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.

  3. Reaction Event Counting Statistics of Biopolymer Reaction Systems with Dynamic Heterogeneity.

    PubMed

    Lim, Yu Rim; Park, Seong Jun; Park, Bo Jung; Cao, Jianshu; Silbey, Robert J; Sung, Jaeyoung

    2012-04-10

    We investigate the reaction event counting statistics (RECS) of an elementary biopolymer reaction in which the rate coefficient is dependent on states of the biopolymer and the surrounding environment, and discover a universal kinetic phase transition in the RECS of the reaction system with dynamic heterogeneity. From an exact analysis for a general model of elementary biopolymer reactions, we find that the variance in the number of reaction events is dependent on the square of the mean number of reaction events when the measurement time is short relative to the relaxation time scale of rate coefficient fluctuations, which does not conform to renewal statistics. On the other hand, when the measurement time interval is much longer than the relaxation time of rate coefficient fluctuations, the variance becomes linearly proportional to the mean reaction number in accordance with renewal statistics. Gillespie's stochastic simulation method is generalized for the reaction system with a rate coefficient fluctuation. The simulation results confirm the correctness of the analytic results for the time dependent mean and variance of the reaction event number distribution. On the basis of the obtained results, we propose a method of quantitative analysis for the reaction event counting statistics of reaction systems with rate coefficient fluctuations, which enables one to extract information about the magnitude and the relaxation times of the fluctuating reaction rate coefficient, without a bias that can be introduced by assuming a particular kinetic model of conformational dynamics and the conformation dependent reactivity. An exact relationship is established between a higher moment of the reaction event number distribution and the multitime correlation of the reaction rate for the reaction system with a nonequilibrium initial state distribution as well as for the system with the equilibrium initial state distribution.
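
    The crossover in the counting statistics can be reproduced with a toy doubly stochastic model: a rate coefficient that switches between two values with exponential dwell times (an illustrative choice, not the paper's general model), with events counted over windows shorter and longer than the relaxation time.

        import numpy as np

        rng = np.random.default_rng(0)
        k_lo, k_hi, tau_relax, T = 0.2, 2.0, 50.0, 2.0e5

        # telegraph trajectory of the fluctuating rate coefficient
        dwells, rates = [], []
        t, s = 0.0, 0
        while t < T:
            d = rng.exponential(tau_relax)
            dwells.append(min(d, T - t))
            rates.append(k_lo if s == 0 else k_hi)
            t += d
            s ^= 1

        # within each dwell, events are Poisson; place them uniformly in time
        events, t0 = [], 0.0
        for d, k in zip(dwells, rates):
            events.append(rng.uniform(t0, t0 + d, rng.poisson(k*d)))
            t0 += d
        events = np.sort(np.concatenate(events))

        for window in (1.0, 10.0, 100.0, 1000.0):   # vs. tau_relax = 50
            counts = np.histogram(events, bins=np.arange(0.0, T, window))[0]
            print(window, counts.mean(), counts.var())   # var ~ mean^2 -> var ~ mean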

  4. Phenytoin crystal growth rates in the presence of phosphate and chloride ions

    NASA Astrophysics Data System (ADS)

    Zipp, G. L.; Rodríguez-Hornedo, N.

    1992-09-01

    Phenytoin crystal growth kinetics have been measured as a function of supersaturation in pH 2.2 phosphoric acid and pH 2.2 hydrochloric acid solutions. Two different methods were used for the kinetic analysis. The first involved a zone-sensing device which provided an analysis of the distribution of crystals in a batch crystallizer. Crystal growth rates were calculated from the increase in the size of the distribution with time. In the second method, growth rates were evaluated from the change in size with time of individual crystals observed under an inverted microscope. The results from each method compare favorably. The use of both techniques provides an excellent opportunity to exploit the strengths of each: an average growth rate from a population of crystals from batch crystallization and insight into the effect of growth on the morphology of the crystals from the individual crystal measurements.

  5. Image based method for aberration measurement of lithographic tools

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa

    2018-01-01

    Information on the lens aberration of lithographic tools is important as it directly affects the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Due to their lower cost and easier implementation, image based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time consuming calculations and does not lend itself to a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools which only requires measuring two images of the intensity distribution. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.

  6. On the spray pulsations of the effervescent atomizers

    NASA Astrophysics Data System (ADS)

    Mlkvik, Marek; Knizat, Branislav

    2018-06-01

    The presented paper focuses on the comparison of two effervescent atomizer configurations: the outside-in-gas (OIG) and the outside-in-liquid (OIL). The comparison was based on assessing spray pulsation by different methods. The atomizers were tested under the same operating conditions, given by a constant injection pressure (0.14 MPa) and a gas-to-liquid mass ratio (GLR) varying from 2.5 to 5%. An aqueous maltodextrin solution was used as the working liquid (μ = 60 and 146 mPa·s). We found that the time-averaging method does not provide a sufficient description of spray quality. Based on the cumulative distribution function (CDF), we found that the OIG atomizer generated a spray with non-uniform droplet size distribution at all investigated GLRs. Exceptionally large droplets were present even in sprays that appeared stable when analyzed by the time-averaging method.

  7. One shot methods for optimal control of distributed parameter systems 1: Finite dimensional control

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1991-01-01

    The efficient numerical treatment of optimal control problems governed by elliptic partial differential equations (PDEs) and systems of elliptic PDEs, where the control is finite dimensional, is discussed. Both distributed control and boundary control cases are discussed. The main characteristic of the new methods is that they are designed to solve the full optimization problem directly, rather than accelerating a descent method by an efficient multigrid solver for the equations involved. The methods use the adjoint state in order to achieve an efficient smoother and a robust coarsening strategy. The main idea is the treatment of the control variables on appropriate scales, i.e., control variables that correspond to smooth functions are solved for on coarse grids, depending on the smoothness of these functions. Solution of the control problems is achieved at the cost of solving the constraint equations about two to three times (by a multigrid solver). Numerical examples demonstrate the effectiveness of the proposed method in distributed control, pointwise control and boundary control problems.

  8. Strategies for distributing cancer screening decision aids in primary care.

    PubMed

    Brackett, Charles; Kearing, Stephen; Cochran, Nan; Tosteson, Anna N A; Blair Brooks, W

    2010-02-01

    Decision aids (DAs) have been shown to facilitate shared decision making about cancer screening. However, little data exist on optimal strategies for dissemination. Our objective was to compare different decision aid distribution models. Eligible patients received video decision aids for prostate cancer (PSA) or colon cancer screening (CRC) through 4 distribution methods. Outcome measures included DA loans (N), % of eligible patients receiving DA, and patient and provider satisfaction. Automatically mailing DAs to all age/gender appropriate patients led to near universal receipt by screening-eligible patients, but also led to ineligible patients receiving DAs. Three different elective (non-automatic) strategies led to low rates of receipt. Clinician satisfaction was higher when patients viewed the DA before the visit, and this model facilitated implementation of the screening choice. Regardless of timing or distribution method, patient satisfaction was high. An automatic DA distribution method is more effective than relying on individual initiative. Enabling patients to view the DA before the visit is preferred. Systematically offering DAs to all eligible patients before their appointments is the ideal strategy, but may be challenging to implement. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  9. Distribution of Genetic Marker Concentrations for Fecal Indicator Bacteria in Sewage and Animal Feces

    EPA Science Inventory

    The application of quantitative real-time PCR (qPCR) methods for the identification of fecal microorganisms in surface waters has the potential to revolutionize water quality monitoring worldwide. Unlike traditional cultivation methods, qPCR estimates the concentration of gen...

  10. Real Time Distributed Embedded Oscillator Operating Frequency Monitoring

    NASA Technical Reports Server (NTRS)

    Pollock, Julie (Inventor); Oliver, Brett D. (Inventor); Brickner, Christopher (Inventor)

    2013-01-01

    A method for clock monitoring in a network is provided. The method comprises receiving a first network clock signal at a network device and comparing the first network clock signal to a local clock signal from a primary oscillator coupled to the network device.

  11. Real-time evaluation of two light delivery systems for photodynamic disinfection of Candida albicans biofilm in curved root canals.

    PubMed

    Sabino, C P; Garcez, A S; Núñez, S C; Ribeiro, M S; Hamblin, M R

    2015-08-01

    Antimicrobial photodynamic therapy (APDT) combined with endodontic treatment has been recognized as an alternative approach to complement conventional root canal disinfection methods on bacterial biofilms. We developed an in vitro model of bioluminescent Candida albicans biofilm inside curved dental root canals and investigated the microbial reduction produced when different light delivery methods are employed. Each light delivery method was evaluated with respect to the light distribution provided inside curved root canals. After conventional endodontic preparation, teeth were sterilized before canals were contaminated by a bioluminescent strain of C. albicans (CEC789). Methylene blue (90 μM) was introduced into the canals and then irradiated (λ = 660 nm, P = 100 mW, beam diameter = 2 mm) with the laser tip either in contact with the pulp chamber or within the canal using an optical diffuser fiber. Light distribution was evaluated by CCD camera, and microbial reduction was monitored through bioluminescence imaging. Our findings demonstrated that the bioluminescent C. albicans biofilm model had good reproducibility and uniformity. Light distribution in dental tissue was markedly dependent on the light delivery system, and this strategy was directly related to microbial destruction. Both light delivery systems produced significant fungal inactivation. However, when irradiation was performed with the optical diffuser fiber, microbial burden reduction was nearly 100 times more effective. Bioluminescence imaging is a useful tool for real-time analysis of endodontic C. albicans biofilm inactivation. APDT was shown to be an effective way to inactivate C. albicans biofilms. Diffuser fibers provided optimized light distribution inside curved root canals and significantly increased APDT efficiency.

  12. The Rafita asteroid family

    NASA Astrophysics Data System (ADS)

    Aljbaae, S.; Carruba, V.; Masiero, J. R.; Domingos, R. C.; Huaman, M.

    2017-05-01

    The Rafita asteroid family is an S-type group located in the middle main belt, on the right-hand side of the 3J:-1A mean-motion resonance. The proximity of this resonance to the family left-hand side in the semimajor axis caused many former family members to be lost. As a consequence, the family shape in the (a, 1/D) domain is quite asymmetrical, with a preponderance of objects on the right-hand side of the distribution. The Rafita family is also characterized by a leptokurtic distribution in inclination, which allows the use of methods of family age estimation recently introduced for other leptokurtic families such as Astrid, Hansa, Gallia and Barcelona. In this work, we propose a new method based on the behaviour of an asymmetry coefficient function of the distribution in the (a, 1/D) plane to date incomplete asteroid families such as Rafita. By monitoring the time behaviour of this coefficient for asteroids simulating the initial conditions at the time of the family formation, we were able to estimate that the Rafita family should have an age of 490 ± 200 Myr, in good agreement with results from independent methods such as Monte Carlo simulations of Yarkovsky and YORP dynamical induced evolution and the time behaviour of the kurtosis of the sin (I) distribution. Asteroids from the Rafita family can reach orbits similar to 8 per cent of the currently known near-Earth objects. During the final 10 Myr of the simulation, ≃1 per cent of the simulated objects are present in NEO space, and thus would be comparable to objects in the present-day NEO population.

  13. Real-time evaluation of two light delivery systems for photodynamic disinfection of Candida albicans biofilm in curved root canals

    PubMed Central

    Sabino, C. P.; Garcez, A. S.; Núñez, S. C.; Ribeiro, M. S.; Hamblin, M. R.

    2014-01-01

    Antimicrobial photodynamic therapy (APDT) combined with endodontic treatment has been recognized as an alternative approach to complement conventional root canal disinfection methods on bacterial biofilms. We developed an in vitro model of bioluminescent Candida albicans biofilm inside curved dental root canals and investigated the microbial reduction produced when different light delivery methods are employed. Each light delivery method was evaluated with respect to the light distribution provided inside curved root canals. After conventional endodontic preparation, teeth were sterilized before canals were contaminated by a bioluminescent strain of C. albicans (CEC789). Methylene blue (90 µM) was introduced into the canals and then irradiated (λ=660 nm, P=100 mW, beam diameter=2 mm) with the laser tip either in contact with the pulp chamber or within the canal using an optical diffuser fiber. Light distribution was evaluated by CCD camera, and microbial reduction was monitored through bioluminescence imaging. Our findings demonstrated that the bioluminescent C. albicans biofilm model had good reproducibility and uniformity. Light distribution in dental tissue was markedly dependent on the light delivery system, and this strategy was directly related to microbial destruction. Both light delivery systems produced significant fungal inactivation. However, when irradiation was performed with the optical diffuser fiber, microbial burden reduction was nearly 100 times more effective. Bioluminescence imaging is a useful tool for real-time analysis of endodontic C. albicans biofilm inactivation. APDT was shown to be an effective way to inactivate C. albicans biofilms. Diffuser fibers provided optimized light distribution inside curved root canals and significantly increased APDT efficiency. PMID:25060900

  14. Parallel Logic Programming Architecture

    DTIC Science & Technology

    1990-04-01

    Section 3.1. 3.1. A STATIC ALLOCATION SCHEME (SAS) Methods that have been used for decomposing distributed problems in artificial intelligence...multiple agents, knowledge organization and allocation, and cooperative parallel execution. These difficulties are common to distributed artificial ...for the following reasons. First, intelligent backtracking requires much more bookkeeping and is therefore more costly during consult-time and during

  15. Spatial regression methods capture prediction uncertainty in species distribution model projections through time

    Treesearch

    Alan K. Swanson; Solomon Z. Dobrowski; Andrew O. Finley; James H. Thorne; Michael K. Schwartz

    2013-01-01

    The uncertainty associated with species distribution model (SDM) projections is poorly characterized, despite its potential value to decision makers. Error estimates from most modelling techniques have been shown to be biased due to their failure to account for spatial autocorrelation (SAC) of residual error. Generalized linear mixed models (GLMM) have the ability to...

  16. A novel method for the investigation of liquid/liquid distribution coefficients and interface permeabilities applied to the water-octanol-drug system.

    PubMed

    Stein, Paul C; di Cagno, Massimiliano; Bauer-Brandl, Annette

    2011-09-01

    In this work, a new, accurate and convenient technique for the measurement of distribution coefficients and membrane permeabilities based on nuclear magnetic resonance (NMR) is described. This method is a novel implementation of localized NMR spectroscopy and enables the simultaneous analysis of the drug content in the octanol and in the water phase without separation. For validation of the method, the distribution coefficients at pH = 7.4 of four active pharmaceutical ingredients (APIs), namely ibuprofen, ketoprofen, nadolol, and paracetamol (acetaminophen), were determined using a classical approach. These results were compared to the NMR experiments described in this work. For all substances, the distribution coefficients found with the two techniques coincided very well. Furthermore, the NMR experiments make it possible to follow the distribution of the drug between the phases as a function of position and time. Our results show that the technique, which is available on any modern NMR spectrometer, is well suited to the measurement of distribution coefficients. The experiments also provide new insight into the dynamics of the water-octanol interface itself and permit measurement of the interface permeability.

  17. Estimating rainfall time series and model parameter distributions using model data reduction and inversion techniques

    NASA Astrophysics Data System (ADS)

    Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.

    2017-08-01

    Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated within an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
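
    The dimensionality-reduction step can be sketched directly with PyWavelets (assumed available): transform the rainfall series, keep only the largest coefficients, and invert, so the inversion has far fewer unknowns than daily rainfall values. The toy series and the 5% retention level are illustrative choices.

        import numpy as np
        import pywt   # PyWavelets, assumed available

        rng = np.random.default_rng(0)
        rain = np.maximum(0.0, rng.gamma(0.3, 8.0, 1024) - 2.0)   # toy daily rainfall

        coeffs = pywt.wavedec(rain, "db4", level=5)
        flat, slices = pywt.coeffs_to_array(coeffs)
        keep = np.abs(flat) >= np.quantile(np.abs(flat), 0.95)    # top 5% retained
        flat_reduced = np.where(keep, flat, 0.0)
        recon = pywt.waverec(pywt.array_to_coeffs(flat_reduced, slices, "wavedec"),
                             "db4")
        frac = 1.0 - np.var(rain - recon[:rain.size])/np.var(rain)
        print(keep.sum(), "coefficients retained; variance captured:", round(frac, 3))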

  18. Estimating probable flaw distributions in PWR steam generator tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorman, J.A.; Turner, A.P.L.

    1997-02-01

    This paper describes methods for estimating the number and size distributions of flaws of various types in PWR steam generator tubes. These estimates are needed when calculating the probable primary to secondary leakage through steam generator tubes under postulated accidents such as severe core accidents and steam line breaks. The paper describes methods for two types of predictions: (1) the numbers of tubes with detectable flaws of various types as a function of time, and (2) the distributions in size of these flaws. Results are provided for hypothetical severely affected, moderately affected and lightly affected units. Discussion is provided regarding uncertainties and assumptions in the data and analyses.

  19. Geometric MCMC for infinite-dimensional inverse problems

    NASA Astrophysics Data System (ADS)

    Beskos, Alexandros; Girolami, Mark; Lan, Shiwei; Farrell, Patrick E.; Stuart, Andrew M.

    2017-04-01

    Bayesian inverse problems often involve sampling posterior distributions on infinite-dimensional function spaces. Traditional Markov chain Monte Carlo (MCMC) algorithms are characterized by deteriorating mixing times upon mesh-refinement, when the finite-dimensional approximations become more accurate. Such methods are typically forced to reduce step-sizes as the discretization gets finer, and thus are expensive as a function of dimension. Recently, a new class of MCMC methods with mesh-independent convergence times has emerged. However, few of them take into account the geometry of the posterior informed by the data. At the same time, recently developed geometric MCMC algorithms have been found to be powerful in exploring complicated distributions that deviate significantly from elliptic Gaussian laws, but are in general computationally intractable for models defined in infinite dimensions. In this work, we combine geometric methods on a finite-dimensional subspace with mesh-independent infinite-dimensional approaches. Our objective is to speed up MCMC mixing times, without significantly increasing the computational cost per step (for instance, in comparison with the vanilla preconditioned Crank-Nicolson (pCN) method). This is achieved by using ideas from geometric MCMC to probe the complex structure of an intrinsic finite-dimensional subspace where most data information concentrates, while retaining robust mixing times as the dimension grows by using pCN-like methods in the complementary subspace. The resulting algorithms are demonstrated in the context of three challenging inverse problems arising in subsurface flow, heat conduction and incompressible flow control. The algorithms exhibit up to two orders of magnitude improvement in sampling efficiency when compared with the pCN method.
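
    For reference, the vanilla pCN baseline mentioned above is short to state: the proposal is prior-reversible, so the accept/reject step involves only the negative log-likelihood Phi, and the acceptance rate does not collapse as the discretization dimension grows. A minimal sketch with a toy Gaussian prior and misfit; the geometric enhancements of the paper are not reproduced here.

        import numpy as np

        def pcn(phi, sample_prior, u0, beta=0.2, n_steps=10000, seed=0):
            rng = np.random.default_rng(seed)
            u, phi_u, chain = u0, phi(u0), []
            for _ in range(n_steps):
                # prior-reversible proposal: sqrt(1-beta^2) u + beta * (prior draw)
                v = np.sqrt(1.0 - beta**2)*u + beta*sample_prior(rng)
                phi_v = phi(v)
                if np.log(rng.random()) < phi_u - phi_v:   # accept w.p. min(1, e^(Phi(u)-Phi(v)))
                    u, phi_u = v, phi_v
                chain.append(u.copy())
            return np.array(chain)

        # toy setup: N(0, I) prior in 100 dimensions, one noisy linear observation
        d, y = 100, 1.3
        phi = lambda u: 0.5*(y - u.mean())**2/0.01        # misfit functional Phi
        chain = pcn(phi, lambda rng: rng.standard_normal(d), np.zeros(d))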

  20. The instantaneous frequency rate spectrogram

    NASA Astrophysics Data System (ADS)

    Czarnecki, Krzysztof

    2016-01-01

    An accelerogram of the instantaneous phase of signal components referred to as an instantaneous frequency rate spectrogram (IFRS) is presented as a joint time-frequency distribution. The distribution is directly obtained by processing the short-time Fourier transform (STFT) locally. A novel approach to amplitude demodulation based upon the reassignment method is introduced as a useful by-product. Additionally, an estimator of energy density versus the instantaneous frequency rate (IFR) is proposed and referred to as the IFR profile. The energy density is estimated based upon both the classical energy spectrogram and the IFRS smoothened by the median filter. Moreover, the impact of an analyzing window width, additive white Gaussian noise and observation time is tested. Finally, the introduced method is used for the analysis of the acoustic emission of an automotive engine. The recording of the engine of a Lamborghini Gallardo is analyzed as an example.

  1. Listing triangles in expected linear time on a class of power law graphs.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordman, Daniel J.; Wilson, Alyson G.; Phillips, Cynthia Ann

    Enumerating triangles (3-cycles) in graphs is a kernel operation for social network analysis. For example, many community detection methods depend upon finding common neighbors of two related entities. We consider Cohen's simple and elegant solution for listing triangles: give each node a 'bucket.' Place each edge into the bucket of its endpoint of lowest degree, breaking ties consistently. Each node then checks each pair of edges in its bucket, testing for the adjacency that would complete that triangle. Cohen presents an informal argument that his algorithm should run well on real graphs. We formalize this argument by providing an analysis for the expected running time on a class of random graphs, including power law graphs. We consider a rigorously defined method for generating a random simple graph, the erased configuration model (ECM). In the ECM each node draws a degree independently from a marginal degree distribution, endpoints pair randomly, and we erase self loops and multiedges. If the marginal degree distribution has a finite second moment, it follows immediately that Cohen's algorithm runs in expected linear time. Furthermore, it can still run in expected linear time even when the degree distribution has such a heavy tail that the second moment is not finite. We prove that Cohen's algorithm runs in expected linear time when the marginal degree distribution has finite 4/3 moment and no vertex has degree larger than √n. In fact we give the precise asymptotic value of the expected number of edge pairs per bucket. A finite 4/3 moment is required; if it is unbounded, then so is the number of pairs. The marginal degree distribution of a power law graph has bounded 4/3 moment when its exponent α is more than 7/3. Thus for this class of power law graphs, with degree at most √n, Cohen's algorithm runs in expected linear time. This is precisely the value of α for which the clustering coefficient tends to zero asymptotically, and it is in the range that is relevant for the degree distribution of the World-Wide Web.
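
    Cohen's bucket algorithm as described above translates almost line for line into code; this sketch lists each triangle once, with ties in degree broken by node id.

        from collections import defaultdict
        from itertools import combinations

        def list_triangles(edges):
            adj = defaultdict(set)
            for u, v in edges:
                adj[u].add(v)
                adj[v].add(u)
            rank = lambda v: (len(adj[v]), v)     # degree order, consistent ties
            bucket = defaultdict(list)
            for u, v in edges:
                # place each edge in the bucket of its lowest-degree endpoint
                owner, other = (u, v) if rank(u) <= rank(v) else (v, u)
                bucket[owner].append(other)
            triangles = []
            for node, nbrs in bucket.items():
                for a, b in combinations(nbrs, 2):   # pairs of edges in one bucket
                    if b in adj[a]:                  # third edge closes the triangle
                        triangles.append(tuple(sorted((node, a, b))))
            return triangles

        print(list_triangles([(1, 2), (2, 3), (1, 3), (3, 4)]))   # -> [(1, 2, 3)]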

  2. Time and frequency technology at NIST

    NASA Technical Reports Server (NTRS)

    Sullivan, D. B.

    1994-01-01

    The state of development of advanced timing systems at NIST is described. The work on cesium and rubidium frequency standards, stored-ion frequency standards, diode lasers used to pump such standards, time transfer, and methods for characterizing clocks, oscillators, and time distribution systems is presented. The emphasis is on NIST-developed technology rather than the general state of the art in this field.

  3. Considering the reversibility of passive and reactive transport problems: Are forward-in-time and backward-in-time models ever equivalent?

    NASA Astrophysics Data System (ADS)

    Engdahl, N.

    2017-12-01

    Backward in time (BIT) simulations of passive tracers are often used for capture zone analysis, source area identification, and generation of travel time and age distributions. The BIT approach has the potential to become an immensely powerful tool for direct inverse modeling, but the necessary relationships between the processes modeled in the forward and backward models have yet to be formally established. This study explores the time reversibility of passive and reactive transport models in a variety of 2D heterogeneous domains using particle-based random walk methods for the transport and nonlinear reaction steps. Distributed forward models are used to generate synthetic observations that form the initial conditions for the backward in time models, and we consider both linear-flood and point injections. The results for passive travel time distributions show that forward and backward models are not exactly equivalent but that the linear-flood BIT models are reasonable approximations. Point based BIT models fall within the travel time range of the forward models, though their distributions can be distinctive in some cases. The BIT approximation is not as robust when nonlinear reactive transport is considered, and we find that this reaction system is only exactly reversible under uniform flow conditions. We use a series of simplified, longitudinally symmetric, but heterogeneous, domains to illustrate the causes of these discrepancies between the two model types. Many of the discrepancies arise because diffusion is a "self-adjoint" operator, which causes mass to spread in both the forward and backward models. This allows particles to enter low velocity regions in both models, which has opposite effects in the forward and reverse models. It may be possible to circumvent some of these limitations using an anti-diffusion model to undo mixing when time is reversed, but this is beyond the capabilities of the existing Lagrangian methods.

  4. A distribution method for analysing the baseline of pulsatile endocrine signals as exemplified by 24-hour growth hormone profiles.

    PubMed

    Matthews, D R; Hindmarsh, P C; Pringle, P J; Brook, C G

    1991-09-01

    To develop a method for quantifying the distribution of concentrations present in hormone profiles, which would allow an observer-unbiased estimate of the time-concentration attribute and an assessment of the baseline. The log-transformed concentrations (regardless of their temporal attribute) are sorted and allocated to class intervals. The number of observations in each interval is then determined and expressed as a percentage of the total number of samples drawn in the study period. The data may be displayed as a frequency distribution or as a cumulative distribution. Cumulative distributions may be plotted as sigmoidal ogives or can be transformed into discrete probabilities (linear probits), which are then linear and amenable to regression analysis. Probability analysis gives estimates of the mean (the value below which 50% of the observed concentrations lie, which we term 'OC50'). 'Baseline' can be defined in terms of percentage occupancy: the 'Observed Concentration for 5%' (which we term 'OC5') is the threshold at or below which the hormone concentrations are measured 5% of the time. We report the application of this method to 24-hour growth hormone (GH) profiles from 63 children, 26 adults and one giant. We demonstrate that GH effects (growth or gigantism) in these groups are more related to the baseline OC5 concentration than to peak concentration (OC5 ± 95% confidence limits: adults 0.05 ± 0.04, peak-height-velocity pubertal 0.39 ± 0.22, giant 8.9 mU/l). Pulsatile hormone profiles can be analysed using this method in order to assess baseline and other concentration domains.
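
    The method reduces to sorting and percentiles, as in this sketch on a synthetic profile (the OC names follow the abstract): pool the concentrations regardless of sampling time, log-transform, read OC50 and OC5 off the empirical distribution, and probit-transform the ogive so it becomes linear for regression.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        profile = np.exp(rng.normal(0.5, 1.2, 144))   # toy 24-h GH profile, mU/l

        logc = np.log(profile)
        oc50 = np.exp(np.percentile(logc, 50))   # exceeded 50% of the time
        oc5 = np.exp(np.percentile(logc, 5))     # 'baseline': at/below 5% of the time

        # probit-linearized cumulative distribution, ready for regression analysis
        order = np.sort(logc)
        cum_p = (np.arange(1, order.size + 1) - 0.5)/order.size
        probit = stats.norm.ppf(cum_p)
        slope, intercept, *_ = stats.linregress(order, probit)
        print(f"OC50 = {oc50:.2f} mU/l, OC5 = {oc5:.2f} mU/l")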

  5. Real time testing of intelligent relays for synchronous distributed generation islanding detection

    NASA Astrophysics Data System (ADS)

    Zhuang, Davy

    As electric power systems continue to grow to meet ever-increasing energy demand, their security, reliability, and sustainability requirements also become more stringent. The deployment of distributed energy resources (DER), including generation and storage, in conventional passive distribution feeders gives rise to integration problems involving protection and unintentional islanding. Distributed generators need to be shut down for safety reasons when disconnected or isolated from the main feeder, as distributed generator islanding may create hazards to utility and third-party personnel and possibly damage the distribution system infrastructure, including the distributed generators themselves. This thesis compares several key performance indicators of a newly developed intelligent islanding detection relay against islanding detection devices currently used by the industry. The intelligent relay employs multivariable analysis and data mining methods to arrive at decision trees that contain both the protection handles and the settings. A test methodology was developed to assess the performance of these intelligent relays in a real-time simulation environment using a generic model based on a real-life distribution feeder. The methodology demonstrates the applicability and potential advantages of the intelligent relay by running a large number of tests reflecting a multitude of system operating conditions. The testing indicates that the intelligent relay often outperforms the frequency, voltage and rate-of-change-of-frequency relays currently used for islanding detection, while respecting the islanding detection time constraints imposed by standing distributed generator interconnection guidelines.

  6. Multiscale power analysis for heart rate variability

    NASA Astrophysics Data System (ADS)

    Zeng, Peng; Liu, Hongxing; Ni, Huangjing; Zhou, Jing; Xia, Lan; Ning, Xinbao

    2015-06-01

    We introduce the multiscale power (MSP) method to assess the power distribution of physiological signals over multiple time scales. Simulations on synthetic data and experiments on heart rate variability (HRV) are used to support the approach. Results show that both physical and psychological changes influence the power distribution significantly. A quantitative parameter, termed the power difference (PD), is introduced to evaluate the degree of power distribution alteration. We find that the dynamical correlation of HRV is destroyed completely when PD > 0.7.
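
    A sketch of one plausible reading of the method, with the assumptions flagged: power at scale s is taken here as the variance of the scale-s coarse-grained series (as in multiscale entropy), and the power difference is computed as a total-variation distance between normalized power profiles; the paper's exact MSP and PD definitions may differ.

        import numpy as np

        def msp(x, scales):
            x = np.asarray(x, dtype=float)
            out = []
            for s in scales:
                n = x.size // s
                coarse = x[:n*s].reshape(n, s).mean(axis=1)  # scale-s coarse series
                out.append(coarse.var())                     # power at scale s
            out = np.array(out)
            return out/out.sum()                             # power distribution

        def power_difference(p, q):
            # assumed form: total-variation distance between power profiles
            return 0.5*np.abs(p - q).sum()

        rng = np.random.default_rng(0)
        scales = range(1, 11)
        rr = np.cumsum(rng.standard_normal(3000))*0.01 + 0.8   # toy RR series
        rr_shuffled = rng.permutation(rr)                      # correlations destroyed
        print(power_difference(msp(rr, scales), msp(rr_shuffled, scales)))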

  7. Reversal time of jump-noise magnetization dynamics in nanomagnets via Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Parthasarathy, Arun; Rakheja, Shaloo

    2018-06-01

    The jump-noise is a nonhomogeneous Poisson process which models thermal effects in magnetization dynamics, with special applications in low temperature escape rate phenomena. In this work, we develop improved numerical methods for Monte Carlo simulation of the jump-noise dynamics and validate the method by comparing the stationary distribution obtained empirically against the Boltzmann distribution. In accordance with the Néel-Brown theory, the jump-noise dynamics display an exponential relaxation toward equilibrium with a characteristic reversal time, which we extract for nanomagnets with uniaxial and cubic anisotropy. We relate the jump-noise dynamics to the equivalent Landau-Lifshitz dynamics up to second order correction for a general energy landscape and obtain the analogous Néel-Brown theory's solution of the reversal time. We find that the reversal time of jump-noise dynamics is characterized by Néel-Brown theory's solution at the energy saddle point for small noise. For large noise, the magnetization reversal due to jump-noise dynamics phenomenologically represents macroscopic tunneling of magnetization.

  8. Research on human physiological parameters intelligent clothing based on distributed Fiber Bragg Grating

    NASA Astrophysics Data System (ADS)

    Miao, Changyun; Shi, Boya; Li, Hongqiang

    2008-12-01

    Intelligent clothing for monitoring human physiological parameters is investigated using fiber Bragg grating (FBG) sensor technology. In this paper, the principles and methods of measuring human physiological parameters, including body temperature and heart rate, in intelligent clothing with distributed FBGs are studied, and mathematical models for the measurement of these parameters are built. The processing method for body temperature and heart rate detection signals is presented; a human physiological parameters detection module is designed, interference signals are filtered out, and the measurement accuracy is improved; the integration of the intelligent clothing is described. The intelligent clothing can implement real-time measurement, processing, storage and output of body temperature and heart rate. It offers accurate measurement, portability, low cost, real-time monitoring, and other advantages. The intelligent clothing enables non-contact monitoring between doctors and patients and can help detect diseases such as cancer and infectious diseases early enough for patients to receive timely treatment. It has great significance and value for safeguarding the health of the elderly and of children with language dysfunction.

  9. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing the lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  10. Preliminary Study of 2-D Time Domain Electromagnetic (TDEM) Modeling to Analyze Subsurface Resistivity Distribution and its Application to the Geothermal Systems

    NASA Astrophysics Data System (ADS)

    Aji Hapsoro, Cahyo; Purqon, Acep; Srigutomo, Wahyu

    2017-07-01

    2-D Time Domain Electromagnetic (TDEM) modeling has been successfully conducted to illustrate the electric field distribution beneath the Earth's surface. The electric field, compared with the magnetic field, is used to analyze resistivity, one of the physical properties that is very important for determining the reservoir potential areas of geothermal systems, a renewable energy source. We used the Time Domain Electromagnetic method in this modeling because it can solve EM field interaction problems with complex geometry and can analyze transient problems. The TDEM method is used to model the electric and magnetic fields as functions of time combined with distance and depth. The result of this modeling is the electric field intensity, which is capable of describing the structure of the Earth's subsurface. The result can be applied to describe the Earth's subsurface resistivity values and thus to determine the reservoir potential of geothermal systems.

  11. Comparison of viscous-shock-layer solutions by time-asymptotic and steady-state methods. [flow distribution around a Jupiter entry probe

    NASA Technical Reports Server (NTRS)

    Gupta, R. N.; Moss, J. N.; Simmonds, A. L.

    1982-01-01

    Two flow-field codes employing time- and space-marching numerical techniques were evaluated. Both methods were used to analyze the flow field around a massively blown Jupiter entry probe under perfect-gas conditions. In order to obtain a direct point-by-point comparison, the computations were made using identical grids and turbulence models. For the same degree of accuracy, the space-marching scheme takes much less time than the time-marching method and would appear to provide accurate results for problems with nonequilibrium chemistry, free from the effect of local differences in time on the final solution that is inherent in time-marching methods. With the time-marching method, however, solutions are obtainable for realistic entry probe shapes with massive or uniform surface blowing rates, whereas with the space-marching technique it is difficult to obtain converged solutions for such flow conditions. The choice of numerical method is, therefore, problem dependent. Both methods give equally good results in the cases where results are compared with experimental data.

  12. A novel weight determination method for time series data aggregation

    NASA Astrophysics Data System (ADS)

    Xu, Paiheng; Zhang, Rong; Deng, Yong

    2017-09-01

    Aggregation in time series is of great importance in time series smoothing, prediction, and other analysis processes, which makes it crucial to determine the weights in time series correctly and reasonably. In this paper, a novel method to obtain the weights in time series is proposed, in which we adopt the induced ordered weighted aggregation (IOWA) operator and the visibility graph averaging (VGA) operator and linearly combine the weights separately generated by the two operators. The IOWA operator is introduced into the weight determination of time series so that the time decay factor is taken into consideration. The VGA operator generates weights with respect to the degree distribution of the visibility graph constructed from the corresponding time series, which reflects the relative importance of vertices in the series. The proposed method is applied to two practical datasets to illustrate its merits: the aggregation of the Construction Cost Index (CCI) demonstrates the ability of the proposed method to smooth time series, while the aggregation of the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) illustrates how the proposed method maintains the variation tendency of the original data.
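
    A rough sketch of the combination scheme described above, assuming exponential time-decay weights as the IOWA-style component and degree-based weights from a natural visibility graph as the VGA component; the mixing coefficient alpha and the decay rate are assumed free parameters.

```python
import numpy as np

def visibility_degrees(x):
    """Degree of each point in the natural visibility graph of series x."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            # (i, j) are connected if every point between them lies strictly
            # below the straight line joining (i, x[i]) and (j, x[j]).
            ks = np.arange(i + 1, j)
            line = x[j] + (x[i] - x[j]) * (j - ks) / (j - i)
            if np.all(x[ks] < line):
                deg[i] += 1
                deg[j] += 1
    return deg

def aggregate(x, alpha=0.5, decay=0.9):
    """Linear combination of time-decay (IOWA-style) and visibility-graph weights."""
    x = np.asarray(x, dtype=float)
    w_time = decay ** np.arange(len(x) - 1, -1, -1)   # newer points weigh more
    w_time /= w_time.sum()
    w_graph = visibility_degrees(x).astype(float)
    w_graph /= w_graph.sum()
    w = alpha * w_time + (1.0 - alpha) * w_graph
    return float(np.dot(w, x))

series = [10.2, 10.8, 10.5, 11.4, 11.1, 11.9, 12.3]
print(aggregate(series))
```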

  13. Time-resolved fluorescence monitoring of cholesterol in peripheral blood mononuclear cells

    NASA Astrophysics Data System (ADS)

    Martinakova, Z.; Horilova, J.; Lajdova, I.; Marcek Chorvatova, A.

    2014-12-01

    Precise evaluation of intracellular cholesterol distribution is crucial for improving diagnostics of disease states associated with cholesterol alteration. Time-resolved fluorescence techniques were tested for non-invasive investigation of cholesterol in living cells. The fluorescent probe NBD attached to cholesterol was employed to evaluate cholesterol distribution in peripheral blood mononuclear cells (PBMC) isolated from human blood. Fluorescence Lifetime Imaging Microscopy (FLIM) was successfully applied to simultaneously monitor the spatial distribution and the time-resolved characteristics of NBD-cholesterol fluorescence in PBMC. The gathered data are the first step in the development of a promising new non-invasive diagnostic method for evaluating cholesterol modifications in diseases associated with disorders of lipid metabolism.

  14. NASA Team Collaboration Pilot: Enabling NASA's Virtual Teams

    NASA Technical Reports Server (NTRS)

    Prahst, Steve

    2003-01-01

    Most NASA projects and work activities are accomplished by teams of people. These teams are often geographically distributed - across NASA centers and NASA external partners, both domestic and international. NASA "virtual" teams are stressed by the challenge of getting team work done - across geographic boundaries and time zones. To get distributed work done, teams rely on established methods - travel, telephones, Video Teleconferencing (NASA VITS), and email. Time is our most critical resource - and team members are hindered by the overhead of travel and the difficulties of coordinating work across their virtual teams. Modern, Internet based team collaboration tools offer the potential to dramatically improve the ability of virtual teams to get distributed work done.

  15. Distributed collaborative response surface method for mechanical dynamic assembly reliability design

    NASA Astrophysics Data System (ADS)

    Bai, Guangchen; Fei, Chengwei

    2013-11-01

    Because of the randomness of many impact factors influencing the dynamic assembly relationships of complex machinery, the reliability analysis of dynamic assembly relationships needs to account for this randomness from a probabilistic perspective. To improve the accuracy and efficiency of such analysis, the mechanical dynamic assembly reliability (MDAR) theory and a distributed collaborative response surface method (DCRSM) are proposed. The mathematical model of the DCRSM is established based on the quadratic response surface function and verified by the assembly relationship reliability analysis of aeroengine high pressure turbine (HPT) blade-tip radial running clearance (BTRRC). Comparison of the DCRSM with the traditional response surface method (RSM) and the Monte Carlo method (MCM) shows that the DCRSM is not only able to accomplish the computational task that is infeasible for the other methods when the number of simulations exceeds 100,000, but also that its computational precision is basically consistent with the MCM and improved by 0.40-4.63% relative to the RSM; furthermore, its computational efficiency is about 188 times that of the MCM and 55 times that of the RSM at 10,000 simulations. The DCRSM is demonstrated to be a feasible and effective approach for markedly improving the computational efficiency and accuracy of MDAR analysis. The proposed research thus provides a promising theory and method for MDAR design and optimization, and opens a novel research direction of probabilistic analysis for developing high-performance, high-reliability aeroengines.
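
    A minimal illustration of the response-surface idea underlying the DCRSM, assuming a generic expensive limit-state function: a quadratic surface is fitted to a small set of samples and Monte Carlo is then run on the cheap surrogate. The distributed, collaborative partitioning of the real method is not reproduced here; the limit-state function and sample sizes are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

def limit_state(x):  # stand-in for an expensive dynamic-assembly model
    return 1.5 - x[..., 0] ** 2 - 0.5 * x[..., 0] * x[..., 1] - x[..., 1]

def quad_design(X):
    # Quadratic response surface basis: 1, x1, x2, x1^2, x2^2, x1*x2.
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                            X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])

# Fit the quadratic surface from a small number of "expensive" evaluations.
X = rng.normal(size=(50, 2))
coef, *_ = np.linalg.lstsq(quad_design(X), limit_state(X), rcond=None)

# Monte Carlo on the surrogate: failure when the limit state goes below zero.
Xmc = rng.normal(size=(1_000_000, 2))
p_fail = float(np.mean(quad_design(Xmc) @ coef < 0.0))
print(f"estimated failure probability: {p_fail:.4f}")
```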

  16. Linear time algorithms to construct populations fitting multiple constraint distributions at genomic scales.

    PubMed

    Siragusa, Enrico; Haiminen, Niina; Utro, Filippo; Parida, Laxmi

    2017-10-09

    Computer simulations can be used to study population genetic methods, models and parameters, as well as to predict potential outcomes; for example, the outcome of breeding operations in plant populations. In-silico construction of populations with pre-specified characteristics is an important task in breeding optimization and other population genetic studies. We present two linear-time Simulation using Best-fit Algorithms (SimBA) for two classes of problems, each of which co-fits two distributions: SimBA-LD fits linkage disequilibrium and minimum allele frequency distributions, while SimBA-hap fits founder-haplotype and polyploid allele dosage distributions. An incremental gap-filling version of the previously introduced SimBA-LD is demonstrated here to accurately fit the target distributions, allowing efficient large-scale simulations. SimBA-hap accuracy and efficiency are demonstrated by simulating tetraploid populations with varying numbers of founder haplotypes; we evaluate both a linear-time greedy algorithm and an optimal solution based on mixed-integer programming. SimBA is available at http://researcher.watson.ibm.com/project/5669.

  17. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    PubMed

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE), or parallel tempering, is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter-choice problem can be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate of the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single 10 ns RE-EDS simulation. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  18. Application of a deconvolution method for identifying burst amplitudes and arrival times in Alcator C-Mod far SOL plasma fluctuations

    NASA Astrophysics Data System (ADS)

    Theodorsen, Audun; Garcia, Odd Erik; Kube, Ralph; Labombard, Brian; Terry, Jim

    2017-10-01

    In the far scrape-off layer (SOL), radial motion of filamentary structures leads to excess transport of particles and heat. Amplitudes and arrival times of these filaments have previously been studied by conditional averaging of single-point measurements from Langmuir probes and Gas Puff Imaging (GPI). Conditional averaging can be problematic: the cutoff for large amplitudes is mostly chosen by convention; the conditional windows used may influence the arrival time distribution; and the amplitudes cannot be separated from a background. Previous work has shown that SOL fluctuations are well described by a stochastic model consisting of a superposition of pulses with fixed shape and randomly distributed amplitudes and arrival times. The model can be formulated as a pulse shape convolved with a train of delta pulses. By choosing a pulse shape consistent with the power spectrum of the fluctuation time series, Richardson-Lucy deconvolution can be used to recover the underlying amplitudes and arrival times of the delta pulses. We apply this technique to both L- and H-mode GPI data from the Alcator C-Mod tokamak. The pulse arrival times are shown to be uncorrelated and uniformly distributed, consistent with a Poisson process, and the amplitude distribution has an exponential tail.
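
    A small sketch of Richardson-Lucy deconvolution on a synthetic pulse train, in the spirit of the analysis described above: a two-sided exponential pulse shape is convolved with sparse delta pulses and the RL iteration recovers their amplitudes and arrival times. The pulse shape, its width, and the iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(2)

# Synthetic signal: sparse delta pulses with exponential amplitudes.
n = 2000
truth = np.zeros(n)
arrivals = rng.choice(n, size=40, replace=False)
truth[arrivals] = rng.exponential(1.0, size=40)

# Fixed two-sided exponential pulse shape, as in the stochastic model.
t = np.arange(-100, 101)
pulse = np.exp(-np.abs(t) / 10.0)
pulse /= pulse.sum()

signal = fftconvolve(truth, pulse, mode="same")

# Richardson-Lucy iterations (all quantities non-negative).
estimate = np.full(n, signal.mean())
pulse_rev = pulse[::-1]
for _ in range(500):
    blurred = fftconvolve(estimate, pulse, mode="same")
    ratio = signal / np.maximum(blurred, 1e-12)
    estimate *= fftconvolve(ratio, pulse_rev, mode="same")

# Large entries of `estimate` approximate pulse amplitudes and arrival times.
recovered = np.argsort(estimate)[-40:]
print(len(set(recovered) & set(arrivals)), "of 40 arrivals recovered exactly")
```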

  19. Study of vesicle size distribution dependence on pH value based on nanopore resistive pulse method

    NASA Astrophysics Data System (ADS)

    Lin, Yuqing; Rudzevich, Yauheni; Wearne, Adam; Lumpkin, Daniel; Morales, Joselyn; Nemec, Kathleen; Tatulian, Suren; Lupan, Oleg; Chow, Lee

    2013-03-01

    Vesicles are low-micron to sub-micron spheres formed by a lipid bilayer shell and serve as potential vehicles for drug delivery. Vesicle size is proposed to be one of the instrumental variables affecting delivery efficiency, since it is correlated with factors such as circulation and residence time in blood, the rate of cell endocytosis, and efficiency in cell targeting. In this work, we demonstrate accessible and reliable detection and size-distribution measurement employing a glass nanopore device based on the resistive pulse method. This novel method enables us to investigate the dependence of the size distribution on the pH difference across the vesicle membrane with very small sample volumes and at rapid speed, providing useful information for optimizing drug-delivery efficiency in pH-sensitive environments.

  20. Multifractal analysis of mobile social networks

    NASA Astrophysics Data System (ADS)

    Zheng, Wei; Zhang, Zifeng; Deng, Yufan

    2017-09-01

    As Wireless Fidelity (Wi-Fi)-enabled handheld devices have become widely used, mobile social networks (MSNs) have been attracting extensive attention. Fractal approaches have been widely applied to characterize natural networks as useful tools to depict their spatial distribution and scaling properties. Moreover, when the complexity of the spatial distribution of MSNs cannot be properly characterized by a single fractal dimension, multifractal analysis is required. We therefore introduce a multifractal analysis method based on a box-covering algorithm to describe the structure of MSNs. Using this method, we find that the networks are multifractal at different time intervals. The simulation results demonstrate that the proposed method is efficient for analyzing the multifractal characteristics of MSNs, providing a distribution of singularities that adequately describes both the heterogeneity of fractal patterns and the statistics of measurements across spatial scales in MSNs.
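
    A simplified sketch of multifractal analysis via box counting on a spatial point set, standing in for the network box-covering algorithm described above: the partition function Z(q, ε) is computed over a range of box sizes and the generalized dimensions D_q are read off the log-log slopes. The clustered point set is a synthetic stand-in for MSN node positions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic "node positions": a heterogeneous mixture of two clusters in [0,1)^2.
pts = np.vstack([rng.normal(0.3, 0.02, (4000, 2)),
                 rng.normal(0.7, 0.15, (1000, 2))]) % 1.0

def partition_function(pts, eps, q):
    # Box occupation probabilities at scale eps, then Z(q, eps) = sum p^q.
    idx = np.floor(pts / eps).astype(int)
    _, counts = np.unique(idx, axis=0, return_counts=True)
    p = counts / len(pts)
    return np.sum(p ** q)

epsilons = [1 / 4, 1 / 8, 1 / 16, 1 / 32, 1 / 64]
for q in (-2.0, 0.0, 2.0, 4.0):
    logZ = [np.log(partition_function(pts, e, q)) for e in epsilons]
    tau = np.polyfit(np.log(epsilons), logZ, 1)[0]    # mass exponent tau(q)
    Dq = tau / (q - 1.0)                              # generalized dimension
    print(f"q={q:+.0f}  tau(q)={tau:.3f}  D_q={Dq:.3f}")
```

    A monofractal set would give a constant D_q; variation of D_q with q is the multifractal signature the abstract refers to.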

  1. Analysis of an ultrasonically rotating droplet by moving particle semi-implicit and distributed point source method in a rotational coordinate

    NASA Astrophysics Data System (ADS)

    Wada, Yuji; Yuge, Kohei; Tanaka, Hiroki; Nakamura, Kentaro

    2017-07-01

    Numerical analysis of the rotation of an ultrasonically levitated droplet in a centrifugal (rotating) coordinate system is discussed. A droplet levitated in an acoustic chamber is simulated using the distributed point source method and the moving particle semi-implicit method. The centrifugal coordinate system is adopted to avoid the Laplacian differential error, which causes numerical divergence or inaccuracy in global-coordinate calculations. Consequently, the duration of stable calculation has increased to 30 times that of the previous paper. Moreover, the droplet radius versus rotational acceleration characteristics show a trend similar to the theoretical and experimental values in the literature.

  2. Nonlinear-drifted Brownian motion with multiple hidden states for remaining useful life prediction of rechargeable batteries

    NASA Astrophysics Data System (ADS)

    Wang, Dong; Zhao, Yang; Yang, Fangfang; Tsui, Kwok-Leung

    2017-09-01

    Brownian motion with adaptive drift has attracted much attention in prognostics because its first hitting time is highly relevant to remaining useful life prediction and it follows the inverse Gaussian distribution. Besides linear degradation modeling, nonlinear-drifted Brownian motion has been developed to model nonlinear degradation. Moreover, the first hitting time distribution of the nonlinear-drifted Brownian motion has been approximated by time-space transformation. In the previous studies, the drift coefficient is the only hidden state used in state space modeling of the nonlinear-drifted Brownian motion. Besides the drift coefficient, parameters of a nonlinear function used in the nonlinear-drifted Brownian motion should be treated as additional hidden states of state space modeling to make the nonlinear-drifted Brownian motion more flexible. In this paper, a prognostic method based on nonlinear-drifted Brownian motion with multiple hidden states is proposed and then it is applied to predict remaining useful life of rechargeable batteries. 26 sets of rechargeable battery degradation samples are analyzed to validate the effectiveness of the proposed prognostic method. Moreover, some comparisons with a standard particle filter based prognostic method, a spherical cubature particle filter based prognostic method and two classic Bayesian prognostic methods are conducted to highlight the superiority of the proposed prognostic method. Results show that the proposed prognostic method has lower average prediction errors than the particle filter based prognostic methods and the classic Bayesian prognostic methods for battery remaining useful life prediction.
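
    As background to the modeling choice above, a small sketch of the first-hitting-time fact cited there: the first passage of a linearly drifted Brownian motion across a fixed threshold follows an inverse Gaussian distribution. Simulated hitting times are compared against scipy's parametrization; drift, diffusion, and threshold values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
mu, sigma, a = 0.5, 0.3, 2.0        # drift, diffusion, degradation threshold
dt, n_paths, n_steps = 1e-3, 2000, 20_000

hits = []
for _ in range(n_paths):
    # One drifted Brownian path; record the first index crossing the threshold.
    steps = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n_steps)
    crossed = np.nonzero(np.cumsum(steps) >= a)[0]
    if crossed.size:
        hits.append((crossed[0] + 1) * dt)
hits = np.asarray(hits)

# Theory: first hitting time ~ InverseGaussian(mean=a/mu, shape=a^2/sigma^2);
# scipy's invgauss(mu=m/lam, scale=lam) realizes IG(mean=m, shape=lam).
m, lam = a / mu, a ** 2 / sigma ** 2
dist = stats.invgauss(m / lam, scale=lam)
print(f"empirical mean {hits.mean():.3f} vs theoretical {dist.mean():.3f}")
print(f"empirical std  {hits.std():.3f} vs theoretical {dist.std():.3f}")
```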

  3. Quantitative analysis of diet structure by real-time PCR, reveals different feeding patterns by two dominant grasshopper species

    PubMed Central

    Huang, Xunbing; Wu, Huihui; McNeill, Mark Richard; Qin, Xinghu; Ma, Jingchuan; Tu, Xiongbing; Cao, Guangchun; Wang, Guangjun; Nong, Xiangqun; Zhang, Zehua

    2016-01-01

    Studies on grasshopper diets have historically employed a range of methodologies, each with certain advantages and disadvantages. For example, some methodologies are qualitative instead of quantitative. Others require long experimental periods or examine population-level effects, only. In this study, we used real-time PCR to examine diets of individual grasshoppers. The method has the advantage of being both fast and quantitative. Using two grasshopper species, Oedaleus asiaticus and Dasyhippus barbipes, we designed ITS primer sequences for their three main host plants, Stipa krylovii, Leymus chinensis and Cleistogenes squarrosa and used real-time PCR method to test diet structure both qualitatively and quantitatively. The lowest detection efficiency of the three grass species was ~80% with a strong correlation between actual and PCR-measured food intake. We found that Oedaleus asiaticus maintained an unchanged diet structure across grasslands with different grass communities. By comparison, Dasyhippus barbipes changed its diet structure. These results revealed why O. asiaticus distribution is mainly confined to Stipa-dominated grassland, and D. barbipes is more widely distributed across Inner Mongolia. Overall, real-time PCR was shown to be a useful tool for investigating grasshopper diets, which in turn offers some insight into grasshopper distributions and improved pest management. PMID:27562455

  4. An overview of distributed microgrid state estimation and control for smart grids.

    PubMed

    Rana, Md Masud; Li, Li

    2015-02-12

    Given the significant concerns regarding carbon emissions from fossil fuels, global warming and the energy crisis, renewable distributed energy resources (DERs) are going to be integrated in the smart grid. This grid can spread the intelligence of the energy distribution and control system from the central unit to long-distance remote areas, thus enabling accurate state estimation (SE) and wide-area real-time monitoring of these intermittent energy sources. In contrast to the traditional methods of SE, this paper proposes a novel accuracy-dependent Kalman filter (KF) based microgrid SE for the smart grid that uses typical communication systems. The article then proposes a discrete-time linear quadratic regulator to control the state deviations of the microgrid incorporating multiple DERs. Integrating these two approaches with application to the smart grid therefore forms a novel contribution to the green energy and control research communities. Finally, the simulation results show that the proposed KF-based microgrid SE and control algorithm provides accurate SE and control compared with the existing method.
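
    A compact sketch of the two ingredients named above on a toy linear system: a standard Kalman filter for state estimation feeding a discrete-time linear quadratic regulator whose gain comes from the discrete algebraic Riccati equation. The two-state model and all noise levels are illustrative, not a microgrid model.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(5)
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
Q_proc, R_meas = 1e-4 * np.eye(2), np.array([[1e-2]])

# Discrete-time LQR gain: u = -K x, with P from the discrete Riccati equation.
Qc, Rc = np.eye(2), np.array([[1.0]])
P = solve_discrete_are(A, B, Qc, Rc)
K = np.linalg.solve(Rc + B.T @ P @ B, B.T @ P @ A)

x = np.array([1.0, -0.5])          # true state
x_hat = np.zeros(2)                # KF estimate
Pk = np.eye(2)                     # KF covariance
for _ in range(100):
    u = -K @ x_hat                 # control computed from the estimate
    # True system propagation with process and measurement noise.
    x = A @ x + B @ u + rng.multivariate_normal(np.zeros(2), Q_proc)
    z = C @ x + rng.normal(scale=np.sqrt(R_meas[0, 0]))
    # Kalman predict/update.
    x_hat = A @ x_hat + B @ u
    Pk = A @ Pk @ A.T + Q_proc
    S = C @ Pk @ C.T + R_meas
    G = Pk @ C.T @ np.linalg.inv(S)
    x_hat = x_hat + G @ (z - C @ x_hat)
    Pk = (np.eye(2) - G @ C) @ Pk

print("final state deviation:", float(np.linalg.norm(x)))
```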

  5. Dynamic cardiac PET imaging: extraction of time-activity curves using ICA and a generalized Gaussian distribution model.

    PubMed

    Mabrouk, Rostom; Dubeau, François; Bentabet, Layachi

    2013-01-01

    Kinetic modeling of metabolic and physiologic cardiac processes in small animals requires an input function (IF) and tissue time-activity curves (TACs). In this paper, we present a mathematical method based on independent component analysis (ICA) to extract the IF and the myocardium's TACs directly from dynamic positron emission tomography (PET) images. The method assumes a super-Gaussian distribution model for the blood activity and a sub-Gaussian distribution model for the tissue activity. Our approach was applied to 22 PET measurement sets of small animals, obtained with the three most frequently used cardiac radiotracers, namely desoxy-fluoro-glucose ((18)F-FDG), [(13)N]-ammonia, and [(11)C]-acetate. The study was extended to human PET measurements obtained with the Rubidium-82 ((82)Rb) radiotracer. The resolved mathematical IF values compare favorably with those derived from curves extracted from regions of interest (ROI), suggesting that the procedure presents a reliable alternative to serial blood sampling for small-animal cardiac PET studies.
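
    A toy sketch of the separation idea: a spiky, super-Gaussian blood curve and a smooth, sub-Gaussian tissue curve are mixed pixel-by-pixel, as in dynamic PET, and recovered with FastICA from scikit-learn. The gamma-variate input function, the mixing matrix, and the noise level are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0.01, 60.0, 240)              # minutes
blood = t ** 2 * np.exp(-t / 1.5)             # sharp gamma-variate IF (super-Gaussian)
tissue = 1.0 - np.exp(-t / 12.0)              # slow smooth uptake (sub-Gaussian)
sources = np.column_stack([blood / blood.max(), tissue / tissue.max()])

# Each "pixel" is a different blood/tissue mixture, plus noise.
rng = np.random.default_rng(6)
mixing = rng.uniform(0.1, 1.0, size=(2, 50))  # 50 pixels
pixels = sources @ mixing + 0.01 * rng.normal(size=(len(t), 50))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(pixels)         # columns ~ IF and tissue TAC

# Match recovered components to the true sources by absolute correlation.
for k in range(2):
    corr = [abs(np.corrcoef(recovered[:, k], sources[:, j])[0, 1]) for j in range(2)]
    print(f"component {k}: best match source {int(np.argmax(corr))}, |r|={max(corr):.3f}")
```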

  6. A free market in telescope time?

    NASA Astrophysics Data System (ADS)

    Etherton, Jason; Steele, Iain A.; Mottram, Christopher J.

    2004-09-01

    As distributed systems become more and more diverse in application, there is a growing need for more intelligent resource scheduling. eSTAR is a geographically distributed network of Grid-enabled telescopes, using grid middleware to provide telescope users with an authentication and authorisation method, allowing secure remote access to such resources. The eSTAR paradigm is based upon this secure single sign-on, giving astronomers or their agent proxies direct access to these telescopes. This concept, however, involves the complex issue of how to schedule observations stored within physically distributed media on geographically distributed resources. The matter is complicated further by the varying degrees of constraints placed upon observations, such as timeliness, atmospheric and meteorological conditions, and sky brightness, to name a few. This paper discusses a free-market approach to this scheduling problem, where astronomers are given credit, instead of time, from their respective TAGs to spend on telescopes as they see fit. This approach will ultimately provide a community-driven schedule and genuine indicators of the worth of specific telescope time, promote more efficient use of that time, and demonstrate a 'survival of the fittest' type of selection.

  7. Substation Reactive Power Regulation Strategy

    NASA Astrophysics Data System (ADS)

    Zhang, Junfeng; Zhang, Chunwang; Ma, Daqing

    2018-01-01

    With increasing requirements on the power supply quality and reliability of distribution networks, voltage and reactive power regulation of substations has become one of the indispensable ways to ensure voltage quality and reactive power balance and to improve the economy and reliability of the distribution network. A general concern of power workers and operators is therefore what kind of flexible and effective control method should be used to adjust the on-load tap-changer (OLTC) transformer and shunt compensation capacitors in a substation, so as to achieve local reactive power balance, improve the voltage pass rate, increase the power factor, and reduce active power loss. In this paper, based on the traditional nine-zone diagram and the characteristics of the substation, a fuzzy variable-center nine-zone diagram control method is proposed and used for comprehensive regulation of substation voltage and reactive power. Calculation and simulation of an example show that this method satisfactorily reconciles the conflict between reactive power and voltage in real-time control and achieves the basic goal of real-time substation control, providing a reference for the practical application of real-time substation control methods.

  8. GOTHIC: Gravitational oct-tree code accelerated by hierarchical time step controlling

    NASA Astrophysics Data System (ADS)

    Miki, Yohei; Umemura, Masayuki

    2017-04-01

    The tree method is a widely implemented algorithm for collisionless N-body simulations in astrophysics that is well suited for GPUs. Adopting hierarchical time stepping can accelerate N-body simulations; however, it is infrequently implemented and its potential remains untested in GPU implementations. We have developed a Gravitational Oct-Tree code accelerated by HIerarchical time step Controlling, named GOTHIC, which adopts both the tree method and hierarchical time steps. The code applies several adaptive optimizations by monitoring the execution time of each function on the fly and minimizes the time-to-solution by balancing the measured times of multiple functions. Performance measurements with realistic particle distributions on NVIDIA Tesla M2090, K20X, and GeForce GTX TITAN X, representative GPUs of the Fermi, Kepler, and Maxwell generations, show that the hierarchical time step achieves a speedup by a factor of around 3-5 compared with the shared time step. The measured elapsed time per step of GOTHIC is 0.30 s or 0.44 s on GTX TITAN X when the particle distribution represents the Andromeda galaxy or the NFW sphere, respectively, with 2^24 = 16,777,216 particles. The averaged performance of the code corresponds to 10-30% of the theoretical single-precision peak performance of the GPU.
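
    A back-of-the-envelope illustration (not GOTHIC's actual scheduler) of why hierarchical block time steps pay off: with a shared step every particle is advanced at the smallest required step, whereas block stepping advances each particle at its own power-of-two step. Counting force evaluations under an assumed level distribution yields a speedup of the order reported above.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
# Assumed distribution of per-particle time-step levels; level L means
# dt = dt_max / 2**L, with most particles on the larger (cheaper) steps.
levels = rng.choice(6, size=n, p=[0.2, 0.2, 0.25, 0.2, 0.1, 0.05])

# Shared time step: every particle is forced onto the smallest dt, so each
# one needs 2**L_max force evaluations per unit time.
evals_shared = n * 2 ** levels.max()

# Hierarchical (block) steps: a level-L particle needs only 2**L evaluations
# per unit time.
evals_block = int(np.sum(2.0 ** levels))

print(f"force-evaluation speedup: {evals_shared / evals_block:.1f}x")
```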

  9. A biologically relevant method for considering patterns of oceanic retention in the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Mori, Mao; Corney, Stuart P.; Melbourne-Thomas, Jessica; Klocker, Andreas; Sumner, Michael; Constable, Andrew

    2017-12-01

    Many marine species have planktonic forms - either during a larval stage or throughout their lifecycle - that move passively or are strongly influenced by ocean currents. Understanding these patterns of movement is important for informing marine ecosystem management and for understanding ecological processes generally. Retention of biological particles in a particular area due to ocean currents has received less attention than transport pathways, particularly for the Southern Ocean. We present a method for modelling retention time, based on the half-life for particles in a particular region, that is relevant for biological processes. This method uses geostrophic velocities at the ocean surface, derived from 23 years of satellite altimetry data (1993-2016), to simulate the advection of passive particles during the Southern Hemisphere summer season (from December to March). We assess spatial patterns in the retention time of passive particles and evaluate the processes affecting these patterns for the Indian sector of the Southern Ocean. Our results indicate that the distribution of retention time is related to bathymetric features and the resulting ocean dynamics. Our analysis also reveals a moderate level of consistency between spatial patterns of retention time and observations of Antarctic krill (Euphausia superba) distribution.
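
    A schematic sketch of the retention-time estimate: passive particles are advected through an assumed time-modulated gyre flow, the fraction remaining inside the study region is tracked, and a half-life is obtained from a log-linear fit. The analytic stream function and drift stand in for the altimetry-derived geostrophic velocities used in the study.

```python
import numpy as np

def velocity(x, y, t, A0=1.0, eps=0.5, u0=0.1):
    # Gyre stream function psi = A(t) sin(pi x) sin(pi y) plus a weak mean
    # drift; the time modulation breaks closed streamlines so particles leak.
    A = A0 * (1.0 + eps * np.sin(2.0 * np.pi * t))
    u = -A * np.pi * np.sin(np.pi * x) * np.cos(np.pi * y) + u0
    v = A * np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
    return u, v

rng = np.random.default_rng(8)
n = 20_000
x = rng.uniform(0.2, 0.8, n)
y = rng.uniform(0.2, 0.8, n)
inside = np.ones(n, dtype=bool)

dt, n_steps = 0.005, 1500
frac = np.empty(n_steps)
for k in range(n_steps):
    u, v = velocity(x, y, k * dt)
    x, y = x + u * dt, y + v * dt
    # A particle that has left the unit-square region stays counted as gone.
    inside &= (x > 0.0) & (x < 1.0) & (y > 0.0) & (y < 1.0)
    frac[k] = inside.mean()

# Half-life from a log-linear fit to the retained fraction.
t = dt * (1.0 + np.arange(n_steps))
mask = frac > 0.05
slope = np.polyfit(t[mask], np.log(frac[mask]), 1)[0]
print(f"retention half-life ~ {np.log(2.0) / -slope:.2f} time units")
```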

  10. Extension of classical hydrological risk analysis to non-stationary conditions due to climate change - application to the Fulda catchment, Germany

    NASA Astrophysics Data System (ADS)

    Fink, G.; Koch, M.

    2010-12-01

    An important aspect of water resources and hydrological engineering is the assessment of hydrological risk due to the occurrence of extreme events, e.g. droughts or floods. When dealing with the latter, as is the focus here, the classical methods of flood frequency analysis (FFA) are usually used for the proper dimensioning of a hydraulic structure, in order to bring the flood risk down to an acceptable level. FFA is based on extreme value statistics theory. Despite the progress of methods in this scientific branch, the development, choice, and fitting of an appropriate distribution function still remains a challenge, particularly when certain underlying assumptions of the theory are not met in real applications. This is, for example, the case when the stationarity condition for a random flood time series is no longer satisfied, as could be the situation when long-term hydrological impacts of future climate change are to be considered. The objective here is to verify the applicability of classical (stationary) FFA to predicted flood time series in the Fulda catchment in central Germany, as they may occur in the wake of climate change during the 21st century. These discharge time series at the outlet of the Fulda basin have been simulated with a distributed hydrological model (SWAT) forced by predicted climate variables of a regional climate model for Germany (REMO). From the simulated future daily time series, annual maximum (extreme) values are computed and analyzed for the purpose of risk evaluation. Although the 21st-century estimated extreme flood series of the Fulda river turn out to be only mildly non-stationary, alleviating the need for further action and concern at first sight, a more detailed analysis of the risk, as quantified, for example, by the return period, shows non-negligible differences in the calculated risk levels. This could be verified by employing a new method, the so-called flood series maximum analysis (FSMA) method, which consists of the stochastic simulation of numerous trajectories of a stochastic process with a given GEV distribution over a certain length of time (larger than a desired return period). Then the maximum value of each trajectory is computed, and all of these maxima are used to determine the empirical distribution of the maximum series. Through graphical inversion of this distribution function, the size of the design flood for a given risk (quantile) and given life duration can be inferred. The results of numerous simulations show that for stationary flood series the new FSMA method results, as expected, in nearly identical risk values to the classical FFA approach. However, once the flood time series becomes slightly non-stationary, for reasons as discussed, and regardless of whether the trend is increasing or decreasing, large differences in the computed risk values for a given design flood occur. In other words, for the same risk, the new FSMA method would lead to different values of the design flood for a hydraulic structure than the classical FFA method. This, in turn, could lead to cost savings in the realization of a hydraulic project.
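
    A condensed sketch of the FSMA idea described above, assuming a stationary GEV for the annual maxima: many trajectories of a given design life are simulated, the maximum of each trajectory is taken, and the design flood for a target risk is read off the empirical distribution of those maxima. The GEV parameters are illustrative; scipy's genextreme parametrization is used.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# Assumed GEV for the annual maximum discharge (parameters are illustrative).
gev = stats.genextreme(c=-0.1, loc=800.0, scale=250.0)   # m^3/s

life = 100          # design life of the structure, in years
n_traj = 50_000     # number of simulated flood trajectories

# One maximum per trajectory: the largest flood seen over the design life.
annual = gev.rvs(size=(n_traj, life), random_state=rng)
traj_max = annual.max(axis=1)

# Design flood for a target risk: the (1 - risk) quantile of the maxima.
for risk in (0.5, 0.1, 0.05):
    design = np.quantile(traj_max, 1.0 - risk)
    print(f"risk {risk:>4}: design flood ~ {design:8.1f} m^3/s")

# Cross-check against the closed form P(max <= x) = F(x)**life.
print("closed-form 10% risk:", gev.ppf((1.0 - 0.1) ** (1.0 / life)))
```

    For a stationary series the simulated and closed-form values agree; the method's interest, per the abstract, is that the simulation route still works once a trend is injected into the trajectories.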

  11. A New Multiconstraint Method for Determining the Optimal Cable Stresses in Cable-Stayed Bridges

    PubMed Central

    Asgari, B.; Osman, S. A.; Adnan, A.

    2014-01-01

    Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces less bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through proposed multiconstraint optimisation method. PMID:25050400
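
    A stripped-down sketch of the multiconstraint idea on a generic linear structural model: cable pretensions are chosen to minimize a quadratic proxy for bending moments subject to displacement limits, using scipy's constrained optimizer. The influence matrices are random stand-ins, not a bridge model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
n_cables, n_nodes = 8, 12

# Linear influence of a unit cable force on deck moments and displacements
# (random stand-ins for matrices a real finite-element model would provide).
M_inf = rng.normal(size=(n_nodes, n_cables))
D_inf = rng.normal(size=(n_nodes, n_cables))
M_dead = rng.normal(scale=5.0, size=n_nodes)   # dead-load moments
D_dead = rng.normal(scale=2.0, size=n_nodes)   # dead-load displacements
d_lim = 1.0                                    # displacement limit

def objective(x):  # sum of squared residual moments
    return np.sum((M_dead + M_inf @ x) ** 2)

constraints = [
    {"type": "ineq", "fun": lambda x: d_lim - (D_dead + D_inf @ x)},  # d <= d_lim
    {"type": "ineq", "fun": lambda x: d_lim + (D_dead + D_inf @ x)},  # d >= -d_lim
]
bounds = [(0.0, 50.0)] * n_cables              # cables only carry tension

res = minimize(objective, x0=np.ones(n_cables), bounds=bounds,
               constraints=constraints, method="SLSQP")
print("optimal cable forces:", np.round(res.x, 2))
```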

  12. A new multiconstraint method for determining the optimal cable stresses in cable-stayed bridges.

    PubMed

    Asgari, B; Osman, S A; Adnan, A

    2014-01-01

    Cable-stayed bridges are one of the most popular types of long-span bridges. The structural behaviour of cable-stayed bridges is sensitive to the load distribution between the girder, pylons, and cables. The determination of pretensioning cable stresses is critical in the cable-stayed bridge design procedure. By finding the optimum stresses in cables, the load and moment distribution of the bridge can be improved. In recent years, different research works have studied iterative and modern methods to find optimum stresses of cables. However, most of the proposed methods have limitations in optimising the structural performance of cable-stayed bridges. This paper presents a multiconstraint optimisation method to specify the optimum cable forces in cable-stayed bridges. The proposed optimisation method produces less bending moments and stresses in the bridge members and requires shorter simulation time than other proposed methods. The results of comparative study show that the proposed method is more successful in restricting the deck and pylon displacements and providing uniform deck moment distribution than unit load method (ULM). The final design of cable-stayed bridges can be optimised considerably through proposed multiconstraint optimisation method.

  13. Area-Specific Marginal Costing for Electric Utilities: a Case Study of Transmission and Distribution Costs

    NASA Astrophysics Data System (ADS)

    Orans, Ren

    1990-10-01

    Existing procedures used to develop marginal costs for electric utilities were not designed for applications in an increasingly competitive market for electric power. The utility's value of receiving power, or the costs of selling power, however, depend on the exact location of the buyer or seller, the magnitude of the power and the period of time over which the power is used. Yet no electric utility in the United States has disaggregate marginal costs that reflect differences in costs due to the time, size or location of the load associated with their power or energy transactions. The existing marginal costing methods used by electric utilities were developed in response to the Public Utilities Regulatory Policy Act (PURPA) in 1978. The "ratemaking standards" (Title 1) established by PURPA were primarily concerned with the appropriate segmentation of total revenues to various classes-of-service, designing time-of-use rating periods, and the promotion of efficient long-term resource planning. By design, the methods were very simple and inexpensive to implement. Now, more than a decade later, the costing issues facing electric utilities are becoming increasingly complex, and the benefits of developing more specific marginal costs will outweigh the costs of developing this information in many cases. This research develops a framework for estimating total marginal costs that vary by the size, timing, and the location of changes in loads within an electric distribution system. To complement the existing work at the Electric Power Research Institute (EPRI) and Pacific Gas and Electric Company (PGandE) on estimating disaggregate generation and transmission capacity costs, this dissertation focuses on the estimation of distribution capacity costs. While the costing procedure is suitable for the estimation of total (generation, transmission and distribution) marginal costs, the empirical work focuses on the geographic disaggregation of marginal costs related to electric utility distribution investment. The study makes use of data from an actual distribution planning area, located within PGandE's service territory, to demonstrate the important characteristics of this new costing approach. The most significant result of this empirical work is that geographic differences in the cost of capacity in distribution systems can be as much as four times larger than the current system average utility estimates. Furthermore, lumpy capital investment patterns can lead to significant cost differences over time.

  14. Dynamics of bacterial communities before and after distribution in a full-scale drinking water network.

    PubMed

    El-Chakhtoura, Joline; Prest, Emmanuelle; Saikaly, Pascal; van Loosdrecht, Mark; Hammes, Frederik; Vrouwenvelder, Hans

    2015-05-01

    Understanding the biological stability of drinking water distribution systems is imperative in the framework of process control and risk management. The objective of this research was to examine the dynamics of the bacterial community during drinking water distribution at high temporal resolution. Water samples (156 in total) were collected over short time-scales (minutes/hours/days) from the outlet of a treatment plant and a location in its corresponding distribution network. The drinking water is treated by biofiltration and disinfectant residuals are absent during distribution. The community was analyzed by 16S rRNA gene pyrosequencing and flow cytometry as well as conventional, culture-based methods. Despite a random dramatic event (detected with pyrosequencing and flow cytometry but not with plate counts), the bacterial community profile at the two locations did not vary significantly over time. A diverse core microbiome was shared between the two locations (58-65% of the taxa and 86-91% of the sequences) and found to be dependent on the treatment strategy. The bacterial community structure changed during distribution, with greater richness detected in the network and phyla such as Acidobacteria and Gemmatimonadetes becoming abundant. The rare taxa displayed the highest dynamicity, causing the major change during water distribution. This change did not have hygienic implications and is contingent on the sensitivity of the applied methods. The concept of biological stability therefore needs to be revised. Biostability is generally desired in drinking water guidelines but may be difficult to achieve in large-scale complex distribution systems that are inherently dynamic.

  15. Adaptive phase extraction: incorporating the Gabor transform in the matching pursuit algorithm.

    PubMed

    Wacker, Matthias; Witte, Herbert

    2011-10-01

    Short-time Fourier transform (STFT), Gabor transform (GT), wavelet transform (WT), and the Wigner-Ville distribution (WVD) are just some examples of time-frequency analysis methods that are frequently applied in biomedical signal analysis. However, all of these methods have their individual drawbacks: the STFT, GT, and WT have a time-frequency resolution that is determined by algorithm parameters, and the WVD is contaminated by cross terms. In 1993, Mallat and Zhang introduced the matching pursuit (MP) algorithm, which decomposes a signal into a sum of atoms and uses a cross-term-free pseudo-WVD to generate a data-adaptive power distribution in the time-frequency space. It thus solved some of the problems of the GT and WT but lacks phase information that is crucial, e.g., for synchronization analysis. We introduce a new time-frequency analysis method that combines the MP with a pseudo-GT. The signal is decomposed into a set of Gabor atoms, and each atom is then analyzed with a Gabor analysis, where the time-domain Gaussian window of the analysis matches that of the specific atom envelope. A superposition of the single time-frequency planes gives the final result. This is the first time that a complete analysis of the complex time-frequency plane can be performed in a fully data-adaptive and frequency-selective manner. We demonstrate the capabilities of our approach on a simulation and on real-life magnetoencephalogram data.
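
    A minimal sketch of matching pursuit over a real Gabor dictionary, with the phase of each selected atom read off a complex inner product, loosely following the adaptive-phase idea above; the dictionary grid and the fixed atom width are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 512
t = np.arange(n)

def gabor(center, freq, width, phase):
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t + phase)
    return g / np.linalg.norm(g)

# Test signal: two Gabor components plus a little noise.
signal = 2.0 * gabor(150, 0.08, 20.0, 0.7) + 1.2 * gabor(370, 0.15, 20.0, 2.0)
signal = signal + 0.05 * rng.normal(size=n)

centers = np.arange(20, n - 20, 10)
freqs = np.arange(0.02, 0.25, 0.005)
width = 20.0                       # fixed atom width for this toy dictionary

residual = signal.copy()
for it in range(2):                # greedily pick the two strongest atoms
    best_z, best_cf = 0.0j, None
    for c in centers:
        env = np.exp(-0.5 * ((t - c) / width) ** 2)
        for f in freqs:
            # Complex inner product: magnitude selects the atom, angle is its phase.
            z = np.sum(residual * env * np.exp(-2j * np.pi * f * t))
            if abs(z) > abs(best_z):
                best_z, best_cf = z, (c, f)
    c, f = best_cf
    atom = gabor(c, f, width, np.angle(best_z))
    coef = residual @ atom
    residual = residual - coef * atom
    print(f"atom {it}: center={c}, freq={f:.3f}, phase={np.angle(best_z):+.2f}, coef={coef:.2f}")
```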

  16. Light-Cone Effect of Radiation Fields in Cosmological Radiative Transfer Simulations

    NASA Astrophysics Data System (ADS)

    Ahn, Kyungjin

    2015-02-01

    We present a novel method to implement time-delayed propagation of radiation fields in cosmological radiative transfer simulations. Time-delayed propagation of radiation fields requires construction of retarded-time fields by tracking the location and lifetime of radiation sources along the corresponding light-cones. Cosmological radiative transfer simulations have, until now, ignored this "light-cone effect" or implemented ray-tracing methods that are computationally demanding. We show that radiative transfer calculation of the time-delayed fields can be easily achieved in numerical simulations when periodic boundary conditions are used, by calculating the time-discretized retarded-time Green's function using the Fast Fourier Transform (FFT) method and convolving it with the source distribution. We also present a direct application of this method to the long-range radiation field of Lyman-Werner band photons, which is important in high-redshift astrophysics with the first stars.
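
    A schematic 1D periodic sketch of the convolution trick described above: a time-discretized retarded kernel that only allows influence from the light cone |x - x'| = c(t - t') is convolved with the source history using FFTs, so no ray tracing is needed. Grid sizes and the kernel normalization are illustrative, and the true Green's function of the radiation field is replaced by a thin shell for clarity.

```python
import numpy as np

nx, nt = 256, 64           # periodic spatial grid, number of time slices
c, dx, dt = 1.0, 1.0, 1.0  # light speed and grid spacings (c*dt = dx here)

# Retarded kernel G(x, tau): a thin shell at radius c*tau on the periodic grid.
dist = np.minimum(np.arange(nx), nx - np.arange(nx)) * dx   # periodic distance
G = np.zeros((nt, nx))
for tau in range(1, nt):
    shell = np.abs(dist - c * tau * dt) < 0.5 * dx
    if shell.any():
        G[tau, shell] = 1.0 / shell.sum()    # normalized shell weight

# Source history: one source turns on at t=5, another at t=20.
S = np.zeros((nt, nx))
S[5:, 40] = 1.0
S[20:, 200] = 1.0

# Field at time n: sum over past slices of G(tau) convolved with S(n - tau),
# each spatial convolution done via FFT under the periodic boundary conditions.
Gk = np.fft.rfft(G, axis=1)
Sk = np.fft.rfft(S, axis=1)
field = np.zeros((nt, nx))
for n in range(nt):
    acc = np.zeros(Gk.shape[1], dtype=complex)
    for tau in range(1, n + 1):
        acc += Gk[tau] * Sk[n - tau]
    field[n] = np.fft.irfft(acc, n=nx)

# The field at x=45 responds to the x=40 source only from t ~ 10 onward
# (switch-on at t=5 plus 5 steps of light travel over 5 cells).
print(np.nonzero(field[:, 45] > 1e-12)[0][:3])
```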

  17. Design of Distributed Controllers Seeking Optimal Power Flow Solutions Under Communication Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea; Dhople, Sairaj

    This paper focuses on power distribution networks featuring inverter-interfaced distributed energy resources (DERs), and develops feedback controllers that drive the DER output powers to solutions of time-varying AC optimal power flow (OPF) problems. Control synthesis is grounded on primal-dual-type methods for regularized Lagrangian functions, as well as linear approximations of the AC power-flow equations. Convergence and OPF-solution-tracking capabilities are established while acknowledging: i) communication-packet losses, and ii) partial updates of control signals. The latter case is particularly relevant since it enables asynchronous operation of the controllers where DER setpoints are updated at a fast time scale based on local voltage measurements, and information on the network state is utilized if and when available, based on communication constraints. As an application, the paper considers distribution systems with high photovoltaic integration, and demonstrates that the proposed framework provides fast voltage-regulation capabilities, while enabling the near real-time pursuit of solutions of AC OPF problems.
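
    A toy sketch of a primal-dual controller on a linearized feeder model, in the spirit of the approach above: setpoints are updated by a gradient step on a regularized Lagrangian, dual variables track voltage-bound violations, and the "measured" voltages enter the update as feedback. The sensitivity matrix, bounds, and step sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(12)
n = 5                                   # number of DERs
R = 0.02 * (rng.uniform(0.5, 1.5, (n, n)) + np.eye(n))  # assumed voltage sensitivities
v0 = 0.98 * np.ones(n)                  # no-injection voltage profile (p.u.)
v_max = 1.02                            # upper voltage bound (p.u.)
p_ref = np.ones(n)                      # preferred DER injections

alpha, rho = 0.2, 0.05                  # primal step size, regularization weight
p = np.zeros(n)                         # DER setpoints (primal variables)
lam = np.zeros(n)                       # dual variables for v <= v_max

for _ in range(300):
    v = v0 + R @ p                      # "measured" voltages (linear feeder model)
    # Primal step on the regularized Lagrangian
    #   L = 0.5*||p - p_ref||^2 + lam^T (v - v_max) + 0.5*rho*||p||^2
    grad = (p - p_ref) + R.T @ lam + rho * p
    p = np.clip(p - alpha * grad, 0.0, 1.5)           # projected primal update
    # Dual ascent on the voltage constraint, projected onto lam >= 0.
    lam = np.maximum(lam + alpha * (v - v_max), 0.0)

print("setpoints:  ", np.round(p, 3))
print("max voltage:", round(float((v0 + R @ p).max()), 4))
```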

  18. Design of Distributed Controllers Seeking Optimal Power Flow Solutions under Communication Constraints: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Simonetto, Andrea; Dhople, Sairaj

    This paper focuses on power distribution networks featuring inverter-interfaced distributed energy resources (DERs), and develops feedback controllers that drive the DER output powers to solutions of time-varying AC optimal power flow (OPF) problems. Control synthesis is grounded on primal-dual-type methods for regularized Lagrangian functions, as well as linear approximations of the AC power-flow equations. Convergence and OPF-solution-tracking capabilities are established while acknowledging: i) communication-packet losses, and ii) partial updates of control signals. The latter case is particularly relevant since it enables asynchronous operation of the controllers where DER setpoints are updated at a fast time scale based on local voltage measurements, and information on the network state is utilized if and when available, based on communication constraints. As an application, the paper considers distribution systems with high photovoltaic integration, and demonstrates that the proposed framework provides fast voltage-regulation capabilities, while enabling the near real-time pursuit of solutions of AC OPF problems.

  19. Analysis of PV Advanced Inverter Functions and Setpoints under Time Series Simulation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seuss, John; Reno, Matthew J.; Broderick, Robert Joseph

    Utilities are increasingly concerned about the potential negative impacts distributed PV may have on the operational integrity of their distribution feeders. Some have proposed novel methods for controlling a PV system's grid-tie inverter to mitigate potential PV-induced problems. This report investigates the effectiveness of several of these PV advanced inverter controls on improving distribution feeder operational metrics. The controls are simulated on a large PV system interconnected at several locations within two realistic distribution feeder models. Due to the time-domain nature of the advanced inverter controls, quasi-static time series simulations are performed under one week of representative variable irradiance and load data for each feeder. A parametric study is performed on each control type to determine how well certain measurable network metrics improve as a function of the control parameters. This methodology is used to determine appropriate advanced inverter settings for each location on the feeder and overall for any interconnection location on the feeder.

  20. Cumulative slant path rain attenuation associated with COMSTAR beacon at 28.56 GHz for Wallops Island, Virginia

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1978-01-01

    Yearly, monthly, and time-of-day fade statistics are presented and characterized. A 19.04 GHz yearly fade distribution, corresponding to a second COMSTAR beacon frequency, is predicted using the concept of effective path length, disdrometer, and rain rate results. The yearly attenuation and rain rate distributions follow, to good approximation, log-normal variations for most fade and rain-rate levels. Attenuations were exceeded for the longest and shortest periods of time for all fades in August and February, respectively. The eight-hour periods showing the maximum and minimum number of minutes over the year for which fades exceeded 12 dB were approximately 1600 to 2400 hours and 0400 to 1200 hours, respectively. In employing the predictive method for obtaining the 19.04 GHz fade distribution, it is demonstrated theoretically that the ratio of attenuations at two frequencies is minimally dependent on the raindrop size distribution, provided these frequencies are not widely separated.
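
    A small sketch of the log-normal characterization mentioned above: synthetic attenuation samples stand in for the beacon data, a log-normal is fitted, and exceedance percentages are read off the fitted model. Scaling to the second beacon frequency would additionally require the effective-path-length model, which is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)
# Synthetic 28.56 GHz rain-fade samples (dB), standing in for beacon data.
fades_db = rng.lognormal(mean=1.0, sigma=0.8, size=20_000)

# Fit a log-normal with the location pinned at zero, as is common for fades.
shape, loc, scale = stats.lognorm.fit(fades_db, floc=0.0)
dist = stats.lognorm(shape, loc, scale)

# Percentage of time a given fade depth is exceeded, from the fitted model.
for a_db in (3.0, 6.0, 12.0):
    print(f"fade > {a_db:>4.1f} dB exceeded {100 * dist.sf(a_db):6.3f}% of the time")
```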
