Influence of dispatching rules on average production lead time for multi-stage production systems.
Hübl, Alexander; Jodlbauer, Herbert; Altendorfer, Klaus
2013-08-01
In this paper the influence of different dispatching rules on the average production lead time is investigated. Two theorems based on the covariance between processing time and production lead time are formulated and proved theoretically. Theorem 1 analytically links the average production lead time to the "processing time weighted production lead time" for multi-stage production systems. The influence of different dispatching rules on average lead time, which is well known from simulation and empirical studies, is proved theoretically in Theorem 2 for a single-stage production system. A simulation study is conducted to gain more insight into the influence of dispatching rules on average production lead time in a multi-stage production system. We find that the "processing time weighted average production lead time" for a multi-stage production system is not invariant under the applied dispatching rule, whereas for single-stage production systems it can be used as a dispatching-rule-independent indicator.
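The abstract does not reproduce the theorems, but the covariance link it mentions suggests a standard weighted-mean identity of the following form (a sketch for orientation, not a quote from the paper). Writing p for the processing time and L for the production lead time of a random order,

```latex
\frac{\mathbb{E}[pL]}{\mathbb{E}[p]} \;=\; \mathbb{E}[L] \;+\; \frac{\operatorname{Cov}(p,L)}{\mathbb{E}[p]},
```

so the processing-time-weighted lead time coincides with the plain average exactly when processing time and lead time are uncorrelated; a dispatching rule (e.g., shortest-processing-time-first) induces such correlations and can therefore shift one measure without shifting the other.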
Weak ergodicity breaking, irreproducibility, and ageing in anomalous diffusion processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Metzler, Ralf
2014-01-14
Single particle traces are standardly evaluated in terms of time averages of the second moment of the position time series r(t). For ergodic processes, one can interpret such results in terms of the known theories for the corresponding ensemble averaged quantities. In anomalous diffusion processes, which are widely observed in nature over many orders of magnitude, the equivalence between (long) time and ensemble averages may be broken (weak ergodicity breaking), and these time averages may no longer be interpreted in terms of ensemble theories. Here we detail some recent results on weakly non-ergodic systems with respect to the time averaged mean squared displacement, the inherent irreproducibility of individual measurements, and methods to determine the exact underlying stochastic process. We also address the phenomenon of ageing, the dependence of physical observables on the time span between initial preparation of the system and the start of the measurement.
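For reference, the two averages being compared are conventionally defined as follows (standard notation in this literature, restated rather than quoted; the trajectory is r(t), observed over total measurement time T, with lag time Δ):

```latex
\langle r^2(t)\rangle = \int r^2\,P(r,t)\,dr, \qquad
\overline{\delta^2(\Delta)} = \frac{1}{T-\Delta}\int_0^{T-\Delta}\bigl[r(t+\Delta)-r(t)\bigr]^2\,dt .
```

For an ergodic process the two coincide as T grows; weak ergodicity breaking means the time average remains a random variable and/or scales differently from the ensemble average even in the limit of long T.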
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Kantz, Holger
2016-04-01
As we have one and only one Earth and no replicas, climate characteristics are usually computed as time averages from a single time series. For understanding climate variability, it is essential to understand how close a single time average will typically be to an ensemble average. To answer this question, we study large deviation probabilities (LDP) of stochastic processes and characterize them by their dependence on the time window. In contrast to iid variables, for which an analytical expression for the rate function exists, correlated variables such as auto-regressive (short memory) and auto-regressive fractionally integrated moving average (long memory) processes have no analytical LDP. We study the LDP for these processes in order to see how correlation affects this probability in comparison to iid data. While short-range correlations lead to a simple correction of the sample size, long-range correlations lead to a sub-exponential decay of the LDP and hence to a very slow convergence of time averages. This effect is demonstrated for a 120-year-long time series of daily temperature anomalies measured in Potsdam (Germany).
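Schematically, the textbook statement being generalized here is Cramér's theorem (added for orientation, not quoted from the paper): for iid variables with mean μ and sample mean \bar{X}_n,

```latex
P\left(\left|\bar{X}_n - \mu\right| \ge \epsilon\right) \asymp e^{-n\, I(\epsilon)},
```

with a convex rate function I. Short-range correlations effectively replace n by n/τ_c, with τ_c the correlation time (the "simple correction of the sample size" above), while in the long-memory case the decay is slower than any exponential in n.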
NASA Astrophysics Data System (ADS)
Akimoto, Takuma; Yamamoto, Eiji
2016-12-01
Local diffusion coefficients in disordered systems such as spin glass systems and living cells are highly heterogeneous and may change over time. Such a time-dependent and spatially heterogeneous environment results in irreproducibility of single-particle-tracking measurements. Irreproducibility of time-averaged observables has been theoretically studied in the context of weak ergodicity breaking in stochastic processes. Here, we provide rigorous descriptions of equilibrium and non-equilibrium diffusion processes for the annealed transit time model, which is a heterogeneous diffusion model in living cells. We give analytical solutions for the mean square displacement (MSD) and the relative standard deviation of the time-averaged MSD for equilibrium and non-equilibrium situations. We find that the time-averaged MSD grows linearly with time and that the time-averaged diffusion coefficients are intrinsically random (irreproducible) even in the long-time measurements in non-equilibrium situations. Furthermore, the distribution of the time-averaged diffusion coefficients converges to a universal distribution in the sense that it does not depend on initial conditions. Our findings pave the way for a theoretical understanding of distributional behavior of the time-averaged diffusion coefficients in disordered systems.
A method for determining the weak statistical stationarity of a random process
NASA Technical Reports Server (NTRS)
Sadeh, W. Z.; Koper, C. A., Jr.
1978-01-01
A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
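A minimal sketch of the equivalent-ensemble construction described above (array shapes, the variance-ratio diagnostic, and all names are illustrative assumptions, not the paper's exact variance tests):

```python
import numpy as np

def equivalent_ensemble(x, n_records):
    """Segment one long time history into equal, finite sample records."""
    m = len(x) // n_records
    return np.asarray(x)[: m * n_records].reshape(n_records, m)

def weak_stationarity_check(x, n_records=32):
    ens = equivalent_ensemble(x, n_records)
    # Equivalent-ensemble average at each instant: mean across records.
    ens_avg = ens.mean(axis=0)
    # Weak stationarity: the ensemble average should be time invariant,
    # so its variance over time should be small relative to the signal's.
    time_invariance = ens_avg.var() / ens.var()
    # Heuristic ergodicity estimate: time average over a single record
    # versus the grand equivalent-ensemble average.
    ergodic_gap = abs(ens[0].mean() - ens.mean())
    return time_invariance, ergodic_gap

rng = np.random.default_rng(0)
ratio, gap = weak_stationarity_check(rng.standard_normal(2**15))
print(f"time-invariance ratio: {ratio:.3e}, ergodicity gap: {gap:.3e}")
```

The same segmentation can be reused to compare equivalent-ensemble autocorrelations across records, as the paper does.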
A high-efficiency real-time digital signal averager for time-of-flight mass spectrometry.
Wang, Yinan; Xu, Hui; Li, Qingjiang; Li, Nan; Huang, Zhengxu; Zhou, Zhen; Liu, Husheng; Sun, Zhaolin; Xu, Xin; Yu, Hongqi; Liu, Haijun; Li, David D-U; Wang, Xi; Dong, Xiuzhen; Gao, Wei
2013-05-30
Analog-to-digital converter (ADC)-based acquisition systems are widely applied in time-of-flight mass spectrometers (TOFMS) due to their ability to record the signal intensity of all ions within the same pulse. However, as the conversion rate and resolution of the ADC increase, the acquisition system faces ever higher data-throughput requirements. It is therefore of considerable interest to develop a high-performance real-time acquisition system that relieves this data-throughput limitation. We present in this work a high-efficiency real-time digital signal averager, consisting of a signal conditioner, a data conversion module and a signal processing module. Two optimization strategies are implemented using field programmable gate arrays (FPGAs) to enhance the efficiency of the real-time processing. A pipeline procedure is used to reduce the time consumption of the accumulation strategy. To realize continuous data transfer, a high-efficiency transmission strategy is developed, based on a ping-pong procedure. The digital signal averager features good responsiveness, analog bandwidth and dynamic performance. The optimal effective number of bits reaches 6.7 bits. For a 32 µs record length, the averager can realize 100% efficiency with an extraction frequency below 31.23 kHz by modifying the number of accumulation steps. In unit time, the averager yields a superior signal-to-noise ratio (SNR) compared with data accumulation in a computer. The digital signal averager is combined with a vacuum ultraviolet single-photon ionization time-of-flight mass spectrometer (VUV-SPI-TOFMS). The efficiency of the real-time processing is tested by analyzing the volatile organic compounds (VOCs) from ordinary printed materials. In these experiments, 22 kinds of compounds are detected, and the dynamic range exceeds 3 orders of magnitude. Copyright © 2013 John Wiley & Sons, Ltd.
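The quoted 100%-efficiency condition is consistent with simple arithmetic: continuous averaging is possible only if a new extraction does not arrive before the previous record has been accumulated,

```latex
f_{\mathrm{ext}} \;\le\; \frac{1}{T_{\mathrm{record}}} \;=\; \frac{1}{32\ \mu\mathrm{s}} \;\approx\; 31.25\ \mathrm{kHz},
```

with the small gap to the quoted 31.23 kHz presumably absorbed by per-record overhead (an assumption; the abstract does not spell this out).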
NASA Astrophysics Data System (ADS)
Gao, Peng
2018-06-01
This work concerns the averaging principle for a higher order nonlinear Schrödinger equation perturbed by an oscillating term arising as the solution of a stochastic reaction-diffusion equation evolving with respect to the fast time. This model can be translated into a system of multiscale stochastic partial differential equations. The stochastic averaging principle is a powerful tool for the qualitative analysis of stochastic dynamical systems with different time scales. To be more precise, under suitable conditions, we prove that there is a limit process in which the fast varying process is averaged out, and this limit process, which takes the form of the higher order nonlinear Schrödinger equation, is an average with respect to the stationary measure of the fast varying process. Finally, by using the Khasminskii technique we obtain the rate of strong convergence of the slow component towards the solution of the averaged equation; as a consequence, the system can be reduced to a single higher order nonlinear Schrödinger equation with a modified coefficient.
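Schematically, the setting is a slow-fast pair of stochastic equations of the generic form (included only to fix ideas; the paper's concrete equations are more involved):

```latex
du^{\epsilon} = \bigl[A u^{\epsilon} + f(u^{\epsilon}, v^{\epsilon})\bigr]\,dt, \qquad
dv^{\epsilon} = \frac{1}{\epsilon}\bigl[B v^{\epsilon} + g(u^{\epsilon}, v^{\epsilon})\bigr]\,dt + \frac{1}{\sqrt{\epsilon}}\,dW_t,
```

and averaging replaces f(u, v) by \bar{f}(u) = \int f(u, v)\,\mu^{u}(dv), where \mu^{u} is the stationary measure of the fast process with the slow variable frozen.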
T700 Blisk and Impeller Manufacturing Process Development Program.
1979-11-01
[Garbled OCR fragment of media-finishing trial records; recoverable values only:] Reworked media: media temperature 790°F, average media pressure 200 psi; total cycles 40 (Test No. 1) and 20 (Test No. 2). Second trial: media temperature 80°F, average media pressure 150 psi; total cycles 52, total time 95 minutes. Third trial: media temperature 790°F, average media pressure 200 psi; 40 cycles in 72 minutes, 60 cycles in 109 minutes, 80 cycles [truncated]. Media designations (partially garbled): 36A(73), 700(40).
SU-F-T-99: Data Visualization From a Treatment Planning Tracking System for Radiation Oncology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cline, K; Kabat, C; Li, Y
2016-06-15
Purpose: A treatment planning process tracker database with input forms and a TV-viewable display webpage was developed and implemented in our clinic to collect time data points throughout the process. Tracking plan times is important because it directly affects the quality of patient care: simply put, the longer a patient waits between the initial simulation CT and the start of treatment, the more time the cancer has to progress. The tracker helps to drive workflow through the clinic, while the collected data can be used to understand and manage the process to find and eliminate inefficiencies. Methods: The overall process steps tracked are CT-simulation, mark patient, draw normal contours, draw target volumes, create plan, and review/approve plan. Time stamps for task completion were extracted and used to generate a set of clinic metrics, including the average time for each step in the process split by type of treatment, the average time to completion for plans started in a given week, and the individual overall completion time per plan. Results: Trends have been tracked for fourteen weeks of clinical data (196 plans). On average, drawing normal contours and target volumes takes 2–5 times as long as creating the plan itself. This is potentially an issue because it could mean the early part of the process takes too long, forcing the planning step to be done in a short amount of time. We also saw from our graphs that there appears to be no clear trend in the average amount of time per plan week-to-week. Conclusion: A tracker of this type has the potential to provide insight into how time is utilized in our clinic. By equipping our dosimetrists, radiation oncologists, and physicists with individualized metric sets, the tracker can help provide visibility and drive workflow. Funded in part by CPRIT (RP140105).
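A sketch of how such step metrics can be derived from completion time stamps (the file name and column names below are illustrative assumptions, not the clinic's actual schema):

```python
import pandas as pd

# One row per plan, one completion-timestamp column per step, in process order.
steps = ["ct_sim", "mark_patient", "normal_contours", "target_volumes",
         "create_plan", "review_approve"]
plans = pd.read_csv("tracker_export.csv", parse_dates=steps)

# Time spent in each step = difference between consecutive completion stamps.
durations = plans[steps].diff(axis=1).iloc[:, 1:]
print(durations.mean())                      # average duration per step

# Average overall completion time for plans started in a given week.
plans["total"] = plans[steps[-1]] - plans[steps[0]]
plans["week"] = plans[steps[0]].dt.isocalendar().week
print(plans.groupby("week")["total"].mean())
```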
Safdari, Hadiseh; Cherstvy, Andrey G; Chechkin, Aleksei V; Bodrova, Anna; Metzler, Ralf
2017-01-01
We investigate both analytically and by computer simulations the ensemble- and time-averaged, nonergodic, and aging properties of massive particles diffusing in a medium with a time-dependent diffusivity. We call this stochastic diffusion process the (aging) underdamped scaled Brownian motion (UDSBM). We demonstrate how the mean squared displacement (MSD) and the time-averaged MSD of UDSBM are affected by the inertial term in the Langevin equation, both at short, intermediate, and even long diffusion times. In particular, we quantify the ballistic regime for the MSD and the time-averaged MSD as well as the spread of individual time-averaged MSD trajectories. One of the main effects we observe is that, both for the MSD and the time-averaged MSD, for superdiffusive UDSBM the ballistic regime is much shorter than for ordinary Brownian motion. In contrast, for subdiffusive UDSBM, the ballistic region extends to much longer diffusion times. Therefore, particular care needs to be taken in deciding under what conditions the overdamped limit indeed provides a correct description, even in the long time limit. We also analyze to what extent ergodicity in the Boltzmann-Khinchin sense in this nonstationary system is broken, both for subdiffusive and superdiffusive UDSBM. Finally, the limiting case of ultraslow UDSBM is considered, with a mixed logarithmic and power-law dependence of the ensemble- and time-averaged MSDs of the particles. In the limit of strong aging, remarkably, the ordinary UDSBM and the ultraslow UDSBM behave similarly in the short time ballistic limit. The approaches developed here open ways for considering other stochastic processes under physically important conditions when a finite particle mass and aging in the system cannot be neglected.
Transient aging in fractional Brownian and Langevin-equation motion.
Kursawe, Jochen; Schulz, Johannes; Metzler, Ralf
2013-12-01
Stochastic processes driven by stationary fractional Gaussian noise, that is, fractional Brownian motion and fractional Langevin-equation motion, are usually considered to be ergodic in the sense that, after an algebraic relaxation, time and ensemble averages of physical observables coincide. Recently it was demonstrated that fractional Brownian motion and fractional Langevin-equation motion under external confinement are transiently nonergodic (time and ensemble averages behave differently) from the moment when the particle starts to sense the confinement. Here we show that these processes also exhibit transient aging, that is, physical observables such as the time-averaged mean-squared displacement depend on the time lag between the initiation of the system at time t = 0 and the start of the measurement at the aging time t_a. In particular, it turns out that for fractional Langevin-equation motion the aging dependence on t_a is different between the cases of free and confined motion. We obtain explicit analytical expressions for the aged moments of the particle position as well as the time-averaged mean-squared displacement and present a numerical analysis of this transient aging phenomenon.
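The aged time-averaged mean-squared displacement referred to here is conventionally defined as (standard definition in this literature, restated for reference):

```latex
\overline{\delta^2(\Delta; t_a)} = \frac{1}{T-\Delta}\int_{t_a}^{\,t_a+T-\Delta}\bigl[x(t+\Delta)-x(t)\bigr]^2\,dt,
```

where t_a is the aging time elapsed between preparation of the system and the start of a measurement of duration T, and Δ is the lag time.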
NASA Astrophysics Data System (ADS)
Safdari, Hadiseh; Chechkin, Aleksei V.; Jafari, Gholamreza R.; Metzler, Ralf
2015-04-01
Scaled Brownian motion (SBM) is widely used to model anomalous diffusion of passive tracers in complex and biological systems. It is a highly nonstationary process governed by the Langevin equation for Brownian motion, however, with a power-law time dependence of the noise strength. Here we study the aging properties of SBM for both unconfined and confined motion. Specifically, we derive the ensemble and time averaged mean squared displacements and analyze their behavior in the regimes of weak, intermediate, and strong aging. A very rich behavior is revealed for confined aging SBM depending on different aging times and whether the process is sub- or superdiffusive. We demonstrate that the information on the aging factorizes with respect to the lag time and exhibits a functional form that is identical to the aging behavior of scale-free continuous time random walk processes. While SBM exhibits a disparity between ensemble and time averaged observables and is thus weakly nonergodic, strong aging is shown to effect a convergence of the ensemble and time averaged mean squared displacement. Finally, we derive the density of first passage times in the semi-infinite domain that features a crossover defined by the aging time.
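For orientation, SBM is commonly written as the overdamped Langevin equation with a power-law time-dependent diffusivity (standard form, restated rather than quoted from this paper):

```latex
\frac{dx(t)}{dt} = \sqrt{2 D(t)}\,\xi(t), \qquad D(t) = \alpha D_{\alpha}\, t^{\alpha-1},
```

with white Gaussian noise ξ(t), so that the ensemble-averaged MSD is ⟨x²(t)⟩ = 2 D_α t^α: subdiffusive for 0 < α < 1 and superdiffusive for α > 1.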
The fastest spreader in SIS epidemics on networks
NASA Astrophysics Data System (ADS)
He, Zhidong; Van Mieghem, Piet
2018-05-01
Identifying the fastest spreaders in epidemics on a network helps to ensure efficient spreading. By ranking the average spreading time for different spreaders, we show that the fastest spreader may change with the effective infection rate of an SIS epidemic process, which means that the time-dependent influence of a node is usually strongly coupled to the dynamic process and the underlying network. With increasing effective infection rate, we illustrate that the fastest spreader changes from the node with the largest degree to the node with the shortest flooding time. (The flooding time is the minimum time needed to reach all other nodes if the process is reduced to a flooding process.) Furthermore, by taking the local topology around the spreader and the average flooding time into account, we propose the spreading efficiency as a metric to quantify the efficiency of a spreader and identify the fastest spreader, which is adaptive to different infection rates in general networks.
Nonequilibrium thermodynamic potentials for continuous-time Markov chains.
Verley, Gatien
2016-01-01
We connect the rare fluctuations of an equilibrium (EQ) process and the typical fluctuations of a nonequilibrium (NE) stationary process. In the framework of large deviation theory, this observation allows us to introduce NE thermodynamic potentials. For continuous-time Markov chains, we identify the relevant pairs of conjugated variables and propose two NE ensembles: one with fixed dynamics and fluctuating time-averaged variables, and another with fixed time-averaged variables, but a fluctuating dynamics. Accordingly, we show that NE processes are equivalent to conditioned EQ processes ensuring that NE potentials are Legendre dual. We find a variational principle satisfied by the NE potentials that reach their maximum in the NE stationary state and whose first derivatives produce the NE equations of state and second derivatives produce the NE Maxwell relations generalizing the Onsager reciprocity relations.
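The Legendre duality invoked here is the standard large-deviation pairing, restated for orientation: for a time-averaged observable A_t, the scaled cumulant generating function and the rate function,

```latex
\phi(\lambda) = \lim_{t\to\infty} \frac{1}{t}\ln \mathbb{E}\bigl[e^{\lambda t A_t}\bigr], \qquad
I(a) = \sup_{\lambda}\bigl[\lambda a - \phi(\lambda)\bigr],
```

are mutual Legendre transforms, mirroring the free-energy/entropy duality of equilibrium thermodynamics.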
An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System
NASA Astrophysics Data System (ADS)
Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed
PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency-intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have their associated drawbacks. An efficient alternative algorithm, based on a randomized selection policy, has been proposed, demonstrated, confirmed for efficiency and fairness on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version over the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the least AWT and ATT on average over the other non-preemptive scheduling algorithms implemented in this paper.
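A minimal sketch of how AWT and ATT are measured when comparing such non-preemptive policies (the workload model below is an illustrative assumption, not the paper's simulation setup):

```python
import random

def simulate(jobs, policy):
    """Non-preemptive scheduling of jobs arriving at t=0; returns AWT and ATT."""
    remaining, clock, waits, turnarounds = list(jobs), 0.0, [], []
    while remaining:
        if policy == "fcfs":
            job = remaining.pop(0)
        else:  # randomized selection policy
            job = remaining.pop(random.randrange(len(remaining)))
        waits.append(clock)         # time spent waiting before service begins
        clock += job                # run the selected job to completion
        turnarounds.append(clock)   # turnaround = completion time (arrival 0)
    n = len(jobs)
    return sum(waits) / n, sum(turnarounds) / n

random.seed(1)
bursts = [random.expovariate(1.0) for _ in range(500)]
for policy in ("fcfs", "random"):
    awt, att = simulate(bursts, policy)
    print(f"{policy}: AWT={awt:.2f}, ATT={att:.2f}")
```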
Measuring Skew in Average Surface Roughness as a Function of Surface Preparation
NASA Technical Reports Server (NTRS)
Stahl, Mark
2015-01-01
Characterizing surface roughness is important for predicting optical performance. Better measurement of surface roughness reduces polishing time, saves money and allows the science requirements to be better defined. This study characterized statistics of average surface roughness as a function of polishing time. Average surface roughness was measured at 81 locations using a Zygo white light interferometer at regular intervals during the polishing process. Each data set was fit to a normal and Largest Extreme Value (LEV) distribution; then tested for goodness of fit. We show that the skew in the average data changes as a function of polishing time.
Variable diffusion in stock market fluctuations
NASA Astrophysics Data System (ADS)
Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.
2015-02-01
We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. Each of the five most actively traded stocks contains two time intervals during the day where the variance of increments can be fit by power-law scaling in time. The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with a linear variable diffusion coefficient as a lowest-order approximation to the real dynamics of financial markets, and use it to test the effects of the time averaging techniques typically used for financial time series analysis. We find that our model replicates major stylized facts associated with empirical financial time series. We also find that ensemble averaging techniques can be used to identify the underlying dynamics correctly, whereas time averages fail in this task. Our work indicates that ensemble average approaches will yield new insight into the study of financial market dynamics. Our proposed model also provides new insight into the modeling of financial market dynamics on microscopic time scales.
Davison, James A
2007-01-01
To compare the Legacy 20000 Advantec continuous and Infiniti hyperpulse modes (Alcon Laboratories, Fort Worth, TX) with respect to average power, machine-measured phacoemulsification time, total stopwatch real time spent within the phacoemulsification process, balanced salt solution (BSS) volume, and corneal endothelial cell density (ECD) losses. A background study was done of consecutive patients operated on with the Legacy (n = 60) and Infiniti (n = 40) machines programmed with identical parameters and using the continuous mode only. A primary study of another set of consecutive cases was operated on using the Legacy (n = 87) and Infiniti (n = 94) with the same parameters, but using the hyperpulse mode during quadrant removal with the Infiniti. Measurements for each set included average power and phacoemulsification time with corneal endothelial cell densities, BSS volume, and time spent in the phacoemulsification process. Similarities were found in the background study for average power percent and average minutes of phacoemulsification time. In the primary study, similarities were found for total minutes in the phacoemulsification process, BSS usage, and ECD losses, and differences were found for average power percent (P < .001) and machine-measured phacoemulsification minutes (P < .001). The Legacy and Infiniti performed similarly in continuous mode. With the Infiniti hyperpulse mode, a total ultrasonic energy reduction of 66% was noted. The machines required the same amount of total stopwatch-measured time to accomplish phacoemulsification and produced the same 5% corneal endothelial cell loss. Therefore, clinically, these two machines behave in a comparable manner with respect to safety and effectiveness.
NASA Astrophysics Data System (ADS)
Sun, Xiaoxu; Xu, Jin; Li, Xingyu
2017-12-01
In this paper, dyeing wastewater was simulated with reactive brilliant blue XBR, and activated carbon adsorption, coagulation, and chemical oxidation processes were used to treat it. For the activated carbon adsorption and coagulation processes, the absorbance of the water was measured; for the Fenton chemical oxidation process, the CODcr value of the water was determined. The decolorization rate and the COD removal rate were then calculated. The results showed that the optimum conditions for activated carbon adsorption were pH = 2, an activated carbon dosage of 1.2 g/L, and an adsorption reaction time of 60 min, giving an average decolorization rate over three parallel experiments of 85.30%. The optimum conditions for coagulation were pH = 8–9, a PAC dosage of 70 mg/L, a stirring time of 20 min, and a standing time of 45 min, giving an average decolorization rate over three parallel experiments of 74.48%. The optimum conditions for Fenton oxidation were Fe2+ 0.05 g/L, H2O2 (30%) 14 mL/L, pH = 3, and a reaction time of 40 min; the average CODcr removal rate over three parallel experiments was 69.35%. Of the three methods, activated carbon adsorption thus performed best for treating the dyeing wastewater.
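The reported rates are presumably computed with the usual definitions (the abstract does not state them; this is a standard-form assumption). With A_0 and A the absorbance before and after treatment, and COD_0 and COD the corresponding chemical oxygen demands,

```latex
\eta_{\mathrm{color}} = \frac{A_0 - A}{A_0}\times 100\%, \qquad
\eta_{\mathrm{COD}} = \frac{\mathrm{COD}_0 - \mathrm{COD}}{\mathrm{COD}_0}\times 100\%.
```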
Investigation of Cepstrum Analysis for Seismic/Acoustic Signal Sensor Range Determination.
1981-01-01
[Garbled OCR fragment; recoverable text only:] ... distorted by transmission through a linear system. For example, the effect of multipath and reverberation may be modeled in terms of a signal that is ... called the short-time averaged cepstrum. To derive some analytical expressions for short-time average cepstrums we choose some functions of interest ... [glossary fragment:] ... linear process applied to the time series or any equivalent time function; Repiod — Period: the amount of time required for one cycle of a time series; Saphe — [truncated].
Time scale bias in erosion rates of glaciated landscapes
Ganti, Vamsi; von Hagke, Christoph; Scherler, Dirk; Lamb, Michael P.; Fischer, Woodward W.; Avouac, Jean-Philippe
2016-01-01
Deciphering erosion rates over geologic time is fundamental for understanding the interplay between climate, tectonic, and erosional processes. Existing techniques integrate erosion over different time scales, and direct comparison of such rates is routinely done in earth science. On the basis of a global compilation, we show that erosion rate estimates in glaciated landscapes may be affected by a systematic averaging bias that produces higher estimated erosion rates toward the present, which do not reflect straightforward changes in erosion rates through time. This trend can result from a heavy-tailed distribution of erosional hiatuses (that is, time periods where no or relatively slow erosion occurs). We argue that such a distribution can result from the intermittency of erosional processes in glaciated landscapes that are tightly coupled to climate variability from decadal to millennial time scales. In contrast, we find no evidence for a time scale bias in spatially averaged erosion rates of landscapes dominated by river incision. We discuss the implications of our findings in the context of the proposed coupling between climate and tectonics, and interpreting erosion rate estimates with different averaging time scales through geologic time. PMID:27713925
Recent advances in phase shifted time averaging and stroboscopic interferometry
NASA Astrophysics Data System (ADS)
Styk, Adam; Józwik, Michał
2016-08-01
Classical time averaging and stroboscopic interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require extensive measurement and data processing strategies in order to evaluate the maximum vibration amplitude of an object at a given load. In this paper, modified data processing strategies for both techniques are introduced. These modifications allow fast and reliable calculation of the sought value without additional complication of the measurement system. Both approaches are discussed and experimentally verified.
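For context, the textbook result underlying classical time-averaged interferometry of a harmonically vibrating surface (restated here, not taken from this paper) is that the local vibration amplitude a(x, y) modulates fringe contrast through a zeroth-order Bessel envelope,

```latex
I(x,y) \propto 1 + \gamma\, J_0\!\left(\frac{4\pi\, a(x,y)}{\lambda}\right)\cos\phi(x,y),
```

so recovering the maximum amplitude traditionally requires tracing the J_0 envelope over many measurements, which is the processing burden the modified strategies aim to reduce.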
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiser, Ryan; Dong, Changgui
Business process or “soft” costs account for well over 50% of the installed price of residential photovoltaic (PV) systems in the United States, so understanding these costs is crucial for identifying PV cost-reduction opportunities. Among these costs are those imposed by city-level permitting processes, which may add both expense and time to the PV development process. Building on previous research, this study evaluates the effect of city-level permitting processes on the installed price of residential PV systems and on the time required to develop and install those systems. The study uses a unique dataset from the U.S. Department of Energy’s Rooftop Solar Challenge Program, which includes city-level permitting process “scores,” plus data from the California Solar Initiative and the U.S. Census. Econometric methods are used to quantify the price and development-time effects of city-level permitting processes on more than 3,000 PV installations across 44 California cities in 2011. Results indicate that city-level permitting processes have a substantial and statistically significant effect on average installation prices and project development times. The results suggest that cities with the most favorable (i.e., highest-scoring) permitting practices can reduce average residential PV prices by $0.27–$0.77/W (4%–12% of median PV prices in California) compared with cities with the most onerous (i.e., lowest-scoring) permitting practices, depending on the regression model used. Though the empirical models for development times are less robust, results suggest that the most streamlined permitting practices may shorten development times by around 24 days on average (25% of the median development time). These findings illustrate the potential price and development-time benefits of streamlining local permitting procedures for PV systems.
Inhomogeneous diffusion and ergodicity breaking induced by global memory effects
NASA Astrophysics Data System (ADS)
Budini, Adrián A.
2016-11-01
We introduce a class of discrete random-walk models driven by global memory effects. At any time, the right-left transitions depend on the whole previous history of the walker, being defined by an urnlike memory mechanism. The characteristic function is calculated exactly, which allows us to demonstrate that the ensemble of realizations is ballistic. Asymptotically, each realization is equivalent to that of a biased Markovian diffusion process with transition rates that strongly differ from one trajectory to another. Using this "inhomogeneous diffusion" feature, the ergodic properties of the dynamics are analytically studied through the time-averaged moments. Even in the long-time regime, they remain random objects. While their average over realizations recovers the corresponding ensemble averages, the departure between time and ensemble averages is explicitly shown through their probability densities. For the density of the second time-averaged moment, the ergodic limit and the limit of infinite lag times do not commute. All these effects are induced by the memory effects. A generalized Einstein fluctuation-dissipation relation is also obtained for the time-averaged moments.
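A minimal sketch of an urnlike global-memory walk in the spirit described above (the specific Pólya-urn update rule below is an illustrative assumption, not the paper's exact model):

```python
import numpy as np

rng = np.random.default_rng(0)

def urn_walk(n_steps, a=1.0):
    """Walk whose +1 probability is the urn fraction of past +1 steps."""
    plus, minus = a, a                    # urn starts with a balls of each kind
    x, path = 0, np.empty(n_steps)
    for t in range(n_steps):
        step = 1 if rng.random() < plus / (plus + minus) else -1
        plus += (step == 1)
        minus += (step == -1)
        x += step
        path[t] = x
    return path

def ta_moment(path, lag):
    """Time-averaged second moment at a fixed lag for one realization."""
    d = path[lag:] - path[:-lag]
    return np.mean(d**2)

tams = [ta_moment(urn_walk(20_000), lag=100) for _ in range(10)]
print(np.round(tams, 1))  # stays scattered: time averages remain random objects
```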
Outpatient Waiting Time in Health Services and Teaching Hospitals: A Case Study in Iran
Mohebbifar, Rafat; Hasanpoor, Edris; Mohseni, Mohammad; Sokhanvar, Mobin; Khosravizadeh, Omid; Isfahani, Haleh Mousavi
2014-01-01
Background: One of the most important indexes of health care quality is patient satisfaction, which is achieved only when there is a process based on management. One such process in health care organizations is the appropriate management of waiting times. The aim of this study is the systematic analysis of outpatient waiting time. Methods: This descriptive, cross-sectional, applied study was conducted in 2011 in the educational and health care hospitals of a medical university located in the northwest of Iran. Since the distributions of outpatients in all months were equal, staged sampling was used. 160 outpatients were studied, and the data were analyzed using SPSS software. Results: The results showed that the waiting time for outpatients of the ophthalmology clinic, with an average of 245 minutes per patient, was the longest among the clinics. The orthopedic clinic had the shortest waiting time, with an average of 77 minutes per patient. The total average waiting time per patient in the educational hospitals under study was about 161 minutes. Conclusion: By applying appropriate models, the waiting time can be reduced, especially in the interval before admission to the examination room. Models such as pre-admission scheduling, electronic visit systems via the internet, process models, the six sigma model, queuing theory, and the FIFO model are components of interventions that reduce outpatient waiting time. PMID:24373277
Capillary Electrophoresis Sensitivity Enhancement Based on Adaptive Moving Average Method.
Drevinskas, Tomas; Telksnys, Laimutis; Maruška, Audrius; Gorbatsova, Jelena; Kaljurand, Mihkel
2018-06-05
In the present work, we demonstrate a novel approach to improve the sensitivity of "out of lab" portable capillary electrophoretic measurements. Nowadays, many signal enhancement methods are (i) underused (nonoptimal), (ii) overused (distorting the data), or (iii) inapplicable in field-portable instrumentation because of a lack of computational power. The described innovative migration velocity-adaptive moving average method uses an optimal averaging window size and can be easily implemented with a microcontroller. Contactless conductivity detection was used as a model for the development of the signal processing method and the demonstration of its impact on sensitivity. The frequency characteristics of the recorded electropherograms and peaks were clarified. Higher electrophoretic mobility analytes exhibit higher-frequency peaks, whereas lower electrophoretic mobility analytes exhibit lower-frequency peaks. On the basis of the obtained data, a migration velocity-adaptive moving average algorithm was created, adapted, and programmed into capillary electrophoresis data-processing software. Employing the developed algorithm, each data point is processed depending on the migration time of the analyte. Owing to the implemented migration velocity-adaptive moving average method, the signal-to-noise ratio improved up to 11 times for a sampling frequency of 4.6 Hz and up to 22 times for a sampling frequency of 25 Hz. This paper could potentially be used as a methodological guideline for the development of new smoothing algorithms that require adaptive conditions in capillary electrophoresis and other separation methods.
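A minimal sketch of the idea (the window-scaling rule below, window width growing with migration time, is an assumption based on the description; the published algorithm may differ):

```python
import numpy as np

def adaptive_moving_average(signal, times, scale=0.2, min_win=3):
    """Smooth each point with a window that grows with migration time.

    Later-migrating (slower) analytes produce lower-frequency peaks, so a
    wider averaging window can be used there without distorting the peaks.
    """
    out = np.empty(len(signal))
    for i, t in enumerate(times):
        half = max(min_win, int(scale * t)) // 2
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out[i] = signal[lo:hi].mean()
    return out

fs = 25.0                                  # Hz, one of the paper's sampling rates
t = np.arange(0, 300, 1 / fs)              # a 300 s electropherogram
rng = np.random.default_rng(1)
peak = np.exp(-0.5 * ((t - 120) / 1.5) ** 2)   # toy Gaussian peak at 120 s
smoothed = adaptive_moving_average(peak + 0.05 * rng.standard_normal(t.size), t)
```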
Light propagation in the averaged universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagheri, Samae; Schwarz, Dominik J., E-mail: s_bagheri@physik.uni-bielefeld.de, E-mail: dschwarz@physik.uni-bielefeld.de
Cosmic structures determine how light propagates through the Universe and consequently must be taken into account in the interpretation of observations. In the standard cosmological model at the largest scales, such structures are either ignored or treated as small perturbations to an isotropic and homogeneous Universe. This isotropic and homogeneous model is commonly assumed to emerge from some averaging process at the largest scales. We assume that there exists an averaging procedure that preserves the causal structure of space-time. Based on that assumption, we study the effects of averaging the geometry of space-time and derive an averaged version of the null geodesic equation of motion. For the averaged geometry we then assume a flat Friedmann-Lemaître (FL) model and find that light propagation in this averaged FL model is not given by null geodesics of that model, but rather by a modified light propagation equation that contains an effective Hubble expansion rate, which differs from the Hubble rate of the averaged space-time.
Barnett, Adrian G; Graves, Nicholas; Clarke, Philip; Herbert, Danielle
2015-01-01
Objective: To examine if streamlining a medical research funding application process saved time for applicants. Design: Cross-sectional surveys before and after the streamlining. Setting: The National Health and Medical Research Council (NHMRC) of Australia. Participants: Researchers who submitted one or more NHMRC Project Grant applications in 2012 or 2014. Main outcome measures: Average researcher time spent preparing an application and the total time for all applications in working days. Results: The average time per application increased from 34 working days before streamlining (95% CI 33 to 35) to 38 working days after streamlining (95% CI 37 to 39; mean difference 4 days, bootstrap p value <0.001). The estimated total time spent by all researchers on applications after streamlining was 614 working years, a 67-year increase from before streamlining. Conclusions: Streamlined applications were shorter but took longer to prepare on average. Researchers may be allocating a fixed amount of time to preparing funding applications based on their expected return, or may be increasing their time in response to increased competition. Many potentially productive years of researcher time are still being lost to preparing failed applications. PMID:25596201
Development of on-line laser power monitoring system
NASA Astrophysics Data System (ADS)
Ding, Chien-Fang; Lee, Meng-Shiou; Li, Kuan-Ming
2016-03-01
Since its invention, the laser has been applied in many fields such as material processing, communication, measurement, biomedical engineering, and defense industries. Laser power is an important parameter in laser material processing, e.g., laser cutting and laser drilling. However, laser power is easily affected by the ambient temperature, so the laser power status must be monitored to ensure effective material processing. Moreover, the response time of current laser power meters is too long to measure laser power accurately over short intervals; knowing the laser power status promptly helps achieve effective material processing. To monitor the laser power, this study utilizes a CMOS (complementary metal-oxide-semiconductor) camera to develop an on-line laser power monitoring system. The CMOS camera captures images of the incident laser beam after it is split and attenuated by a beam splitter and a neutral density filter. By comparing the average brightness of the beam spots with measurement results from a laser power meter, the laser power can be estimated. Under continuous measuring mode, the average measuring error is about 3%, and the response time is at least 3.6 seconds shorter than that of thermopile power meters; under trigger measuring mode, which enables the CMOS camera to synchronize with intermittent laser output, the average measuring error is less than 3%, and the shortest response time is 20 milliseconds.
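A sketch of the estimation step described above (the linear calibration constants are placeholders standing in for a one-time calibration against a reference power meter; the paper's values are not given):

```python
import numpy as np

def estimate_power(frame, gain=0.85, offset=0.02):
    """Map mean beam-spot brightness to an estimated laser power (W)."""
    brightness = frame.astype(float).mean() / 255.0   # normalized 8-bit mean
    return gain * brightness + offset

# Synthetic 480x640 frame standing in for a captured beam-spot image.
frame = np.clip(np.random.default_rng(0).normal(90, 5, (480, 640)), 0, 255)
print(f"estimated power: {estimate_power(frame):.3f} W")
```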
Determining average path length and average trapping time on generalized dual dendrimer
NASA Astrophysics Data System (ADS)
Li, Ling; Guan, Jihong
2015-03-01
Dendrimers have a wide range of important applications in various fields. In some transport or diffusion processes, a dendrimer transforms into its dual structure, named the Husimi cactus. In this paper, we study the structural properties and the trapping problem on a family of generalized dual dendrimers with arbitrary coordination numbers. We first calculate exactly the average path length (APL) of the networks. The APL increases logarithmically with the network size, indicating that the networks exhibit a small-world effect. Then we determine the average trapping time (ATT) of the trapping process in two cases, i.e., with the trap placed on a central node and with the trap uniformly distributed over all nodes of the network. In both cases, we obtain explicit solutions for the ATT and show how they vary with the network size. Besides, we also discuss the influence of the coordination number on trapping efficiency.
Encoding of sound envelope transients in the auditory cortex of juvenile rats and adult rats.
Lu, Qi; Jiang, Cuiping; Zhang, Jiping
2016-02-01
Accurate neural processing of time-varying sound amplitude and spectral information is vital for species-specific communication. During postnatal development, cortical processing of sound frequency undergoes progressive refinement; however, it is not clear whether cortical processing of sound envelope transients also undergoes age-related changes. We determined the dependence of neural response strength and first-spike latency on sound rise-fall time across sound levels in the primary auditory cortex (A1) of juvenile (P20-P30) rats and adult (8-10 weeks) rats. A1 neurons were categorized as "all-pass", "short-pass", or "mixed" ("all-pass" at high sound levels to "short-pass" at lower sound levels) based on the normalized response strength vs. rise-fall time functions across sound levels. The proportions of A1 neurons within each of the three categories in juvenile rats were similar to those in adult rats. In general, with increasing rise-fall time, the average response strength decreased and the average first-spike latency increased in A1 neurons of both groups. At a given sound level and rise-fall time, the average normalized neural response strength did not differ significantly between the two age groups. However, the A1 neurons in juvenile rats showed greater absolute response strength and longer first-spike latency compared with those in adult rats. In addition, at a constant sound level, the average first-spike latency of juvenile A1 neurons was more sensitive to changes in rise-fall time. Our results demonstrate the dependence of the responses of rat A1 neurons on sound rise-fall time, and suggest that the response latency exhibits some age-related changes in the cortical representation of sound envelope rise time. Copyright © 2015 Elsevier Ltd. All rights reserved.
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Schuman, Joel S
2016-01-01
To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Twenty-one eyes of 21 healthy volunteers were scanned with a non-eye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and adding amplitude deviation, with 15 repetitions. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. All virtually averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became nonsignificant after processing. The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in either qualitative or quantitative aspects.
NASA Technical Reports Server (NTRS)
Hailperin, Max
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.
Large-deviation probabilities for correlated Gaussian processes and intermittent dynamical systems
NASA Astrophysics Data System (ADS)
Massah, Mozhdeh; Nicol, Matthew; Kantz, Holger
2018-05-01
In its classical version, the theory of large deviations makes quantitative statements about the probability of outliers when estimating time averages, if time series data are independent and identically distributed. We study large-deviation probabilities (LDPs) for time averages in short- and long-range correlated Gaussian processes and show that long-range correlations lead to subexponential decay of LDPs. A particular deterministic intermittent map can, depending on a control parameter, also generate long-range correlated time series. We illustrate numerically, in agreement with the mathematical literature, that this type of intermittency leads to a power-law decay of LDPs. The power-law decay holds irrespective of whether the correlation time is finite or infinite, and hence irrespective of whether the central limit theorem applies or not.
The spreading time in SIS epidemics on networks
NASA Astrophysics Data System (ADS)
He, Zhidong; Van Mieghem, Piet
2018-03-01
In a Susceptible-Infected-Susceptible (SIS) process, we investigate the spreading time T_m, which is the time when the number of infected nodes in the metastable state is first reached, starting from the outbreak of the epidemic. We observe that the spreading time T_m resembles a lognormal-like distribution, though with different deep tails, both for the Markovian and the non-Markovian infection process, which implies that the spreading time can be very long with a relatively high probability. In addition, we show that a stronger virus, with a higher effective infection rate τ or an earlier timing of the infection attempts, does not always lead to a shorter average spreading time E[T_m]. We numerically demonstrate that the average spreading time E[T_m] in the complete graph and the star graph scales logarithmically as a function of the network size N for a fixed fraction of infected nodes in the metastable state.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Fuke, E-mail: wufuke@mail.hust.edu.cn; Tian, Tianhai, E-mail: tianhai.tian@sci.monash.edu.au; Rawlings, James B., E-mail: james.rawlings@wisc.edu
The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two time scales, which yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions, so the SSA remains computationally expensive. Because the chemical Langevin equations (CLEs) can effectively handle a large number of molecular species and reactions, this paper develops a reduction method based on the CLE using the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in stochastic chemical kinetics the CLE is seen as an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. This demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.
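For orientation, the chemical Langevin equation for the species-count vector X, with stoichiometric vectors ν_j and propensity functions a_j, takes the standard form (textbook version, not specific to this paper):

```latex
dX_t = \sum_j \nu_j\, a_j(X_t)\,dt + \sum_j \nu_j \sqrt{a_j(X_t)}\,dW_t^{(j)},
```

and the averaging step replaces the slow-reaction propensities' dependence on the fast species by means under the quasi-stationary law of the fast subsystem, in the spirit of Khasminskii-Yin.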
Method for detection and correction of errors in speech pitch period estimates
NASA Technical Reports Server (NTRS)
Bhaskar, Udaya (Inventor)
1989-01-01
A method of detecting and correcting received values of a pitch period estimate of a speech signal, for use in a speech coder or the like. An average is calculated of the nonzero values of the received pitch period estimates since the previous reset. If a current pitch period estimate is within a range of 0.75 to 1.25 times the average, it is assumed correct; if not, a correction process is carried out. If correction is required successively for more than a preset number of times, which will most likely occur when the speaker changes, the average is discarded and a new average is calculated.
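The rule described in the abstract translates almost directly into code. The following sketch is a hedged reconstruction: the running average, the 0.75-1.25 acceptance band, and the reset on repeated corrections come from the text above, while the choice of substituting the running average for a rejected estimate and the MAX_BAD threshold are illustrative assumptions.

```python
MAX_BAD = 5  # assumed number of consecutive corrections that triggers a reset

class PitchValidator:
    """Validate incoming pitch period estimates against a running average."""

    def __init__(self):
        self.total = 0.0      # sum of accepted nonzero estimates since reset
        self.count = 0        # number of accepted estimates since reset
        self.bad_streak = 0   # consecutive out-of-range estimates seen

    def process(self, estimate):
        if estimate == 0:                 # unvoiced frame: pass through
            return estimate
        if self.count == 0:               # first estimate after a reset
            self._accept(estimate)
            return estimate
        avg = self.total / self.count
        if 0.75 * avg <= estimate <= 1.25 * avg:
            self._accept(estimate)        # within range: assumed correct
            return estimate
        self.bad_streak += 1
        if self.bad_streak > MAX_BAD:     # likely a speaker change: reset
            self.total, self.count = 0.0, 0
            self._accept(estimate)
            return estimate
        return avg                        # correct the suspect estimate

    def _accept(self, estimate):
        self.total += estimate
        self.count += 1
        self.bad_streak = 0
```

In a coder this would run once per frame, with unvoiced frames coded as a zero pitch period bypassing the check.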
Liu, Weihua; Yang, Yi; Wang, Shuqing; Liu, Yang
2014-01-01
Order insertion often occurs in the scheduling process of a logistics service supply chain (LSSC), disturbing normal time scheduling, especially in the environment of mass customization logistics service. This study analyzes the order similarity coefficient and the order insertion operation process, and then establishes an order insertion scheduling model of LSSC that takes service capacity and time factors into account. The model aims to minimize the average unit volume operation cost of the logistics service integrator and maximize the average satisfaction degree of the functional logistics service providers. To verify the viability and effectiveness of the model, a specific example is numerically analyzed. Some interesting conclusions are obtained. First, as the completion time delay coefficient permitted by customers increases, the possible inserted order volume first increases and then levels off. Second, supply chain performance is best when the volume of the inserted order equals the surplus volume of the normal operation capacity in the mass service process. Third, the larger the normal operation capacity in the mass service process, the bigger the possible inserted order's volume. Moreover, compared with increasing the completion time delay coefficient, improving the normal operation capacity of the mass service process is more useful.
Artifact Noise Removal Techniques on Seismocardiogram Using Two Tri-Axial Accelerometers
Luu, Loc; Dinh, Anh
2018-01-01
The aim of this study is to investigate motion noise removal techniques using a two-accelerometer sensor system and various placements of the sensors during gentle movement and walking. A Wi-Fi based data acquisition system and a Matlab framework were developed to collect and process data while the subjects are in motion. The tests include eight volunteers with no record of heart disease. The walking and running data are analyzed to find the minimal-noise bandwidth of the SCG signal; this bandwidth is used to design filters in the motion noise removal techniques and peak signal detection. There are two main techniques for combining signals from the two sensors to mitigate the motion artifact: analog processing and digital processing. The analog processing comprises analog circuits performing adding or subtracting functions and a bandpass filter to remove artifact noise before entering the data acquisition system. The digital processing processes all the data using combinations of total acceleration and z-axis-only acceleration. The two techniques are tested on three placements of the accelerometer sensors (horizontal, vertical, and diagonal) during gentle motion and walking. In general, total acceleration and z-axis acceleration are the best techniques for gentle motion on all sensor placements, improving average systolic signal-to-noise ratio (SNR) around 2 times and average diastolic SNR around 3 times compared with traditional methods using only one accelerometer. For walking motion, the ADDER and z-axis acceleration are the best techniques on all placements of the sensors on the body, improving average systolic SNR about 7 times and average diastolic SNR about 11 times compared with the one-accelerometer method. Among the sensor placements, horizontal placement performs best across all motions.
Liu, Mei-bing; Chen, Xing-wei; Chen, Ying
2015-07-01
Identification of the critical source areas of non-point source pollution is an important means to control non-point source pollution within a watershed. To further reveal the impact of multiple time scales on the spatial differentiation characteristics of non-point source nitrogen loss, a SWAT model of the Shanmei Reservoir watershed was developed. Based on the simulated total nitrogen (TN) loss intensity of all 38 subbasins, the spatial distribution characteristics of nitrogen loss and the critical source areas were analyzed at three time scales: yearly average, monthly average, and rainstorm flood process. Furthermore, multiple linear correlation analysis was conducted to quantify the contributions of the natural environment and anthropogenic disturbance to nitrogen loss. The results showed significant spatial differences in TN loss in the Shanmei Reservoir watershed at the different time scales, with the spatial differentiation degree of nitrogen loss in the order monthly average > yearly average > rainstorm flood process. TN loss load mainly came from the upland Taoxi subbasin, which was identified as the critical source area. At all time scales, land use type (such as farmland and forest) was the dominant factor affecting the spatial distribution of nitrogen loss, whereas precipitation and runoff affected nitrogen loss only in months without fertilization and during several storm flood events occurring outside fertilization dates. This was mainly due to the significant spatial variation of land use and fertilization, together with the low spatial variability of precipitation and runoff.
Sedlack, Jeffrey D
2010-01-01
Surgeons have been slow to incorporate industrial reliability techniques. Process control methods were applied to surgeon waiting time between cases and to length of stay (LOS) after colon surgery. Waiting times between surgeries were evaluated by auditing the operating room records of a single hospital over a 1-month period. The medical records of 628 patients undergoing colon surgery over a 5-year period were reviewed. The average surgeon wait time between cases was 53 min, and the busiest surgeon spent 29.5 hours in 1 month waiting between surgeries. Process control charting demonstrated poor overall control of the room turnover process. Average LOS after colon resection also demonstrated very poor control. Mean LOS was 10 days. Weibull's conditional analysis revealed a conditional LOS of 9.83 days. Serious process management problems were identified in both analyses. These process issues are expensive and adversely affect the quality of service offered by the institution. Process control mechanisms were suggested or implemented to improve these surgical processes. Industrial reliability and quality management tools can easily and effectively identify process control problems that occur on surgical services.
Barnett, Adrian G; Graves, Nicholas; Clarke, Philip; Herbert, Danielle
2015-01-16
To examine whether streamlining a medical research funding application process saved time for applicants. Cross-sectional surveys before and after the streamlining. The National Health and Medical Research Council (NHMRC) of Australia. Researchers who submitted one or more NHMRC Project Grant applications in 2012 or 2014. Average researcher time spent preparing an application and the total time for all applications in working days. The average time per application increased from 34 working days before streamlining (95% CI 33 to 35) to 38 working days after streamlining (95% CI 37 to 39; mean difference 4 days, bootstrap p value <0.001). The estimated total time spent by all researchers on applications after streamlining was 614 working years, a 67-year increase from before streamlining. Streamlined applications were shorter but took longer to prepare on average. Researchers may be allocating a fixed amount of time to preparing funding applications based on their expected return, or may be increasing their time in response to increased competition. Many potentially productive years of researcher time are still being lost to preparing failed applications.
Defense Applications of Signal Processing
1999-08-27
class of multiscale autoregressive moving average (MARMA) processes. These are generalisations of ARMA models in time series analysis, and they contain... including the two theoretical sinusoidal components. Analysis of the amplitude and frequency time series provided some novel insight into the real... communication channels, underwater acoustic signals, radar systems, economic time series and biomedical signals [7]. The alpha-stable (αS) distribution has...
Speech Signal Processing Research. Appendices 1 thru 9
1975-12-01
is 2400 rpm for a maximum rotational latency of 25 ms and an average of 12.5 ms. The track-to-track access time is 12 ms, the average access time... in Table 1-3. Table 1-3, Capabilities and Limitations: Start-Up Time ~40 seconds; Operating Temperature 0°C (32°F) to +50°C (122°F) ambient; Operating Humidity 10% to 80% with no condensation; Storage Conditions: Temperature 0°C (32°F) to...
Neural pulse frequency modulation of an exponentially correlated Gaussian process
NASA Technical Reports Server (NTRS)
Hutchinson, C. E.; Chon, Y.-T.
1976-01-01
The effect of NPFM (Neural Pulse Frequency Modulation) on a stationary Gaussian input, namely an exponentially correlated Gaussian input, is investigated with special emphasis on the determination of the average number of pulses in unit time, known also as the average frequency of pulse occurrence. For some classes of stationary input processes where the formulation of the appropriate multidimensional Markov diffusion model of the input-plus-NPFM system is possible, the average impulse frequency may be obtained by a generalization of the approach adopted. The results are approximate and numerical, but are in close agreement with Monte Carlo computer simulation results.
Kieran, Maríosa; Cleary, Mary; De Brún, Aoife; Igoe, Aileen
2017-10-01
To improve efficiency, reduce interruptions, and reduce the time taken to complete oral drug rounds. Lean Six Sigma methods were applied to improve drug round efficiency using a pre- and post-intervention design. A 20-bed orthopaedic ward in a large teaching hospital in Ireland. Pharmacy, nursing and quality improvement staff. A multifaceted intervention was designed which included changes in processes related to drug trolley organization and drug supply planning. A communications campaign aimed at reducing interruptions during nurse-led drug rounds was also developed and implemented. Average number of interruptions, average drug round time, and variation in time taken to complete the drug round. At baseline, the oral drug round took an average of 125 min. Following application of Lean Six Sigma methods, the average drug round time decreased by 51 min. The average number of interruptions per drug round fell from 12 at baseline to 11 following the intervention, with a 75% reduction in drug supply interruptions. Lean Six Sigma methodology was successfully employed to reduce interruptions and the time taken to complete the oral drug round.
Submovement control processes in discrete aiming as a function of space-time constraints.
Hsieh, Tsung-Yu; Liu, Yeou-Teh; Newell, Karl M
2017-01-01
There is preliminary evidence that there are several types of submovements in movement aiming that reflect different processes of control and can result from particular task constraints. The purpose of the study was to investigate the effect of movement space and time task criteria on the prevalence of different submovement control characteristics in discrete aiming. Twelve participants completed 3 distance × 5 time conditions, each with 100 trials, in a target-aiming movement task. The kinematic structure of the trajectory determined the prevalence of 5 submovement types (none; pre-peak movement velocity; post-peak movement velocity; undershoot; overshoot). The findings showed that the overall number of submovements increased in the slower space-time conditions and was predominantly characterized by post-peak trajectory submovements rather than discrete overshoot submovements. Overshoot submovements were more frequent in the high average movement velocity and short time duration conditions. We concluded that there are qualitatively different distributional patterns of submovement types in discrete aiming tasks that are organized by the quantitative scaling of the average movement velocity arising from multiple control processes to meet the specific space-time task constraints.
NASA Technical Reports Server (NTRS)
Hailperin, M.
1993-01-01
This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that the authors' techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. The authors' method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive load-balancing methods. Results from a preliminary analysis of another system and from simulation with a synthetic load provide some evidence of more general applicability.
Effect of closed-loop order processing on the time to initial antimicrobial therapy.
Panosh, Nicole; Rew, Richardd; Sharpe, Michelle
2012-08-15
The results of a study comparing the average time to initiation of i.v. antimicrobial therapy with closed- versus open-loop order entry and processing are reported. A retrospective cohort study was performed to compare order-to-administration times for initial doses of i.v. antimicrobials before and after a closed-loop order-processing system including computerized prescriber order entry (CPOE) was implemented at a large medical center. A total of 741 i.v. antimicrobial administrations to adult patients during designated five-month preimplementation and postimplementation study periods were assessed. Drug-use reports generated by the pharmacy database were used to identify order-entry times, and medication administration records were reviewed to determine times of i.v. antimicrobial administration. The mean ± S.D. order-to-administration times before and after the implementation of the CPOE system and closed-loop order processing were 3.18 ± 2.60 and 2.00 ± 1.89 hours, respectively, a reduction of 1.18 hours (p < 0.0001). Closed-loop order processing was associated with significant reductions in the average time to initiation of i.v. therapy in all patient care areas evaluated (cardiology, general medicine, and oncology). The study results suggest that CPOE-based closed-loop order processing can play an important role in achieving compliance with current practice guidelines calling for increased efforts to ensure the prompt initiation of i.v. antimicrobials for severe infections (e.g., sepsis, meningitis). Implementation of a closed-loop order-processing system resulted in a significant decrease in order-to-administration times for i.v. antimicrobial therapy.
Distribution of tunnelling times for quantum electron transport.
Rudge, Samuel L; Kosov, Daniel S
2016-03-28
In electron transport, the tunnelling time is the time taken for an electron to tunnel out of a system after it has tunnelled in. We define the tunnelling time distribution for quantum processes in a dissipative environment and develop a practical approach for calculating it, where the environment is described by the general Markovian master equation. We illustrate the theory by using the rate equation to compute the tunnelling time distribution for electron transport through a molecular junction. The tunnelling time distribution is exponential, which indicates that Markovian quantum tunnelling is a Poissonian statistical process. The tunnelling time distribution is used not only to study the quantum statistics of tunnelling along the average electric current but also to analyse extreme quantum events where an electron jumps against the applied voltage bias. The average tunnelling time shows distinctly different temperature dependence for p- and n-type molecular junctions and therefore provides a sensitive tool to probe the alignment of molecular orbitals relative to the electrode Fermi energy.
Varughese, Anna M; Hagerman, Nancy; Townsend, Mari E
2013-07-01
The anesthesia preoperative screening and evaluation of a patient prior to surgery is a critical element in the safe and effective delivery of anesthesia care. In this era of increased focus on cost containment, many anesthesia practices are looking for ways to maximize productivity while maintaining the quality of the preoperative evaluation process by harnessing and optimizing all available resources. We sought to develop a nurse-practitioner-assisted preoperative anesthesia screening process using quality improvement methods, with the goal of maintaining the quality of the screening process while redirecting anesthesiologists' time to the provision of non-operating-room (OR) anesthesia. The nurse practitioner (NP) time directed to this project (approximately 10 h per week) was gained as a result of an earlier resource utilization improvement project within the Department of Anesthesia. The goal of this improvement project was to increase the proportion of patient anesthesia screens conducted by NPs to 50% within 6 months. After discussion with key stakeholders, a multidisciplinary improvement team identified a set of operational factors (key drivers) believed to be important to the success of the preoperative anesthesia screening process. These included the development of dedicated NP time for daily screening, NP competency and confidence with the screening process, effective mentoring by anesthesiologists, standardization of the screening process, and communication with stakeholders of the process, that is, surgeons. These key drivers focused the development of several interventions: (i) NP education in the preoperative anesthesia screening for consultation process through a series of didactic lectures conducted by anesthesiologists and NPs shadowing an anesthesiologist during the screening process; (ii) anesthesiologist mentoring and assessment of NP screenings using a dual screening process, whereby both the anesthesiologist and the NP conducted the screening independently and the results were compared and discussed; (iii) examination and re-adjustment of NP schedules to provide time for daily screening while preserving other responsibilities; and (iv) standardization through the development of guidelines for the preoperative screening process. Measures recorded included the percentage of patient anesthesia screens conducted by NPs, the percentage of dual screens with MD and NP agreement regarding the screening decision, and the average times taken for the anesthesiologist and NP screening processes. After implementation of these interventions, the percentage of successful NP-assisted anesthesia consultation screenings increased from 0% to 65% over a period of 6 months. The anesthesiologists' time redirected to non-OR anesthesia averaged at least 8 h a week. The percentage of dual screens with agreement on the screening decision was 96% (goal >95%). The overall average time taken for an NP screen was 8.2 min vs 4.5 min for an anesthesiologist screen. The overall average operating room delays and cancellations for cases on the day of surgery remained the same. By applying quality improvement methods, we identified key drivers for the institution of an NP-assisted preoperative screening process and successfully implemented this process while redirecting anesthesiologists' time to the provision of non-OR anesthesia.
This project was instrumental in improving the matching of provider skills with clinical need while maintaining superior outcomes at the lowest possible cost.
Modeling Geodetic Processes with Levy α-Stable Distribution and FARIMA
NASA Astrophysics Data System (ADS)
Montillet, Jean-Philippe; Yu, Kegen
2015-04-01
In recent years the scientific community has used the autoregressive moving average (ARMA) model to describe the noise in global positioning system (GPS) time series (daily solutions). This work first investigates the limitations of the ARMA model, which is widely used in signal processing when the measurement noise is white. Since a typical GPS time series consists of geophysical signals (e.g., seasonal signals) and stochastic processes (e.g., coloured and white noise), the ARMA model may be inappropriate. Therefore, the application of the fractional autoregressive integrated moving average (FARIMA) model is investigated. Simulation results using synthetic time series as well as real GPS time series from a few selected stations around Australia show that the FARIMA model fits the time series better than other models when the coloured noise is larger than the white noise. The second part of this work focuses on fitting the GPS time series with the family of Levy α-stable distributions. Using this distribution, a hypothesis test is developed to effectively eliminate coarse outliers from GPS time series, achieving better performance than the rule of thumb of n standard deviations (with n chosen empirically).
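The fractional differencing that distinguishes FARIMA from ARMA is easy to state concretely. In the sketch below (illustrative, not the paper's code), the operator (1-B)^d is applied with the standard binomial-expansion weights w_0 = 1, w_k = w_{k-1}(k-1-d)/k; a FARIMA fit would then model the fractionally differenced series with an ordinary ARMA model. The value d = 0.3 and the synthetic series are arbitrary choices.

```python
# Truncated fractional differencing, the "FI" part of FARIMA(p, d, q).
import numpy as np

def frac_diff(x, d, n_weights=200):
    """Apply (1 - B)^d to series x, truncating the weight sequence."""
    w = np.ones(n_weights)
    for k in range(1, n_weights):
        w[k] = w[k - 1] * (k - 1 - d) / k   # binomial-expansion recurrence
    y = np.convolve(x, w, mode="full")[: len(x)]
    return y[n_weights:]                    # drop samples without full history

rng = np.random.default_rng(1)
# synthetic series: slowly wandering component plus white noise
x = rng.standard_normal(2000).cumsum() * 0.01 + rng.standard_normal(2000)
y = frac_diff(x, d=0.3)
print(x.var(), y.var())
```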
NASA Astrophysics Data System (ADS)
Rehman, Zeeshan Ur; Koo, Bon Heun
2016-08-01
In this study, protective ceramic coatings were prepared on AZ91D magnesium alloy by plasma electrolytic oxidation (PEO) to improve the corrosion and mechanical properties of AZ91D magnesium alloy. The process was conducted in silicate-fluoride-based electrolyte solution. It was found that the average micro-hardness of the coating was significantly increased with an increase in the PEO processing time. The highest value of the average micro-hardness ~1271.2 HV was recorded for 60-min processing time. The phase analysis of the coatings indicated that they were mainly composed of Mg2SiO4, MgO, and MgF2 phases. The surface and cross-sectional study demonstrated that porosity was largely reduced with processing time, together with the change in pore geometry from irregular to spherical shape. The results of the polarization test in 3.5 wt.% NaCl solution revealed that aggressive corrosion took place for 5-min sample; however, the corrosion current was noticeably decreased to 0.43 × 10-7 A/cm2 for the 60-min-coated sample. The superior nobility and hardness for long processing time are suggested to be due to the dense and highly thick coating, coupled with the presence of MgF2 phase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, Mohammad; Salloum, Maher; Lee, Jina
2017-07-10
KARMA4 is a C++ library for autoregressive moving average (ARMA) modeling and forecasting of time-series data while incorporating both process and observation error. KARMA4 is designed for fitting and forecasting of time-series data for predictive purposes.
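KARMA4 itself is a C++ library, so as a rough stand-in for the fit-then-forecast workflow it supports, here is a Python sketch using statsmodels' ARIMA on a synthetic ARMA(2,1) series. The model order and coefficients are demonstration choices, and a closer analogue of KARMA4's process-plus-observation-error modeling would use a state-space formulation rather than the plain ARMA fit shown here.

```python
# Fit an ARMA(2,1) model to synthetic data and produce point forecasts.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
n = 500
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(2, n):
    # x_t = 0.6 x_{t-1} - 0.2 x_{t-2} + e_t + 0.4 e_{t-1}
    x[t] = 0.6 * x[t - 1] - 0.2 * x[t - 2] + e[t] + 0.4 * e[t - 1]

result = ARIMA(x, order=(2, 0, 1)).fit()   # (p, d, q); d = 0 gives ARMA
print(result.params)                       # estimated AR, MA and noise terms
print(result.forecast(steps=10))           # 10-step-ahead forecasts
```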
Random Process Simulation for stochastic fatigue analysis. Ph.D. Thesis - Rice Univ., Houston, Tex.
NASA Technical Reports Server (NTRS)
Larsen, Curtis E.
1988-01-01
A simulation technique is described which directly synthesizes the extrema of a random process and is more efficient than the Gaussian simulation method. Such a technique is particularly useful in stochastic fatigue analysis because the required stress range moment, E[R^m], is a function only of the extrema of the random stress process. The family of autoregressive moving average (ARMA) models is reviewed and an autoregressive model is presented for modeling the extrema of any random process which has a unimodal power spectral density (psd). The proposed autoregressive technique is found to produce rainflow stress range moments which compare favorably with those computed by the Gaussian technique, and to run, on average, 11.7 times faster than the Gaussian technique. The autoregressive technique is also adapted for processes having bimodal psd's. The adaptation involves using two autoregressive processes to simulate the extrema due to each mode and the superposition of these two extrema sequences. The proposed autoregressive superposition technique is 9 to 13 times faster than the Gaussian technique and produces comparable values of E[R^m] for bimodal psd's having the frequency of one mode at least 2.5 times that of the other mode.
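The core idea, simulating only the extrema and computing range moments from them, can be illustrated compactly. The sketch below is a simplified stand-in for the thesis' method: it simulates a narrow-band AR(2) process, extracts its extrema as slope sign changes, and estimates E[R^m] from adjacent-extrema ranges. Genuine rainflow counting pairs extrema more carefully, and the AR coefficients here are arbitrary narrow-band choices.

```python
# Estimate a stress range moment E[R^m] from the extrema of an AR(2) process.
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    # complex AR(2) roots (modulus ~0.95) give a narrow-band, unimodal psd
    x[t] = 1.6 * x[t - 1] - 0.9 * x[t - 2] + e[t]

d = np.diff(x)
extrema_idx = np.where(d[:-1] * d[1:] < 0)[0] + 1   # slope sign changes
extrema = x[extrema_idx]

m = 3
ranges = np.abs(np.diff(extrema))                   # adjacent-extrema ranges
print("E[R^m] estimate:", np.mean(ranges ** m))
```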
NASA Astrophysics Data System (ADS)
Rajabi, F.; Battiato, I.
2016-12-01
Long-term predictions of the impact of anthropogenic stressors on the environment are essential to reduce the risks associated with processes such as CO2 sequestration and nuclear waste storage in the subsurface. At the same time, transient forcing factors (e.g., time-varying injection or pumping rates) with evolving heterogeneity on time scales spanning days to years can influence transport phenomena at the pore scale. A comprehensive spatio-temporal prediction of reactive transport in porous media under time-dependent forcing factors over thousands of years requires the formulation of continuum-scale models for time averages. Yet, as with every macroscopic model, time-averaged models can lose predictive power and accuracy when certain conditions are violated. This happens whenever temporal and spatial scale separation is lacking, which makes the continuum-scale equation a poor description of the processes at the pore scale. In this work, we consider mass transport of a dissolved species undergoing a heterogeneous reaction and subject to time-varying boundary conditions in a periodic porous medium. By means of the homogenization method and asymptotic expansion, we derive a macro-time continuum-scale equation as well as expressions for its effective properties. Our analysis demonstrates that the dynamics at the macro-scale is strongly influenced by the interplay between the signal frequency at the boundary and transport processes at the pore level. In addition, we provide the conditions under which the space-time averaged equations accurately describe pore-scale processes. To validate our theoretical predictions, we consider a thin fracture with reacting walls and transient boundary conditions at the inlet. Our analysis shows good agreement between numerical simulations and theoretical predictions. Furthermore, our numerical experiments show that mixing patterns of the contaminant plumes at the pore level strongly depend on the signal frequency.
Decision-aided ICI mitigation with time-domain average approximation in CO-OFDM
NASA Astrophysics Data System (ADS)
Ren, Hongliang; Cai, Jiaxing; Ye, Xin; Lu, Jin; Cao, Quanjun; Guo, Shuqin; Xue, Lin-lin; Qin, Yali; Hu, Weisheng
2015-07-01
We introduce and investigate the feasibility of a novel iterative blind phase noise inter-carrier interference (ICI) mitigation scheme for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. The ICI mitigation is performed through the combination of frequency-domain symbol-decision-aided estimation and a time-average approximation of the ICI phase noise. An additional initial decision stage with a suitable threshold is introduced to suppress decision-error symbols. The proposed scheme proves effective in removing ICI in a simulated CO-OFDM system with 16-QAM modulation. At slightly higher computational complexity, it outperforms the time-domain average blind ICI (Avg-BL-ICI) algorithm at relatively wide laser linewidths and high OSNR.
Optimizing radiologist e-prescribing of CT oral contrast agent using a protocoling portal.
Wasser, Elliot J; Galante, Nicholas J; Andriole, Katherine P; Farkas, Cameron; Khorasani, Ramin
2013-12-01
The purpose of this study is to quantify the time expenditure associated with radiologist ordering of CT oral contrast media when using an integrated protocoling portal and to determine radiologists' perceptions of the ordering process. This prospective study was performed at a large academic tertiary care facility. Detailed timing information for CT inpatient oral contrast orders placed via the computerized physician order entry (CPOE) system was gathered over a 14-day period. Analyses evaluated the amount of physician time required for each component of the ordering process. Radiologists' perceptions of the ordering process were assessed by survey. Descriptive statistics and chi-square analysis were performed. A total of 96 oral contrast agent orders were placed by 13 radiologists during the study period. The average time necessary to create a protocol for each case was 40.4 seconds (average range by subject, 20.0-130.0 seconds; SD, 37.1 seconds), and the average total time to create and sign each contrast agent order was 27.2 seconds (range, 10.0-50.0 seconds; SD, 22.4 seconds). Overall, 52.5% (21/40) of survey respondents indicated that radiologist entry of oral contrast agent orders improved patient safety. A minority of respondents (15% [6/40]) indicated that contrast agent order entry was either very or extremely disruptive to workflow. Radiologist e-prescribing of CT oral contrast agents using CPOE can be embedded in a protocol workflow. Integration of health IT tools can help to optimize user acceptance and adoption.
Erosion, sedimentation, and cumulative effects in the Northern Rocky Mountains
Walter F. Megahan; John G. King
2004-01-01
Erosion and sedimentation are natural geomorphic processes characterized by large temporal and spatial variability. Recent radionuclide studies suggest that rare episodic events, such as large wildfires, produce massive sediment yields over time scales of thousands of years, thereby causing long-term average sediment production to exceed present-day average erosion...
Trujillo, Logan T.; Jankowitsch, Jessica M.; Langlois, Judith H.
2014-01-01
Multiple studies show that people prefer attractive over unattractive faces. But what is an attractive face and why is it preferred? Averageness theory claims that faces are perceived as attractive when their facial configuration approximates the mathematical average facial configuration of the population. Conversely, faces that deviate from this average configuration are perceived as unattractive. The theory predicts that both attractive and mathematically averaged faces should be processed more fluently than unattractive faces, whereas the averaged faces should be processed marginally more fluently than the attractive faces. We compared neurocognitive and behavioral responses to attractive, unattractive, and averaged human faces to test these predictions. We recorded event-related potentials (ERPs) and reaction times (RTs) from 48 adults while they discriminated between human and chimpanzee faces. Participants categorized averaged and high attractive faces as "human" faster than low attractive faces. The posterior N170 (150–225 ms) face-evoked ERP component was smaller in response to high attractive and averaged faces versus low attractive faces. Single-trial EEG analysis indicated that this reduced ERP response arose from the engagement of fewer neural resources and not from a change in the temporal consistency of how those resources were engaged. These findings provide novel evidence that faces are perceived as attractive when they approximate a facial configuration close to the population average and suggest that processing fluency underlies preferences for attractive faces.
Hashtjin, Adel Mirmajidi; Abbasi, Soleiman
2015-05-01
The aim of the present study was to investigate the influence of emulsifying conditions on some physical and rheological properties of orange peel essential oil (OPEO) in water nanoemulsions. Using response surface methodology, the influence of ultrasonication conditions, including sonication amplitude (70-100%), sonication time (90-150 s) and process temperature (5-45 °C), on the mean droplet diameter (Z-average value), polydispersity index (PDI), and viscosity of the OPEO nanoemulsions was evaluated. In addition, the flow behavior and stability of selected nanoemulsions were evaluated during storage (up to 3 months) at different temperatures (5, 25 and 45 °C). Based on the optimization results, the optimum conditions for producing OPEO nanoemulsions (Z-average value 18.16 nm) were determined as 94% sonication amplitude, 138 s sonication time and 37 °C process temperature. Moreover, analysis of variance (ANOVA) showed high coefficient-of-determination values (R^2 > 0.95) for the response surface models of the energy input and Z-average. The flow behavior of the produced nanoemulsions was Newtonian, and the effects of time and storage temperature as well as their interactions on the Z-average value were highly significant (P < 0.0001).
Radial basis function network learns ceramic processing and predicts related strength and density
NASA Technical Reports Server (NTRS)
Cios, Krzysztof J.; Baaklini, George Y.; Vary, Alex; Tjia, Robert E.
1993-01-01
Radial basis function (RBF) neural networks were trained using the data from 273 Si3N4 modulus of rupture (MOR) bars which were tested at room temperature and 135 MOR bars which were tested at 1370 C. Milling time, sintering time, and sintering gas pressure were the processing parameters used as the input features. Flexural strength and density were the outputs by which the RBF networks were assessed. The 'nodes-at-data-points' method was used to set the hidden layer centers and output layer training used the gradient descent method. The RBF network predicted strength with an average error of less than 12 percent and density with an average error of less than 2 percent. Further, the RBF network demonstrated a potential for optimizing and accelerating the development and processing of ceramic materials.
NASA Astrophysics Data System (ADS)
Ha, Jong M.; Youn, Byeng D.; Oh, Hyunseok; Han, Bongtae; Jung, Yoongho; Park, Jungho
2016-03-01
We propose autocorrelation-based time synchronous averaging (ATSA) to cope with the challenges associated with the current practice of time synchronous averaging (TSA) for planet gears in planetary gearboxes of wind turbine (WT). An autocorrelation function that represents physical interactions between the ring, sun, and planet gears in the gearbox is utilized to define the optimal shape and range of the window function for TSA using actual kinetic responses. The proposed ATSA offers two distinctive features: (1) data-efficient TSA processing and (2) prevention of signal distortion during the TSA process. It is thus expected that an order analysis with the ATSA signals significantly improves the efficiency and accuracy in fault diagnostics of planet gears in planetary gearboxes. Two case studies are presented to demonstrate the effectiveness of the proposed method: an analytical signal from a simulation and a signal measured from a 2 kW WT testbed. It can be concluded from the results that the proposed method outperforms conventional TSA methods in condition monitoring of the planetary gearbox when the amount of available stationary data is limited.
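For orientation, conventional TSA, the baseline that ATSA refines, is itself simple to sketch: slice the vibration signal into once-per-revolution segments using tachometer pulse indices, resample each revolution onto a common angular grid, and average, so shaft-synchronous components reinforce while asynchronous noise averages out. The fixed samples-per-revolution resampling below is a common implementation choice, not a detail taken from the paper.

```python
# Conventional time synchronous averaging (TSA) of a vibration signal.
import numpy as np

def tsa(signal, rev_indices, samples_per_rev=1024):
    """Average the signal over revolutions delimited by rev_indices."""
    segments = []
    for start, stop in zip(rev_indices[:-1], rev_indices[1:]):
        seg = signal[start:stop]
        grid = np.linspace(0, len(seg) - 1, samples_per_rev)
        segments.append(np.interp(grid, np.arange(len(seg)), seg))
    return np.mean(segments, axis=0)

# toy demonstration: a 3x-shaft-order tone buried in heavy noise
rng = np.random.default_rng(4)
fs, f_shaft, n_revs = 10_000, 20.0, 200
n = int(fs * n_revs / f_shaft)
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 3 * f_shaft * t) + 2.0 * rng.standard_normal(n)
rev_indices = (np.arange(n_revs + 1) * fs / f_shaft).astype(int)
avg = tsa(x, rev_indices)
print(avg.shape, x.std(), avg.std())   # noise drops roughly as 1/sqrt(n_revs)
```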
NASA Astrophysics Data System (ADS)
Wang, Limin; Shen, Yiteng; Yu, Jingxian; Li, Ping; Zhang, Ridong; Gao, Furong
2018-01-01
In order to cope with system disturbances in multi-phase batch processes with different dimensions, a hybrid robust control scheme of iterative learning control combined with feedback control is proposed in this paper. First, with a hybrid iterative learning control law designed by introducing the state error, the tracking error and the extended information, the multi-phase batch process is converted into a two-dimensional Fornasini-Marchesini (2D-FM) switched system with different dimensions. Second, a switching signal is designed using the average dwell-time method integrated with the related switching conditions to give sufficient conditions ensuring stable running for the system. Finally, the minimum running time of the subsystems and the control law gains are calculated by solving the linear matrix inequalities. Meanwhile, a compound 2D controller with robust performance is obtained, which includes a robust extended feedback control for ensuring the steady-state tracking error to converge rapidly. The application on an injection molding process displays the effectiveness and superiority of the proposed strategy.
Fast EEG spike detection via eigenvalue analysis and clustering of spatial amplitude distribution
NASA Astrophysics Data System (ADS)
Fukami, Tadanori; Shimada, Takamasa; Ishikawa, Bunnoshin
2018-06-01
Objective. In the current study, we tested a proposed method for fast spike detection in electroencephalography (EEG). Approach. We performed eigenvalue analysis in two-dimensional space spanned by gradients calculated from two neighboring samples to detect high-amplitude negative peaks. We extracted the spike candidates by imposing restrictions on parameters regarding spike shape and eigenvalues reflecting detection characteristics of individual medical doctors. We subsequently performed clustering, classifying detected peaks by considering the amplitude distribution at 19 scalp electrodes. Clusters with a small number of candidates were excluded. We then defined a score for eliminating spike candidates for which the pattern of detected electrodes differed from the overall pattern in a cluster. Spikes were detected by setting the score threshold. Main results. Based on visual inspection by a psychiatrist experienced in EEG, we evaluated the proposed method using two statistical measures of precision and recall with respect to detection performance. We found that precision and recall exhibited a trade-off relationship. The average recall value was 0.708 in eight subjects with the score threshold that maximized the F-measure, with 58.6 ± 36.2 spikes per subject. Under this condition, the average precision was 0.390, corresponding to a false positive rate 2.09 times higher than the true positive rate. Analysis of the required processing time revealed that, using a general-purpose computer, our method could be used to perform spike detection in 12.1% of the recording time. The process of narrowing down spike candidates based on shape occupied most of the processing time. Significance. Although the average recall value was comparable with that of other studies, the proposed method significantly shortened the processing time.
Real time microcontroller implementation of an adaptive myoelectric filter.
Bagwell, P J; Chappell, P H
1995-03-01
This paper describes a real-time digital adaptive filter for processing myoelectric signals. The filter time constant is automatically selected by the adaptation algorithm, giving a significant improvement over linear filters for estimating muscle force and controlling a prosthetic device. Interference from mains sources often causes problems for myoelectric processing, so 50 Hz and all harmonic frequencies are reduced by an averaging filter and a differential process. This makes practical electrode placement and contact less critical and time-consuming. An economical real-time implementation is essential for a prosthetic controller, and this is achieved using an Intel 80C196KC microcontroller.
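Two pieces of the described processing chain can be sketched directly: a one-period moving average, whose frequency response has nulls at the mains frequency and all of its harmonics, and a one-pole envelope smoother whose time constant switches between a fast and a slow value. The switching rule and all constants below are illustrative assumptions, not the paper's 80C196KC algorithm; the comb filter also assumes the sampling rate is an integer multiple of the mains frequency.

```python
import numpy as np

def remove_mains(emg, fs, mains=50.0):
    """One-period moving average: nulls at 50 Hz and every harmonic."""
    n = int(round(fs / mains))
    return np.convolve(emg, np.ones(n) / n, mode="same")

def adaptive_envelope(emg, fs, tau_slow=0.25, tau_fast=0.03, thresh=0.5):
    """Smooth the rectified EMG; shorten the time constant on fast changes."""
    rect = np.abs(emg)
    y = np.zeros_like(rect)
    for t in range(1, len(rect)):
        # large relative departure from the estimate -> fast time constant
        fast = abs(rect[t] - y[t - 1]) > thresh * (y[t - 1] + 1e-9)
        tau = tau_fast if fast else tau_slow
        alpha = 1.0 - np.exp(-1.0 / (tau * fs))
        y[t] = y[t - 1] + alpha * (rect[t] - y[t - 1])
    return y
```

The adaptive time constant gives fast force tracking at contraction onsets while keeping ripple low during steady contractions, which is the improvement over a fixed linear filter claimed above.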
Low-complexity image processing for real-time detection of neonatal clonic seizures.
Ntonfo, Guy Mathurin Kouamou; Ferrari, Gianluigi; Raheli, Riccardo; Pisani, Francesco
2012-05-01
In this paper, we consider a novel low-complexity real-time image-processing-based approach to the detection of neonatal clonic seizures. Our approach is based on the extraction, from a video of a newborn, of an average luminance signal representative of the body movements. Since clonic seizures are characterized by periodic movements of parts of the body (e.g., the limbs), by evaluating the periodicity of the extracted average luminance signal it is possible to detect the presence of a clonic seizure. The periodicity is investigated, through a hybrid autocorrelation-Yin estimation technique, on a per-window basis, where a time window is defined as a sequence of consecutive video frames. While processing is first carried out on a single window basis, we extend our approach to interlaced windows. The performance of the proposed detection algorithm is investigated, in terms of sensitivity and specificity, through receiver operating characteristic curves, considering video recordings of newborns affected by neonatal seizures.
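The per-window periodicity check at the heart of this approach is straightforward to sketch. The code below computes an average-luminance signal from video frames and scores each window by the peak of its normalized autocorrelation within a physiologically plausible lag band; the paper's hybrid autocorrelation-Yin estimator is more refined, and the lag band and decision threshold here are illustrative assumptions.

```python
import numpy as np

def frame_luminance(frames):
    """frames: array of shape (n_frames, H, W); mean luminance per frame."""
    return frames.reshape(frames.shape[0], -1).mean(axis=1)

def periodicity_score(window, min_lag, max_lag):
    """Peak of the normalized autocorrelation within [min_lag, max_lag)."""
    x = window - window.mean()
    denom = float(np.dot(x, x))
    if denom == 0.0:
        return 0.0
    ac = np.correlate(x, x, mode="full")[x.size - 1:] / denom
    return float(ac[min_lag:max_lag].max())

# e.g. at 25 frames/s, clonic jerks of roughly 1-5 Hz correspond to lags of
# 5-25 frames; a score above an assumed cutoff (say 0.5) flags the window
```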
Real-time aerosol black carbon (BC) data, presented at time resolutions on the order of seconds to minutes, are desirable in field and source characterization studies measuring rapidly varying concentrations of BC. The Optimized Noise-reduction Averaging (ONA) algorithm has been developed...
Modified Exponential Weighted Moving Average (EWMA) Control Chart on Autocorrelation Data
NASA Astrophysics Data System (ADS)
Herdiani, Erna Tri; Fandrilla, Geysa; Sunusi, Nurtiti
2018-03-01
In general, observations in statistical process control are assumed to be mutually independent. However, this assumption is often violated in practice. Consequently, statistical process controls were developed for interrelated processes, including Shewhart, cumulative sum (CUSUM), and exponentially weighted moving average (EWMA) control charts for autocorrelated data. It has been noted that such charts are not suitable if the control limits derived for independent variables are used. For this reason, it is necessary to apply a time series model in building the control chart. A classical control chart for independent variables is usually applied to the residual process; this is permitted provided the residuals are independent. In 1978, a Shewhart modification for the autoregressive process was introduced, using the distance between the sample mean and the target value compared with the standard deviation of the autocorrelated process. In this paper we examine the mean of the EWMA for an autocorrelated process derived from Montgomery and Patel. Performance was investigated by examining the Average Run Length (ARL) based on the Markov chain method.
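The EWMA statistic and its control limits are compact enough to state in code. The sketch below is a generic EWMA chart with the exact time-varying limits; for autocorrelated data, following the approach discussed above, sigma should come from a fitted time-series model (e.g., the residual standard deviation of an AR fit) rather than from a naive iid estimate. The values lambda = 0.2 and L = 3 are common textbook choices, not values from this paper.

```python
import numpy as np

def ewma_chart(x, mu0, sigma, lam=0.2, L=3.0):
    """Return the EWMA series and indices of out-of-control points."""
    z = np.empty(len(x))
    z_prev = mu0
    out = []
    for t, xt in enumerate(x):
        z_prev = lam * xt + (1.0 - lam) * z_prev   # z_t = lam*x_t + (1-lam)*z_{t-1}
        z[t] = z_prev
        # exact variance of z_t; converges to sigma^2 * lam / (2 - lam)
        var_t = sigma**2 * (lam / (2.0 - lam)) * (1.0 - (1.0 - lam) ** (2 * (t + 1)))
        if abs(z_prev - mu0) > L * np.sqrt(var_t):
            out.append(t)
    return z, out
```

Applied to the residuals of a fitted autoregressive model, the same routine gives the residual-based chart mentioned above, with independence of the residuals as the operative assumption.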
Epstein, Nancy E
2015-01-01
Typically, fibrin sealants (FSs) and fibrin glues (FGs) are used to strengthen dural repairs during spinal surgery. In 2014, Epstein demonstrated that one FS/FG, Tisseel (Baxter International Inc., Westlake Village, CA, USA), equalized the average times to drain removal and length of stay (LOS) for patients with versus without excess bleeding (the latter not receiving Tisseel) undergoing multilevel laminectomies with 1-2 level noninstrumented fusions (LamF) [6]. Here Tisseel was utilized to promote hemostasis in two populations: 39 patients undergoing laminectomies averaging 4.4 levels with noninstrumented fusions averaging 1.3 levels (LamF), and 48 patients undergoing laminectomies alone averaging 4.0 levels (Lam). We compared the average operative time, estimated blood loss (EBL), postoperative drainage, LOS, and transfusion requirements for the LamF versus Lam groups. All were greater for LamF than for Lam patients: operative time (4.1 vs 3.0 h), average EBL (192.3 vs 147.9 cc), drainage (day 1: 199.6 vs 167.4 cc; day 2: 172.9 vs 63.9 cc), average LOS (4.6 vs 2.5 days), and transfusion requirements (11 LamF patients, 18 units RBC versus 2 Lam patients, 3 units RBC). Utilizing Tisseel to facilitate hemostasis in LamF versus Lam still resulted in greater operative times, EBL, postoperative drainage, LOS, and transfusion requirements for patients undergoing the noninstrumented fusions. Although Tisseel decreases back bleeding within the spinal canal, it does not reduce blood loss from the decorticated transverse processes in LamF.
Lugina, K. M. (Department of Geography, St. Petersburg State University, St. Petersburg, Russia); Groisman, P. Ya. (National Climatic Data Center, Asheville, North Carolina, USA); Vinnikov, K. Ya. (Department of Atmospheric Sciences, University of Maryland, College Park, Maryland, USA); Koknaeva, V. V. (State Hydrological Institute, St. Petersburg, Russia); Speranskaya, N. A. (State Hydrological Institute, St. Petersburg, Russia)
2006-01-01
The mean monthly and annual values of surface air temperature compiled by Lugina et al. have been taken mainly from the World Weather Records, Monthly Climatic Data for the World, and Meteorological Data for Individual Years over the Northern Hemisphere Excluding the USSR. These published records were supplemented with information from different national publications. In the original archive, after removal of station records believed to be nonhomogeneous or biased, 301 and 265 stations were used to determine the mean temperature for the Northern and Southern hemispheres, respectively. The new version of the station temperature archive (used for evaluation of the zonally-averaged temperatures) was created in 1995. The change to the archive was required because data from some stations became unavailable for analyses in the 1990s. During this process, special care was taken to secure homogeneity of zonally averaged time series. When a station (or a group of stations) stopped reporting, a "new" station (or group of stations) was selected in the same region, and its data for the past 50 years were collected and added to the archive. The processing (area-averaging) was organized in such a way that each time series from a new station spans the reference period (1951-1975) and the years thereafter. It was determined that the addition of the new stations had essentially no effect on the zonally-averaged values for the pre-1990 period.
Aspects of Motor Performance and Preacademic Learning.
ERIC Educational Resources Information Center
Feder, Katya; Kerr, Robert
1996-01-01
The Miller Assessment for Preschoolers (MAP) and a number/counting test were given to 50 4- and 5-year-olds. Low performance on counting was related to significantly slower average response time, overshoot movement time, and reaction time, indicating perceptual-motor difficulty. Low MAP scores indicated difficulty processing visual spatial…
MO-F-CAMPUS-T-02: An Electronic Whiteboard Platform to Manage Treatment Planning Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiCostanzo, D; Woollard, J; Gupta, N
2015-06-15
Purpose: In an effort to improve patient safety and streamline the radiotherapy treatment planning (TP) process, a software-based whiteboard has been developed and put in use in our facility. Methods: The electronic whiteboard, developed using an SQL database (DB) and a PHP/JavaScript web interface, is published via the department intranet with login credentials. The DB stores data for each TP process such as patient information, plan type, simulation/start dates, physician, dosimetrist, QA, and the current status in the planning process. Users interact with the DB per plan and perform status updates in real time as the planning process progresses. All user interactions with the DB are recorded with timestamps so as to calculate statistical information for TP process management, such as contouring times, planning and review times, and dosimetry, physics, and therapist QA times. External beam and brachytherapy plans are categorized according to complexity (e.g., IMRT, 3D, HDR, LDR) and treatment types and applicators. Each plan category is assigned specific timelines for each planning process. When a plan approaches or passes the predetermined timeline, users are alerted via color-coded graphical cues. When certain process items are not completed in time, predetermined actions are triggered, such as a delay in the treatment start date. Results: Our institution has been using the electronic whiteboard for two years. Implementation of predetermined actions based on the statistical information collected by the whiteboard improved our TP process. For example, the average time for normal tissue contouring decreased from 0.73±1.37 to 0.24±0.33 days, and the average time for target volume contouring decreased from 3.2±2.84 to 2.37±2.54 days. This increase in efficiency allows more time for quality assurance processes, improving patient safety. Conclusion: The electronic whiteboard has been an invaluable tool for streamlining our TP processes. It facilitates timely and accurate communication between all parties involved in the TP process, increasing patient safety.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Vents, Storage Vessels, Transfer Operations, and Wastewater § 63.117 Process vent provisions—reporting... incinerators, boilers or process heaters specified in table 3 of this subpart, and averaged over the same time... content determinations, flow rate measurements, and exit velocity determinations made during the...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Vents, Storage Vessels, Transfer Operations, and Wastewater § 63.117 Process vent provisions—reporting... incinerators, boilers or process heaters specified in table 3 of this subpart, and averaged over the same time... content determinations, flow rate measurements, and exit velocity determinations made during the...
Stochastic Adaptive Estimation and Control.
1994-10-26
Marcus, "Language Stability and Stabilizability of Discrete Event Dynamical Systems ," SIAM Journal on Control and Optimization, 31, September 1993...in the hierarchical control of flexible manufacturing systems ; in this problem, the model involves a hybrid process in continuous time whose state is...of the average cost control problem for discrete- time Markov processes. Our exposition covers from finite to Borel state and action spaces and
High-Speed, capacitance-based tip clearance sensing
NASA Astrophysics Data System (ADS)
Haase, W. C.; Haase, Z. S.
This paper discusses recent advances in tip clearance measurement systems for turbine engines using capacitive probes. Real-time measurements of individual blade pulses are generated using wideband signal processing, providing 3 dB bandwidths of typically 5 MHz. Subsequent mixed-signal processing circuitry provides real-time measurements of maximum, minimum, and average clearance with latencies of one blade-to-blade time interval. Both guarded and unguarded probe configurations are possible with the system. Calibration techniques provide high-accuracy measurements.
Billings, Kathleen R; Hajduk, John; Rose, Allison; De Oliveira, Gildasio S; Suresh, Suresh S; Thompson, Dana M
2016-10-01
To determine the feasibility of providing streamlined same-day evaluation and surgical management of children with recurrent otitis media or chronic serous otitis media who meet criteria for tympanostomy tube (TT) placement. Retrospective matched case series. Tertiary care children's hospital. A comparison group (age, sex, insurance product) was utilized to determine if the same-day process decreased facility time and surgical time for the care episode. A parent satisfaction survey was administered. Thirty children, with a median age of 16 months (range, 12-22 months), participated in the same-day surgery process for TT. Twenty-one patients (70.0%) were male, and these patients were matched to a comparison group (similar age, sex, and insurance product) having non-same-day (routine) TT placement. The same-day patients spent significantly less time in clinic for the preoperative physician visit (average, 15 minutes) when compared with the non-same-day patients (average, 51.5 minutes; P < .001). The operative experience for the same-day patients was similar to the non-same-day patients (average, 145 vs 137 minutes, respectively; P = .35), but the overall experience was significantly shorter for the same-day patients (average, 151 vs 196 minutes for comparisons; P < .001). All parents surveyed in the same-day group were satisfied with the efficiency of the experience. The same-day surgery process for management of children who meet the criteria for TT placement is a model of improved efficiency of care for children who suffer from otitis media.
Dynamic speckle interferometry of microscopic processes in solid state and thin biological objects
NASA Astrophysics Data System (ADS)
Vladimirov, A. P.
2015-08-01
A modernized theory of dynamic speckle interferometry is presented. It is shown that the time-averaged radiation intensity contains parameters characterizing changes in the wave phase, and an expression for the time autocorrelation function of the radiation intensity is derived. As the averaging time tends to zero, the formulas reduce to the earlier expressions. The results of experiments on high-cycle material fatigue and cell metabolism analysis conducted using the time-averaging technique are discussed, and good reproducibility of the results is demonstrated. The upgraded technique allows analyzing the accumulation of fatigue damage, detecting the moment of crack initiation, and determining the crack growth velocity under uninterrupted cyclic loading. It is also demonstrated that, in experiments with a cell monolayer, the technique allows studying metabolic change both in an individual cell and in a group of cells.
NASA Astrophysics Data System (ADS)
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
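As a concrete, generic point of comparison (not the paper's mean-weighted likelihood ratio procedure), a multi-hypothesis sequential test can be sketched as follows: accumulate each hypothesis' log-likelihood and stop as soon as one hypothesis beats the average likelihood of its competitors by a threshold tied to the allowed error probability. Function names and the stopping threshold are illustrative.

```python
import numpy as np

def sequential_test(sample_stream, log_liks, log_threshold):
    """log_liks: one function x -> log f_i(x) per hypothesis.
    Returns (accepted index or None, number of observations used)."""
    scores = np.zeros(len(log_liks))
    n = 0
    for x in sample_stream:
        n += 1
        scores += [f(x) for f in log_liks]
        for i in range(len(scores)):
            rest = np.delete(scores, i)
            m = rest.max()                          # log-mean-exp, stably
            log_mix = m + np.log(np.mean(np.exp(rest - m)))
            if scores[i] - log_mix > log_threshold:
                return i, n
    return None, n

# example: three Gaussian mean hypotheses; data drawn under the third
means = (-1.0, 0.0, 1.0)
log_liks = [lambda x, m=m: -0.5 * (x - m) ** 2 for m in means]
rng = np.random.default_rng(5)
print(sequential_test(rng.normal(1.0, 1.0, 1000), log_liks, np.log(99)))
```

The average number of samples such a rule needs before stopping is exactly the kind of conditional average observation length that the procedure above is designed to minimize.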
Chambers, Chester G; Dada, Maqbool; Elnahal, Shereef; Terezakis, Stephanie; DeWeese, Theodore; Herman, Joseph; Williams, Kayode A
2016-10-18
We examine interactions among 3 factors that affect patient waits and use of overtime in outpatient clinics: clinic congestion, patient punctuality and physician processing rates. We hypothesise that the first 2 factors affect physician processing rates, and that this adaptive physician behaviour serves to reduce waiting times and the use of overtime. 2 urban academic clinics and an affiliated suburban clinic in metropolitan Baltimore, Maryland, USA. Appointment times, patient arrival times, start of service and physician processing times were collected for 105 visits at low-volume suburban clinic 1, 264 visits at medium-volume academic clinic 2 and 22,266 visits at high-volume academic clinic 3 over 3 distinct spans of time. Data from the first clinic were previously used to document an intervention to influence patient punctuality, which included a policy that tardy patients were rescheduled. To test the first hypothesis, clinicians' processing times were gathered conditioned on whether the patient or clinician was tardy. Probability distributions of patient unpunctuality were developed preintervention and postintervention for the clinic in which the intervention took place, and these data were used to seed a discrete-event simulation. Average physician processing times differ conditioned on tardiness at clinic 1 with p=0.03, at clinic 2 with p=10⁻⁵ and at clinic 3 with p=10⁻⁷. Within the simulation, the adaptive physician behaviour degrades system performance by increasing waiting times, the probability of overtime and the average amount of overtime used. Each of these changes is significant at the p<0.01 level. Processing times differed for patients in different states in all 3 settings studied. When present, this can be verified using data commonly collected. Ignoring these behaviours leads to faulty conclusions about the efficacy of efforts to improve clinic flow. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
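A discrete-event simulation of the kind seeded with punctuality distributions can be sketched as follows; the appointment grid, lognormal service times, and the 25% slowdown applied when the physician is running ahead of schedule are illustrative assumptions (one plausible reading of the state-dependent processing rates reported), not the authors' fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_clinic(adaptive, n_patients=20, slot=15.0, n_days=2000):
    """One physician per session; returns (mean wait, mean overtime) in minutes."""
    waits, overtime = [], []
    for _ in range(n_days):
        appt = np.arange(n_patients) * slot
        arrival = appt + rng.normal(-5.0, 10.0, n_patients)  # patient unpunctuality
        free, day_wait = 0.0, 0.0
        for i in np.argsort(arrival):                 # first come, first served
            ready = max(arrival[i], appt[i])          # seen no earlier than the slot
            idle = free < ready                       # physician had been waiting
            service = rng.lognormal(np.log(13.0), 0.3)
            if adaptive and idle:
                service *= 1.25                       # slows down when uncongested
            start = max(ready, free)
            day_wait += start - ready
            free = start + service
        waits.append(day_wait / n_patients)
        overtime.append(max(0.0, free - n_patients * slot))
    return np.mean(waits), np.mean(overtime)

for adaptive in (False, True):
    w, o = simulate_clinic(adaptive)
    print(f"adaptive={adaptive}: mean wait {w:5.1f} min, mean overtime {o:5.1f} min")
```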
Reduced set averaging of face identity in children and adolescents with autism.
Rhodes, Gillian; Neumann, Markus F; Ewing, Louise; Palermo, Romina
2015-01-01
Individuals with autism have difficulty abstracting and updating average representations from their diet of faces. These averages function as perceptual norms for coding faces, and poorly calibrated norms may contribute to face recognition difficulties in autism. Another kind of average, known as an ensemble representation, can be abstracted from briefly glimpsed sets of faces. Here we show for the first time that children and adolescents with autism also have difficulty abstracting ensemble representations from sets of faces. On each trial, participants saw a study set of four identities and then indicated whether a test face was present. The test face could be a set average or a set identity, from either the study set or another set. Recognition of set averages was reduced in participants with autism, relative to age- and ability-matched typically developing participants. This difference, which actually represents more accurate responding, indicates weaker set averaging and thus weaker ensemble representations of face identity in autism. Our finding adds to the growing evidence for atypical abstraction of average face representations from experience in autism. Weak ensemble representations may have negative consequences for face processing in autism, given the importance of ensemble representations in dealing with processing capacity limitations.
Noise reduction in single time frame optical DNA maps
Müller, Vilhelm; Westerlund, Fredrik
2017-01-01
In optical DNA mapping technologies, sequence-specific intensity variations (DNA barcodes) are produced along stretched and stained DNA molecules. These "fingerprints" of the underlying DNA sequence have a resolution of the order of one kilobase pair, and the stretching of the DNA molecules is performed by surface adsorption or in nanochannel setups. A post-processing challenge for nanochannel-based methods, due to local and global random movement of the DNA molecule during imaging, is how to align different time frames in order to produce reproducible time-averaged DNA barcodes. The current solutions to this challenge are computationally rather slow. With high-throughput applications in mind, we here introduce a parameter-free method for filtering a single-time-frame noisy barcode (snapshot optical map), measured in a fraction of a second. By using only a single time-frame barcode we circumvent the need for post-processing alignment. We demonstrate that our method successfully provides filtered barcodes which are less noisy and more similar to time-averaged barcodes. The method applies a low-pass filter to a single noisy barcode, using the width of the point spread function of the system as a unique, and known, filtering parameter. We find that after applying our method, the Pearson correlation coefficient (a real number in the range from -1 to 1) between the single-time-frame barcode and the time average of the aligned kymograph increases significantly, roughly by 0.2 on average. By comparing to a database of more than 3000 theoretical plasmid barcodes, we show that the capability to identify plasmids is improved by filtering single-time-frame barcodes compared to their unfiltered analogues. Since both the snapshot experiment and the computational time of our method take less than a second, this study opens up the possibility of high-throughput optical DNA mapping with improved reproducibility. PMID:28640821
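A minimal sketch of the filtering step, assuming the point spread function is well approximated by a Gaussian of known pixel width; a synthetic barcode stands in for real kymograph data, and the Pearson comparison mirrors the one the authors report.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(2)

# Synthetic "true" barcode: smooth intensity profile along the DNA molecule.
x = np.linspace(0, 50, 1000)
true_barcode = np.sin(0.4 * x) + 0.5 * np.sin(1.1 * x + 1.0)

psf_sigma_px = 6.0                                    # PSF width in pixels (assumed)
noisy = true_barcode + rng.normal(0.0, 0.8, x.size)   # single noisy time frame
filtered = gaussian_filter1d(noisy, sigma=psf_sigma_px)

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

print(f"r(noisy,    true) = {pearson(noisy, true_barcode):.2f}")
print(f"r(filtered, true) = {pearson(filtered, true_barcode):.2f}")
# The filtered single-frame barcode correlates substantially better with the
# underlying profile, in the same spirit as the ~0.2 gain reported in the paper.
```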
NASA Astrophysics Data System (ADS)
Munz, Matthias; Oswald, Sascha E.; Schmidt, Christian
2017-04-01
Flow patterns and seasonal as well as diurnal temperature variations control ecological and biogeochemical conditions in hyporheic sediments. In particular, hyporheic temperatures have a great impact on many microbial processes. In this study we used 3-D coupled water flow and heat transport simulations with the HydroGeoSphere code, in combination with high-frequency observations of hydraulic heads and temperatures, to quantify reach-scale water and heat fluxes across the river-groundwater interface and the hyporheic temperature dynamics of a lowland gravel-bed river. The magnitude and dynamics of the simulated temperatures matched the observations, with an average mean absolute error of 0.7 °C and an average Nash-Sutcliffe efficiency of 0.87. Our results highlight that the average temperature in the hyporheic zone follows the temperature in the river, which is characterized by distinct seasonal and daily temperature cycles. Individual hyporheic flow path temperatures vary substantially around the average hyporheic temperature. Hyporheic flow path temperature was found to depend strongly on the flow path residence time and the temperature gradient between river and groundwater; in winter, for example, the average temperature of long flow paths is potentially higher than that of short flow paths. Based on the simulation results we derived a general empirical relationship estimating the influence of hyporheic flow path residence time on hyporheic flow path temperature. Furthermore, we used an empirical relationship between effective temperature and respiration rate to estimate the influence of hyporheic flow path residence time and temperature on hyporheic oxygen consumption. This study highlights the relation between complex hyporheic temperature patterns and hyporheic residence times, and their implications for temperature-sensitive biogeochemical processes.
Badrick, Tony; Graham, Peter
2018-03-28
Internal Quality Control and External Quality Assurance are separate but related processes that have developed independently in laboratory medicine over many years. They have different sample frequencies, statistical interpretations and immediacy. Both processes have evolved absorbing new understandings of the concept of laboratory error, sample material matrix and assay capability. However, we do not believe at the coalface that either process has led to much improvement in patient outcomes recently. It is the increasing reliability and automation of analytical platforms along with improved stability of reagents that has reduced systematic and random error, which in turn has minimised the risk of running less frequent IQC. We suggest that it is time to rethink the role of both these processes and unite them into a single approach using an Average of Normals model supported by more frequent External Quality Assurance samples. This new paradigm may lead to less confusion for laboratory staff and quicker responses to and identification of out of control situations.
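A hedged sketch of one possible Average of Normals scheme: a moving mean of patient results truncated to the normal range, compared against control limits derived from a known-stable period. The window size, reference range, and simulated assay shift are illustrative assumptions, not the authors' design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Patient results: stable analyte, then a systematic shift (assay drift).
stable = rng.normal(5.0, 0.6, 600)
shifted = rng.normal(5.4, 0.6, 200)
results = np.concatenate([stable, shifted])

lo, hi = 3.5, 6.5                        # reference range: truncate outliers
normals = np.clip(results, lo, hi)

window = 40                              # moving-average block size (assumed)
aon = np.convolve(normals, np.ones(window) / window, mode="valid")

# Control limits from the known-stable period of the moving average.
baseline = aon[:400]
ucl = baseline.mean() + 3 * baseline.std()
lcl = baseline.mean() - 3 * baseline.std()

# First crossing of either limit (argmax returns 0 if no alarm fires).
alarm = np.argmax((aon > ucl) | (aon < lcl))
print(f"limits = [{lcl:.2f}, {ucl:.2f}], first alarm near result #{alarm + window}")
```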
Revealing nonergodic dynamics in living cells from a single particle trajectory
NASA Astrophysics Data System (ADS)
Lanoiselée, Yann; Grebenkov, Denis S.
2016-05-01
We propose the improved ergodicity and mixing estimators to identify nonergodic dynamics from a single particle trajectory. The estimators are based on the time-averaged characteristic function of the increments and can thus capture additional information on the process as compared to the conventional time-averaged mean-square displacement. The estimators are first investigated and validated for several models of anomalous diffusion, such as ergodic fractional Brownian motion and diffusion on percolating clusters, and nonergodic continuous-time random walks and scaled Brownian motion. The estimators are then applied to two sets of earlier published trajectories of mRNA molecules inside live Escherichia coli cells and of Kv2.1 potassium channels in the plasma membrane. These statistical tests did not reveal nonergodic features in the former set, while some trajectories of the latter set could be classified as nonergodic. Time averages along such trajectories are thus not representative and may be strongly misleading. Since the estimators do not rely on ensemble averages, the nonergodic features can be revealed separately for each trajectory, providing a more flexible and reliable analysis of single-particle tracking experiments in microbiology.
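One basic ingredient of such estimators, the time-averaged characteristic function of increments, can be computed along a single trajectory as sketched below for Brownian motion versus a continuous-time random walk with heavy-tailed waiting times; this is a simplified illustration, not the authors' full ergodicity and mixing estimators.

```python
import numpy as np

rng = np.random.default_rng(4)

def ta_char(x, lag, k=1.0):
    """Modulus of the time-averaged characteristic function of increments."""
    inc = x[lag:] - x[:-lag]
    return np.abs(np.mean(np.exp(1j * k * inc)))

n, lag = 200_000, 100

# Ergodic reference: Brownian motion.
bm = np.cumsum(rng.normal(0.0, 0.1, n))

# Nonergodic example: CTRW with Pareto(alpha=0.7) waiting times between jumps.
n_jumps = 2000
waits = (1.0 + rng.pareto(0.7, n_jumps)).cumsum()
pos = np.cumsum(rng.normal(0.0, 1.0, n_jumps))
idx = np.minimum(np.searchsorted(waits, np.arange(n)), n_jumps - 1)
ctrw = pos[idx]

print("Brownian:", round(ta_char(bm, lag), 3))
print("CTRW:    ", round(ta_char(ctrw, lag), 3))
# Repeating this over independent trajectories, the Brownian value is
# reproducible while the CTRW value scatters widely from run to run:
# time averages along single nonergodic trajectories are not representative.
```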
Can a senior house officer's time be used more effectively?
Mitchell, J; Hayhurst, C; Robinson, S M
2004-09-01
To determine the amount of time senior house officers (SHOs) spent performing tasks that could be delegated to a technician or administrative assistant, and therefore to quantify the expected benefit that could be obtained by employing such physicians' assistants (PAs). SHOs working in the emergency department were observed for one week by pre-clinical students who had been trained to code and time each task performed by SHOs. Activity was grouped into four categories (clinical, technical, administrative, and other). The activities in the technical and administrative categories were those we believed could be performed by a PA. The SHOs worked 430 hours in total, of which only 25 hours were not coded due to lack of an observer. Of the 405 hours observed, 86.2% of the time was accounted for by the various codes. The process of taking a history and examining patients accounted for an average of 22% of coded time. Writing the patient's notes accounted for an average of 20% of coded time. Discussion with relatives and patients accounted for 4.7% of coded time, and performing procedures accounted for 5.2% of coded time. On average across all shifts, 15% of coded time was spent doing either technical or administrative tasks. In this department an average of 15% of coded SHO working time was spent performing administrative and technical tasks, rising to 17% of coded time during a night shift. This is equivalent to an average time of 78 minutes per 10-hour shift per SHO. Most tasks included in these categories could be performed by PAs, thus potentially decreasing patient waiting times, improving risk management, allowing doctors to spend more time with their patients, and possibly improving doctors' training.
NASA Astrophysics Data System (ADS)
Wu, Zikai; Hou, Baoyu; Zhang, Hongjuan; Jin, Feng
2014-04-01
Deterministic network models have been attractive media for discussing dynamical processes' dependence on network structural features. On the other hand, the heterogeneity of weights affects dynamical processes taking place on networks. In this paper, we present a family of weighted expanded Koch networks based on Koch networks. They originate from an r-polygon, and in each subsequent evolutionary step every node of the current generation produces m r-polygons containing that node, with weighted edges scaled by a factor w. We derive closed-form expressions for the average weighted shortest path length (AWSP). In large networks, the AWSP stays bounded as the network order grows (0 < w < 1). Then, we focus on a special random walk and trapping issue on the networks. In more detail, we calculate exactly the average receiving time (ART). The ART exhibits a sub-linear dependence on network order (0 < w < 1), which implies that nontrivial weighted expanded Koch networks are more efficient than un-weighted expanded Koch networks in receiving information. Besides, the efficiency of receiving information at hub nodes also depends on the parameters m and r. These findings may pave the way for controlling information transportation on general weighted networks.
Automated Chromium Plating Line for Gun Barrels
1979-09-01
…consistent pretreatments and bath dwell times. Some of the advantages of automated processing include increased productivity (average of 20%) due to … when automated processing procedures are used. The current method of applying chromium electrodeposits to gun tubes is a manual, batch operation … currently practiced with rotary swaged gun tubes would substantially reduce the difficulties in automated processing.
NASA Technical Reports Server (NTRS)
Lyubashevskiy, G. S.
1973-01-01
Fourier processing of automatic signals converts direct-current voltage into numerical form through bandpass filtration in time-pulse multiplying devices. It is shown that the ratio of interference energy to useful-signal energy is inversely proportional to the square of the product of the width-modulation depth and the averaging time constant applied to the cross-multiplied signals.
NASA Astrophysics Data System (ADS)
Verma, Gaurav; Chawla, Sanjeev; Nagarajan, Rajakumar; Iqbal, Zohaib; Albert Thomas, M.; Poptani, Harish
2017-04-01
Two-dimensional localized correlated spectroscopy (2D L-COSY) offers greater spectral dispersion than conventional one-dimensional (1D) MRS techniques, yet long acquisition times and limited post-processing support have slowed its clinical adoption. Improving acquisition efficiency and developing versatile post-processing techniques can bolster the clinical viability of 2D MRS. The purpose of this study was to implement a non-uniformly weighted sampling (NUWS) scheme for faster acquisition of 2D-MRS. A NUWS 2D L-COSY sequence was developed for 7T whole-body MRI. A phantom containing metabolites commonly observed in the brain at physiological concentrations was scanned ten times with both the NUWS scheme of 12:48 duration and a 17:04 constant eight-average sequence using a 32-channel head coil. 2D L-COSY spectra were also acquired from the occipital lobe of four healthy volunteers using both the proposed NUWS and the conventional uniformly-averaged L-COSY sequence. The NUWS 2D L-COSY sequence facilitated 25% shorter acquisition time while maintaining comparable SNR in humans (+0.3%) and phantom studies (+6.0%) compared to uniform averaging. NUWS schemes successfully demonstrated improved efficiency of L-COSY, by facilitating a reduction in scan time without affecting signal quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dooley, James H; Lanning, David N
A process of comminution of wood chips (C) having a grain direction to produce a mixture of wood particles (P), wherein the wood chips are characterized by an average length dimension (L_C) as measured substantially parallel to the grain, an average width dimension (W_C) as measured normal to L_C and aligned cross grain, and an average height dimension (H_C) as measured normal to W_C and L_C, and wherein the comminution process comprises the step of feeding the wood chips in a direction of travel substantially randomly to the grain direction one or more times through a counter-rotating pair of intermeshing arrays of cutting discs (D) arrayed axially perpendicular to the direction of wood chip travel.
Structural Equation Modeling of Multivariate Time Series
ERIC Educational Resources Information Center
du Toit, Stephen H. C.; Browne, Michael W.
2007-01-01
The covariance structure of a vector autoregressive process with moving average residuals (VARMA) is derived. It differs from other available expressions for the covariance function of a stationary VARMA process and is compatible with current structural equation methodology. Structural equation modeling programs, such as LISREL, may therefore be…
Studies in astronomical time series analysis: Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1979-01-01
Random process models phrased in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is relatively stable numerically. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
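The AR-to-MA pipeline is easy to sketch (the original used a FORTRAN implementation): estimate AR coefficients from sample autocovariances via the Yule-Walker equations, then expand the fitted AR filter into its moving-average impulse response for interpretation as a pulse shape. The AR(2) order and synthetic series are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data from a known AR(2) process.
a_true = np.array([0.75, -0.2])
x = np.zeros(20_000)
for t in range(2, x.size):
    x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] + rng.normal()

def yule_walker(x, p):
    """Estimate AR(p) coefficients from sample autocovariances."""
    x = x - x.mean()
    r = np.array([np.dot(x[: x.size - k], x[k:]) / x.size for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1 : p + 1])

a_hat = yule_walker(x, 2)

def ar_to_ma(a, n_terms=10):
    """Impulse response of the AR filter = equivalent MA(inf) coefficients."""
    psi = np.zeros(n_terms)
    psi[0] = 1.0
    for j in range(1, n_terms):
        psi[j] = sum(a[i] * psi[j - 1 - i] for i in range(min(len(a), j)))
    return psi

print("AR estimate:", np.round(a_hat, 3))
print("MA pulse shape:", np.round(ar_to_ma(a_hat), 3))
```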
NASA Astrophysics Data System (ADS)
Cui, Yang; Luo, Wang; Fan, Qiang; Peng, Qiwei; Cai, Yiting; Yao, Yiyang; Xu, Changfu
2018-01-01
This paper adopts a low-power ARM HiSilicon mobile processing platform and an OV4689 camera, combined with a new skeleton-extraction algorithm based on the distance transform and an improved Hough algorithm, for real-time reading of multiple meters. The design and implementation of the device are described. Experimental results show that the average measurement error was 0.005 MPa and the average reading time was 5 s. The device showed good stability and high accuracy, meeting the needs of practical application.
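A hedged sketch of the dial-reading step using OpenCV: detect the needle with a probabilistic Hough transform and map its angle linearly onto the scale range. The gauge geometry, scale limits, and helper names are hypothetical, and this simplified pipeline omits the paper's distance-transform skeleton extraction.

```python
import cv2
import numpy as np

def read_gauge(img, center, angle_min=-45.0, angle_max=225.0,
               value_min=0.0, value_max=1.6):
    """Estimate a dial reading (e.g. MPa) from the needle angle (all assumed)."""
    edges = cv2.Canny(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    if lines is None:
        return None
    cx, cy = center
    # Pick the detected segment with an endpoint closest to the dial center.
    def dist_to_center(l):
        x1, y1, x2, y2 = l[0]
        return min(np.hypot(x1 - cx, y1 - cy), np.hypot(x2 - cx, y2 - cy))
    x1, y1, x2, y2 = min(lines, key=dist_to_center)[0]
    # Needle tip = endpoint farther from the center.
    tip = (x2, y2) if np.hypot(x2 - cx, y2 - cy) > np.hypot(x1 - cx, y1 - cy) \
        else (x1, y1)
    angle = np.degrees(np.arctan2(cy - tip[1], tip[0] - cx))  # image y grows down
    frac = (angle_max - angle) / (angle_max - angle_min)      # assumed clockwise scale
    return value_min + np.clip(frac, 0.0, 1.0) * (value_max - value_min)

# Usage (hypothetical file and center): read_gauge(cv2.imread("gauge.jpg"), (320, 240))
```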
Langevin equation with fluctuating diffusivity: A two-state model
NASA Astrophysics Data System (ADS)
Miyaguchi, Tomoshige; Akimoto, Takuma; Yamamoto, Eiji
2016-07-01
Recently, anomalous subdiffusion, aging, and scatter of the diffusion coefficient have been reported in many single-particle-tracking experiments, though the origins of these behaviors are still elusive. Here, as a model to describe such phenomena, we investigate a Langevin equation with diffusivity fluctuating between a fast and a slow state. Namely, the diffusivity follows a dichotomous stochastic process. We assume that the sojourn time distributions of these two states are given by power laws. It is shown that, for a nonequilibrium ensemble, the ensemble-averaged mean-square displacement (MSD) shows transient subdiffusion. In contrast, the time-averaged MSD shows normal diffusion, but an effective diffusion coefficient transiently shows aging behavior. The propagator is non-Gaussian for short time and converges to a Gaussian distribution in a long-time limit; this convergence to Gaussian is extremely slow for some parameter values. For equilibrium ensembles, both ensemble-averaged and time-averaged MSDs show only normal diffusion and thus we cannot detect any traces of the fluctuating diffusivity with these MSDs. Therefore, as an alternative approach to characterizing the fluctuating diffusivity, the relative standard deviation (RSD) of the time-averaged MSD is utilized and it is shown that the RSD exhibits slow relaxation as a signature of the long-time correlation in the fluctuating diffusivity. Furthermore, it is shown that the RSD is related to a non-Gaussian parameter of the propagator. To obtain these theoretical results, we develop a two-state renewal theory as an analytical tool.
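A minimal simulation of the two-state model: the diffusivity flips between fast and slow values with power-law sojourn times, and the time-averaged MSD is computed per trajectory; the exponent and diffusivities below are illustrative, and the paper's renewal-theory analysis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

def two_state_traj(n, d_fast=1.0, d_slow=0.01, alpha=0.7):
    """Langevin trajectory whose diffusivity flips between two states
    with Pareto(alpha) sojourn times (heavy-tailed for alpha < 1)."""
    D = np.empty(n)
    t, state = 0, 0
    while t < n:
        dur = int(np.ceil(rng.pareto(alpha) + 1.0))
        D[t : t + dur] = d_fast if state == 0 else d_slow
        t += dur
        state ^= 1
    return np.cumsum(np.sqrt(2.0 * D[:n]) * rng.normal(0.0, 1.0, n))

def ta_msd(x, lags):
    """Time-averaged MSD of a single trajectory at the given lags."""
    return np.array([np.mean((x[l:] - x[:-l]) ** 2) for l in lags])

lags = np.array([1, 2, 5, 10, 20, 50, 100])
curves = np.array([ta_msd(two_state_traj(100_000), lags) for _ in range(20)])

# Each time-averaged MSD grows roughly linearly (normal diffusion), but its
# amplitude scatters between trajectories; the relative standard deviation
# of the TA-MSD quantifies this signature of the fluctuating diffusivity.
rsd = curves.std(axis=0) / curves.mean(axis=0)
print("RSD of TA-MSD at each lag:", np.round(rsd, 2))
```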
Strange kinetics of bulk-mediated diffusion on lipid bilayers
Campagnola, Grace; Nepal, Kanti; Peersen, Olve B.
2016-01-01
Diffusion at solid-liquid interfaces is crucial in many technological and biophysical processes. Although its behavior seems deceivingly simple, recent studies showing passive superdiffusive transport suggest diffusion on surfaces may hide rich complexities. In particular, bulk-mediated diffusion occurs when molecules are transiently released from the surface to perform three-dimensional excursions into the liquid bulk. This phenomenon bears the dichotomy that a molecule always returns to the surface but the mean jump length is infinite. Such behavior is associated with a breakdown of the central limit theorem and weak ergodicity breaking. Here, we use single-particle tracking to study the statistics of bulk-mediated diffusion on a supported lipid bilayer. We find that the time-averaged mean square displacement (MSD) of individual trajectories, the archetypal measure in diffusion processes, does not converge to the ensemble MSD but remains a random variable, even in the long-observation-time limit. The distribution of time averages is shown to agree with a Lévy flight model. Our results also unravel intriguing anomalies in the statistics of displacements. The time-averaged MSD is shown to depend on experimental time, and investigations of fractional moments show a scaling ⟨|r(t)|^q⟩ ∼ t^{qv(q)} with non-linear exponents, i.e. v(q) ≠ const. This type of behavior is termed strong anomalous diffusion and is rare among experimental observations. PMID:27095275
The impact of a lean rounding process in a pediatric intensive care unit.
Vats, Atul; Goin, Kristin H; Villarreal, Monica C; Yilmaz, Tuba; Fortenberry, James D; Keskinocak, Pinar
2012-02-01
Poor workflow associated with physician rounding can produce inefficiencies that decrease time for essential activities, delay clinical decisions, and reduce staff and patient satisfaction. Workflow and provider resources were not optimized when a pediatric intensive care unit increased by 22,000 square feet (to 33,000) and by nine beds (to 30). Lean methods (focusing on essential processes) and scenario analysis were used to develop and implement a patient-centric standardized rounding process, which we hypothesize would lead to improved rounding efficiency, decrease required physician resources, improve satisfaction, and enhance throughput. Human factors techniques and statistical tools were used to collect and analyze observational data for 11 rounding events before and 12 rounding events after process redesign. Actions included: 1) recording rounding events, times, and patient interactions and classifying them as essential, nonessential, or nonvalue added; 2) comparing rounding duration and time per patient to determine the impact on efficiency; 3) analyzing discharge orders for timeliness; 4) conducting staff surveys to assess improvements in communication and care coordination; and 5) analyzing customer satisfaction data to evaluate impact on patient experience. Thirty-bed pediatric intensive care unit in a children's hospital with academic affiliation. Eight attending pediatric intensivists and their physician rounding teams. Eight attending physician-led teams were observed for 11 rounding events before and 12 rounding events after implementation of a standardized lean rounding process focusing on essential processes. Total rounding time decreased significantly (157 ± 35 mins before vs. 121 ± 20 mins after), through a reduction in time spent on nonessential (53 ± 30 vs. 9 ± 6 mins) activities. The previous process required three attending physicians for an average of 157 mins (7.55 attending physician man-hours), while the new process required two attending physicians for an average of 121 mins (4.03 attending physician man-hours). Cumulative distribution of completed patient rounds by hour of day showed an improvement from 40% to 80% of patients rounded by 9:30 AM. Discharge data showed pediatric intensive care unit patients were discharged an average of 58.05 mins sooner (p < .05). Staff surveys showed a significant increase in satisfaction with the new process (including increased efficiency, improved physician identification, and clearer understanding of process). Customer satisfaction scores showed improvement after implementing the new process. Implementation of a lean-focused, patient-centric rounding structure stressing essential processes was associated with increased timeliness and efficiency of rounds, improved staff and customer satisfaction, improved throughput, and reduced attending physician man-hours.
Aakre, Kenneth T; Valley, Timothy B; O'Connor, Michael K
2010-03-01
Lean Six Sigma process improvement methodologies have been used in manufacturing for some time. However, Lean Six Sigma process improvement methodologies also are applicable to radiology as a way to identify opportunities for improvement in patient care delivery settings. A multidisciplinary team of physicians and staff conducted a 100-day quality improvement project with the guidance of a quality advisor. By using the framework of DMAIC (define, measure, analyze, improve, and control), time studies were performed for all aspects of patient and technologist involvement. From these studies, value stream maps for the current state and for the future were developed, and tests of change were implemented. Comprehensive value stream maps showed that before implementation of process changes, an average time of 20.95 minutes was required for completion of a bone densitometry study. Two process changes (ie, tests of change) were undertaken. First, the location for completion of a patient assessment form was moved from inside the imaging room to the waiting area, enabling patients to complete the form while waiting for the technologist. Second, the patient was instructed to sit in a waiting area immediately outside the imaging rooms, rather than in the main reception area, which is far removed from the imaging area. Realignment of these process steps, with reduced technologist travel distances, resulted in a 3-minute average decrease in the patient cycle time. This represented a 15% reduction in the initial patient cycle time with no change in staff or costs. Radiology process improvement projects can yield positive results despite small incremental changes.
NASA Astrophysics Data System (ADS)
Iannacone, J.; Berti, M.; Allievi, J.; Del Conte, S.; Corsini, A.
2013-12-01
Spaceborne InSAR has proven very valuable for landslide detection. In particular, extremely slow landslides (Cruden and Varnes, 1996) can now be clearly identified, thanks to the millimetric precision reached by recent multi-interferometric algorithms. The typical approach in radar interpretation for landslide mapping is based on the average annual velocity of the deformation, calculated over the entire time series. The Hotspot and Cluster Analysis (Lu et al., 2012) and the PSI-based matrix approach (Cigna et al., 2013) are examples of landslide mapping techniques based on average annual velocities. However, slope movements can be affected by non-linear deformation trends (i.e. reactivation of dormant landslides, deceleration due to natural or man-made slope stabilization, seasonal activity, etc.). Therefore, analyzing deformation time series is crucial in order to fully characterize slope dynamics. While this is relatively simple to carry out manually for small datasets, time series analysis over regional-scale datasets requires automated classification procedures. Berti et al. (2013) developed an automatic procedure for the analysis of InSAR time series based on a sequence of statistical tests. The analysis classifies each time series into one of six distinctive target trends (0=uncorrelated; 1=linear; 2=quadratic; 3=bilinear; 4=discontinuous without constant velocity; 5=discontinuous with change in velocity) which are likely to represent different slope processes. The analysis also provides a series of descriptive parameters which can be used to characterize the temporal changes of ground motion. All the classification algorithms were integrated into a Graphical User Interface called PSTime. We investigated an area of about 2000 km² in the Northern Apennines of Italy using the SqueeSAR™ algorithm (Ferretti et al., 2011). Two Radarsat-1 data stacks, comprising 112 scenes in descending orbit and 124 scenes in ascending orbit, were processed. The time coverage extends from April 2003 to November 2012, with an average temporal frequency of 1 scene/month. Radar interpretation was carried out by considering average annual velocities as well as the acceleration/deceleration trends evidenced by PSTime. Altogether, from ascending and descending geometries respectively, this approach allowed the detection of 115 and 112 potential landslides on the basis of average displacement rate, and of 77 and 79 landslides on the basis of acceleration trends. In conclusion, time series analysis proved very valuable for landslide mapping. In particular, it highlighted areas with marked acceleration during a specific period while still exhibiting a low average annual velocity over the entire analysis period. On the other hand, even in areas with high average annual velocity, time series analysis was of primary importance for characterizing the slope dynamics in terms of acceleration events.
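The spirit of such trend classification can be sketched by fitting competing models to each displacement series and selecting among them with an information criterion; the reduced model set (linear, quadratic, bilinear) and the BIC rule below are simplifications of the sequence of statistical tests implemented in PSTime.

```python
import numpy as np

def bic(y, y_fit, k):
    n = y.size
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

def classify_trend(t, y):
    """Label a displacement series as linear, quadratic, or bilinear by BIC."""
    fits = {}
    for name, deg, k in (("linear", 1, 2), ("quadratic", 2, 3)):
        c = np.polyfit(t, y, deg)
        fits[name] = bic(y, np.polyval(c, t), k)
    # Bilinear: two line segments joined at the best interior breakpoint.
    best = np.inf
    for i in range(5, t.size - 5):
        y_fit = np.concatenate([
            np.polyval(np.polyfit(t[:i], y[:i], 1), t[:i]),
            np.polyval(np.polyfit(t[i:], y[i:], 1), t[i:]),
        ])
        best = min(best, bic(y, y_fit, 5))   # 4 line params + 1 breakpoint
    fits["bilinear"] = best
    return min(fits, key=fits.get)

rng = np.random.default_rng(7)
t = np.arange(120, dtype=float)              # ~monthly scenes, as in 2003-2012
slow = -0.2 * t + rng.normal(0, 2, t.size)   # steady creep
accel = np.where(t < 80, -0.1 * t, -0.1 * 80 - 1.5 * (t - 80)) \
        + rng.normal(0, 2, t.size)           # late acceleration
print("slow series  ->", classify_trend(t, slow))
print("accel series ->", classify_trend(t, accel))
```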
NASA Astrophysics Data System (ADS)
Beyhaghi, Pooriya
2016-11-01
This work considers the problem of the efficient minimization of the infinite time average of a stationary ergodic process in the space of a handful of independent parameters which affect it. Problems of this class, derived from physical or numerical experiments which are sometimes expensive to perform, are ubiquitous in turbulence research. In such problems, any given function evaluation, determined with finite sampling, is associated with a quantifiable amount of uncertainty, which may be reduced via additional sampling. This work proposes the first algorithm of this type. Our algorithm remarkably reduces the overall cost of the optimization process for problems of this class. Further, under certain well-defined conditions, rigorous proof of convergence is established to the global minimum of the problem considered.
NASA Astrophysics Data System (ADS)
Curt, Petersen F.; Bodnar, Michael R.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.
2009-02-01
While imaging over long distances is critical to a number of security and defense applications, such as homeland security and launch tracking, current optical systems are limited in resolving power. This is largely a result of the turbulent atmosphere in the path between the region under observation and the imaging system, which can severely degrade captured imagery. There are a variety of post-processing techniques capable of recovering this obscured image information; however, the computational complexity of such approaches has prohibited real-time deployment and hampers the usability of these technologies in many scenarios. To overcome this limitation, we have designed and manufactured an embedded image processing system based on commodity hardware which can compensate for these atmospheric disturbances in real-time. Our system consists of a reformulation of the average bispectrum speckle method coupled with a high-end FPGA processing board, and employs modular I/O capable of interfacing with most common digital and analog video transport methods (composite, component, VGA, DVI, SDI, HD-SDI, etc.). By leveraging the custom, reconfigurable nature of the FPGA, we have achieved performance twenty times faster than a modern desktop PC, in a form-factor that is compact, low-power, and field-deployable.
Koziel, Jacek A; Nguyen, Lam T; Glanville, Thomas D; Ahn, Heekwon; Frana, Timothy S; Hans van Leeuwen, J
2017-10-01
A passive sampling method, using retracted solid-phase microextraction (SPME) - gas chromatography-mass spectrometry and time-weighted averaging, was developed and validated for tracking marker volatile organic compounds (VOCs) emitted during aerobic digestion of biohazardous animal tissue. The retracted SPME configuration protects the fragile fiber from buffeting by the process gas stream, and it requires less equipment and is potentially more biosecure than conventional active sampling methods. VOC concentrations predicted via a model based on Fick's first law of diffusion were within 6.6-12.3% of experimentally controlled values after accounting for VOC adsorption to the SPME fiber housing. Method detection limits for five marker VOCs ranged from 0.70 to 8.44ppbv and were statistically equivalent (p>0.05) to those for active sorbent-tube-based sampling. The sampling time of 30min and fiber retraction of 5mm were found to be optimal for the tissue digestion process. Copyright © 2017 Elsevier Ltd. All rights reserved.
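The Fick's-first-law model behind time-weighted-average passive sampling is compact enough to state in code: the analyte mass loaded on the fiber over the sampling interval yields the average concentration via C = nZ/(DAt), with Z the retraction depth and A the needle opening area. The numerical values below are illustrative placeholders, not the paper's calibration data.

```python
# Time-weighted-average concentration from a retracted-SPME sampler,
# via Fick's first law: n = D * A * (C / Z) * t  =>  C = n * Z / (D * A * t).
# All numbers below are illustrative placeholders.

D = 7.0e-6        # analyte diffusion coefficient in air, m^2/s
A = 8.0e-9        # cross-sectional area of the needle opening, m^2
Z = 5.0e-3        # fiber retraction depth, m (5 mm, as optimized in the study)
t = 30 * 60       # sampling time, s (30 min, as optimized in the study)

n = 2.0e-12       # moles of analyte found on the fiber (from GC-MS calibration)

C = n * Z / (D * A * t)          # mol/m^3, time-weighted average
print(f"TWA concentration: {C:.3e} mol/m^3")
# Converting to ppbv would additionally use the molar volume of air at the
# sampling temperature and pressure (~24.45 L/mol at 25 degC, 1 atm).
```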
Queueing system analysis of multi server model at XYZ insurance company in Tasikmalaya city
NASA Astrophysics Data System (ADS)
Muhajir, Ahmad; Binatari, Nikenasih
2017-08-01
Queueing theory, or waiting-line theory, deals with the queueing process from the moment a customer arrives, waits to be served, is served, and leaves the service facility. Queues occur because of a mismatch between the number of customers to be served and the number of available servers, as for example at the XYZ insurance company in Tasikmalaya. This research aims to determine the characteristics of the queueing system and then to optimize the number of servers in terms of total cost. The results show that the queueing model can be represented by (M/M/4):(GD/∞/∞), where the arrivals are Poisson distributed while the service times follow an exponential distribution. The probability of idle customer service is 2.39% of the working time, the average number of customers in the queue is 3, the average number of customers in the system is 6, the average time a customer spends in the queue is 15.9979 minutes, the average time a customer spends in the system is 34.4141 minutes, and the average number of busy servers is 3. The optimal number of servers is 5, at a minimum operational cost of Rp 4.323.
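The reported characteristics follow from standard M/M/c steady-state formulas; here is a sketch for an M/M/4 queue, with arrival and service rates chosen for illustration only, since the measured rates are not given in the abstract.

```python
import math

def mmc_metrics(lam, mu, c):
    """Steady-state metrics of an M/M/c queue (requires lam < c * mu)."""
    rho = lam / (c * mu)                  # server utilization
    a = lam / mu                          # offered load = mean busy servers
    p0 = 1.0 / (sum(a**n / math.factorial(n) for n in range(c))
                + a**c / (math.factorial(c) * (1 - rho)))
    lq = p0 * a**c * rho / (math.factorial(c) * (1 - rho) ** 2)  # mean queue length
    wq = lq / lam                         # mean wait in queue (Little's law)
    return {
        "P(idle system)": p0,
        "Lq (in queue)": lq,
        "L (in system)": lq + a,
        "Wq (min)": wq,
        "W (min)": wq + 1 / mu,
        "busy servers": a,
    }

# Illustrative rates: ~9.4 customers/hour, ~18.4 min mean service, 4 servers.
for k, v in mmc_metrics(lam=9.4 / 60, mu=1 / 18.4, c=4).items():
    print(f"{k}: {v:.3f}")
```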
NASA Astrophysics Data System (ADS)
Berg, L. K.; Chand, D.; Fast, J. D.; Zelenyuk, A.; Wilson, J. M.; Sedlacek, A. J., III; Tomlinson, J. M.; Hubbe, J. M.; Comstock, J. M.; Mei, F.; Kassianov, E.; Schmid, B.
2015-12-01
Aerosols play a crucial role in Earth's radiative budget by scattering and absorbing solar radiation. The impact of aerosols on the radiation budget depends on several factors, including single-scattering albedo (SSA), composition, and growth processes such as coating or mixing. We describe findings on the optical properties of aerosols characterized over Cape Cod and the nearby northwest Atlantic Ocean during the summer (July 2012) and winter (February 2013) campaigns of the Two-Column Aerosol Project (TCAP). The average SSA shows distinctly different vertical profiles during the summer and winter periods. During the summer study period, the average SSA is greater than 0.95 near the surface, increases to 0.97 up to an altitude of 2.5 km, and then decreases to 0.94 at the top of the column near 4 km. In contrast, during the winter study period the average SSA is less than 0.93 and decreases with height, reaching an average value of 0.87 near the top of the column. The large difference between summer and winter SSA is linked to the presence of biomass burning (BB) aerosol rather than black carbon or soot in both seasons. In our study, BB aerosol is on average a factor of two higher in the free troposphere (FT) during summer and more than a factor of two higher in the boundary layer during winter. Single-particle analysis indicates that the average profiles of refractory black carbon (rBC) mass are similar in both seasons. Average rBC sizes are similar at all sampled altitudes (0-4 km) in summer but differ in winter. In addition, the particles sampled in the summertime FT appear to be more aged than those seen during winter. The observed large heterogeneity in SSA, and its links to particle coating and composition, highlights the importance of aerosol aging and mixing processes in this region and represents a challenge for both regional- and global-scale models.
Kimura, Yoshifumi; Fukuda, Masanori; Suda, Kayo; Terazima, Masahide
2010-09-16
Fluorescence dynamics of 4'-N,N-diethylamino-3-hydroxyflavone (DEAHF) and its methoxy derivative (DEAMF) in various room temperature ionic liquids (RTILs) have been studied mainly by an optical Kerr gate method. DEAMF showed a single band fluorescence whose peak shifted with time by the solvation dynamics. The averaged solvation time determined by the fluorescence peak shift was proportional to the viscosity of the solvent except for tetradecyltrihexylphosphonium bis(trifluoromethanesulfonyl)amide. The solvation times were consistent with reported values determined with different probe molecules. DEAHF showed dual fluorescence due to the normal and tautomer forms produced by the excited state intramolecular proton transfer (ESIPT), and the relative intensities were dependent on the time and the solvent cation or anion species. By using the information of the fluorescence spectrum of DEAMF, the fluorescence spectrum of DEAHF at each delay time after the photoexcitation was decomposed into the normal and the tautomer fluorescence components, respectively. The normal component showed a very fast decay simulated by a biexponential function (2-3 and 20-30 ps) with an additional slower decay component. The tautomer component showed a rise with the time constants corresponding to the faster decay of the normal form with an additional instantaneous rise. The faster dynamics of the normal and tautomer population changes were assigned to the ESIPT process, while the slower decay of the fluorescence was attributed to the population decay from the excited state through the radiative and nonradiative processes. The average ESIPT time was much faster than the averaged solvation time of RTILs. Basically, the ESIPT kinetics in RTILs is similar to those in conventional liquid solvents like acetonitrile (Chou et al. J. Phys. Chem. A 2005, 109, 3777). The faster ESIPT is interpreted in terms of the activation barrierless process from the Franck-Condon state before the solvation of the normal state in the electronic excited state. With the advance of the solvation in the excited state, the normal form becomes relatively more stable than the tautomer form, which makes the ESIPT become an activation process.
Baldewijns, Greet; Luca, Stijn; Nagels, William; Vanrumste, Bart; Croonenborghs, Tom
2015-01-01
It has been shown that gait speed and transfer times are good measures of functional ability in the elderly. However, data currently acquired by systems that measure either gait speed or transfer times in the homes of elderly people require manual review by healthcare workers. This reviewing process is time-consuming. To alleviate this burden, this paper proposes the use of statistical process control methods to automatically detect both positive and negative changes in transfer times. Three SPC techniques, tabular CUSUM, standardized CUSUM and EWMA, known for their ability to detect small shifts in the data, are evaluated on simulated transfer times. This analysis shows that EWMA is the best-suited method, with a detection accuracy of 82% and an average detection time of 9.64 days.
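An EWMA chart of the kind evaluated is a few lines of code; the smoothing constant, control-limit width, and simulated transfer-time series below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def ewma_alarm(x, mu0, sigma0, lam=0.2, L=3.0):
    """Return index of the first EWMA control-limit crossing, or None.
    Uses the exact time-varying variance of the EWMA statistic."""
    z = mu0
    for i, xi in enumerate(x, start=1):
        z = lam * xi + (1 - lam) * z
        var = sigma0**2 * lam / (2 - lam) * (1 - (1 - lam) ** (2 * i))
        if abs(z - mu0) > L * np.sqrt(var):
            return i
    return None

# Simulated daily transfer times (s): stable, then gradual mobility decline.
baseline = rng.normal(4.0, 0.5, 60)
decline = rng.normal(4.0, 0.5, 60) + np.linspace(0.0, 1.2, 60)
times = np.concatenate([baseline, decline])

day = ewma_alarm(times, mu0=4.0, sigma0=0.5)
print(f"EWMA alarm on day {day} (simulated shift began on day 61)")
```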
NASA Astrophysics Data System (ADS)
Liu, Tzu-Chi; Wu, Hau-Tieng; Chen, Ya-Hui; Chen, Ya-Han; Fang, Te-Yung; Wang, Pa-Chun; Liu, Yi-Wen
2018-05-01
The presence of click-evoked (CE) otoacoustic emissions (OAEs) has been clinically accepted as an indicator of normal cochlear processing of sounds. For treatment and diagnostic purposes, however, clinicians do not typically pay attention to the detailed spectrum and waveform of CEOAEs. A possible reason is due to the lack of noise-robust signal processing tools to estimate physiologically meaningful time-frequency properties of CEOAEs, such as the latency of spectral components. In this on-going study, we applied a modern tool called concentration of frequency and time (ConceFT, [1]) to analyze CEOAE waveforms. Randomly combined orthogonal functions are used as windowing functions for time-frequency analysis. The resulting spectrograms are subject to nonlinear time-frequency reassignment so as to enhance the concentration of time-varying sinusoidal components. The results after reassignment could be further averaged across the random choice of windows. CEOAE waveforms are acquired by a linear averaging paradigm, and longitudinal data are currently being collected from patients with Ménière's disease (MD) and a control group of normal hearing subjects. When CEOAE is present, the ConceFT plots show traces of decreasing but fluctuating instantaneous frequency against time. For comparison purposes, same processing methods are also applied to analyze CEOAE data from cochlear mechanics simulation.
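The core ConceFT idea, spectrograms computed with random orthonormal combinations of a window family and then combined, can be sketched as below, substituting DPSS (Slepian) tapers for the paper's orthogonal windows and omitting the nonlinear time-frequency reassignment step.

```python
import numpy as np
from scipy.signal import stft
from scipy.signal.windows import dpss

rng = np.random.default_rng(9)

fs = 16_000
t = np.arange(0, 0.05, 1 / fs)
# Toy "emission-like" chirp: instantaneous frequency drifting downward.
sig = np.sin(2 * np.pi * (3000 * t - 15000 * t**2)) + 0.5 * rng.normal(size=t.size)

nwin, n_tapers, n_draws = 256, 4, 30
tapers = dpss(nwin, NW=3, Kmax=n_tapers)         # orthonormal window family

acc = None
for _ in range(n_draws):
    c = rng.normal(size=n_tapers)
    c /= np.linalg.norm(c)                       # random unit-norm combination
    win = c @ tapers
    f, tt, Z = stft(sig, fs=fs, window=win, nperseg=nwin, noverlap=nwin - 32)
    S = np.abs(Z) ** 2
    acc = S if acc is None else acc + S

conceft_like = acc / n_draws                     # averaged random-window spectrogram
ridge = f[conceft_like.argmax(axis=0)]           # crude instantaneous-frequency trace
print("ridge frequency estimates (Hz):", np.round(ridge[:8]).astype(int))
```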
2013-06-01
[Fragment of a report's front matter: table-of-contents entries (D. Naval Tactical Command Support System; 1. Operational Maintenance Management System–Next Generation; Management; 4. Method) and an acronym-list fragment (… Business Administration; MDT, Mean Down Time; MTBM, Mean Time Between Maintenance; NAVSUP, Naval Supply Systems Command; NC, Not Carried; NIS, Not in Stock).]
Real time display Fourier-domain OCT using multi-thread parallel computing with data vectorization
NASA Astrophysics Data System (ADS)
Eom, Tae Joong; Kim, Hoon Seop; Kim, Chul Min; Lee, Yeung Lak; Choi, Eun-Seo
2011-03-01
We demonstrate a real-time display of processed OCT images using multi-thread parallel computing with a quad-core CPU of a personal computer. The data of each A-line are treated as one vector to maximize the data translation rate between the cores of the CPU and RAM stored image data. A display rate of 29.9 frames/sec for processed OCT data (4096 FFT-size x 500 A-scans) is achieved in our system using a wavelength swept source with 52-kHz swept frequency. The data processing times of the OCT image and a Doppler OCT image with a 4-time average are 23.8 msec and 91.4 msec.
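The data-vectorization idea, treating each A-line as one contiguous vector and batching the FFTs across a pool of workers, can be sketched in Python (the original used a multi-threaded quad-core CPU implementation); the frame geometry follows the quoted 4096 x 500 configuration, while windowing and dispersion compensation are omitted.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

FFT_SIZE, A_SCANS = 4096, 500            # one frame: 500 A-lines of 4096 samples

def process_chunk(chunk):
    """FFT a block of A-lines at once; each row is one contiguous A-line."""
    spectrum = np.fft.rfft(chunk, n=FFT_SIZE, axis=1)
    return 20 * np.log10(np.abs(spectrum) + 1e-12)   # log-magnitude image rows

def process_frame(frame, n_workers=4):
    chunks = np.array_split(frame, n_workers, axis=0)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        return np.vstack(list(pool.map(process_chunk, chunks)))

if __name__ == "__main__":
    import time
    frame = np.random.rand(A_SCANS, FFT_SIZE).astype(np.float32)
    start = time.perf_counter()
    img = process_frame(frame)
    dt = time.perf_counter() - start
    print(f"frame {img.shape}, {dt * 1e3:.1f} ms -> {1 / dt:.1f} frames/s")
```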
Engineering the Business of Defense Acquisition: An Analysis of Program Office Processes
2015-05-01
[Fragment of the report's front matter: a citation to "Information Technology and Business Process Redesign", MIT Sloan Management Review (retrieved from http://sloanreview.mit.edu); a note that the work links systems management to process execution in a three-phase, multi-year effort whose current phase covers a literature review and formal model development; and the standard reporting-burden statement ("estimated to average 1 hour per response, including the time for reviewing instructions, searching existing data sources, gathering and maintaining…").]
Coherence-generating power of quantum dephasing processes
NASA Astrophysics Data System (ADS)
Styliaris, Georgios; Campos Venuti, Lorenzo; Zanardi, Paolo
2018-03-01
We provide a quantification of the capability of various quantum dephasing processes to generate coherence out of incoherent states. The measures defined, admitting computable expressions for any finite Hilbert-space dimension, are based on probabilistic averages and arise naturally from the viewpoint of coherence as a resource. We investigate how the capability of a dephasing process (e.g., a nonselective orthogonal measurement) to generate coherence depends on the relevant bases of the Hilbert space over which coherence is quantified and the dephasing process occurs, respectively. We extend our analysis to include those Lindblad time evolutions which, in the infinite-time limit, dephase the system under consideration and calculate their coherence-generating power as a function of time. We further identify specific families of such time evolutions that, although dephasing, have optimal (over all quantum processes) coherence-generating power for some intermediate time. Finally, we investigate the coherence-generating capability of random dephasing channels.
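A Monte Carlo sketch in the spirit of the probabilistic-average definitions: average the l1 coherence generated when dephasing in a rotated basis acts on uniformly sampled incoherent states. This illustrates the concept; it is not the paper's exact computable expression.

```python
import numpy as np

rng = np.random.default_rng(13)

def random_unitary(d):
    """Approximately Haar-random unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def l1_coherence(rho):
    """Sum of absolute off-diagonal elements (coherence in the fixed basis)."""
    return np.abs(rho).sum() - np.abs(np.diagonal(rho)).sum()

def dephase_in_basis(rho, u):
    """Nonselective measurement in the basis given by the columns of u."""
    rho_b = u.conj().T @ rho @ u
    return u @ np.diag(np.diagonal(rho_b)) @ u.conj().T

d, n_samples = 4, 5000
u = random_unitary(d)                    # measurement basis != coherence basis
vals = []
for _ in range(n_samples):
    p = rng.dirichlet(np.ones(d))        # uniformly sampled incoherent state
    rho_out = dephase_in_basis(np.diag(p).astype(complex), u)
    vals.append(l1_coherence(rho_out))
print(f"Monte Carlo coherence-generating power estimate: {np.mean(vals):.4f}")
```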
Applying Systems Engineering Reduces Radiology Transport Cycle Times in the Emergency Department.
White, Benjamin A; Yun, Brian J; Lev, Michael H; Raja, Ali S
2017-04-01
Emergency department (ED) crowding is widespread, and can result in care delays, medical errors, increased costs, and decreased patient satisfaction. Simultaneously, while capacity constraints on EDs are worsening, contributing factors such as patient volume and inpatient bed capacity are often outside the influence of ED administrators. Therefore, systems engineering approaches that improve throughput and reduce waste may hold the most readily available gains. Decreasing radiology turnaround times improves ED patient throughput and decreases patient waiting time. We sought to investigate the impact of systems engineering science targeting ED radiology transport delays and determine the most effective techniques. This prospective, before-and-after analysis of radiology process flow improvements in an academic hospital ED was exempt from institutional review board review as a quality improvement initiative. We hypothesized that reorganization of radiology transport would improve radiology cycle time and reduce waste. The intervention included systems engineering science-based reorganization of ED radiology transport processes, largely using Lean methodologies, and adding no resources. The primary outcome was average transport time between study order and complete time. All patients presenting between 8/2013-3/2016 and requiring plain film imaging were included. We analyzed electronic medical record data using Microsoft Excel and SAS version 9.4, and we used a two-sample t-test to compare data from the pre- and post-intervention periods. Following the intervention, average transport time decreased significantly and sustainably. Average radiology transport time was 28.7 ± 4.2 minutes during the three months pre-intervention. It was reduced by 15% in the first three months (4.4 minutes [95% confidence interval [CI] 1.5-7.3]; to 24.3 ± 3.3 min, P=0.021), 19% in the following six months (5.4 minutes, 95% CI [2.7-8.2]; to 23.3 ± 3.5 min, P=0.003), and 26% one year following the intervention (7.4 minutes, 95% CI [4.8-9.9]; to 21.3 ± 3.1 min, P=0.0001). This result was achieved without any additional resources, and demonstrated a continual trend towards improvement. This innovation demonstrates the value of systems engineering science to increase efficiency in ED radiology processes. In this study, reorganization of the ED radiology transport process using systems engineering science significantly increased process efficiency without additional resource use.
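The statistical comparison reduces to a two-sample t-test on transport times; below is a sketch with simulated minutes matching the reported pre/post means and SDs (28.7 ± 4.2 vs 21.3 ± 3.1). The per-group sample sizes are assumptions, since the abstract does not give order counts, and Welch's variant is used here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Simulated transport times (min) matching the reported means/SDs;
# n per group is an assumption for illustration.
pre = rng.normal(28.7, 4.2, 200)
post = rng.normal(21.3, 3.1, 200)

t_stat, p = stats.ttest_ind(pre, post, equal_var=False)   # Welch's t-test
diff = pre.mean() - post.mean()
se = np.sqrt(pre.var(ddof=1) / pre.size + post.var(ddof=1) / post.size)
ci = (diff - 1.96 * se, diff + 1.96 * se)
print(f"reduction = {diff:.1f} min, 95% CI ({ci[0]:.1f}, {ci[1]:.1f}), p = {p:.2g}")
```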
Empirical comparison of heuristic load distribution in point-to-point multicomputer networks
NASA Technical Reports Server (NTRS)
Grunwald, Dirk C.; Nazief, Bobby A. A.; Reed, Daniel A.
1990-01-01
The study compared several load placement algorithms using instrumented programs and synthetic program models. Salient characteristics of these program traces (total computation time, total number of messages sent, and average message time) span two orders of magnitude. Load distribution algorithms determine the initial placement for processes, a precursor to the more general problem of load redistribution. It is found that desirable workload distribution strategies will place new processes globally, rather than locally, to spread processes rapidly, but that local information should be used to refine global placement.
Moving Average Models with Bivariate Exponential and Geometric Distributions.
1985-03-01
Ablation dynamics - from absorption to heat accumulation/ultra-fast laser matter interaction
NASA Astrophysics Data System (ADS)
Kramer, Thorsten; Remund, Stefan; Jäggi, Beat; Schmid, Marc; Neuenschwander, Beat
2018-05-01
Ultra-short laser radiation is used in manifold industrial applications today. Although state-of-the-art laser sources are providing an average power of 10-100 W with repetition rates of up to several megahertz, most applications do not benefit from it. On the one hand, the processing speed is limited to some hundred millimeters per second by the dynamics of mechanical axes or galvanometric scanners. On the other hand, high repetition rates require consideration of new physical effects such as heat accumulation and shielding that might reduce the process efficiency. For ablation processes, process efficiency can be expressed by the specific removal rate, ablated volume per time, and average power. The analysis of the specific removal rate for different laser parameters, like average power, repetition rate or pulse duration, and process parameters, like scanning speed or material, can be used to find the best operation point for microprocessing applications. Analytical models and molecular dynamics simulations based on the so-called two-temperature model reveal the causes for the appearance of limiting physical effects. The findings of models and simulations can be used to take advantage and optimize processing strategies.
ERIC Educational Resources Information Center
Amigo, Seth; Smith, Andrew; Ludwig, Timothy
2008-01-01
The current study investigated the effects of task-clarification, and manager verbal and graphic feedback on employee busing times at a pizza restaurant. Using an ABC design, task-clarification was provided in a memo, which described the process, priority, and goal time of busing. The busing time decreased slightly, from an average of 315 seconds…
Fought, Ellie L; Sundriyal, Vaibhav; Sosonkina, Masha; Windus, Theresa L
2017-04-30
In this work, the effect of oversubscription, calling 2n, 3n, or 4n processes for n physical cores, is evaluated on semi-direct MP2 energy and gradient calculations and RI-MP2 energy calculations with the cc-pVTZ basis using NWChem. Results indicate that on both Intel and AMD platforms, oversubscription reduces the total time to solution for semi-direct MP2 energy calculations by 25-45% on average, and reduces the total energy consumed by the CPU and DRAM by 10-15% on average on the Intel platform. Semi-direct gradient time to solution is shortened on average by 8-15% and energy consumption is decreased by 5-10%. Linear regression analysis shows a strong correlation between time to solution and total energy consumed. Oversubscribing during RI-MP2 calculations results in performance degradations of 30-50% at the 4n level. © 2017 Wiley Periodicals, Inc.
Alper, Ofer; Somekh-Baruch, Anelia; Pirvandy, Oz; Schaps, Malka; Yaari, Gur
2017-08-01
Geometric Brownian motion (GBM) is frequently used to model price dynamics of financial assets, and a weighted average of multiple GBMs is commonly used to model a financial portfolio. Diversified portfolios can lead to increased exponential growth compared to a single asset by effectively reducing the noise. The sum of GBM processes is no longer a log-normal process and has complex statistical properties. The nonergodicity of the weighted average process results in constant degradation of the exponential growth from the ensemble average toward the time average. One way to stay closer to the ensemble average is to maintain a balanced portfolio: keep the relative weights of the different assets constant over time. To keep these proportions constant, whenever asset values change, it is necessary to rebalance their relative weights, exposing this strategy to fees (transaction costs). Two strategies that have been suggested in the past for cases that involve fees are to rebalance the portfolio periodically and to rebalance it partially. In this paper, we study these two strategies in the presence of correlations and fees. We show that using periodic and partial rebalance strategies, it is possible to maintain a steady exponential growth while minimizing the losses due to fees. We also demonstrate how these redistribution strategies perform remarkably well on real-world market data, despite the fact that not all assumptions of the model hold in these real-world systems. Our results have important implications for stochastic dynamics in general and for portfolio management in particular, as we show that there is a superior alternative to the common buy-and-hold strategy, even in the presence of correlations and fees.
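A simulation sketch of the comparison: correlated GBM assets, buy-and-hold versus periodic rebalancing to equal weights with proportional transaction fees. The drifts, volatilities, correlation, fee level, and rebalancing interval are illustrative assumptions, not fitted market parameters.

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate(mu, sigma, corr, years=30, dt=1 / 252, fee=0.001,
             rebalance_every=21, n_runs=200):
    n_assets, n_steps = len(mu), int(years / dt)
    L = np.linalg.cholesky(corr)
    growth_hold, growth_reb = [], []
    for _ in range(n_runs):
        z = rng.normal(size=(n_steps, n_assets)) @ L.T       # correlated shocks
        rets = np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        # Buy and hold: each asset compounds on its own from an equal split.
        hold = np.mean(np.prod(rets, axis=0))
        # Periodic rebalance to equal weights, paying proportional fees.
        w, wealth = np.full(n_assets, 1.0 / n_assets), 1.0
        for t in range(n_steps):
            v = w * rets[t]
            wealth *= v.sum()
            w = v / v.sum()
            if (t + 1) % rebalance_every == 0:
                target = np.full(n_assets, 1.0 / n_assets)
                wealth *= 1.0 - fee * np.abs(target - w).sum() / 2  # turnover cost
                w = target
        growth_hold.append(np.log(hold) / years)
        growth_reb.append(np.log(wealth) / years)
    return np.mean(growth_hold), np.mean(growth_reb)

mu = np.array([0.05, 0.05]); sigma = np.array([0.3, 0.3])
corr = np.array([[1.0, 0.3], [0.3, 1.0]])
g_hold, g_reb = simulate(mu, sigma, corr)
print(f"time-avg growth: hold {g_hold:.4f}/yr, rebalanced {g_reb:.4f}/yr")
```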
Some limitations of frequency as a component of risk: an expository note.
Cox, Louis Anthony
2009-02-01
Students of risk analysis are often taught that "risk is frequency times consequence" or, more generally, that risk is determined by the frequency and severity of adverse consequences. But is it? This expository note reviews the concepts of frequency as average annual occurrence rate and as the reciprocal of mean time to failure (MTTF) or mean time between failures (MTBF) in a renewal process. It points out that if two risks (represented as two (frequency, severity) pairs for adverse consequences) have identical values for severity but different values of frequency, then it is not necessarily true that the one with the smaller value of frequency is preferable, and this is true no matter how frequency is defined. In general, there is not necessarily an increasing relation between the reciprocal of the mean time until an event occurs, its long-run average occurrences per year, and other criteria, such as the probability or expected number of times that it will happen over a specific interval of interest, such as the design life of a system. Risk depends on more than frequency and severity of consequences. It also depends on other information about the probability distribution for the time of a risk event that can become lost in simple measures of event "frequency." More flexible descriptions of risky processes, such as point process models, can avoid these limitations.
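A small simulation makes the point concrete: two hypothetical risks share the same MTTF (hence the same "frequency" as its reciprocal), yet differ sharply in the probability of failure within a design life. The distributions and parameters below are illustrative assumptions, not values from the note.

    import numpy as np

    rng = np.random.default_rng(1)
    design_life = 5.0          # interval of interest, years (hypothetical)
    mttf = 10.0                # both risks share the same mean time to failure

    # Risk A: memoryless (exponential) failure times.
    t_a = rng.exponential(mttf, size=100_000)
    # Risk B: wear-out failure concentrated near the mean, same MTTF.
    t_b = rng.normal(mttf, 0.5, size=100_000)

    print("frequency 1/MTTF  A:", 1 / t_a.mean(), " B:", 1 / t_b.mean())
    print("P(failure within design life)  A:", (t_a <= design_life).mean())
    print("P(failure within design life)  B:", (t_b <= design_life).mean())

Risk A fails within the 5-year design life roughly 39% of the time, while risk B essentially never does, even though both have identical "frequency" by either definition.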
NASA Astrophysics Data System (ADS)
Cackett, Edward; Troyer, Jon; Peille, Philippe; Barret, Didier
2018-01-01
Kilohertz quasi-periodic oscillations, or kHz QPOs, are intensity variations that occur in the X-ray band observed in neutron star low-mass X-ray binary (LMXB) systems. In such systems, matter is transferred from a secondary low-mass star to a neutron star via the process of accretion. kHz QPOs occur on the timescale of the inner accretion flow and may carry signatures of the physics of strong gravity (c² ~ GM/R) and possibly clues to constraining the neutron star equation of state (EOS). Both the timing behavior of kHz QPOs and the time-averaged spectra of these systems have been studied extensively. No model derived from these techniques has been able to illuminate the origin of kHz QPOs. Spectral-timing is an analysis technique that can be used to derive information about the nature of physical processes occurring within the accretion flow on the timescale of the kHz QPO. To date, kHz QPOs of four neutron star LMXB systems have been studied with spectral-timing techniques. We present a comprehensive study of spectral-timing products of kHz QPOs from systems where data are available in the RXTE archive to demonstrate the promise of this technique to gain insights regarding the origin of kHz QPOs. Using data averaged over the entire RXTE archive, we show correlated time-lags as a function of QPO frequency and energy, as well as energy-dependent covariance spectra for the various LMXB systems where spectral-timing analysis is possible. We find similar trends in all average spectral-timing products for the objects studied. This suggests a common origin of kHz QPOs.
40 CFR 63.653 - Monitoring, recordkeeping, and implementation plan for emissions averaging.
Code of Federal Regulations, 2011 CFR
2011-07-01
Gender Differences in the Motivational Processing of Facial Beauty
ERIC Educational Resources Information Center
Levy, Boaz; Ariely, Dan; Mazar, Nina; Chi, Won; Lukas, Scott; Elman, Igor
2008-01-01
Gender may be involved in the motivational processing of facial beauty. This study applied a behavioral probe, known to activate brain motivational regions, to healthy heterosexual subjects. Matched samples of men and women were administered two tasks: (a) key pressing to change the viewing time of average or beautiful female or male facial…
Mutti-Packer, Seema; Hodgins, David C; El-Guebaly, Nady; Casey, David M; Currie, Shawn R; Williams, Robert J; Smith, Garry J; Schopflocher, Don P
2017-06-01
The objective of the current study was to examine the possible temporal associations between alcohol misuse and problem gambling symptomatology from adolescence through to young adulthood. Parallel-process latent growth curve modeling was used to examine the trajectories of alcohol misuse and symptoms of problem gambling over time. Data were from a sample of adolescents recruited for the Leisure, Lifestyle, and Lifecycle Project in Alberta, Canada (n = 436), which included 4 assessments over 5 years. There was an average decline in problem gambling symptoms followed by an accelerating upward trend as the sample reached the legal age to gamble. There was significant variation in the rate of change in problem gambling symptoms over time; not all respondents followed the same trajectory. There was an average increase in alcohol misuse over time, with significant variability in baseline levels of use and the rate of change over time. The unconditional parallel process model indicated that higher baseline levels of alcohol misuse were associated with higher baseline levels of problem gambling symptoms. In addition, higher baseline levels of alcohol misuse were associated with steeper declines in problem gambling symptoms over time. However, these between-process correlations did not retain significance when covariates were added to the model, indicating that one behavior was not a risk factor for the other. The lack of mutual influence in the problem gambling symptomatology and alcohol misuse processes suggests that there are common risk factors underlying these two behaviors, supporting the notion of a syndrome model of addiction. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Documenting pharmacist interventions on an intranet.
Simonian, Armen I
2003-01-15
The process of developing and implementing an intranet Web site for clinical intervention documentation is described. An inpatient pharmacy department initiated an organization-wide effort to improve documentation of interventions by pharmacists at its seven hospitals to achieve real-time capture of meaningful benchmarking data. Standardization of intervention types would allow the health system to contrast and compare medication use, process improvement, and patient care initiatives among its hospitals. After completing a needs assessment and reviewing current methodologies, a computerized tracking tool was developed in-house and integrated with the organization's intranet. Representatives from all hospitals agreed on content and functionality requirements for the Web site. The site was completed and activated in February 2002. Before this Web site was established, the most documented intervention types were Renal Adjustment and Clarify Dose, with a daily average of four and three, respectively. After site activation, daily averages for Renal Adjustment remained unchanged, but Clarify Dose is now documented nine times per day. Drug Information and i.v.-to-p.o. intervention types, which previously averaged less than one intervention per day, are now documented an average of four times daily. Approximately 91% of staff pharmacists are using this site. Future plans include enhancing accessibility to the site with wireless personal digital assistants. The design and implementation of an intranet Web site to document pharmacists' interventions doubled the rate of intervention documentation and standardized the intervention types among hospitals in the health system.
PIV measurements in the near wakes of hollow cylinders with holes
NASA Astrophysics Data System (ADS)
Firat, Erhan; Ozkan, Gokturk M.; Akilli, Huseyin
2017-05-01
The wake flows behind fixed, hollow, rigid circular cylinders with two rows of holes connecting the front and rear stagnation lines were investigated using particle image velocimetry (PIV) for various combinations of three hole diameters, d = 0.1 D, 0.15 D, and 0.20 D, six hole-to-hole distances, l = 2 d, 3 d, 4 d, 5 d, 6 d, and 7 d, and ten angles of incidence (α), from 0° to 45° in steps of 5°, at a Reynolds number of Re = 6,900. Time-averaged velocity distributions, instantaneous and time-averaged vorticity patterns, time-averaged streamline topology, and hot spots of turbulent kinetic energy that occurred through the interaction of shear layers from the models were presented to show how the wake flow was modified by the presence of the self-issuing jets with various momentums emanating from the downstream holes. In general, as the hole diameter, which is directly related to jet momentum, increased, the values of the time-averaged wake characteristics (length of the time-averaged recirculation region, vortex formation length, length of shear layers, and gap between the shear layers) increased. Irrespective of the d and l tested, the values of the vortex formation length of the models are greater than that of the cylinder without holes (reference model). That is, the vortex formation process was shifted downstream with the aid of the jets. It was found that the time-averaged wake characteristics were very sensitive to α. As α increased, the variation of these characteristics could be modeled by exponential decay functions. The effect of l on the three-dimensional vortex shedding patterns in the near wake of the models was also discussed.
NASA Astrophysics Data System (ADS)
Olafsen, L. J.; Olafsen, J. S.; Eaves, I. K.
2018-06-01
We report on an experimental investigation of the time-dependent spatial intensity distribution of near-infrared idler pulses from an optical parametric oscillator measured using an infrared (IR) camera, in contrast to beam profiles obtained using traditional knife-edge techniques. Comparisons show that the information gained by utilizing the thermal camera provides more detail than the spatially- or time-averaged measurements from a knife-edge profile. Synchronization, averaging, and thresholding techniques are applied to enhance the images acquired. The additional information obtained can improve the process by which semiconductor devices and other IR lasers are characterized for their beam quality and output response and thereby result in IR devices with higher performance.
Out-of-time-order fluctuation-dissipation theorem
NASA Astrophysics Data System (ADS)
Tsuji, Naoto; Shitara, Tomohiro; Ueda, Masahito
2018-01-01
We prove a generalized fluctuation-dissipation theorem for a certain class of out-of-time-ordered correlators (OTOCs) with a modified statistical average, which we call bipartite OTOCs, for general quantum systems in thermal equilibrium. The difference between the bipartite and physical OTOCs defined by the usual statistical average is quantified by a measure of quantum fluctuations known as the Wigner-Yanase skew information. Within this difference, the theorem describes a universal relation between chaotic behavior in quantum systems and a nonlinear-response function that involves a time-reversed process. We show that the theorem can be generalized to higher-order n-partite OTOCs as well as in the form of generalized covariance.
3D tumor localization through real-time volumetric x-ray imaging for lung cancer radiotherapy.
Li, Ruijiang; Lewis, John H; Jia, Xun; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Song, William Y; Jiang, Steve B
2011-05-01
To evaluate an algorithm for real-time 3D tumor localization from a single x-ray projection image for lung cancer radiotherapy. Recently, we have developed an algorithm for reconstructing volumetric images and extracting 3D tumor motion information from a single x-ray projection [Li et al., Med. Phys. 37, 2822-2826 (2010)]. We have demonstrated its feasibility using a digital respiratory phantom with regular breathing patterns. In this work, we present a detailed description and a comprehensive evaluation of the improved algorithm. The algorithm was improved by incorporating respiratory motion prediction. The accuracy and efficiency of using this algorithm for 3D tumor localization were then evaluated on (1) a digital respiratory phantom, (2) a physical respiratory phantom, and (3) five lung cancer patients. These evaluation cases include both regular and irregular breathing patterns that are different from the training dataset. For the digital respiratory phantom with regular and irregular breathing, the average 3D tumor localization error is less than 1 mm and does not seem to be affected by amplitude change, period change, or baseline shift. On an NVIDIA Tesla C1060 graphics processing unit (GPU) card, the average computation time for 3D tumor localization from each projection ranges between 0.19 and 0.26 s, for both regular and irregular breathing, which is about a 10% improvement over previously reported results. For the physical respiratory phantom, an average tumor localization error below 1 mm was achieved with an average computation time of 0.13 and 0.16 s on the same GPU card, for regular and irregular breathing, respectively. For the five lung cancer patients, the average tumor localization error is below 2 mm in both the axial and tangential directions. The average computation time on the same GPU card ranges between 0.26 and 0.34 s. Through a comprehensive evaluation of our algorithm, we have established its accuracy in 3D tumor localization to be on the order of 1 mm on average and 2 mm at the 95th percentile for both digital and physical phantoms, and within 2 mm on average and 4 mm at the 95th percentile for lung cancer patients. The results also indicate that the accuracy is not affected by the breathing pattern, be it regular or irregular. High computational efficiency can be achieved on the GPU, requiring 0.1-0.3 s for each x-ray projection.
Garrett, John D.; Fear, Elise C.
2015-01-01
Prior information about the average dielectric properties of breast tissue can be implemented in microwave breast imaging techniques to improve the results. Rapidly providing this information relies on acquiring a limited number of measurements and processing these measurements with efficient algorithms. Previously, systems were developed to measure the transmission of microwave signals through breast tissue, and simplifications were applied to estimate the average properties. These methods provided reasonable estimates, but they were sensitive to multipath. In this paper, a new technique to analyze the average properties of breast tissues while addressing multipath is presented. Three steps are used to process transmission measurements. First, the effects of multipath were removed. In cases where multipath was present, multiple peaks were observed in the time domain. A Tukey window was used to time-gate a single peak and, therefore, select a single path through the breast. Second, the antenna response was deconvolved from the transmission coefficient to isolate the response from the tissue in the breast interior. The antenna response was determined through simulations. Finally, the complex permittivity was estimated using an iterative approach. This technique was validated using simulated and physical homogeneous breast models and tested with results taken from a recent patient study. PMID:25585106
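The Tukey time-gating step can be sketched as follows, assuming scipy is available. The sample rate, pulse shapes, arrival times, and gate width are illustrative assumptions, not the authors' measured values; the deconvolution and iterative permittivity steps are not shown.

    import numpy as np
    from scipy.signal import windows

    # Hypothetical time-domain transmission response: a direct path at 1.2 ns
    # plus a weaker multipath echo at 2.0 ns (amplitudes and times invented).
    fs = 40e9
    t = np.arange(0.0, 5e-9, 1.0 / fs)
    pulse = lambda t0: np.exp(-(((t - t0) / 0.1e-9) ** 2))
    response = pulse(1.2e-9) + 0.4 * pulse(2.0e-9)

    # Time-gate the direct-path peak with a Tukey window centred on it,
    # suppressing the later multipath arrival.
    gate = np.zeros_like(response)
    center = int(np.argmax(response))
    half = int(0.4e-9 * fs)                    # assumed gate half-width
    gate[center - half:center + half] = windows.tukey(2 * half, alpha=0.5)
    gated = response * gate

    spectrum = np.fft.rfft(gated)              # single-path transmission data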
Survival time of the susceptible-infected-susceptible infection process on a graph.
van de Bovenkamp, Ruud; Van Mieghem, Piet
2015-09-01
The survival time T is the longest time that a virus, a meme, or a failure can propagate in a network. Using the hitting time of the absorbing state in a uniformized embedded Markov chain of the continuous-time susceptible-infected-susceptible (SIS) Markov process, we derive an exact expression for the average survival time E[T] of a virus in the complete graph K_{N} and the star graph K_{1,N-1}. By using the survival time, instead of the average fraction of infected nodes, we propose a new method to approximate the SIS epidemic threshold τ_{c} that, at least for K_{N} and K_{1,N-1}, correctly scales with the number of nodes N and that is superior to the epidemic threshold τ_{c}^{(1)}=1/λ_{1} of the N-intertwined mean-field approximation, where λ_{1} is the spectral radius of the adjacency matrix of the graph G. Although this new approximation of the epidemic threshold offers a more intuitive understanding of the SIS process, it remains difficult to compare outbreaks in different graph types. For example, the survival time in an arbitrary graph seems upper bounded by the complete graph and lower bounded by the star graph as a function of the normalized effective infection rate τ/τ_{c}^{(1)}. However, when the average fraction of infected nodes is used as a basis for comparison, the virus will survive in the star graph longer than in any other graph, making the star graph the worst-case graph instead of the complete graph. Finally, in non-Markovian SIS, the distribution of the spreading attempts over the infectious period of a node influences the survival time, even if the expected number of spreading attempts during an infectious period (the non-Markovian equivalent of the effective infection rate) is kept constant. Both early and late infection attempts lead to shorter survival times. Interestingly, just as in Markovian SIS, the survival times appear to be exponentially distributed, regardless of the infection and curing time distributions.
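A minimal Gillespie-style simulation of the Markovian SIS process on the complete graph K_N, the case whose average survival time E[T] is treated analytically above; N, tau, the cap t_max, and the number of runs are assumed values.

    import numpy as np

    rng = np.random.default_rng(2)

    def sis_survival_time(N, tau, i0=1, t_max=1e4):
        # One realisation of SIS on the complete graph K_N (unit curing rate,
        # infection rate tau per infected-susceptible pair), via Gillespie.
        i, t = i0, 0.0
        while 0 < i and t < t_max:
            cure = float(i)
            infect = tau * i * (N - i)         # S-I pairs in K_N
            total = cure + infect
            t += rng.exponential(1.0 / total)
            i += 1 if rng.random() < infect / total else -1
        return t

    N, tau = 50, 0.01                          # tau*N = 0.5: subcritical regime
    print("E[T] estimate:", np.mean([sis_survival_time(N, tau) for _ in range(2000)]))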
The efficacy of a novel mobile phone application for goldmann ptosis visual field interpretation.
Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P
2014-01-01
To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. Experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent errors of the mobile phone application and the oculoplastic surgeons' estimates were calculated compared with computer software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts. There was high interobserver variance among the oculoplastic surgeons. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process 1 chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect using Goldmann charts. Oculoplastic surgeons' visual interpretations were highly inaccurate, highly variable, and usually underestimated the visual field loss.
Lee, Jeongjin; Kim, Kyoung Won; Kim, So Yeon; Kim, Bohyoung; Lee, So Jung; Kim, Hyoung Jung; Lee, Jong Seok; Lee, Moon Gyu; Song, Gi-Won; Hwang, Shin; Lee, Sung-Gyu
2014-09-01
To assess the feasibility of semiautomated MR volumetry using gadoxetic acid-enhanced MRI at the hepatobiliary phase compared with manual CT volumetry. Forty potential live liver donor candidates who underwent MR and CT on the same day were included in our study. Semiautomated MR volumetry was performed using gadoxetic acid-enhanced MRI at the hepatobiliary phase. We performed the quadratic MR image division for correction of the bias field inhomogeneity. With manual CT volumetry as the reference standard, we calculated the average volume measurement error of the semiautomated MR volumetry. We also calculated the mean of the number and time of the manual editing, edited volume, and total processing time. The average volume measurement errors of the semiautomated MR volumetry were 2.35% ± 1.22%. The average values of the numbers of editing, operation times of manual editing, edited volumes, and total processing time for the semiautomated MR volumetry were 1.9 ± 0.6, 8.1 ± 2.7 s, 12.4 ± 8.8 mL, and 11.7 ± 2.9 s, respectively. Semiautomated liver MR volumetry using hepatobiliary phase gadoxetic acid-enhanced MRI with the quadratic MR image division is a reliable, easy, and fast tool to measure liver volume in potential living liver donors. Copyright © 2013 Wiley Periodicals, Inc.
Yang, Brian W; Iorio, Matthew L; Day, Charles S
2017-03-15
The 2 main routes of medical device approval through the U.S. Food and Drug Administration are the premarket approval (PMA) process, which requires clinical trials, and the 510(k) premarket notification, which exempts devices from clinical trials if they are substantially equivalent to an existing device. Recently, there has been growing concern regarding the safety of devices approved through the 510(k) premarket notification. The PMA process decreases the potential for device recall; however, it is substantially more costly and time-consuming. Investors and medical device companies are only willing to invest in devices if they can expect to recoup their investment within a timeline of roughly 7 years. Our study utilizes financial modeling to assess the financial feasibility of approving various orthopaedic medical devices through the 510(k) and PMA processes. The expected time to recoup investment through the 510(k) process ranged from 0.585 years to 7.715 years, with an average time of 2.4 years; the expected time to recoup investment through the PMA route ranged from 2.9 years to 24.5 years, with an average time of 8.5 years. Six of the 13 orthopaedic device systems that we analyzed would require longer than our 7-year benchmark to recoup the investment costs of the PMA process. With the 510(k) premarket notification, only 1 device system would take longer than 7 years to recoup its investment costs. Although the 510(k) premarket notification has demonstrated safety concerns, broad requirements for PMA authorization may limit device innovation for less-prevalent orthopaedic conditions. As a result, new approval frameworks may be beneficial. Our report demonstrates how current regulatory policies can potentially influence orthopaedic device innovation.
Yamaguchi, Hironori; Tsurita, Giichirou; Ueno, Shoogo; Watanabe, Soichi; Wake, Kanako; Taki, Masao; Nagawa, Hirokazu
2003-05-01
This study sought to clarify the effects of exposure to electromagnetic waves (EMW) used in cellular phones on learning and memory processes. Sprague-Dawley rats were exposed for either 1 h daily for 4 days or for 4 weeks to a pulsed 1439 MHz time division multiple access (TDMA) field in a carousel type exposure system. At the brain, average specific absorption rate (SAR) was 7.5 W/kg, and the whole body average SAR was 1.7 W/kg. Other subjects were exposed at the brain average SAR of 25 W/kg and the whole body average SAR of 5.7 W/kg for 45 min daily for 4 days. Learning and memory were evaluated by reversal learning in a food rewarded T-maze, in which rats learned the location of food (right or left) by using environmental cues. The animals exposed to EMW with the brain average SAR of 25 W/kg for 4 days showed statistically significant decreases in the transition in number of correct choices in the reversal task, compared to sham exposed or cage control animals. However, rats exposed to the brain average SAR of 7.5 W/kg for either 4 days or for 4 weeks showed no T-maze performance impairments. Intraperitoneal temperatures, as measured by a fiber optic thermometer, increased in the rats exposed to the brain average SAR of 25 W/kg but remained the same for the brain average SAR of 7.5 W/kg. The SAR of a standard cellular phone is restricted to a maximum of 2 W/kg averaged over 10 g tissue. These results suggest that the exposure to a TDMA field at levels about four times stronger than emitted by cellular phones does not affect the learning and memory processes when there are no thermal effects. Copyright 2003 Wiley-Liss, Inc.
Cross-Layer Modeling Framework for Energy-Efficient Resilience
2014-04-01
functional block diagram of the software architecture of PEARL, which stands for: Power Efficient and Resilient Embedded Processing with Real-Time ... DVFS). The goal of the run-time manager is to minimize power consumption, while maintaining system resilience targets (on average) and meeting ... real-time performance targets. The integrated performance, power and resilience models are nothing but the analytical modeling toolkit described in
Complexity and the Fractional Calculus
2013-01-01
these trajectories over the entire Lotka-Volterra cycle thereby generating the mistaken impression that the resulting average trajectory reaches ... interpreted as a form of phase decorrelation process rather than one with friction. The fractional version of the popular Lotka-Volterra ecological ... trajectory is an ordinary Lotka-Volterra cycle in the operational time. Transitioning from the operational time to the chronological time spreads
Couch, James R; Petersen, Martin; Rice, Carol; Schubauer-Berigan, Mary K
2011-05-01
To construct a job-exposure matrix (JEM) for an Ohio beryllium processing facility between 1953 and 2006 and to evaluate temporal changes in airborne beryllium exposures. Quantitative area- and breathing-zone-based exposure measurements of airborne beryllium were made between 1953 and 2006 and used by plant personnel to estimate daily weighted average (DWA) exposure concentrations for sampled departments and operations. These DWA measurements were used to create a JEM with 18 exposure metrics, which was linked to the plant cohort consisting of 18,568 unique job, department and year combinations. The exposure metrics ranged from quantitative metrics (annual arithmetic/geometric average DWA exposures, maximum DWA and peak exposures) to descriptive qualitative metrics (chemical beryllium species and physical form) to qualitative assignment of exposure to other risk factors (yes/no). Twelve collapsed job titles with long-term consistent industrial hygiene samples were evaluated using regression analysis for time trends in DWA estimates. Annual arithmetic mean DWA estimates (overall plant-wide exposures including administration, non-production, and production estimates) for the data by decade ranged from a high of 1.39 μg/m³ in the 1950s to a low of 0.33 μg/m³ in the 2000s. Of the 12 jobs evaluated for temporal trend, the average arithmetic DWA mean was 2.46 μg/m³ and the average geometric mean DWA was 1.53 μg/m³. After the DWA calculations were log-transformed, 11 of the 12 had a statistically significant (p < 0.05) decrease in reported exposure over time. The constructed JEM successfully differentiated beryllium exposures across jobs and over time. This is the only quantitative JEM containing exposure estimates (average and peak) for the entire plant history.
A Note on Spatial Averaging and Shear Stresses Within Urban Canopies
NASA Astrophysics Data System (ADS)
Xie, Zheng-Tong; Fuka, Vladimir
2018-04-01
One-dimensional urban models embedded in mesoscale numerical models may place several grid points within the urban canopy. This requires an accurate parametrization for shear stresses (i.e. vertical momentum fluxes), including the dispersive stress and momentum sinks at these points. We used a case study with a packing density of 33% and rigorously checked the vertical variation of the spatially-averaged total shear stress, which can be used in a one-dimensional column urban model. We found that the intrinsic spatial average, in which the volume or area of the solid parts is not included in the averaging process, yields a greater time-spatial average of total stress within the canopy and a more evident abrupt change at the top of the buildings than the comprehensive spatial average, in which the volume or area of the solid parts is included.
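The distinction between the two averages can be shown in a few lines; the field values, the 33% solid mask, and the grid size below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    # Hypothetical horizontal slice of stress with a 33% solid (building) mask.
    stress = rng.normal(-0.1, 0.02, size=(64, 64))
    solid = rng.random((64, 64)) < 0.33
    stress[solid] = 0.0                        # no fluid stress inside solids

    comprehensive = stress.sum() / stress.size         # divide by full area
    intrinsic = stress[~solid].sum() / (~solid).sum()  # divide by fluid area only

    print(comprehensive, intrinsic)            # intrinsic has larger magnitude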
The Power Plant Operating Data Based on Real-time Digital Filtration Technology
NASA Astrophysics Data System (ADS)
Zhao, Ning; Chen, Ya-mi; Wang, Hui-jie
2018-03-01
Real-time monitoring of thermal power plant data is the basis of accurately analyzing thermal economy and accurately reconstructing the operating state. Because noise interference is inevitable, real-time filtering of the monitoring data is needed to obtain accurate information on the operating data of the units and equipment of the thermal power plant. A real-time filtering algorithm cannot use future data to correct current data; compared with traditional filtering algorithms, this imposes many constraints. The first-order lag filtering method and the weighted recursive average filtering method can be used for real-time filtering. This paper analyzes the characteristics of the two filtering methods and their application to the real-time processing of simulation data and of thermal power plant operating data. The analysis revealed that the weighted recursive average filtering method applied to the simulation and real-time plant data filtering achieved very good results.
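Minimal sketches of the two filters named above, using only past samples as a real-time filter must; the smoothing coefficient alpha and window length n are assumed values the abstract does not specify.

    import numpy as np

    def first_order_lag(samples, alpha=0.2):
        # y_k = alpha * x_k + (1 - alpha) * y_{k-1}; uses past data only.
        y, out = float(samples[0]), []
        for x in samples:
            y = alpha * float(x) + (1.0 - alpha) * y
            out.append(y)
        return np.array(out)

    def weighted_recursive_average(samples, n=8):
        # Weighted average over the last n samples, newest weighted most.
        weights = np.arange(1, n + 1, dtype=float)
        out, buf = [], []
        for x in samples:
            buf.append(float(x))
            if len(buf) > n:
                buf.pop(0)
            w = weights[-len(buf):] / weights[-len(buf):].sum()
            out.append(float(np.dot(w, buf)))
        return np.array(out)

    noisy = np.sin(np.linspace(0, 6, 200)) + np.random.default_rng(9).normal(0, 0.2, 200)
    smoothed = weighted_recursive_average(noisy)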
Risk analysis and detection of thrombosis by measurement of electrical resistivity of blood.
Sapkota, Achyut; Asakura, Yuta; Maruyama, Osamu; Kosaka, Ryo; Yamane, Takashi; Takei, Masahiro
2013-01-01
Monitoring of the thrombogenic process is very important in ventricular assist devices (VADs) used as temporary or permanent measures in patients with advanced heart failure. Currently, there is no system that can perform real-time monitoring of thrombogenic activity. Electrical signals vary according to changes in the concentration of coagulation factors as well as the distribution of blood cells, and thus have the potential to detect the thrombogenic process at an early stage. In the present work, we have made an assessment of an instrumentation system exploiting the electrical properties of blood. The experiments were conducted using bovine blood. Electrical resistance tomography with an eight-electrode sensor was used to monitor the spatio-temporal change in the electrical resistivity of blood under thrombogenic and non-thrombogenic conditions. Under non-thrombogenic conditions, the resistivity was uniform across the cross-section and the average resistivity monotonically decreased with time before remaining almost flat. In contrast, under thrombogenic conditions, the distribution across the cross-section was non-uniform, and the average resistivity fluctuated with time.
The dynamics of multimodal integration: The averaging diffusion model.
Turner, Brandon M; Gao, Juan; Koenig, Scott; Palfy, Dylan; L McClelland, James
2017-12-01
We combine extant theories of evidence accumulation and multi-modal integration to develop an integrated framework for modeling multimodal integration as a process that unfolds in real time. Many studies have formulated sensory processing as a dynamic process where noisy samples of evidence are accumulated until a decision is made. However, these studies are often limited to a single sensory modality. Studies of multimodal stimulus integration have focused on how best to combine different sources of information to elicit a judgment. These studies are often limited to a single time point, typically after the integration process has occurred. We address these limitations by combining the two approaches. Experimentally, we present data that allow us to study the time course of evidence accumulation within each of the visual and auditory domains as well as in a bimodal condition. Theoretically, we develop a new Averaging Diffusion Model in which the decision variable is the mean rather than the sum of evidence samples and use it as a base for comparing three alternative models of multimodal integration, allowing us to assess the optimality of this integration. The outcome reveals rich individual differences in multimodal integration: while some subjects' data are consistent with adaptive optimal integration, reweighting sources of evidence as their relative reliability changes during evidence integration, others exhibit patterns inconsistent with optimality.
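A toy version of the averaging idea described above: the decision variable is the running mean of the evidence samples rather than their running sum. The drift, noise level, and threshold are assumed values, and this sketch omits the model's full machinery (e.g., reliability reweighting across modalities).

    import numpy as np

    rng = np.random.default_rng(4)

    def averaging_diffusion_trial(drift=0.1, noise=1.0, threshold=0.5, n_max=2000):
        # The decision variable is the running MEAN of the evidence samples
        # (averaging model), not their running SUM (classical diffusion).
        total = 0.0
        for k in range(1, n_max + 1):
            total += drift + noise * rng.normal()
            if abs(total / k) >= threshold:
                return np.sign(total), k       # choice and response time
        return 0.0, n_max                      # no decision before the deadline

    choices, rts = zip(*(averaging_diffusion_trial() for _ in range(5000)))
    print("P(choice = +):", np.mean(np.array(choices) == 1.0), " mean RT:", np.mean(rts))

Because the mean, unlike the sum, contracts toward the drift as samples accumulate, decisions in this variant are driven strongly by early evidence, one qualitative signature that distinguishes the averaging model from classical diffusion.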
NASA Astrophysics Data System (ADS)
Liu, Xiaojia; An, Haizhong; Wang, Lijun; Guan, Qing
2017-09-01
The moving average strategy is a technical indicator that can generate trading signals to assist investment. While the trading signals tell traders when to buy or sell, the moving average cannot tell the trading volume, which is a crucial factor for investment. This paper proposes a fuzzy moving average strategy, in which a fuzzy logic rule is used to determine the strength of trading signals, i.e., the trading volume. To compose one fuzzy logic rule, we use four types of moving averages, the length of the moving average period, the fuzzy extent, and the recommended value. Ten fuzzy logic rules form a fuzzy set, which generates a rating level that decides the trading volume. In this process, we apply genetic algorithms to identify an optimal fuzzy logic rule set and utilize crude oil futures prices from the New York Mercantile Exchange (NYMEX) as the experiment data. Each experiment is repeated 20 times. The results show that, firstly, the fuzzy moving average strategy can obtain a more stable rate of return than the moving average strategies. Secondly, the holding amount series is highly sensitive to the price series. Thirdly, simple moving average methods are more efficient. Lastly, the fuzzy extents of extremely low, high, and very high are more popular. These results are helpful in investment decisions.
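The paper's genetic-algorithm-tuned rule set is not reproduced here, but the core idea of a graded (rather than binary) moving average signal can be sketched as follows; the window lengths, the tanh membership-like function, and the fuzzy_extent scale are assumptions standing in for the fuzzy rule machinery.

    import numpy as np

    def moving_average(prices, n):
        # Trailing simple moving average of window n.
        return np.convolve(prices, np.ones(n) / n, mode="valid")

    def graded_signal(prices, fast=5, slow=20, fuzzy_extent=0.02):
        # Signal strength in [-1, 1] from the fast/slow moving-average gap:
        # the relative gap is passed through a saturating membership-like
        # function, so the output can be read as a trading volume.
        f = moving_average(prices, fast)[-(len(prices) - slow + 1):]
        s = moving_average(prices, slow)
        return np.tanh((f - s) / s / fuzzy_extent)

    prices = 100 + np.cumsum(np.random.default_rng(5).normal(0, 1, 300))
    print("latest signal strength:", graded_signal(prices)[-1])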
Time operators in stroboscopic wave-packet basis and the time scales in tunneling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bokes, P.
2011-03-15
We demonstrate that the time operator that measures the time of arrival of a quantum particle into a chosen state can be defined as a self-adjoint quantum-mechanical operator using periodic boundary conditions and applied to wave functions in energy representation. The time becomes quantized into discrete eigenvalues; and the eigenstates of the time operator, i.e., the stroboscopic wave packets introduced recently [Phys. Rev. Lett. 101, 046402 (2008)], form an orthogonal system of states. The formalism provides simple physical interpretation of the time-measurement process and direct construction of a normalized, positive definite probability distribution for the quantized values of the arrival time. The average value of the time is equal to the phase time but in general depends on the choice of the zero time eigenstate, whereas the uncertainty of the average is related to the traversal time and is independent of this choice. The general formalism is applied to a particle tunneling through a resonant tunneling barrier in one dimension.
[Triage duration times: a prospective descriptive study in a level 1° emergency department].
Bambi, Stefano; Ruggeri, Marco
2017-01-01
Triage is the most important tool for clinical risk management in emergency departments (ED). The timing measurement of its phases is fundamental to establish indicators and standards for the optimization of the system. To evaluate the duration time of the phases of triage; to evaluate some variables exerting influence on nurses' performance. A prospective descriptive study performed in the ED of Careggi Teaching Hospital in Florence. 14 nurses were enrolled by stratified randomization (1/3 of the whole staff), according to classes of length of service. Triage processes on 150 adult patients were recorded. The mean age of nurses was 39.7 years (SD ± 5.2, range 29-50); the average length of service was 10.3 years (SD ± 4.4, range 3-18); the average triage experience was 8.6 years (SD ± 4.3, range 2-13). The median time from the patient's arrival to the end of the triage process was 04':04" (range 00':47"-18':08"); the median duration of triage was 01':11" (range 00':07"-11':27"). The length of service and triage experience did not influence the medians of the recorded intervals of time, but there were some limitations due to the low sample size. Interruptions were observed in 111 (74%) of triage cases. The recorded triage time intervals were similar to those reported in the international literature. Actions are needed to reduce the impact of interruptions on the times of the triage process.
NASA Technical Reports Server (NTRS)
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
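The forward form of the model, a linear filter convolved with an uncorrelated chaotic innovation, can be illustrated with the logistic map standing in for the innovation; the filter coefficients below are assumed for illustration, and the minimum phase-volume deconvolution step is not shown.

    import numpy as np

    # Innovation: the logistic map at r = 4, deterministic yet uncorrelated.
    n = 1000
    x = np.empty(n)
    x[0] = 0.3
    for k in range(1, n):
        x[k] = 4.0 * x[k - 1] * (1.0 - x[k - 1])
    innovation = x - x.mean()

    # Observed series: convolution of an assumed linear filter with the innovation.
    h = np.array([1.0, 0.6, 0.25, 0.1])
    observed = np.convolve(innovation, h, mode="full")[:n]

    # The innovation is (nearly) uncorrelated despite being deterministic:
    print("lag-1 autocorrelation:", np.corrcoef(innovation[:-1], innovation[1:])[0, 1])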
Behind the Match Process: Is There Any Financial Difference Lurking Below the Specialty of Choice?
Oladeji, Lasun O; Raley, James A; Smith, Stephen; Perez, Jorge L; McGwin, Gerald; Ponce, Brent A
2016-12-01
The Match was developed in response to a chaotic residency selection process. While the match has remained relatively unchanged since it was introduced, the number of medical school graduates has increased at a rate outpacing the number of residency positions, leading to a more competitive process for applicants. In May 2014, an 18-question mixed-response questionnaire was distributed to fourth-year allopathic medical students via an E-mail distribution list for student affairs representatives. The individual surveys were accessible via SurveyMonkey and available for completion over the course of a 4-week period. Approximately 65.1 per cent of students performed at least one audition rotation and documented average expenditures of $2494 on housing, food, and transportation. The average applicant applied to 32 programs and attended 12 interviews while spending $4420 on the interview trail. Applicants for surgical programs applied to approximately 42 programs and attended 13 interviews compared with primary care applicants who averaged 23 programs (P < 0.001) and attended 12 interviews (P = 0.002). Surgical applicants averaged 20 days on the interview trail while spending $5500 ($423/interview) on housing, food, and transportation compared with primary care applicants who averaged 19 days away from home (P < 0.05) and spent $3400 ($283/interview) on these same items (P < 0.001). The findings in our study indicate that the "Match process" contributes to the financial burden of graduating medical students and it is more expensive and time-consuming for the candidates interested in surgical specialties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padrino-Inciarte, Juan Carlos; Ma, Xia; VanderHeyden, W. Brian
2016-01-01
General ensemble phase averaged equations for multiphase flows have been specialized for the simulation of the steam assisted gravity drainage (SAGD) process. In the average momentum equation, fluid-solid and fluid-fluid viscous interactions are represented by separate force terms. This equation has a form similar to that of Darcy's law for multiphase flow but augmented by the fluid-fluid viscous forces. Models for these fluid-fluid interactions are suggested and implemented into the numerical code CartaBlanca. Numerical results indicate that the model captures the main features of the multiphase flow in the SAGD process, but detailed features, such as plumes, are missed. We find that viscous coupling among the fluid phases is important. Advection time scales for the different fluids differ by several orders of magnitude because of vast viscosity differences. Numerically resolving all of these time scales is time consuming. To address this problem, we introduce a steam surrogate approximation to increase the steam advection time scale, while keeping the mass and energy fluxes well approximated. This approximation leads to about a 40-fold speed-up in execution speed of the numerical calculations at the cost of a few percent error in the relevant quantities.
DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes
NASA Astrophysics Data System (ADS)
Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.
2008-12-01
A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output from unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest thus building up a combination of time- and ensemble-averaged sampling data. The particle data is regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results from the interaction of a Mach 4 shock with a square cavity and for the interaction of a Mach 12 shock on a wedge in a channel.
Bardin, Ann; Primeau, Francois; Lindsay, Keith; ...
2016-07-21
Newton-Krylov solvers for ocean tracers have the potential to greatly decrease the computational costs of spinning up deep-ocean tracers, which can take several thousand model years to reach equilibrium with surface processes. One version of the algorithm uses offline tracer transport matrices to simulate an annual cycle of tracer concentrations and applies Newton's method to find concentrations that are periodic in time. Here we present the impact of time-averaging the transport matrices on the equilibrium values of an ideal-age tracer. We compared annually-averaged, monthly-averaged, and 5-day-averaged transport matrices to an online simulation using the ocean component of the Community Earth System Model (CESM) with a nominal horizontal resolution of 1° × 1° and 60 vertical levels. We found that increasing the time resolution of the offline transport model reduced a low age bias from 12% for the annually-averaged transport matrices, to 4% for the monthly-averaged transport matrices, and to less than 2% for the transport matrices constructed from 5-day averages. The largest differences were in areas with strong seasonal changes in the circulation, such as the Northern Indian Ocean. As a result, for many applications the relatively small bias obtained using the offline model makes the offline approach attractive because it uses significantly less computer resources and is simpler to set up and run.
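The periodicity condition can be posed as a root-finding problem c = M(c) and handed to a Newton-Krylov solver. The sketch below does this for a toy box model with twelve assumed monthly transport operators and a surface reset standing in for the ideal-age boundary condition; it is not the CESM setup.

    import numpy as np
    from scipy.optimize import newton_krylov

    rng = np.random.default_rng(6)
    nbox = 20                                  # toy water column of 20 boxes

    def random_transport():
        # Column-stochastic mixing matrix standing in for one month's
        # offline transport matrix (entirely synthetic).
        A = np.abs(rng.normal(size=(nbox, nbox))) + 5.0 * np.eye(nbox)
        return A / A.sum(axis=0)

    months = [random_transport() for _ in range(12)]
    source = np.full(nbox, 1.0 / 12.0)         # ideal age: +1 year per year

    def one_year(c):
        # Propagate the tracer through one annual cycle of transport.
        for M in months:
            c = M @ c + source
            c[0] = 0.0                         # surface box: age reset to zero
        return c

    # Newton's method finds the state that is periodic in time, c = one_year(c),
    # without time-stepping for thousands of model years.
    c_eq = newton_krylov(lambda c: one_year(c) - c, np.zeros(nbox), f_tol=1e-9)
    print("max annual-cycle residual:", np.abs(one_year(c_eq) - c_eq).max())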
Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.
1998-01-01
A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input including a preset tolerance against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input including a preset tolerance against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from each of all the sensors are compared against the last validated measurement and the value from the sensor input that deviates the least from the last valid measurement is displayed.
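The two-pass scheme reads naturally as a short function. This is a sketch of the description above, not the patented implementation; the tolerance and the example readings are assumed values, and the fallback on a validation fault is simplified to choosing the input closest to the last validated measurement.

    def validate(readings, tol=1.0, last_valid=None):
        # First pass: deviation-check every input against the initial average.
        initial_avg = sum(readings) / len(readings)
        good = [r for r in readings if abs(r - initial_avg) <= tol]
        bad = [r for r in readings if abs(r - initial_avg) > tol]

        # Second pass: re-average the good inputs and deviation-check again.
        if len(good) >= 2:
            second_avg = sum(good) / len(good)
            if all(abs(r - second_avg) <= tol for r in good):
                return second_avg, bad         # validated measurement
        # Validation fault: fall back to the input closest to the last
        # validated measurement (simplified from the description above).
        if last_valid is None:
            raise ValueError("validation fault and no previous valid value")
        return min(readings, key=lambda r: abs(r - last_valid)), bad

    value, flagged = validate([10.02, 9.98, 10.05, 12.7])
    print(value, flagged)                      # ~10.017, with 12.7 flagged bad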
Lopez-Fagundo, Cristina; Mitchel, Jennifer A.; Ramchal, Talisha D.; Dingle, Yu-Ting L.; Hoffman-Kim, Diane
2013-01-01
The path created by aligned Schwann cells (SCs) after nerve injury underlies peripheral nerve regeneration. We developed geometric bioinspired substrates to extract key information needed for axon guidance by deconstructing the topographical cues presented by SCs. We have previously reported materials that directly replicate SC topography with micro- and nanoscale resolution, but a detailed explanation of the means of directed axon extension on SC topography has not yet been described. Here, using neurite tracing and time-lapse microscopy, we analyzed the SC features that influence axon guidance. Novel poly(dimethylsiloxane) materials, fabricated via photolithography, incorporated bioinspired topographical components with the shapes and sizes of aligned SCs, namely somas and processes, where the length of the processes was varied but the soma geometry and dimensions were kept constant. Rat dorsal root ganglia neurites aligned to all materials presenting bioinspired topography after 5 days in culture and to bioinspired materials presenting soma and process features after only 17 hours in culture. The key finding of this study was that neurite response to the underlying bioinspired topographical features was time-dependent: at 5 days, neurites aligned most strongly to materials presenting combinations of soma and process features with higher than average density of either process or soma features, but at 17 hours they aligned more strongly to materials presenting average densities of soma and process features and to materials presenting process features only. These studies elucidate the influence of SC topography on axon guidance in a time-dependent setting and have implications for the optimization of nerve regeneration strategies. PMID:23557939
Fixation of strategies with the Moran and Fermi processes in evolutionary games
NASA Astrophysics Data System (ADS)
Liu, Xuesong; He, Mingfeng; Kang, Yibin; Pan, Qiuhui
2017-10-01
A model of stochastic evolutionary game dynamics with a finite population was built. It combines the standard Moran and Fermi rules with two strategies, cooperation and defection. We obtain expressions for the fixation probabilities and fixation times. The one-third rule, which has been found in the frequency-dependent Moran process, also holds for our model. We obtain the conditions for a strategy to be an evolutionarily stable strategy in our model, and then make a comparison with the standard Moran process. Besides, the analytical results show that compared with the standard Moran process, fixation occurs with higher probabilities under a prisoner's dilemma game and a coordination game, but with lower probabilities under a coexistence game. The simulation result shows that the fixation time in our mixed process is lower than that in the standard Fermi process. In comparison with the standard Moran process, fixation always takes more time on average in spatial populations, regardless of the game. In addition, the fixation time decreases with the growth of the number of neighbors.
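A direct simulation of the frequency-dependent Moran process of the kind analyzed above; the payoff matrix, selection intensity w, population size, and run count are assumed values chosen to give a prisoner's dilemma.

    import numpy as np

    rng = np.random.default_rng(7)

    def moran_fixation(N=50, i0=1, a=3, b=1, c=4, d=2, w=0.1, runs=5000):
        # Fixation probability of strategy C in a frequency-dependent Moran
        # process for the game [[a, b], [c, d]] (here a prisoner's dilemma).
        fixed = 0
        for _ in range(runs):
            i = i0                             # current number of C players
            while 0 < i < N:
                pi_c = (a * (i - 1) + b * (N - i)) / (N - 1)
                pi_d = (c * i + d * (N - i - 1)) / (N - 1)
                f_c = 1.0 - w + w * pi_c       # fitness with selection strength w
                f_d = 1.0 - w + w * pi_d
                p_c = i * f_c / (i * f_c + (N - i) * f_d)
                if rng.random() < p_c:         # a C reproduces ...
                    i += rng.random() < (N - i) / N   # ... replacing a random D
                else:                          # a D reproduces ...
                    i -= rng.random() < i / N         # ... replacing a random C
            fixed += i == N
        return fixed / runs

    print("fixation probability of C:", moran_fixation())

For this dominated-cooperation game the estimate falls well below the neutral benchmark 1/N, the qualitative behavior the analytical comparison above addresses.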
NASA Astrophysics Data System (ADS)
Gong, Yubing; Xie, Huijuan
2017-09-01
Using spike-timing-dependent plasticity (STDP), we study the effect of channel noise on temporal coherence and synchronization of adaptive scale-free Hodgkin-Huxley neuronal networks with time delay. It is found that the spiking regularity and spatial synchronization of the neurons intermittently increase and decrease as channel noise intensity is varied, exhibiting transitions of temporal coherence and synchronization. Moreover, this phenomenon depends on time delay, STDP, and network average degree. As time delay increases, the phenomenon is weakened, however, there are optimal STDP and network average degree by which the phenomenon becomes strongest. These results show that channel noise can intermittently enhance the temporal coherence and synchronization of the delayed adaptive neuronal networks. These findings provide a new insight into channel noise for the information processing and transmission in neural systems.
Michael, Claire W; Naik, Kalyani; McVicker, Michael
2013-05-01
We developed a value stream map (VSM) of the Papanicolaou test procedure to identify opportunities to reduce waste and errors, created a new VSM, and implemented a new process emphasizing Lean tools. Preimplementation data revealed the following: (1) processing time (PT) for 1,140 samples averaged 54 hours; (2) 27 accessioning errors were detected on review of 357 random requisitions (7.6%); (3) 5 of the 20,060 tests had labeling errors that had gone undetected in the processing stage. Four were detected later during specimen processing but 1 reached the reporting stage. Postimplementation data were as follows: (1) PT for 1,355 samples averaged 31 hours; (2) 17 accessioning errors were detected on review of 385 random requisitions (4.4%); and (3) no labeling errors were undetected. Our results demonstrate that implementation of Lean methods, such as first-in first-out processes and minimizing batch size by staff actively participating in the improvement process, allows for higher quality, greater patient safety, and improved efficiency.
Hurst, David; Sharpe, Sharon; Yeager, Valerie A
We assessed whether administrative preparedness processes that were intended to expedite the acquisition of goods and services during a public health emergency affect estimated procurement and contracting cycle times. We obtained data from 2014-2015 applications to the Hospital Preparedness Program and Public Health Emergency Preparedness (HPP-PHEP) cooperative agreements. We compared the estimated procurement and contracting cycle times of 61 HPP-PHEP awardees that did and did not have certain administrative processes in place. Certain processes, such as statutes allowing for procuring and contracting on the open market, had an effect on reducing the estimated cycle times for obtaining goods and services. Other processes, such as cooperative purchasing agreements, also had an effect on estimated procurement time. For example, awardees with statutes that permitted them to obtain goods and services in the open market had an average procurement cycle time of 6 days; those without such statutes had a cycle time of 17 days (P = .04). PHEP awardees should consider adopting these or similar processes in an effort to reduce cycle times.
Preformulation Studies of Selected Pretreatment and Therapeutic Compounds
1982-07-01
(Mefloquine Hydrochloride) Placebo Tablets: Manufacturing Formula; Final Weight Variation; Disintegration Test; In-Process Control Analyses ... average weight of 563.9 mg. Disintegration times in 900 ml of water (37°C) were around 35 seconds for six tablets. Each tablet had an average thickness... Antimalarials; preformulations; antileishmaniasis; capsule production; tablets
Modelling and optimization of semi-solid processing of 7075 Al alloy
NASA Astrophysics Data System (ADS)
Binesh, B.; Aghaie-Khafri, M.
2017-09-01
The new modified strain-induced melt activation (SIMA) process presented by Binesh and Aghaie-Khafri was optimized using a response surface methodology to improve the thixotropic characteristics of semi-solid 7075 alloy. The responses, namely the average grain size and the shape factor, were considered as functions of three independent input variables: effective strain, isothermal holding temperature and time. Mathematical models for the responses were developed using the regression analysis technique, and the adequacy of the models was validated by the analysis of variance method. The calculated results correlated fairly well with the experiments. It was found that all the first- and second-order terms of the independent parameters and the interactive terms of the effective strain and holding time were statistically significant for the responses. In order to simultaneously optimize the responses, the desirable values for the effective strain, holding temperature and time were predicted to be 5.1, 609 °C and 14 min, respectively, when employing the desirability function approach. Based on the optimization results, a significant improvement in the average grain size and shape factor of the semi-solid slurry prepared by the new modified SIMA process was observed.
Computing return times or return periods with rare event algorithms
NASA Astrophysics Data System (ADS)
Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy
2018-04-01
The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far those algorithms have often computed probabilities, rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, gaining several orders of magnitude in computational cost. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
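The block-extrema estimator mentioned above can be sketched on the Ornstein-Uhlenbeck example; the discretization step, trajectory length, block size, and threshold levels are assumed values, and this is the plain estimator from direct simulation, not the rare-event-algorithm variant.

    import numpy as np
    from scipy.signal import lfilter

    rng = np.random.default_rng(8)

    # Ornstein-Uhlenbeck trajectory dX = -X dt + sqrt(2) dW (Euler-Maruyama,
    # written as an AR(1) recursion and evaluated with lfilter for speed).
    dt, n = 1e-2, 5_000_000
    noise = rng.normal(0.0, np.sqrt(2.0 * dt), size=n)
    x = lfilter([1.0], [1.0, -(1.0 - dt)], noise)

    def return_time(traj, a, block):
        # Block-maximum estimator: r(a) = -T_block / log(1 - q), where q is
        # the fraction of blocks whose maximum exceeds the level a.
        m = traj[: traj.size // block * block].reshape(-1, block).max(axis=1)
        q = (m > a).mean()
        return -block * dt / np.log1p(-q) if 0.0 < q < 1.0 else np.inf

    for a in (3.0, 3.5, 4.0):
        print(f"estimated return time of level {a}: {return_time(x, a, 10_000):.0f}")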
Integrated Ion Exchange Regeneration Process for Drinking Water
2010-04-01
This report describes an evaluation of the synthesis of an ion exchange (IX) process for perchlorate... The record also lists related analytes and acronyms: volatile organic carbon; atomic absorption spectroscopy; 1,2-dibromo-3-chloropropane; dibenzofurans; disinfectants and disinfection by-products; decafluorotriphenylphosphine.
NASA Astrophysics Data System (ADS)
Piecuch, Christopher G.; Landerer, Felix W.; Ponte, Rui M.
2018-05-01
Monthly ocean bottom pressure solutions from the Gravity Recovery and Climate Experiment (GRACE), derived using surface spherical cap mass concentration (MC) blocks and spherical harmonics (SH) basis functions, are compared to tide gauge (TG) monthly averaged sea level data over 2003-2015 to evaluate improved gravimetric data processing methods near the coast. MC solutions can explain ≳42% of the monthly variance in TG time series over broad shelf regions and in semi-enclosed marginal seas. MC solutions also generally explain ~5-32% more TG data variance than SH estimates. Applying a coastline resolution improvement algorithm in the GRACE data processing leads to ~31% more variance in TG records explained by the MC solution on average compared to not using this algorithm. Synthetic observations sampled from an ocean general circulation model exhibit similar patterns of correspondence between modeled TG and MC time series and differences between MC and SH time series in terms of their relationship with TG time series, suggesting that observational results here are generally consistent with expectations from ocean dynamics. This work demonstrates the improved quality of recent MC solutions compared to earlier SH estimates over the coastal ocean, and suggests that the MC solutions could be a useful tool for understanding contemporary coastal sea level variability and change.
Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo; Whyte, Wayne A., Jr.
1989-01-01
Advances in very large-scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the CODEC are described, and performance results are provided.
Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel
NASA Technical Reports Server (NTRS)
Shalkhauser, Mary Jo; Whyte, Wayne A.
1991-01-01
Advances in very large-scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.
Studies in astronomical time series analysis. I - Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1981-01-01
Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm for time series analysis has been developed which is numerically relatively stable. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 273 is considered as an example.
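As a minimal sketch of the two model classes discussed above (the coefficients are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
noise = rng.standard_normal(n)

# Moving-average model of order q: x_t = e_t + b1*e_{t-1} + ... + bq*e_{t-q}
b = np.array([1.0, 0.6, 0.3])              # hypothetical MA(2) coefficients
x_ma = np.convolve(noise, b, mode="full")[:n]

# Autoregressive model of order p: x_t = a1*x_{t-1} + ... + ap*x_{t-p} + e_t
a = np.array([0.7, -0.2])                  # hypothetical AR(2) coefficients
x_ar = np.zeros(n)
for t in range(n):
    for k, ak in enumerate(a, start=1):
        if t - k >= 0:
            x_ar[t] += ak * x_ar[t - k]
    x_ar[t] += noise[t]
```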
The Actual (Un)usefulness of the Bologna System in Medical Education.
Masic, Izet; Begic, Edin
2016-04-01
The Faculty of Medicine, University of Sarajevo officially began operating on 22 November 1944 and is the oldest medical faculty in Bosnia and Herzegovina. Two systems of organizing the teaching process currently coexist there: the old system and the Bologna system. The aim was to analyze the implementation of the Bologna system and to assess its justification. Questionnaire answers from a total of 459 students were analyzed (197 who had studied under the old system and 262 who studied under the Bologna system, covering four generations of the Bologna system). The students filled out a questionnaire in which they evaluated the teaching process. Students' opinions about the quality of medical education were measured on a modified Likert scale. Students of the old system are older than students of the Bologna process, whose average age increases from generation to generation, given the growing number of students who repeat a year. All students of the old system repeated an academic year once or several times (p < 0.05). Analysis of average grades showed a statistically significant difference (p < 0.05): students in the Bologna system had higher averages than students who studied under the old system. The large share of female students in both systems is significant (p < 0.05). Out of 33 questions about satisfaction with classes, 15 were answered with a better average grade by students of the Bologna system. The Bologna system shows a slight improvement in the evaluation of the quality of the educational process (teachers, methods, effects). The only significant progress was in ratings of the degree of computerization of the educational process: general records on enrolled students (old system vs. Bologna system, 3.44 vs. 3.63), records of attendance (3.47 vs. 3.73), obtaining certificates (3.08 vs. 3.84), method of exam registration (2.98 vs. 3.71), method of practical examination (3.06 vs. 3.36), and method of theoretical examination (3.01 vs. 3.14). The fact that no average grade on any question reached 4 on the Likert scale points to real problems of education in the medical field. In a relatively large sample (four generations of students), no true benefit or progress of the Bologna system over the old system could be demonstrated. The Bologna system has eased administrative burdens for students through the computerization of faculties, but the old problems and old questions about the organization and delivery of the teaching process have not been eliminated.
Time averaging of NMR chemical shifts in the MLF peptide in the solid state.
De Gortari, Itzam; Portella, Guillem; Salvatella, Xavier; Bajaj, Vikram S; van der Wel, Patrick C A; Yates, Jonathan R; Segall, Matthew D; Pickard, Chris J; Payne, Mike C; Vendruscolo, Michele
2010-05-05
Since experimental measurements of NMR chemical shifts provide time and ensemble averaged values, we investigated how these effects should be included when chemical shifts are computed using density functional theory (DFT). We measured the chemical shifts of the N-formyl-L-methionyl-L-leucyl-L-phenylalanine-OMe (MLF) peptide in the solid state, and then used the X-ray structure to calculate the (13)C chemical shifts using the gauge including projector augmented wave (GIPAW) method, which accounts for the periodic nature of the crystal structure, obtaining an overall accuracy of 4.2 ppm. In order to understand the origin of the difference between experimental and calculated chemical shifts, we carried out first-principles molecular dynamics simulations to characterize the molecular motion of the MLF peptide on the picosecond time scale. We found that (13)C chemical shifts experience very rapid fluctuations of more than 20 ppm that are averaged out over less than 200 fs. Taking account of these fluctuations in the calculation of the chemical shifts resulted in an accuracy of 3.3 ppm. To investigate the effects of averaging over longer time scales, we sampled the rotameric states populated by the MLF peptide in the solid state by performing a total of 5 μs of classical molecular dynamics simulations. By averaging the chemical shifts over these rotameric states, we increased the accuracy of the chemical shift calculations to 3.0 ppm, with less than 1 ppm error in 10 out of 22 cases. These results suggest that better DFT-based predictions of chemical shifts of peptides and proteins will be achieved by developing improved computational strategies capable of taking into account the averaging process up to the millisecond time scale on which the chemical shift measurements report.
Liu, Huolong; Galbraith, S C; Ricart, Brendon; Stanton, Courtney; Smith-Goettler, Brandye; Verdi, Luke; O'Connor, Thomas; Lee, Sau; Yoon, Seongkyu
2017-06-15
In this study, the influence of key process variables (screw speed, throughput and liquid to solid (L/S) ratio) of a continuous twin screw wet granulation (TSWG) process was investigated using a central composite face-centered (CCF) experimental design method. Regression models were developed to predict the process responses (motor torque, granule residence time), granule properties (size distribution, volume average diameter, yield, relative width, flowability) and tablet properties (tensile strength). The effects of the three key process variables were analyzed via contour and interaction plots. The experimental results demonstrated that all the process responses, granule properties and tablet properties are influenced by changing the screw speed, throughput and L/S ratio. The TSWG process was optimized to produce granules with a specific volume average diameter of 150 μm and a yield of 95% based on the developed regression models. A design space (DS) was built based on a volume average granule diameter between 90 and 200 μm and a granule yield larger than 75%, with a failure probability analysis using Monte Carlo simulations. Validation experiments successfully confirmed the robustness and accuracy of the DS generated using the CCF experimental design in optimizing a continuous TSWG process.
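The design-space step can be sketched as follows; the response surfaces, coefficients, noise levels, and operating point below are invented stand-ins for the paper's fitted regression models, kept only to show the Monte Carlo failure-probability mechanics:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical fitted quadratic response models (coefficients are made up);
# inputs are scaled to [-1, 1] as in CCF coding.
def d50(speed, thru, ls):        # volume average diameter (um)
    return 150 + 25 * speed - 18 * thru + 40 * ls + 10 * speed * ls

def granule_yield(speed, thru, ls):  # granule yield (%)
    return 88 + 4 * speed + 3 * thru - 6 * ls * ls

def failure_probability(speed, thru, ls, n=10_000, noise_d=12.0, noise_y=4.0):
    """Monte Carlo estimate that a setting violates the design-space specs
    (90 <= d50 <= 200 um and yield > 75%), given residual model noise."""
    d = d50(speed, thru, ls) + noise_d * rng.standard_normal(n)
    y = granule_yield(speed, thru, ls) + noise_y * rng.standard_normal(n)
    ok = (d >= 90) & (d <= 200) & (y > 75)
    return 1.0 - ok.mean()

print(failure_probability(0.0, 0.0, 0.0))   # center point of the design
```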
Comparison of C5 and C6 Aqua-MODIS Dark Target Aerosol Validation
NASA Technical Reports Server (NTRS)
Munchak, Leigh A.; Levy, Robert C.; Mattoo, Shana
2014-01-01
We compare C5 and C6 validation to evaluate the C6 10 km aerosol product against the well-validated and trusted C5 aerosol product on global and regional scales. Only the 10 km aerosol product is evaluated in this study; validation of the new C6 3 km aerosol product still needs to be performed. Not all of the time series has been processed yet for C5 or C6, and the years processed for the two products are not exactly the same (this work is preliminary!). To reduce the impact of outlier observations, MODIS is spatially averaged within 27.5 km of the AERONET site, and AERONET is temporally averaged within 30 minutes of the MODIS overpass time. Only high-quality (QA = 3 over land, QA greater than 0 over ocean) pixels are included in the mean.
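A minimal sketch of this spatio-temporal collocation, under simplifying assumptions (hypothetical array names, a flat-earth distance approximation, times in minutes, and a median overpass time):

```python
import numpy as np

def collocate(modis_lat, modis_lon, modis_aod, modis_qa, modis_time,
              site_lat, site_lon, aeronet_time, aeronet_aod,
              radius_km=27.5, window_min=30.0, qa_min=3):
    """Spatio-temporal collocation of satellite and ground data (sketch)."""
    # Spatial mean of high-quality MODIS pixels within radius of the site
    km_per_deg = 111.0
    dx = (modis_lon - site_lon) * km_per_deg * np.cos(np.radians(site_lat))
    dy = (modis_lat - site_lat) * km_per_deg
    near = (np.hypot(dx, dy) <= radius_km) & (modis_qa >= qa_min)
    modis_mean = modis_aod[near].mean() if near.any() else np.nan

    # Temporal mean of AERONET within +/- window of the overpass time
    overpass = np.median(modis_time[near]) if near.any() else np.nan
    close = np.abs(aeronet_time - overpass) <= window_min
    aeronet_mean = aeronet_aod[close].mean() if close.any() else np.nan
    return modis_mean, aeronet_mean
```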
NASA Astrophysics Data System (ADS)
Laurantzon, F.; Örlü, R.; Segalini, A.; Alfredsson, P. H.
2010-12-01
Vortex flowmeters are commonly employed in technical applications and are obtainable in a variety of commercially available types. However, their robustness and accuracy can easily be impaired by environmental conditions, such as inflow disturbances and/or pulsating conditions. Various post-processing techniques for the vortex signal have been used, but all of these methods have so far targeted an improved estimate of the time-averaged bulk velocity. Here, on the other hand, we propose, based on wavelet analysis, a straightforward way to utilize the signal from a vortex shedder to extract the time-resolved and thereby the phase-averaged velocity under pulsatile flow conditions. The method was verified with hot-wire and laser Doppler velocimetry measurements.
Optimal weighted averaging of event related activity from acquisitions with artifacts.
Vollero, Luca; Petrichella, Sara; Innello, Giulio
2016-08-01
In several biomedical applications that require signal processing of biological data, the starting procedure for noise reduction is the ensemble averaging of multiple repeated acquisitions (trials). This method is based on the assumption that each trial is composed of two additive components: (i) a time-locked activity related to some sensitive/stimulation phenomenon (ERA, Event Related Activity in the following) and (ii) a sum of several other non-time-locked background activities. The averaging aims at estimating the ERA under a very low Signal to Noise and Interference Ratio (SNIR). Although averaging is a well-established tool, its performance can be improved in the presence of high-power disturbances (artifacts) by a trial classification and removal stage. In this paper we propose, model and evaluate a new approach that avoids trial removal, managing trials classified as artifact-free and artifact-prone with two different weights. Based on the model, the weights can be tuned, and through modeling and simulations we show that, when optimally configured, the proposed solution outperforms classical approaches.
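A minimal sketch of the weighted-averaging idea (the weight value and the synthetic data are assumptions; the paper tunes the weights from its model rather than fixing them a priori):

```python
import numpy as np

def weighted_era_average(trials, artifact_prone, w_artifact=0.2):
    """Weighted ensemble average of event-related activity (sketch).

    trials         : (n_trials, n_samples) array of epoched data
    artifact_prone : boolean mask, True for trials flagged as artifact-prone
    w_artifact     : hypothetical down-weight for artifact-prone trials
    """
    w = np.where(artifact_prone, w_artifact, 1.0)
    return (w[:, None] * trials).sum(axis=0) / w.sum()

# Usage: 40 clean trials and 10 artifact-prone ones
rng = np.random.default_rng(3)
era = np.sin(np.linspace(0, np.pi, 500))             # ground-truth ERA
trials = era + rng.standard_normal((50, 500))        # background activity
trials[40:] += 20 * rng.standard_normal((10, 500))   # high-power artifacts
mask = np.arange(50) >= 40
estimate = weighted_era_average(trials, mask)
```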
Yang, Li-Xin; Zhang, Gui-Xiang; Yang, Li-Xue; He, Yan
2012-01-01
Acrylamide (AA) concentrations in instant noodles (90 samples, covering 10 different brands) from Hebei Province of China were determined by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The instant noodles were sampled from the southern and northern areas of Hebei Province (Shijiazhuang and Tangshan, respectively). The average AA content across the 10 instant noodle brands ranged from 6 to 145 µg/kg. The average AA content in fried instant noodles was 4.47 times that in non-fried ones, indicating the influence of the frying process. The average AA content in instant noodles from Shijiazhuang was 1.64 times that of the samples from Tangshan (56 µg/kg). Eighty-four percent of the instant noodle samples in Hebei were contaminated with AA, with an average content of 80 µg/kg. These observations will be helpful for evaluating individual exposure to AA from instant noodles in China.
2017-12-01
inefficiencies of a more complex system. Additional time may also be due to the longer distances traveled. The fulfillment time for a requisition to... This work compares advanced manufacturing methods with additive manufacturing, decomposing the additive manufacturing process into 11 primary functions. The time...
NASA Astrophysics Data System (ADS)
Hulot, G.; Khokhlov, A.; Johnson, C. L.
2012-12-01
It is well known that the geometry of the recent time-averaged paleomagnetic field (TAF) is very close to that of a geocentric axial dipole (GAD). Yet, numerous numerical dynamo simulations show that some departures from such a simple geometry are to be expected, not least because of the heterogeneous thermal core-mantle boundary conditions that the convecting mantle imposes on the geodynamo. Indeed, many TAF models recovered from averaging lava flow paleomagnetic directional data (the most numerous and reliable of all data) suggest this is the case. However, assessing the significance of such minor departures from the GAD is particularly challenging, because non-linear directional data are sensitive not only to the time-averaged component of the field, but also to its time-fluctuating component, known as the paleosecular variation (PSV). This means that in addition to data errors, PSV also must be taken into account when assessing any claims, based on lava flow directional data, of departures of the TAF from the GAD. Furthermore, because of the limited age information for these data, it is necessary to assess departures from the GAD by resorting to a statistical approach. We report recent progress using an approach we have suggested and further developed (Khokhlov et al., Geophysical Journal International, 2001, 2006) to test the compatibility of combined time-averaged (TAF) and paleosecular variation (PSV) field models against any lava flow paleomagnetic database, assuming that these TAF and PSV models are defined within the Giant Gaussian Process statistical framework. In particular, we will show how sensitive statistical measures of the compatibility of a combined set of TAF and PSV models with a given directional database can be defined. These measures can be used to test published TAF and PSV models with updated 0-5 Ma lava flow paleomagnetic data sets. They also lay the groundwork for designing inverse methods better suited to seeking the minimum required departure of the TAF from the GAD.
Optimal protocols for slowly driven quantum systems.
Zulkowski, Patrick R; DeWeese, Michael R
2015-09-01
The design of efficient quantum information processing will rely on optimal nonequilibrium transitions of driven quantum systems. Building on a recently developed geometric framework for computing optimal protocols for classical systems driven in finite time, we construct a general framework for optimizing the average information entropy for driven quantum systems. Geodesics on the parameter manifold endowed with a positive semidefinite metric correspond to protocols that minimize the average information entropy production in finite time. We use this framework to explicitly compute the optimal entropy production for a simple two-state quantum system coupled to a heat bath of bosonic oscillators, which has applications to quantum annealing.
Wendt, Dorothea; Brand, Thomas; Kollmeier, Birger
2014-01-01
An eye-tracking paradigm was developed for use in audiology in order to enable online analysis of the speech comprehension process. This paradigm should be useful in assessing impediments in speech processing. In this paradigm, two scenes, a target picture and a competitor picture, were presented simultaneously with an aurally presented sentence that corresponded to the target picture. At the same time, eye fixations were recorded using an eye-tracking device. The effect of linguistic complexity on language processing time was assessed from eye fixation information by systematically varying linguistic complexity. This was achieved with a sentence corpus containing seven German sentence structures. A novel data analysis method computed the average tendency to fixate the target picture as a function of time during sentence processing. This allowed identification of the point in time at which the participant understood the sentence, referred to as the decision moment. Systematic differences in processing time were observed as a function of linguistic complexity. These differences in processing time may be used to assess the efficiency of cognitive processes involved in resolving linguistic complexity. Thus, the proposed method enables a temporal analysis of the speech comprehension process and has potential applications in speech audiology and psychoacoustics.
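A minimal sketch of how such a fixation-proportion curve and decision moment could be computed (the threshold criterion and data layout are assumptions, not the paper's exact definition):

```python
import numpy as np

def decision_moment(fix_target, fix_competitor, t, threshold=0.75):
    """Average tendency to fixate the target, and the decision moment (sketch).

    fix_target, fix_competitor : (n_trials, n_samples) boolean fixation masks
    t                          : (n_samples,) time axis in seconds
    threshold                  : hypothetical criterion for 'understood'
    """
    looks = fix_target.sum(0) + fix_competitor.sum(0)
    # Proportion of target fixations among all picture fixations, per sample
    p_target = np.divide(fix_target.sum(0), looks,
                         out=np.full(t.shape, np.nan), where=looks > 0)
    above = np.nonzero(p_target >= threshold)[0]
    return p_target, (t[above[0]] if above.size else None)
```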
NASA Astrophysics Data System (ADS)
Kowalska, Małgorzata; Janas, Sławomir; Woźniak, Magdalena
2018-04-01
The aim of this work was to present an alternative method for determining the total dry mass content in processed cheese. The authors claim that the presented method can be used in industry quality-control laboratories for routine testing and for quick in-process control. Both the reference method for determining dry mass in processed cheese and a moisture analyzer method were used, with tests carried out on three different kinds of processed cheese. In accordance with the reference method, the sample was placed on a layer of silica sand and dried at a temperature of 102 °C for about 4 h. The moisture analyzer test required method validation with regard to the drying temperature range and the mass of the analyzed sample. An optimum drying temperature of 110 °C was determined experimentally. For the Hochland cream processed cheese sample, the total dry mass content obtained using the reference method was 38.92%, whereas the moisture analyzer method gave 38.74%. The average analysis time with the moisture analyzer method was 9 min. For the sample of processed cheese with tomatoes, the reference method result was 40.37% and the alternative method result was 40.67%. For the sample of cream processed cheese with garlic, the reference method gave a value of 36.88% and the alternative method 37.02%. The average time of those determinations was 16 min. The obtained results confirmed that use of a moisture analyzer is effective: consistent values of dry mass content were obtained with both methods. According to the authors, the fact that the measurement takes incomparably less time with the moisture analyzer method is a key criterion for selecting methods for in-process control and final quality control.
NASA Astrophysics Data System (ADS)
Khokhlov, A.; Hulot, G.; Johnson, C. L.
2013-12-01
It is well known that the geometry of the recent time-averaged paleomagnetic field (TAF) is very close to that of a geocentric axial dipole (GAD). However, many TAF models recovered from averaging lava flow paleomagnetic directional data (the most numerous and reliable of all data) suggest that significant additional terms, in particular quadrupolar (G20) and octupolar (G30) zonal terms, likely contribute. The traditional way in which most such TAF models are recovered uses an empirical estimate for paleosecular variation (PSV) that is subject to limitations imposed by the limited age information available for such data. In this presentation, we will report on a new way to recover the TAF, using an inverse modeling approach based on the so-called Giant Gaussian Process (GGP) description of the TAF and PSV, and various statistical tools we recently made available (see Khokhlov and Hulot, Geophysical Journal International, 2013, doi: 10.1093/gji/ggs118). First results based on high quality data published from the Time-Averaged Field Investigations project (see Johnson et al., G-cubed, 2008, doi:10.1029/2007GC001696) clearly show that both the G20 and G30 terms are very well constrained, and that optimum values fully consistent with the data can be found. These promising results lay the groundwork for use of the method with more extensive data sets, to search for possible additional non-zonal departures of the TAF from the GAD.
Time Series in Education: The Analysis of Daily Attendance in Two High Schools
ERIC Educational Resources Information Center
Koopmans, Matthijs
2011-01-01
This presentation discusses the use of a time series approach to the analysis of daily attendance in two urban high schools over the course of one school year (2009-10). After establishing that the series for both schools were stationary, they were examined for moving average processes, autoregression, seasonal dependencies (weekly cycles),…
Qian, Xinyi Lisa; Yarnal, Careen M.; Almeida, David M.
2013-01-01
This study tested the applicability of moderation and mediation models to leisure time as a stress coping resource. Analyzing eight-day diary data (N=2,022), we examined the within-person process of using leisure time to cope with daily stressors. We found that relatively high daily stress frequency, while reducing positive affect, prompted an individual to allocate more time to leisure than usual, which then increased positive affect, thus partially remedying the damage by high daily stress frequency. This within-person process, however, is significantly stronger among those with less leisure time on average than leisure-rich individuals. The findings support a partial counteractive mediation model, demonstrate between-person difference in the within-person coping process, and reveal the importance of positive affect as a coping outcome.
Progress in Operational Analysis of Launch Vehicles in Nonstationary Flight
NASA Technical Reports Server (NTRS)
James, George; Kaouk, Mo; Cao, Timothy
2013-01-01
This paper presents recent results in an ongoing effort to understand and develop techniques to process launch vehicle data, which is extremely challenging for modal parameter identification. The primary source of difficulty is the nonstationary nature of the situation. The system is changing, the environment is not steady, and there is an active control system operating. Hence, the primary tool for producing clean operational results (significant data lengths and data averaging) is not available to the user. The work reported herein uses a correlation-based, two-step operational modal analysis approach to process the relevant data sets for understanding and development of processes. A significant drawback of such processing of short time histories is a series of beating phenomena due to the inability to average out random modal excitations. A recursive correlation process coupled to a new convergence metric (designed to mitigate the beating phenomena) is the object of this study. It has been found in limited studies that this process creates clean modal frequency estimates but numerically alters the damping.
Using A3 thinking to improve the STAT medication process.
Manojlovich, Milisa; Chase, Valerie J; Mack, Megan; Conroy, Meghan K; Belanger, Karen; Zawol, Debbie; Corr, Karen M; Fowler, Karen E; Viglianti, Elizabeth
2014-08-01
Although the term STAT conveys a sense of urgency, it is sometimes used to circumvent a system that may be too slow to accomplish tasks in a timely manner. We describe a quality-improvement project undertaken by a US Department of Veterans Affairs (VA) hospital to improve the STAT medication process. We adapted A3 Thinking, a problem-solving process common in Lean organizations, to our problem. In the discovery phase, a color-coded flow map of the existing process was constructed, and a real-time STAT order was followed in a modified "Go to the Gemba" exercise. In the envisioning phase, the team brainstormed to come up with as many improvement ideas as possible, which were then prioritized based on the anticipated effort and impact. The team then identified initial experiments to be carried out in the experimentation phase; each experiment followed a standard Plan-Do-Study-Act cycle. On average, the number of STAT medications ordered per month decreased by 9.5%. The average time from STAT order entry to administration decreased by 21%, and the time from medication delivery to administration decreased by 26%. Improvements were also made in technician awareness of STAT medications and nurse notification of STAT medication delivery. Adapting A3 Thinking for process improvement was a low-cost/low-tech option for a VA facility. The A3 Thinking process led to a better understanding of the meaning of STAT across disciplines, and promoted a collaborative culture in which other hospital-wide problems may be addressed in the future.
Performance enhancement of various real-time image processing techniques via speculative execution
NASA Astrophysics Data System (ADS)
Younis, Mohamed F.; Sinha, Purnendu; Marlowe, Thomas J.; Stoyenko, Alexander D.
1996-03-01
In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital data, pure data and task parallelism have been used extensively in real-time image processing to accelerate the handling time of image data. These types of parallelism are based on splitting the execution load performed by a single processor across multiple nodes. However, execution of all parallel threads is mandatory for correctness of the algorithm. On the other hand, speculative execution is an optimistic execution of part(s) of the program based on assumptions on program control flow or variable values. Rollback may be required if the assumptions turn out to be invalid. Speculative execution can enhance average, and sometimes worst-case, execution time. In this paper, we target various image processing techniques to investigate applicability of speculative execution. We identify opportunities for safe and profitable speculative execution in image compression, edge detection, morphological filters, and blob recognition.
Influences of rolling method on deformation force in cold roll-beating forming process
NASA Astrophysics Data System (ADS)
Su, Yongxiang; Cui, Fengkui; Liang, Xiaoming; Li, Yan
2018-03-01
In this work, a gear rack was selected as the research object to study the influence of the rolling method on the deformation force. By means of finite element simulation of cold roll-beating forming, the variation of the radial and tangential deformation force was analysed under different rolling methods, both for the complete forming of the racks and for a single roll during the steady state. The results show that with up-beating and down-beating the average radial single-point forces are similar, while the gap between the average tangential single-point forces is relatively large. Additionally, the tangential force during direct beating is large and opposite in direction to that of down-beating. With direct beating, the deformation force loads quickly and unloads slowly; correspondingly, with down-beating, the deformation force loads slowly and unloads quickly.
Preference for facial averageness: Evidence for a common mechanism in human and macaque infants
Damon, Fabrice; Méary, David; Quinn, Paul C.; Lee, Kang; Simpson, Elizabeth A.; Paukner, Annika; Suomi, Stephen J.; Pascalis, Olivier
2017-01-01
Human adults and infants show a preference for average faces, which could stem from a general processing mechanism and may be shared among primates. However, little is known about preference for facial averageness in monkeys. We used a comparative developmental approach and eye-tracking methodology to assess visual attention in human and macaque infants to faces naturally varying in their distance from a prototypical face. In Experiment 1, we examined the preference for faces relatively close to or far from the prototype in 12-month-old human infants with human adult female faces. Infants preferred faces closer to the average than faces farther from it. In Experiment 2, we measured the looking time of 3-month-old rhesus macaques (Macaca mulatta) viewing macaque faces varying in their distance from the prototype. Like human infants, macaque infants looked longer to faces closer to the average. In Experiments 3 and 4, both species were presented with unfamiliar categories of faces (i.e., macaque infants tested with adult macaque faces; human infants and adults tested with infant macaque faces) and showed no prototype preferences, suggesting that the prototypicality effect is experience-dependent. Overall, the findings suggest a common processing mechanism across species, leading to averageness preferences in primates.
Clinical time series prediction: Toward a hierarchical dynamical system framework.
Liu, Zitao; Hauskrecht, Milos
2015-09-01
Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient's condition, the dynamics of a disease, the effect of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments were conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance.
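A minimal sketch of the Gaussian-process building block for irregularly sampled series (the kernel and hyperparameters are illustrative assumptions; the paper learns them and couples multiple GP sequences through a linear dynamical system):

```python
import numpy as np

def gp_interpolate(t_obs, y_obs, t_query, length=5.0, sig=1.0, noise=0.1):
    """Gaussian-process regression for an irregularly sampled series (sketch).

    Squared-exponential kernel; hyperparameters here are illustrative,
    not the values learned in the paper's hierarchical framework.
    """
    def k(a, b):
        return sig**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

    K = k(t_obs, t_obs) + noise**2 * np.eye(len(t_obs))
    alpha = np.linalg.solve(K, y_obs)
    return k(t_query, t_obs) @ alpha     # posterior mean at query times

# Usage: irregular lab measurements interpolated onto a daily grid
t_obs = np.array([0.0, 1.3, 2.1, 5.8, 9.4])
y_obs = np.array([4.1, 4.5, 4.2, 3.6, 3.9])
daily = gp_interpolate(t_obs, y_obs, np.arange(0.0, 10.0))
```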
Weber, Sabrina; Das, Gürbüz; Waldmann, Karl-Heinz; Gauly, Matthias
2014-01-01
Isoflurane anaesthesia combined with an analgesic represents a welfare-friendly method of pain mitigation for the castration of piglets. However, it requires an inhaler device, which is uneconomic for small farms. Sharing a device among farms may be an economical option if the shared use does not increase labour time and the resulting costs. This study aimed to investigate the amount and components of labour time required for piglet castration with isoflurane anaesthesia performed with stationary and shared devices. Piglets (N = 1579) were anaesthetised with isoflurane (using either stationary or shared devices) and castrated. The stationary devices were used in a group (n = 5) of larger farms (84 sows/farm on average), whereas smaller farms (n = 7; 32 sows/farm on average) shared one device. Each farm was visited four times and the labour time for each process step was recorded. The complete process included machine set-up, anaesthesia and castration by a practitioner, and preparation, collection and transport of piglets by a farmer. Labour time for the complete process was higher (P = 0.012) on farms sharing a device (266 s/piglet) than on farms using stationary devices (177 s/piglet), due to increased time for preparation (P = 0.055), castration (P = 0.026) and packing (P = 0.010) when sharing a device. However, the components of the time budget of farms using stationary or shared devices did not differ significantly (P > 0.05). Costs arising from the time spent by farmers did not differ considerably between the use of stationary (0.28 Euro per piglet) and shared (0.26 Euro) devices. It is concluded that costs arising from the increased labour time due to sharing a device can be considered marginal, since the high expense of purchasing an inhaler device is shared among several farms.
NASA Astrophysics Data System (ADS)
Llewellyn-Jones, David; Good, Simon; Corlett, Gary
A PC-based analysis package has been developed for the dual purposes of, firstly, providing a 'quick-look' capability to research workers inspecting long time-series of global satellite datasets of sea-surface temperature (SST); and, secondly, providing an introduction for students, either undergraduates or advanced high-school students, to the characteristics of commonly used analysis techniques for large geophysical datasets from satellites. Students can also gain insight into the behaviour of some basic climate-related large-scale or global processes. The package gives students immediate access to up to 16 years of continuous global SST data, mainly from the Advanced Along-Track Scanning Radiometer, currently flying on ESA's Envisat satellite. The data are presented in the form of monthly averages, spatially averaged onto half-degree or one-sixth-degree longitude-latitude grids. There are simple button-operated facilities for defining and calculating box averages; producing time-series of such averages; defining and displaying transects and their evolution over time; and examining anomalous behaviour by displaying the difference between observed values and values derived from climatological means. By using these facilities a student rapidly gains familiarity with such processes as annual variability and the El Niño effect, as well as major current systems such as the Gulf Stream and other climatically important phenomena. In fact, the student is given immediate insight into the basic methods of examining geophysical data in a research context, without needing to acquire special analysis skills or go through the lengthy data retrieval and preparation procedures which are more generally required as precursors to serious investigation in the research laboratory. This software package, called the Leicester AATSR Global Analyser (LAGA), is written in a well-known and widely used analysis language, and the package can be run using software that is readily available free of charge.
Sticker, Elisabeth; Schmidt, Claudia; Steins, Gisela
2003-01-01
Coping with a chronic illness challenges children and adolescents in addition to their normal developmental tasks. This double challenge probably endangers the development of stable self-esteem. The present investigation explores whether these processes differ with respect to the kind of illness. Chronic illnesses such as obesity and congenital heart defects (CHD) serve as examples in comparing two samples (8-16 years): obesity (N = 54) as a visible and partly controllable illness (with respect to the course of illness) vs. congenital heart disease (N = 56) as an invisible and uncontrollable illness (with respect to the origin and course of illness). Self-esteem was measured by a scale (ALS) which focuses on the public areas "school" and "leisure time" and the private area "family". Children and adolescents with CHD (especially females) display above-average positive self-esteem in all areas. Children and adolescents with obesity mainly display average self-esteem, with females scoring above average for the private area "family" and males scoring below average for the public area "leisure time". Furthermore, leisure-time-related self-esteem is significantly lower for obese than for CHD subjects. These specific relations imply differential emphases for intervention programs.
Phase-resolved and time-averaged puff motions of an excited stack-issued transverse jet
NASA Astrophysics Data System (ADS)
Hsu, C. M.; Huang, R. F.
2013-07-01
The dynamics of puff motions in an excited stack-issued transverse jet were studied experimentally in a wind tunnel. The temporal and spatial evolution processes of the puffs induced by acoustic excitation were examined using the smoke flow visualization method and high-speed particle image velocimetry. The temporal and spatial evolutions of the puffs were examined using phase-resolved ensemble-averaged velocity fields and the velocity, length scales, and vorticity characteristics of the puffs were studied. The time-averaged velocity fields were calculated to analyze the velocity distributions and vorticity contours. The results show that a puff consists of a pair of counter-rotating vortex rings. An initial vortex ring was formed due to a concentration of vorticity at the lee side of the issuing jet at the instant of the mid-oscillation cycle. A vortex ring rotating in the opposite direction to that of the initial vortex ring was subsequently formed at the upwind side of the issuing jet. These two counter-rotating vortex rings formed a "mushroom" vortex pair, which was deflected by the crossflow and traveled downstream along a time-averaged trajectory of zero vorticity. The trajectory was situated far above the time-averaged streamline evolving from the leading edge of the tube. The velocity magnitudes of the vortex rings at the upwind and the lee side decreased with time evolution as the puffs traveled downstream due to momentum dissipation and entrainment effects. The puffs traveling along the trajectory of zero vorticity caused large velocities to appear above the leading-edge streamline.
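A minimal sketch of the phase-resolved ensemble-averaging step (array shapes and bin count are assumptions, and every phase bin is assumed to contain at least one snapshot):

```python
import numpy as np

def phase_average(fields, phases, n_bins=16):
    """Phase-resolved ensemble averaging of PIV velocity fields (sketch).

    fields : (n_snapshots, ny, nx, 2) instantaneous velocity fields
    phases : (n_snapshots,) excitation phase of each snapshot, in [0, 2*pi)
    Returns (n_bins, ny, nx, 2) phase-averaged fields.
    """
    edges = np.linspace(0.0, 2.0 * np.pi, n_bins + 1)
    which = np.digitize(phases, edges) - 1
    out = np.empty((n_bins,) + fields.shape[1:])
    for b in range(n_bins):
        out[b] = fields[which == b].mean(axis=0)   # ensemble average per bin
    return out
```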
Development of Electro-Optical Standard Processes for Application
2011-11-01
Defines the process of...
Stochastic processes on multiple scales: averaging, decimation and beyond
NASA Astrophysics Data System (ADS)
Bo, Stefano; Celani, Antonio
The recent advances in handling microscopic systems are increasingly motivating stochastic modeling in a large number of physical, chemical and biological phenomena. Relevant processes often take place on widely separated time scales. In order to simplify the description, one usually focuses on the slower degrees of freedom and only the average effect of the fast ones is retained. It is then fundamental to eliminate such fast variables in a controlled fashion, carefully accounting for their net effect on the slower dynamics. We shall present how this can be done by either decimating or coarse-graining the fast processes and discuss applications to physical, biological and chemical examples. With the same tools we will address the fate of functionals of the stochastic trajectories (such as residence times, counting statistics, fluxes, entropy production, etc.) upon elimination of the fast variables. In general, for functionals, such elimination can present additional difficulties. In some cases, it is not possible to express them in terms of the effective trajectories on the slow degrees of freedom but additional details of the fast processes must be retained. We will focus on such cases and show how naive procedures can lead to inconsistent results.
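A toy illustration of the averaging step, under assumed dynamics (a slow variable multiplicatively driven by a fast Ornstein-Uhlenbeck process; all parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

# Slow variable x driven by a fast Ornstein-Uhlenbeck process y:
#   dx/dt = -y(t) * x,   dy = -(y - mu)/eps dt + sqrt(2/eps) dW
# Averaging eliminates y: the effective slow dynamics is dx/dt = -mu * x.
dt, n, eps, mu = 1e-4, 200_000, 1e-2, 1.0
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 1.0, mu
for i in range(n - 1):
    y[i + 1] = y[i] - (y[i] - mu) / eps * dt \
               + np.sqrt(2 * dt / eps) * rng.standard_normal()
    x[i + 1] = x[i] - y[i] * x[i] * dt

t = dt * np.arange(n)
x_eff = np.exp(-mu * t)              # averaged (effective) dynamics
print(np.max(np.abs(x - x_eff)))     # small when eps << 1
```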
Cost analysis of a project to digitize classic articles in neurosurgery.
Bauer, Kathleen
2002-04-01
In summer 2000, the Cushing/Whitney Medical Library at Yale University began a demonstration project to digitize classic articles in neurosurgery from the late 1800s and early 1900s. The objective of the first phase of the project was to measure the time and costs involved in digitization, and those results are reported here. In the second phase, metadata will be added to the digitized articles, and the project will be publicized. Thirteen articles were scanned using optical character recognition (OCR) software, and the resulting text files were carefully proofread. Time for photocopying, scanning, and proofreading were recorded. This project achieved an average cost per item (total pages plus images) of $4.12, a figure at the high end of average costs found in other studies. This project experienced high costs for two reasons. First, the articles contained many images, which required extra processing. Second, the older fonts and the poor condition of many of these articles complicated the OCR process. The average article cost $84.46 to digitize. Although costs were high, the selection of historically important articles maximized the benefit gained from the investment in digitization.
Fluorescence correlation spectroscopy: the case of subdiffusion.
Lubelski, Ariel; Klafter, Joseph
2009-03-18
The theory of fluorescence correlation spectroscopy is revisited here for the case of subdiffusing molecules. Subdiffusion is assumed to stem from a continuous-time random walk process with a fat-tailed distribution of waiting times and can therefore be formulated in terms of a fractional diffusion equation (FDE). The FDE plays the central role in developing the fluorescence correlation spectroscopy expressions, analogous to the role played by the simple diffusion equation for regular systems. Due to the nonstationary nature of the continuous-time random walk/FDE, some interesting properties emerge that are amenable to experimental verification and may help in discriminating among subdiffusion mechanisms. In particular, the current approach predicts (1) a strong dependence of correlation functions on the initial time (aging); (2) sensitivity of correlation functions to the averaging procedure, ensemble versus time averaging (ergodicity breaking); and (3) that the basic mean-squared displacement observable depends on how the mean is taken.
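For concreteness, the standard CTRW/FDE relations behind this approach read (standard notation, assumed rather than quoted from the paper):

```latex
\frac{\partial P(\mathbf{r},t)}{\partial t}
  = {}_{0}D_{t}^{1-\alpha}\, K_{\alpha}\,\nabla^{2} P(\mathbf{r},t),
\qquad
\langle r^{2}(t)\rangle = \frac{2 d\, K_{\alpha}}{\Gamma(1+\alpha)}\, t^{\alpha},
\quad 0<\alpha<1,
```

where ${}_{0}D_{t}^{1-\alpha}$ is the Riemann-Liouville fractional derivative, $K_\alpha$ the generalized diffusion coefficient, $d$ the spatial dimension, and the fat-tailed waiting-time density scales as $\psi(t)\sim t^{-1-\alpha}$.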
Experimental test of airplane boarding methods
Steffen, Jason H.; Hotchkiss, Jon
2011-10-26
We report the results of an experimental comparison of different airplane boarding methods. This test was conducted in a mock 757 fuselage, located on a Southern California soundstage, with 12 rows of six seats and a single aisle. Five methods were tested using 72 passengers of various ages. We found a significant reduction in the boarding times of optimized methods over traditional methods. These improved methods, if properly implemented, could result in significant savings to airline companies. The process of boarding an airplane is of interest to a variety of groups. The public is interested both as a curiosity, as it is something that they may regularly experience, and as consumers, as their experiences, good or bad, can affect their loyalties. Airline companies and their employees also have a stake in an efficient boarding procedure, as time saved in the boarding process may result in monetary savings, in the quality of interactions with passengers, and in the application of human resources to the general process of preparing an airplane for departure. A recent study (Nyquist and McFadden, 2008) indicates that the average cost to an airline company for each minute of time spent at the terminal is roughly $30. Thus, each minute saved in the turn-around time of a flight has the potential to generate over $16,000,000 in annual savings (assuming an average of 1500 flights per day). While the boarding process may not be the primary source of delay in returning an airplane to the skies, reducing the boarding time may effectively eliminate passenger boarding as a contributor in any meaningful measure. Consequently, subsequent efforts to streamline the other necessary tasks, such as refueling and maintenance, would be rewarded with a material reduction in time at the gate for each flight.
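A quick arithmetic check of the quoted savings, using the study's own figures of $30 per minute and 1500 flights per day:

```latex
\$30\,\text{min}^{-1} \times 1500\,\text{flights/day} \times 365\,\text{days/yr}
  \approx \$16{,}425{,}000\ \text{per year},
```

consistent with the "over $16,000,000" figure quoted above.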
Transmembrane protein CD93 diffuses by a continuous time random walk.
NASA Astrophysics Data System (ADS)
Goiko, Maria; de Bruyn, John; Heit, Bryan
Molecular motion within the cell membrane is a poorly-defined process. In this study, we characterized the diffusion of the transmembrane protein CD93. By careful analysis of the dependence of the ensemble-averaged mean squared displacement (EA-MSD, ⟨r²⟩) on time t and of the ensemble-averaged, time-averaged MSD (EA-TAMSD, ⟨δ²⟩) on lag time τ and total measurement time T, we showed that the motion of CD93 is well described by a continuous-time random walk (CTRW). CD93 tracks were acquired using single particle tracking. The tracks were classified as confined or free, and the behavior of the MSD analyzed. EA-MSDs of both populations grew non-linearly with t, indicative of anomalous diffusion. Their EA-TAMSDs were found to depend on both τ and T, indicating non-ergodicity. Free molecules had ⟨r²⟩ ∝ t^α and ⟨δ²⟩ ∝ τ/T^(1-α), with α ≈ 0.5, consistent with a CTRW. Mean maximal excursion analysis supported this result. Confined CD93 had ⟨r²⟩ ∝ t⁰ and ⟨δ²⟩ ∝ (τ/T)^α, with α ≈ 0.3, consistent with a confined CTRW. CTRWs are described by a series of random jumps interspersed with power-law distributed waiting times, and may arise due to the interactions of CD93 with the endocytic machinery. NSERC.
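A minimal sketch of the time-averaged MSD computation underlying this analysis (uniform frame intervals and 2-D tracks are assumptions):

```python
import numpy as np

def ta_msd(track, max_lag):
    """Time-averaged MSD of a single track (sketch).

    track : (n_steps, 2) array of x, y positions at uniform frame intervals
    Implements delta^2(tau) = <(r(t+tau) - r(t))^2>_t for each lag tau.
    """
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        d = track[lag:] - track[:-lag]
        out[lag - 1] = np.mean(np.sum(d**2, axis=1))
    return out

def ea_tamsd(tracks, max_lag):
    """Ensemble average of the time-averaged MSDs over many tracks."""
    return np.mean([ta_msd(tr, max_lag) for tr in tracks], axis=0)
```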
Kusev, Petko; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Juliusson, Asgeir; Chater, Nick
2018-01-01
When attempting to predict future events, people commonly rely on historical data. One psychological characteristic of judgmental forecasting of time series, established by research, is that when people make forecasts from series, they tend to underestimate future values for upward trends and overestimate them for downward ones, so-called trend-damping (modeled by anchoring on, and insufficient adjustment from, the average of recent time series values). Events in a time series can be experienced sequentially (dynamic mode), or they can be viewed retrospectively and simultaneously (static mode), rather than experienced individually in real time. In one experiment, we studied the influence of presentation mode (dynamic and static) on two sorts of judgment: (a) predictions of the next event (forecast) and (b) estimation of the average value of all the events in the presented series (average estimation). Participants' responses in dynamic mode were anchored on more recent events than in static mode for all types of judgment, but with different consequences; dynamic presentation improved prediction accuracy but not estimation. These results are not anticipated by existing theoretical accounts; we develop and present an agent-based model, the adaptive anchoring model (ADAM), to account for the difference between processing sequences of dynamically and statically presented stimuli (visually presented data). ADAM captures how variation in presentation mode produces variation in responses (and the accuracy of these responses) in both forecasting and judgment tasks. ADAM's predictions for the forecasting and judgment tasks fit the response data better than a linear-regression time series model. Moreover, ADAM outperformed autoregressive-integrated-moving-average (ARIMA) and exponential-smoothing models, while neither of these models accounts for people's responses on the average estimation task.
Resnick, Cory M; Daniels, Kimberly M; Flath-Sporn, Susan J; Doyle, Michael; Heald, Ronald; Padwa, Bonnie L
2016-11-01
To determine the effects on time, cost, and complication rates of integrating physician assistants (PAs) into the procedural components of an outpatient oral and maxillofacial surgery practice. This is a prospective cohort study of patients from the Department of Plastic and Oral Surgery at Boston Children's Hospital who underwent removal of four impacted third molars with intravenous sedation in our outpatient facility. Patients were separated into a "no PA" group and a "PA" group. Process maps were created to capture all activities from room preparation to patient discharge, and all activities were timed for each case. A time-driven activity-based costing method was used to calculate the average times and costs from the provider's perspective for each group. Complication rates were calculated during the study periods for both groups. Descriptive statistics were calculated, and significance was set at P < .05. The total process time did not differ significantly between groups, but the average total procedure cost decreased by $75.08 after the introduction of PAs (P < .001). The time that the oral and maxillofacial surgeon was directly involved in the procedure decreased by an average of 19.2 minutes after the introduction of PAs (P < .001). No significant differences in postoperative complications were found. The addition of PAs to the procedural components of an outpatient oral and maxillofacial surgery practice resulted in decreased costs while complication rates remained constant. The increased availability of the oral and maxillofacial surgeon after the incorporation of PAs allows for more patients to be seen during a clinic session, which has the potential to further increase efficiency and revenue.
NASA Technical Reports Server (NTRS)
Greenwood, Eric II; Schmitz, Fredric H.
2009-01-01
A new method of separating the contributions of helicopter main and tail rotor noise sources is presented, making use of ground-based acoustic measurements. The method employs time-domain de-Dopplerization to transform the acoustic pressure time-history data collected from an array of ground-based microphones to the equivalent time-history signals observed by an array of virtual inflight microphones traveling with the helicopter. The now-stationary signals observed by the virtual microphones are then periodically averaged with the main and tail rotor once per revolution triggers. The averaging process suppresses noise which is not periodic with the respective rotor, allowing for the separation of main and tail rotor pressure time-histories. The averaged measurements are then interpolated across the range of directivity angles captured by the microphone array in order to generate separate acoustic hemispheres for the main and tail rotor noise sources. The new method is successfully applied to ground-based microphone measurements of a Bell 206B3 helicopter and demonstrates the strong directivity characteristics of harmonic noise radiation from both the main and tail rotors of that helicopter.
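The core of the separation step is a periodic (synchronous) average keyed to each rotor's once-per-revolution trigger. The following is a minimal numpy sketch of that step, assuming the pressure signal has already been de-Dopplerized; the function name, the synthetic rotor rates, and the bin count are illustrative, not taken from the paper.

```python
import numpy as np

def synchronous_average(pressure, trigger_idx, n_bins=256):
    """Average a pressure time history over rotor revolutions.

    pressure    : 1-D acoustic pressure samples (already de-Dopplerized).
    trigger_idx : sample indices of once-per-revolution pulses.
    n_bins      : number of azimuthal bins per revolution.

    Each revolution is resampled onto a common azimuth grid and the
    revolutions are averaged, suppressing content that is not periodic
    with this rotor (e.g., the other rotor and broadband noise).
    """
    revs = []
    for start, stop in zip(trigger_idx[:-1], trigger_idx[1:]):
        seg = pressure[start:stop]
        phase = np.linspace(0.0, 1.0, seg.size, endpoint=False)
        grid = np.linspace(0.0, 1.0, n_bins, endpoint=False)
        revs.append(np.interp(grid, phase, seg))
    return np.mean(revs, axis=0)

# Example: synthetic mixture of a main-rotor (25 Hz) and tail-rotor (120 Hz) tone.
fs, T = 25_600, 4.0
t = np.arange(0, T, 1 / fs)
p = (np.sin(2 * np.pi * 25 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
     + 0.3 * np.random.randn(t.size))
main_triggers = np.arange(0, t.size, fs // 25)   # one pulse per main-rotor rev
p_main = synchronous_average(p, main_triggers)   # tail-rotor tone averages out
```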
Reengineering a cardiovascular surgery service.
Tunick, P A; Etkin, S; Horrocks, A; Jeglinski, G; Kelly, J; Sutton, P
1997-04-01
Reengineering, involving the radical redesign of business processes, has been used successfully in a variety of health care settings. In 1994 New York University (NYU) Medical Center (MC) launched its first reengineering team, whose purpose was to redesign the entire process of caring for patients, from referral to discharge, on the cardiovascular (CV) surgery service. REENGINEERING TEAM: The multidisciplinary CV Surgery Reengineering Team was charged with two goals: improving customer (patient, family, and referring physician) satisfaction and improving profitability. The methodology to be used was based on a reengineering philosophy: discarding basic assumptions and designing the patient care process from the ground up. THE TRANSFER-IN INITIATIVE: A survey of NYU cardiologists, distributed in April 1994, suggested that the organization was considered a difficult place to transfer patients to. The team's recommendations led to a new, streamlined transfer-in policy. The average waiting time between a referring physician's request for a patient transfer and an NYUMC physician's acceptance of the transfer decreased from 9 hours under the old system to immediate acceptance. Three customer satisfaction task forces implemented multiple programs to make the service more user friendly. In addition, referrals increased and length of stay decreased, without an adverse impact on the mortality rate. For the first time at NYUMC, a multidisciplinary team was given the mandate to achieve major changes in an entire patient care process. Similar projects are now underway.
Effects of computerized prescriber order entry on pharmacy order-processing time.
Wietholter, Jon; Sitterson, Susan; Allison, Steven
2009-08-01
The effect of computerized prescriber order entry (CPOE) on medication order-processing time was evaluated. This study was conducted at a 761-bed, tertiary care hospital. A total of 2988 medication orders were collected and analyzed before (n = 1488) and after CPOE implementation (n = 1500). Data analyzed included the time the prescriber ordered the medication, the time the pharmacy received the order, and the time the order was completed by a pharmacist. The mean order-processing time before CPOE implementation was 115 minutes from prescriber composition to pharmacist verification. After CPOE implementation, the mean order-processing time was reduced to 3 minutes (p < 0.0001). The time from when an order was received by the pharmacy to when it was verified by a pharmacist was reduced from 31 minutes before CPOE implementation to 3 minutes afterward (p < 0.0001). The implementation of CPOE reduced the order-processing time (from order composition to verification) by 97%. Additionally, pharmacy-specific order-processing time (from order receipt in the pharmacy to pharmacist verification) was reduced by 90%. This reduction in order-processing time improves patient care by shortening the interval between physician prescribing and medication availability and may allow pharmacists to explore opportunities for enhanced clinical activities that will further positively impact patient care. CPOE implementation reduced the mean pharmacy order-processing time from composition to verification by 97%. After CPOE implementation, a new medication order was verified as appropriate by a pharmacist in three minutes, on average.
Moran, John L; Solomon, Patricia J
2013-05-24
Statistical process control (SPC), an initiative originating in industry, has recently been applied in health care and public health surveillance. SPC methods assume independent observations, and process autocorrelation has been associated with an increased false-alarm frequency. Monthly mean raw mortality (at hospital discharge) time series, 1995-2009, at the individual intensive care unit (ICU) level were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for (i) series autocorrelation and seasonality was demonstrated using (partial) autocorrelation ((P)ACF) function displays and classical series decomposition, and (ii) "in-control" status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random-coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Time-series methods were applied to an exemplar complete ICU series (1995 to end-2009) via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((generalised) autoregressive conditional heteroscedasticity) models, the latter addressing volatility of the series variance. The overall data set, 1995-2009, consisted of 491,324 records from 137 ICU sites; average raw mortality was 14.07%; average (SD) raw and expected mortalities ranged from 0.012 (0.113) and 0.013 (0.045) to 0.296 (0.457) and 0.278 (0.247), respectively. For the raw mortality series, 71 sites had continuous data for assessment up to or beyond lag 40, and 35% had autocorrelation through to lag 40; of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model with GARCH effects displayed white-noise residuals, which were in-control with respect to EWMA control limits and one-step prediction error limits (3 SE). The expected series was modelled with a multiplicative seasonal autoregressive model. The data-generating process of the monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time-series approach using residual control charts resolved these issues.
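The residual-chart idea can be sketched in a few lines: fit a seasonal ARMA model to the monthly series, then apply an EWMA chart with 3-sigma limits to the (approximately white) residuals rather than to the autocorrelated raw series. This is a hedged sketch on synthetic data with illustrative model orders; it omits the paper's risk-adjustment and GARCH stages.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def ewma_chart(x, lam=0.2, L=3.0):
    """EWMA statistic and time-varying L-sigma control limits for series x."""
    mu, sigma = x.mean(), x.std(ddof=1)
    z = np.empty(x.size)
    z[0] = lam * x[0] + (1 - lam) * mu
    for t in range(1, x.size):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    t = np.arange(1, x.size + 1)
    half = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    return z, mu - half, mu + half

# Synthetic stand-in for one ICU's monthly mortality proportions.
rng = np.random.default_rng(1)
months = np.arange(180)
y = 0.14 + 0.02 * np.sin(2 * np.pi * months / 12) + 0.01 * rng.standard_normal(180)

# Seasonal ARMA fit; charting the residuals avoids false alarms driven by
# autocorrelation and seasonality in the raw series.
res = ARIMA(y, order=(1, 0, 1), seasonal_order=(1, 0, 0, 12)).fit()
z, lcl, ucl = ewma_chart(res.resid)
print("out-of-control points:", np.flatnonzero((z < lcl) | (z > ucl)))
```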
Topology-dependent density optima for efficient simultaneous network exploration
NASA Astrophysics Data System (ADS)
Wilson, Daniel B.; Baker, Ruth E.; Woodhouse, Francis G.
2018-06-01
A random search process in a networked environment is governed by the time it takes to visit every node, termed the cover time. Often, a networked process does not proceed in isolation but competes with many instances of itself within the same environment. A key unanswered question is how to optimize this process: How many concurrent searchers can a topology support before the benefits of parallelism are outweighed by competition for space? Here, we introduce the searcher-averaged parallel cover time (APCT) to quantify these economies of scale. We show that the APCT of the networked symmetric exclusion process is optimized at a searcher density that is well predicted by the spectral gap. Furthermore, we find that nonequilibrium processes, realized through the addition of bias, can support significantly increased density optima. Our results suggest alternative hybrid strategies of serial and parallel search for efficient information gathering in social interaction and biological transport networks.
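A back-of-the-envelope way to see the density optimum is to simulate the symmetric exclusion process on a small graph and record when the union of visited nodes covers it. The sketch below uses a ring for simplicity and reports total searcher-sweeps to cover, a per-team effort proxy for the economies of scale the APCT quantifies; the paper's exact APCT definition, graph ensembles, and biased (nonequilibrium) variants are not reproduced here.

```python
import numpy as np

def parallel_cover_time(n_nodes, n_searchers, rng):
    """Cover time (in sweeps) of a symmetric exclusion process on a ring.

    Random sequential update: each sweep makes n_searchers hop attempts;
    a hop onto an occupied node is rejected (the exclusion interaction)."""
    pos = rng.choice(n_nodes, size=n_searchers, replace=False)
    occupied = np.zeros(n_nodes, dtype=bool)
    occupied[pos] = True
    visited = occupied.copy()
    sweeps = 0
    while not visited.all():
        sweeps += 1
        for _ in range(n_searchers):
            i = rng.integers(n_searchers)
            target = (pos[i] + rng.choice((-1, 1))) % n_nodes
            if not occupied[target]:          # exclusion: reject if occupied
                occupied[pos[i]] = False
                pos[i] = target
                occupied[target] = True
                visited[target] = True
    return sweeps

rng = np.random.default_rng(0)
for k in (1, 2, 4, 10, 20, 30):
    t = np.mean([parallel_cover_time(40, k, rng) for _ in range(100)])
    print(f"density {k/40:4.2f}: cover time {t:8.1f} sweeps, "
          f"searcher-sweeps {k * t:10.1f}")
```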
Circadian phase and its relationship to nighttime sleep in toddlers.
LeBourgeois, Monique K; Carskadon, Mary A; Akacem, Lameese D; Simpkin, Charles T; Wright, Kenneth P; Achermann, Peter; Jenni, Oskar G
2013-10-01
Circadian phase and its relation to sleep are increasingly recognized as fundamental factors influencing human physiology and behavior. Dim light melatonin onset (DLMO) is a reliable marker of the timing of the circadian clock, which has been used in experimental, clinical, and descriptive studies in the past few decades. Although DLMO and its relationship to sleep have been well documented in school-aged children, adolescents, and adults, very little is known about these processes in early childhood. The purpose of this study was 1) to describe circadian phase and phase angles of entrainment in toddlers and 2) to examine associations between DLMO and actigraphic measures of children's nighttime sleep. Participants were 45 healthy toddlers aged 30 to 36 months (33.5 ± 2.2 months; 21 females). After sleeping on a parent-selected schedule for 5 days (assessed with actigraphy and diaries), children participated in an in-home DLMO assessment involving the collection of saliva samples every 30 minutes for 6 hours. Average bedtime was 2015 ± 0036 h, average sleep onset time was 2043 ± 0043 h, average midsleep time was 0143 ± 0038 h, and average wake time was 0644 ± 0042 h. Average DLMO was 1929 ± 0051 h, with a 3.5-hour range. DLMO was normally distributed; however, the distribution of the bedtime, sleep onset time, and midsleep phase angles of entrainment were skewed. On average, DLMO occurred 47.8 ± 47.6 minutes (median = 39.4 minutes) before bedtime, 74.6 ± 48.0 minutes (median = 65.4 minutes) before sleep onset time, 6.2 ± 0.7 hours (median = 6.1 hours) before midsleep time, and 11.3 ± 0.7 hours before wake time. Toddlers with later DLMOs had later bedtimes (r = 0.46), sleep onset times (r = 0.51), midsleep times (r = 0.66), and wake times (r = 0.65) (all p < 0.001). Interindividual differences in toddlers' circadian phase are large and associated with their sleep timing. The early DLMOs of toddlers indicate a maturational delay in the circadian timing system between early childhood and adolescence. These findings are a first step in describing the fundamental properties of the circadian system in toddlers and have important implications for understanding the emergence of sleep problems and the consequences of circadian misalignment in early childhood.
Time-Lapse Acoustic Impedance Inversion in CO2 Sequestration Study (Weyburn Field, Canada)
NASA Astrophysics Data System (ADS)
Wang, Y.; Morozov, I. B.
2016-12-01
Acoustic-impedance (AI) pseudo-logs are useful for characterising subtle variations of fluid content during seismic monitoring of reservoirs undergoing enhanced oil recovery and/or geologic CO2 sequestration. However, highly accurate AI images are required for time-lapse analysis, which may be difficult to achieve with conventional inversion approaches. In this study, two enhancements of time-lapse AI analysis are proposed. First, a well-known uncertainty of AI inversion is caused by the lack of low-frequency signal in reflection seismic data. To resolve this difficulty, we utilize an integrated AI inversion approach combining seismic data, acoustic well logs, and seismic-processing velocities. The use of well logs helps stabilize the recursive AI inverse, and seismic-processing velocities are used to complement the low-frequency information in seismic records. To derive the low-frequency AI from seismic-processing velocity data, an empirical relation is determined by using the available acoustic logs. This method is simple and does not require subjective choices of parameters and regularization schemes as in the more sophisticated joint inversion methods. The second improvement to accurate time-lapse AI imaging consists in time-variant calibration of reflectivity. Calibration corrections consist of time shifts, amplitude corrections, spectral shaping, and phase rotations. Following the calibration, average and differential reflection amplitudes are calculated, from which the average and differential AI are obtained. The approaches are applied to a time-lapse 3-D 3-C dataset from the Weyburn CO2 sequestration project in southern Saskatchewan, Canada. High-quality time-lapse AI volumes are obtained. Comparisons with traditional recursive and colored AI inversions (obtained without using seismic-processing velocities) show that the new method gives a better representation of spatial AI variations. Although only early stages of monitoring seismic data are available, time-lapse AI variations mapped within and near the reservoir zone suggest correlations with CO2 injection. By extending this procedure to elastic impedances, additional constraints on the variations of physical properties within the reservoir can be obtained.
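The recursive part of the inversion follows from the normal-incidence reflectivity relation r_i = (AI_{i+1} - AI_i)/(AI_{i+1} + AI_i). A minimal numpy sketch of the recursion and of a simple low-frequency merge is given below; the smoothing-based merge and the function names are illustrative stand-ins for the paper's integrated approach, which derives the background AI from seismic-processing velocities via an empirical velocity-impedance relation.

```python
import numpy as np

def recursive_ai(r, ai0):
    """Recursive AI from a reflectivity series r (|r| < 1 assumed):
    r_i = (AI_{i+1} - AI_i) / (AI_{i+1} + AI_i)."""
    ai = np.empty(r.size + 1)
    ai[0] = ai0
    for i in range(r.size):
        ai[i + 1] = ai[i] * (1.0 + r[i]) / (1.0 - r[i])
    return ai

def merge_low_frequency(ai_seismic, ai_background, n_smooth=51):
    """Swap the unreliable low-frequency content of the recursive inverse
    for a background trend, e.g. AI predicted from seismic-processing
    velocities through an empirical velocity-impedance relation."""
    kernel = np.ones(n_smooth) / n_smooth
    low_seis = np.convolve(ai_seismic, kernel, mode="same")
    low_back = np.convolve(ai_background, kernel, mode="same")
    return ai_seismic - low_seis + low_back
```

In practice the background curve would come from calibrated velocity data; any slowly varying AI trend of the correct units can be substituted in this sketch.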
40 CFR 63.8005 - What requirements apply to my process vessels?
Code of Federal Regulations, 2012 CFR
2012-07-01
... temperature, as required by § 63.1257(d)(3)(iii)(B), you may elect to measure the liquid temperature in the... the daily averages specified in § 63.998(b)(3). An operating block is a period of time that is equal to the time from the beginning to end of an emission episode or sequence of emission episodes. (g...
Energy from vascular plant wastewater treatment systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolverton, B.C.; McDonald, R.C.
1981-04-01
Water hyacinth (Eichhornia crassipes), duckweed (Spirodela sp. and Lemna sp.), water pennywort (Hydrocotyle ranunculoides), and kudzu (Pueraria lobata) were anaerobically fermented using an anaerobic filter technique that reduced the total digestion time from 90 d to an average of 23 d and produced 0.14 to 0.22 m³ CH₄/kg (dry weight) (2.3 to 3.6 ft³/lb) from mature filters for the 3 aquatic species. Kudzu required an average digestion time of 33 d and produced an average of 0.21 m³ CH₄/kg (dry weight) (3.4 ft³/lb). The anaerobic filter provided a large surface area for the anaerobic bacteria to establish and maintain an optimal balance of facultative, acid-forming, and methane-producing bacteria. Consequently, the efficiency of the process was greatly improved over prior batch fermentations.
Adaptive pitch control for variable speed wind turbines
Johnson, Kathryn E [Boulder, CO; Fingersh, Lee Jay [Westminster, CO
2012-05-08
An adaptive method for adjusting blade pitch angle, and controllers implementing such a method, for achieving higher power coefficients. Average power coefficients are determined for first and second periods of operation for the wind turbine. When the average power coefficient for the second time period is larger than for the first, a pitch increment, which may be generated based on the power coefficients, is added (or the sign is retained) to the nominal pitch angle value for the wind turbine. When the average power coefficient for the second time period is less than for the first, the pitch increment is subtracted (or the sign is changed). A control signal is generated based on the adapted pitch angle value and sent to blade pitch actuators that act to change the pitch angle of the wind turbine to the new or modified pitch angle setting, and this process is iteratively performed.
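The claimed update rule is a simple sign-retaining/sign-reversing hill climb on the measured power coefficient. A minimal Python sketch follows, with illustrative names and no claim to match the patent's exact signal flow.

```python
def adapt_pitch(theta, signed_step, cp_prev, cp_curr):
    """One iteration of the adaptive pitch update.

    theta       : current nominal blade pitch angle (deg)
    signed_step : pitch increment applied in the previous iteration (deg, signed)
    cp_prev     : average power coefficient over the earlier period
    cp_curr     : average power coefficient over the most recent period

    If the last step improved the average power coefficient, keep its
    sign; otherwise reverse direction. Returns the new pitch setting and
    the signed step to carry into the next iteration."""
    step = signed_step if cp_curr > cp_prev else -signed_step
    return theta + step, step
```

Each call consumes the average power coefficients measured over two consecutive operating periods; the returned pitch setting would then be sent to the blade pitch actuators, and the loop repeats.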
Decreased mortality associated with prompt Gram staining of blood cultures.
Barenfanger, Joan; Graham, Donald R; Kolluri, Lavanya; Sangwan, Gaurav; Lawhorn, Jerry; Drake, Cheryl A; Verhulst, Steven J; Peterson, Ryan; Moja, Lauren B; Ertmoed, Matthew M; Moja, Ashley B; Shevlin, Douglas W; Vautrain, Robert; Callahan, Charles D
2008-12-01
Gram stains of positive blood cultures are the most important factor influencing appropriate therapy. The sooner appropriate therapy is initiated, the better. Therefore, it is reasonable to expect that the sooner Gram stains are performed, the better. To determine the value of timely Gram stains and whether improvement in Gram stain turnaround time (TAT) is feasible, we compared data for matched pairs of patients with cultures processed promptly (<1 hour TAT) with data for patients with cultures not processed promptly (≥1 hour TAT) and then monitored TAT by control charting. In 99 matched pairs, the average difference in time to detection of positive blood cultures within a pair of patients was less than 0.1 hour. For the <1 hour TAT group, the average TAT and crude mortality were 0.1 hour and 10.1%, respectively; for the ≥1 hour TAT group, they were 3.3 hours and 19.2%, respectively (P < .0001 and P = .0389, respectively). After multifaceted efforts, we achieved significant improvement in the TAT for Gram stains.
Average Throughput Performance of Myopic Policy in Energy Harvesting Wireless Sensor Networks.
Gul, Omer Melih; Demirekler, Mubeccel
2017-09-26
This paper considers a single-hop wireless sensor network where a fusion center collects data from M energy harvesting wireless sensors. The harvested energy is stored losslessly in an infinite-capacity battery at each sensor. In each time slot, the fusion center schedules K sensors for data transmission over K orthogonal channels. The fusion center does not have direct knowledge on the battery states of sensors, or the statistics of their energy harvesting processes. The fusion center only has information of the outcomes of previous transmission attempts. It is assumed that the sensors are data backlogged, there is no battery leakage and the communication is error-free. An energy harvesting sensor can transmit data to the fusion center whenever being scheduled only if it has enough energy for data transmission. We investigate average throughput of Round-Robin type myopic policy both analytically and numerically under an average reward (throughput) criterion. We show that Round-Robin type myopic policy achieves optimality for some class of energy harvesting processes although it is suboptimal for a broad class of energy harvesting processes.
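A minimal simulation of the Round-Robin myopic policy under one illustrative energy-harvesting process (i.i.d. Bernoulli arrivals) is sketched below; the parameter values and the Bernoulli assumption are ours, not the paper's general setting.

```python
import numpy as np

def simulate_round_robin(M=10, K=3, p_harvest=0.3, slots=100_000, seed=0):
    """Average throughput of a Round-Robin myopic schedule.

    Each slot: every sensor harvests one energy unit w.p. p_harvest
    (stored losslessly, infinite battery); the fusion center schedules
    the next K sensors in cyclic order, and a scheduled sensor transmits
    iff its battery holds at least one unit (error-free channel)."""
    rng = np.random.default_rng(seed)
    battery = np.zeros(M)
    head, success = 0, 0
    for _ in range(slots):
        battery += rng.random(M) < p_harvest
        for s in (head + np.arange(K)) % M:
            if battery[s] >= 1.0:
                battery[s] -= 1.0
                success += 1
        head = (head + K) % M
    return success / slots

print("throughput:", simulate_round_robin())   # bounded by min(K, M * p_harvest)
```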
Apparent multifractality of self-similar Lévy processes
NASA Astrophysics Data System (ADS)
Zamparo, Marco
2017-07-01
Scaling properties of time series are usually studied in terms of the scaling laws of empirical moments, which are the time-average estimates of moments of the dynamic variable. Nonlinearities in the scaling function of empirical moments are generally regarded as a sign of multifractality in the data. We show that, except for the Brownian motion, this method fails to disclose the correct monofractal nature of self-similar Lévy processes. We prove that for this class of processes it produces apparent multifractality, characterised by a piecewise-linear scaling function with two different regimes which match at the stability index of the considered process. This result is motivated by previous numerical evidence. It is obtained by introducing an appropriate stochastic normalisation which is able to cure empirical moments, without hiding their dependence on time, when the moments they aim to estimate do not exist.
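The effect is easy to reproduce numerically: estimate time-averaged empirical moments of an alpha-stable Lévy flight over a range of lags and fit their scaling exponents. A hedged sketch (the sample size, lags, and moment orders are arbitrary choices):

```python
import numpy as np
from scipy.stats import levy_stable

alpha = 1.5
rng = np.random.default_rng(2)
steps = levy_stable.rvs(alpha, 0.0, size=2**14, random_state=rng)
x = np.cumsum(steps)                     # self-similar Levy flight, H = 1/alpha

lags = np.unique(np.logspace(0, 3, 15).astype(int))

def empirical_moment(x, lag, q):
    """Time-average estimate of the q-th absolute increment moment."""
    return np.mean(np.abs(x[lag:] - x[:-lag]) ** q)

for q in (0.5, 1.0, 1.5, 2.0, 3.0):
    m = [empirical_moment(x, lag, q) for lag in lags]
    zeta = np.polyfit(np.log(lags), np.log(m), 1)[0]
    # A monofractal would give zeta(q) = q*H for all q; the estimate
    # instead bends near q = alpha (apparent multifractality).
    print(f"q = {q:3.1f}:  zeta_hat = {zeta:5.2f},  q*H = {q / alpha:5.2f}")
```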
NASA Astrophysics Data System (ADS)
Patil, Vishal; Liburdy, James
2012-11-01
Turbulent porous media flows are encountered in catalytic bed reactors and heat exchangers. Dispersion and mixing properties of these flows play an essential role in efficiency and performance. In an effort to understand these flows, pore scale time resolved PIV measurements in a refractive index matched porous bed were made. Pore Reynolds numbers, based on hydraulic diameter and pore average velocity, were varied from 400-4000. Jet-like flows and recirculation regions associated with large scale structures were found to exist. Coherent vortical structures which convect at approximately 0.8 times the pore average velocity were identified. These different flow regions exhibited different turbulent characteristics and hence contributed unequally to global transport properties of the bed. The heterogeneity present within a pore and also from pore to pore can be accounted for in estimating transport properties using the method of volume averaging. Eddy viscosity maps and mean velocity field maps, both obtained from PIV measurements, along with the method of volume averaging were used to predict the dispersion tensor versus Reynolds number. Asymptotic values of dispersion compare well to existing correlations. The role of molecular diffusion was explored by varying the Schmidt number and molecular diffusion was found to play an important role in tracer transport, especially in recirculation regions. Funding by NSF grant 0933857, Particulate and Multiphase Processing.
Yu, Xi; Raney, Talia; Perdue, Meaghan V; Zuk, Jennifer; Ozernov-Palchik, Ola; Becker, Bryce L C; Raschle, Nora M; Gaab, Nadine
2018-05-01
Numerous studies have shown that phonological skills are critical for successful reading acquisition. However, how the brain network supporting phonological processing evolves and how it supports the initial course of learning to read is largely unknown. Here, for the first time, we characterized the emergence of the phonological network in 28 children over three stages (prereading, beginning reading, and emergent reading) longitudinally. Across these three time points, decreases in neural activation in the left inferior parietal cortex (LIPC) were observed during an audiovisual phonological processing task, suggesting a specialization process in response to reading instruction/experience. Furthermore, using the LIPC as the seed, a functional network consisting of the left inferior frontal, left posterior occipitotemporal, and right angular gyri was identified. The connection strength in this network co-developed with the growth of phonological skills. Moreover, children with above-average gains in phonological processing showed a significant developmental increase in connection strength in this network longitudinally, while children with below-average gains in phonological processing exhibited the opposite trajectory. Finally, the connection strength between the LIPC and the left posterior occipitotemporal cortex at the prereading level significantly predicted reading performance at the emergent reading stage. Our findings highlight the importance of the early emerging phonological network for reading development, providing direct evidence for the Interactive Specialization Theory and neurodevelopmental models of reading. © 2018 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Elkatlawy, Saeid; Gomariz, María.; Soto-Sánchez, Cristina; Martínez Navarrete, Gema; Fernández, Eduardo; Fimia, Antonio
2014-05-01
In this paper we report on the use of digital holographic microscopy for 3D real-time imaging of cultured neurons and neural networks in vitro. Digital holographic microscopy is employed as an assessment tool to study the biophysical origin of neurodegenerative diseases. Our study consists of the morphological characterization of the axon, dendrites, and cell bodies. The average size and thickness of the soma were 21 and 13 μm, respectively. Furthermore, the average size and diameter of some randomly selected neurites were 4.8 and 0.89 μm, respectively. In addition, the spatiotemporal growth of cell bodies and extensions was fitted by a non-linear model of nervous-system behavior; remarkably, this non-linear process captures the relationship between the growth of the cell body and that of the axon and dendrites of the neurons.
Direct Thermodynamic Measurements of the Energetics of Information Processing
2017-08-08
Dynamical description of the fission process using the TD-BCS theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scamps, Guillaume, E-mail: scamps@nucl.phys.tohoku.ac.jp; Simenel, Cédric; Lacroix, Denis
2015-10-15
The description of fission remains a challenge for nuclear microscopic theories. The time-dependent Hartree-Fock approach with BCS pairing is applied to study the last stage of the fission process. A good agreement is found for the one-body observables: the total kinetic energy and the average mass asymmetry. The non-physical dependence of two-body observables with the initial shape is discussed.
Scheduling periodic jobs using imprecise results
NASA Technical Reports Server (NTRS)
Chung, Jen-Yao; Liu, Jane W. S.; Lin, Kwei-Jay
1987-01-01
One approach to avoid timing faults in hard, real-time systems is to make available intermediate, imprecise results produced by real-time processes. When a result of the desired quality cannot be produced in time, an imprecise result of acceptable quality produced before the deadline can be used. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. Since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result, the amount of processor time assigned to any task in a valid schedule can be less than the amount of time required to complete the task. A meaningful formulation of the scheduling problem must take into account the overall quality of the results. Depending on the different types of undesirable effects caused by errors, jobs are classified as type N or type C. For type N jobs, the effects of errors in results produced in different periods are not cumulative. A reasonable performance measure is the average error over all jobs. Three heuristic algorithms that lead to feasible schedules with small average errors are described. For type C jobs, the undesirable effects of errors produced in different periods are cumulative. Schedulability criteria of type C jobs are discussed.
A January angular momentum balance in the OSU two-level atmospheric general circulation model
NASA Technical Reports Server (NTRS)
Kim, J.-W.; Grady, W.
1982-01-01
The present investigation is concerned with an analysis of the atmospheric angular momentum balance, based on the simulation data of the Oregon State University two-level atmospheric general circulation model (AGCM). An attempt is also made to gain an understanding of the involved processes. Preliminary results on the angular momentum and mass balance in the AGCM are shown. The basic equations are examined, and questions of turbulent momentum transfer are investigated. The methods of analysis are discussed, taking into account time-averaged balance equations, time and longitude-averaged balance equations, mean meridional circulation, the mean meridional balance of relative angular momentum, and standing and transient components of motion.
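For reference, the conventional partition underlying such an analysis splits the zonally and time-averaged northward flux of zonal momentum into the mean-meridional-circulation, standing-eddy, and transient-eddy contributions named above. The notation below is the standard one (overbar: time mean; square brackets: zonal mean; stars and primes: deviations from the zonal and time means, respectively) and is supplied as context rather than quoted from the paper.

```latex
% Decomposition of the zonally and time-averaged northward flux of zonal
% momentum into mean-circulation, standing-eddy and transient-eddy parts.
\begin{equation}
  [\overline{uv}] \,=\, [\bar{u}][\bar{v}]
                  \,+\, [\bar{u}^{*}\bar{v}^{*}]
                  \,+\, [\overline{u'v'}]
\end{equation}
```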
A stochastic approach to noise modeling for barometric altimeters.
Sabatini, Angelo Maria; Genovese, Vincenzo
2013-11-18
The question of whether barometric altimeters can be applied to accurately track human motions is still debated, since their measurement performance is rather poor due to either coarse resolution or drifting behavior. As a step toward accurate short-time tracking of changes in height (up to a few minutes), we develop a stochastic model that attempts to capture some statistical properties of the barometric altimeter noise. The noise is decomposed into three components with different physical origins and properties: a deterministic time-varying mean, mainly correlated with global environment changes; a first-order Gauss-Markov (GM) random process, mainly accounting for short-term, local environment changes (these two components are prominent for long-time and short-time motion tracking, respectively); and an uncorrelated random process, mainly due to wideband electronic noise, including quantization noise. Autoregressive moving average (ARMA) system identification techniques are used to capture the correlation structure of the piecewise-stationary GM component and to estimate its standard deviation, together with the standard deviation of the uncorrelated component. M-point moving average filters, used alone or in combination with whitening filters learnt from the ARMA model parameters, are further tested in a few dynamic motion experiments and discussed for their capability of short-time tracking of small-amplitude, low-frequency motions.
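A minimal simulation of this three-component noise model is sketched below; all parameter values are invented for illustration. The lag-one autocorrelation of the detrended series is printed as a crude check of the Gauss-Markov (AR(1)-like) structure that the ARMA identification stage would capture.

```python
import numpy as np

def simulate_altimeter_noise(n=6000, dt=0.1, tau=15.0, sigma_gm=0.4,
                             sigma_wn=0.25, drift=2e-4, seed=0):
    """Slow deterministic mean + first-order Gauss-Markov process +
    uncorrelated wideband noise (all parameter values illustrative)."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau)                          # GM decay per sample
    gm = np.zeros(n)
    for k in range(1, n):
        gm[k] = a * gm[k - 1] + sigma_gm * np.sqrt(1 - a**2) * rng.standard_normal()
    mean = drift * dt * np.arange(n)               # slow environmental trend
    return mean + gm + sigma_wn * rng.standard_normal(n)

z = simulate_altimeter_noise()
z = z - np.convolve(z, np.ones(501) / 501, mode="same")   # crude detrending
print("lag-1 autocorrelation:", round(np.corrcoef(z[:-1], z[1:])[0, 1], 3))
```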
NASA Astrophysics Data System (ADS)
Sakano, Toshikazu; Yamaguchi, Takahiro; Fujii, Tatsuya; Okumura, Akira; Furukawa, Isao; Ono, Sadayasu; Suzuki, Junji; Ando, Yutaka; Kohda, Ehiichi; Sugino, Yoshinori; Okada, Yoshiyuki; Amaki, Sachi
2000-05-01
We constructed a high-speed medical information network testbed, one of the largest in Japan, and applied it to practical medical checkups for the first time. The testbed, which we call IMPACT, consists of a Super-High Definition Imaging system, a video conferencing system, a remote database system, and a 6-135 Mbps ATM network. The interconnected facilities include the School of Medicine at Keio University, a company clinic, and an NTT R&D center, all in and around Tokyo. We applied IMPACT to mass screening of the upper gastrointestinal (UGI) tract at the clinic. All 5419 radiographic images acquired at the clinic for 523 employees were digitized (2048 × 1698 × 12 bits) and transferred to a remote database at NTT. We then selected about 50 images from five patients and sent them to nine radiological specialists at Keio University. The processing, which includes film digitization, image data transfer, and database registration, took 574 seconds per patient on average. The average reading time at Keio University was 207 seconds. The overall processing time was estimated to be 781 seconds per patient. From these experimental results, we conclude that quasi-real-time tele-medical checkups are possible with our prototype system.
An improvement of the Earthworm Based Earthquake Alarm Reporting system in Taiwan
NASA Astrophysics Data System (ADS)
Chen, D. Y.; Hsiao, N. C.; Yih-Min, W.
2017-12-01
The Central Weather Bureau of Taiwan (CWB) has operated the Earthworm Based Earthquake Alarm Reporting (eBEAR) system for the purpose of earthquake early warning (EEW). Since 2016 the system has been used to deliver EEW messages to the general public through mobile-phone text messages and television programs. For inland earthquakes the system provides accurate and fast warnings: the average epicenter error is about 5 km and the processing time is about 15 seconds. The epicenter error is defined as the distance between the epicenter estimated by the EEW system and the manually determined epicenter. The processing time is defined as the time difference between the earthquake origin time and the time the system issues a warning. The CWB seismic network consists of about 200 seismic stations, and in some areas of Taiwan the inter-station distance is about 10 km. This means that when an earthquake occurs, the seismic P wave can reach 6 stations, the minimum number required by the EEW system, within about 20 km of propagation. If the data-transmission latency is about 1 s, the P-wave velocity is about 6 km/s, and a 3-s time window is used to estimate earthquake magnitude, then the processing time should be around 8 s. In practice, however, the average processing time is larger than this figure. Because outlying P-wave onset picks may occur at the beginning of an earthquake, the Geiger's method used in the EEW system for earthquake location is not stable, and the system usually waits for enough good picks. In this study we used a grid-search method to improve the earthquake location estimates. The MAXEL algorithm (Sheen et al., 2015, 2016) was tested in the EEW system by simulating historical earthquakes that occurred in Taiwan. The results show that the processing time can be reduced and the location accuracy is acceptable for EEW purposes.
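A toy version of the grid-search idea, with the unknown origin time eliminated by demeaning the travel-time residuals, is sketched below; it is a two-dimensional stand-in for MAXEL-style searching (hypothetical parameter values, no depth, no pick weighting or outlier rejection).

```python
import numpy as np

def grid_search_epicenter(stations, picks, v_p=6.0, extent=60.0, step=1.0):
    """Toy 2-D grid search for the epicenter: minimize the RMS misfit of
    P arrival times over a grid of trial epicenters. Demeaning the
    residuals eliminates the origin time. stations: (N, 2) km; picks: (N,) s."""
    xs = np.arange(-extent, extent, step)
    best, best_rms = None, np.inf
    for x in xs:
        for y in xs:
            tt = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v_p
            rms = np.std(picks - tt)     # std = RMS of the demeaned residuals
            if rms < best_rms:
                best, best_rms = (x, y), rms
    return best, best_rms

rng = np.random.default_rng(3)
sta = rng.uniform(-50, 50, size=(6, 2))
true_epi = np.array([12.0, -8.0])
picks = 5.0 + np.hypot(*(sta - true_epi).T) / 6.0 + 0.05 * rng.standard_normal(6)
print(grid_search_epicenter(sta, picks))   # recovers roughly (12, -8)
```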
Scale Dependence of Spatiotemporal Intermittence of Rain
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Siddani, Ravi K.
2011-01-01
It is a common experience that rainfall is intermittent in space and time. This is reflected by the fact that the statistics of area- and/or time-averaged rain rate is described by a mixed distribution with a nonzero probability of having a sharp value zero. In this paper we have explored the dependence of the probability of zero rain on the averaging space and time scales in large multiyear data sets based on radar and rain gauge observations. A stretched exponential fannula fits the observed scale dependence of the zero-rain probability. The proposed formula makes it apparent that the space-time support of the rain field is not quite a set of measure zero as is sometimes supposed. We also give an ex.planation of the observed behavior in tenus of a simple probabilistic model based on the premise that rainfall process has an intrinsic memory.
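A stretched-exponential dependence can be fitted in a few lines; the functional form and parameterization below are one illustration of the idea (the paper's exact formula may differ), and the data points are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def p_zero(T, tau, beta):
    """Illustrative stretched-exponential form for the probability of zero
    rain in an averaging window of length T; tau and beta are fit parameters."""
    return np.exp(-(T / tau) ** beta)

# Synthetic example: probability of a rain-free window vs window length (h).
T = np.array([0.5, 1, 2, 4, 8, 16, 32, 64])
p_obs = np.array([0.93, 0.89, 0.83, 0.74, 0.62, 0.47, 0.30, 0.15])
(tau, beta), _ = curve_fit(p_zero, T, p_obs, p0=(30.0, 0.7))
print(f"tau = {tau:.1f} h, beta = {beta:.2f}")  # beta < 1: slower-than-exponential decay
```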
Digital seismo-acoustic signal processing aboard a wireless sensor platform
NASA Astrophysics Data System (ADS)
Marcillo, O.; Johnson, J. B.; Lorincz, K.; Werner-Allen, G.; Welsh, M.
2006-12-01
We are developing a low-power, low-cost wireless sensor array to conduct real-time signal processing of earthquakes at active volcanoes. The sensor array, which integrates data from both seismic and acoustic sensors, is based on Moteiv TMote Sky wireless sensor nodes (www.moteiv.com). The nodes feature a Texas Instruments MSP430 microcontroller, 48 Kbytes of program memory, 10 Kbytes of static RAM, 1 Mbyte of external flash memory, and a 2.4-GHz Chipcon CC2420 IEEE 802.15.4 radio. The TMote Sky is programmed in TinyOS. Basic signal processing occurs on an array of three peripheral sensor nodes. These nodes are tied into a dedicated GPS receiver node, which handles time synchronization, and a central communications node, which handles data integration and additional processing. The sensor nodes incorporate dual 12-bit digitizers sampling a seismic sensor and a pressure transducer at 100 samples per second. The wireless capabilities of the system allow flexible array geometry, with a maximum aperture of 200 m. We have already developed the digital signal processing routines on board the TMote Sky sensor nodes. The routines accomplish Real-time Seismic-Amplitude Measurement (RSAM), Seismic Spectral-Amplitude Measurement (SSAM), and a user-configured Short-Term Average/Long-Term Average (STA/LTA) ratio, which is used to detect first arrivals. The processed data from individual nodes are transmitted back to a central node, where additional processing may be performed. Such processing will include back-azimuth determination and other wave-field analyses. Future on-board signal processing will focus on event characterization utilizing pattern recognition and spectral characterization. The processed data are intended as low-bandwidth information that can be transmitted periodically and at low cost through satellite telemetry to a web server. The processing is limited by the computational capabilities (RAM, ROM) of the nodes. Nevertheless, we envision this product to be a useful tool for assessing the state of unrest at remote volcanoes.
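The STA/LTA stage is the standard ratio of short-term to long-term signal power. A numpy sketch of an offline version is given below (the on-node implementation would be incremental and fixed-point); the window lengths are illustrative.

```python
import numpy as np

def sta_lta(x, fs, sta_win=1.0, lta_win=30.0):
    """STA/LTA ratio of the squared trace x (sampled at fs Hz); a first
    arrival is declared where the ratio crosses a configured threshold."""
    power = np.asarray(x, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(power)))
    n_sta, n_lta = int(sta_win * fs), int(lta_win * fs)
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta        # short-term average
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta        # long-term average
    m = min(sta.size, lta.size)                         # align the two window
    return sta[-m:] / np.maximum(lta[-m:], 1e-12)       # series at the trace end
```

A ratio threshold on the order of a few (commonly 3 to 5) then flags candidate first arrivals.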
A Novel Process Audit for Standardized Perioperative Handoff Protocols.
Pallekonda, Vinay; Scholl, Adam T; McKelvey, George M; Amhaz, Hassan; Essa, Deanna; Narreddy, Spurthy; Tan, Jens; Templonuevo, Mark; Ramirez, Sasha; Petrovic, Michelle A
2017-11-01
A perioperative handoff protocol provides a standardized delivery of communication during a handoff that occurs from the operating room to the postanesthesia care unit or ICU. The protocol's success is dependent, in part, on its continued proper use over time. A novel process audit was developed to help ensure that a perioperative handoff protocol is used accurately and appropriately over time. The Audit Observation Form is used for the Audit Phase of the process audit, while the Audit Averages Form is used for the Data Analysis Phase. Employing minimal resources and using quantitative methods, the process audit provides the necessary means to evaluate the proper execution of any perioperative handoff protocol. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
Level crossings and excess times due to a superposition of uncorrelated exponential pulses
NASA Astrophysics Data System (ADS)
Theodorsen, A.; Garcia, O. E.
2018-01-01
A well-known stochastic model for intermittent fluctuations in physical systems is investigated. The model is given by a superposition of uncorrelated exponential pulses, and the degree of pulse overlap is interpreted as an intermittency parameter. Expressions for excess time statistics, that is, the rate of level crossings above a given threshold and the average time spent above the threshold, are derived from the joint distribution of the process and its derivative. Limits of both high and low intermittency are investigated and compared to previously known results. In the case of a strongly intermittent process, the distribution of times spent above threshold is obtained analytically. This expression is verified numerically, and the distribution of times above threshold is explored for other intermittency regimes. The numerical simulations compare favorably to known results for the distribution of times above the mean threshold for an Ornstein-Uhlenbeck process. This contribution generalizes the excess time statistics for the stochastic model, which find applications in a wide diversity of natural and technological systems.
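Both excess-time statistics are straightforward to estimate from a simulated realization of the process. The sketch below generates the pulse superposition with an AR(1)-equivalent recursive filter and counts up-crossings of a threshold; the intermittency parameter and threshold choice are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def shot_noise(T=10_000.0, dt=0.01, gamma=2.0, tau=1.0, seed=0):
    """Superposition of uncorrelated one-sided exponential pulses with
    Poisson arrivals and exponentially distributed amplitudes; gamma is
    the average number of pulses per pulse duration tau (intermittency)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    n_pulses = rng.poisson(gamma * T / tau)
    forcing = np.zeros(n)
    np.add.at(forcing, rng.integers(0, n, n_pulses),
              rng.exponential(1.0, n_pulses))           # pulse arrivals
    decay = np.exp(-dt / tau)
    x = lfilter([1.0], [1.0, -decay], forcing)          # x[k] = decay*x[k-1] + f[k]
    return np.arange(n) * dt, x

t, x = shot_noise()
thr = x.mean() + x.std()                                # illustrative threshold
above = x > thr
up = np.flatnonzero(~above[:-1] & above[1:])            # up-crossing indices
rate = up.size / t[-1]                                  # rate of level crossings
mean_excess = above.sum() * (t[1] - t[0]) / max(up.size, 1)
print(f"crossing rate {rate:.3f} per tau, mean time above {mean_excess:.3f} tau")
```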
Non-universal tracer diffusion in crowded media of non-inert obstacles.
Ghosh, Surya K; Cherstvy, Andrey G; Metzler, Ralf
2015-01-21
We study the diffusion of a tracer particle, which moves in continuum space between a lattice of excluded-volume, immobile, non-inert obstacles. In particular, we analyse how the strength of the tracer-obstacle interactions and the volume occupancy of the crowders alter the diffusive motion of the tracer. From the details of the partitioning of the tracer diffusion modes between trapping states, when bound to obstacles, and bulk diffusion, we examine the degree of localisation of the tracer in the lattice of crowders. We study the properties of the tracer diffusion in terms of the ensemble- and time-averaged mean squared displacements, the trapping time distributions, the amplitude variation of the time-averaged mean squared displacements, and the non-Gaussianity parameter of the diffusing tracer. We conclude that tracer-obstacle adsorption and binding triggers a transient anomalous diffusion. From the very narrow spread of recorded individual time-averaged trajectories, we exclude continuous-time random walk processes as the underlying physical model of the tracer diffusion in our system. For moderate tracer-crowder attraction the motion is found to be fully ergodic, while at stronger attraction strength a transient disparity between ensemble- and time-averaged mean squared displacements occurs. We also put our results into perspective with findings from experimental single-particle tracking and simulations of the diffusion of tagged tracers in dense crowded suspensions. Our results have implications for the diffusion, transport, and spreading of chemical components in highly crowded environments inside living cells and other structured liquids.
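The central observable, the time-averaged mean squared displacement (TAMSD) of a single trajectory, and its amplitude scatter across trajectories can be sketched as follows. The trap/release caricature of tracer-obstacle binding below is ours, chosen only to make the ergodicity-breaking (EB) parameter computable on synthetic data; it is not the paper's simulation model.

```python
import numpy as np

rng = np.random.default_rng(4)

def tamsd(x, lag):
    """Time-averaged mean squared displacement of one trajectory x
    at integer lag (unit sampling interval assumed)."""
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2)

def trajectory(n=10_000, p_bind=0.02, p_release=0.01):
    """1-D walk with intermittent trapping: free Brownian steps,
    interrupted by binding events of geometric duration."""
    x, pos, bound = np.empty(n), 0.0, False
    for k in range(n):
        if bound:
            bound = rng.random() >= p_release   # remain trapped
        else:
            pos += rng.standard_normal()        # free diffusive step
            bound = rng.random() < p_bind       # adsorb onto an obstacle
        x[k] = pos
    return x

trajs = [trajectory() for _ in range(20)]
for lag in (1, 4, 16, 64):
    vals = np.array([tamsd(x, lag) for x in trajs])
    eb = vals.var() / vals.mean() ** 2          # scatter of individual TAMSDs
    print(f"lag {lag:3d}: mean TAMSD {vals.mean():8.1f}, EB parameter {eb:.3f}")
```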
Household availability of ultra-processed foods and obesity in nineteen European countries.
Monteiro, Carlos Augusto; Moubarac, Jean-Claude; Levy, Renata Bertazzi; Canella, Daniela Silva; Louzada, Maria Laura da Costa; Cannon, Geoffrey
2018-01-01
To assess household availability of NOVA food groups in nineteen European countries and to analyse the association between availability of ultra-processed foods and prevalence of obesity. Ecological, cross-sectional study. Europe. Estimates of ultra-processed foods calculated from national household budget surveys conducted between 1991 and 2008. Estimates of obesity prevalence obtained from national surveys undertaken near the budget survey time. Across the nineteen countries, the median of average household availability amounted to 33·9 % of total purchased dietary energy for unprocessed or minimally processed foods, 20·3 % for processed culinary ingredients, 19·6 % for processed foods and 26·4 % for ultra-processed foods. The average household availability of ultra-processed foods ranged from 10·2 % in Portugal and 13·4 % in Italy to 46·2 % in Germany and 50·4 % in the UK. A significant positive association was found between national household availability of ultra-processed foods and national prevalence of obesity among adults. After adjustment for national income, prevalence of physical inactivity, prevalence of smoking, measured or self-reported prevalence of obesity, and time lag between estimates on household food availability and obesity, each percentage point increase in the household availability of ultra-processed foods resulted in an increase of 0·25 percentage points in obesity prevalence. The study contributes to a growing literature showing that the consumption of ultra-processed foods is associated with an increased risk of diet-related non-communicable diseases. Its findings reinforce the need for public policies and actions that promote consumption of unprocessed or minimally processed foods and make ultra-processed foods less available and affordable.
Ninety to Nothing: a PDSA quality improvement project.
Prybutok, Gayle Linda
2018-05-14
Purpose The purpose of this paper is to present a case study of a successful quality improvement project in an acute care hospital focused on reducing the time of the total patient visit in the emergency department. Design/methodology/approach A multidisciplinary quality improvement team, using the PDSA (Plan, Do, Study, Act) Cycle, analyzed the emergency department care delivery process and sequentially made process improvements that contributed to project success. Findings The average turnaround time goal of 90 minutes or less per visit was achieved in four months, and the organization enjoyed significant collateral benefits both internal to the organization and for its customers. Practical implications This successful PDSA process can be duplicated by healthcare organizations of all sizes seeking to improve a process related to timely, high-quality patient care delivery. Originality/value Extended wait time in hospital emergency departments is a universal problem in the USA that reduces the quality of the customer experience and that delays necessary patient care. This case study demonstrates that a structured quality improvement process implemented by a multidisciplinary team with the authority to make necessary process changes can successfully redefine the norm.
Determining the Uncertainty of X-Ray Absorption Measurements
Wojcik, Gary S.
2004-01-01
X-ray absorption (or more properly, x-ray attenuation) techniques have been applied to study the moisture movement in and moisture content of materials like cement paste, mortar, and wood. An increase in the number of x-ray counts with time at a location in a specimen may indicate a decrease in moisture content. The uncertainty of measurements from an x-ray absorption system, which must be known to properly interpret the data, is often assumed to be the square root of the number of counts, as in a Poisson process. No detailed studies have heretofore been conducted to determine the uncertainty of x-ray absorption measurements or the effect of averaging data on the uncertainty. In this study, the Poisson estimate was found to adequately approximate normalized root mean square errors (a measure of uncertainty) of counts for point measurements and profile measurements of water specimens. The Poisson estimate, however, was not reliable in approximating the magnitude of the uncertainty when averaging data from paste and mortar specimens. Changes in uncertainty from differing averaging procedures were well-approximated by a Poisson process. The normalized root mean square errors decreased when the x-ray source intensity, integration time, collimator size, and number of scanning repetitions increased. Uncertainties in mean paste and mortar count profiles were kept below 2 % by averaging vertical profiles at horizontal spacings of 1 mm or larger with counts per point above 4000. Maximum normalized root mean square errors did not exceed 10 % in any of the tests conducted. PMID:27366627
Space-time interpolation of satellite winds in the tropics
NASA Astrophysics Data System (ADS)
Patoux, Jérôme; Levy, Gad
2013-09-01
A space-time interpolator for creating average geophysical fields from satellite measurements is presented and tested. It is designed for optimal spatiotemporal averaging of heterogeneous data. While it is illustrated with satellite surface wind measurements in the tropics, the methodology can be useful for interpolating, analyzing, and merging a wide variety of heterogeneous and satellite data in the atmosphere and ocean over the entire globe. The spatial and temporal ranges of the interpolator are determined by averaging satellite and in situ measurements over increasingly larger space and time windows and matching the corresponding variability at each scale. This matching provides a relationship between temporal and spatial ranges, but does not provide a unique pair of ranges as a solution to all averaging problems. The pair of ranges most appropriate for a given application can be determined by performing a spectral analysis of the interpolated fields and choosing the smallest values that remove any or most of the aliasing due to the uneven sampling by the satellite. The methodology is illustrated with the computation of average divergence fields over the equatorial Pacific Ocean from SeaWinds-on-QuikSCAT surface wind measurements, for which 72 h and 510 km are suggested as optimal interpolation windows. It is found that the wind variability is reduced over the cold tongue and enhanced over the Pacific warm pool, consistent with the notion that the unstably stratified boundary layer has generally more variable winds and more gustiness than the stably stratified boundary layer. It is suggested that the spectral analysis optimization can be used for any process where time-space correspondence can be assumed.
An Approach to Improved Credibility of CFD Simulations for Rocket Injector Design
NASA Technical Reports Server (NTRS)
Tucker, Paul K.; Menon, Suresh; Merkle, Charles L.; Oefelein, Joseph C.; Yang, Vigor
2007-01-01
Computational fluid dynamics (CFD) has the potential to improve the historical rocket injector design process by simulating the sensitivity of performance and injector-driven thermal environments to the details of the injector geometry and key operational parameters. Methodical verification and validation efforts on a range of coaxial injector elements have shown the current production CFD capability must be improved in order to quantitatively impact the injector design process. This paper documents the status of an effort to understand and compare the predictive capabilities and resource requirements of a range of CFD methodologies on a set of model problem injectors. Preliminary results from a steady Reynolds-Averaged Navier-Stokes (RANS), an unsteady Reynolds-Averaged Navier-Stokes (URANS) and three different Large Eddy Simulation (LES) techniques used to model a single-element coaxial injector using gaseous oxygen and gaseous hydrogen propellants are presented. Initial observations are made comparing instantaneous results, corresponding time-averaged and steady-state solutions in the near-injector flow field. Significant differences in the flow fields exist, as expected, and are discussed. An important preliminary result is the identification of a fundamental mixing mechanism, accounted for by URANS and LES, but missing in the steady RANS methodology. Since propellant mixing is the core injector function, this mixing process may prove to have a profound effect on the ability to more correctly simulate injector performance and resulting thermal environments. Issues important to unifying the basis for future comparison such as solution initialization, required run time and grid resolution are addressed.
NASA Astrophysics Data System (ADS)
Mishra, C.; Samantaray, A. K.; Chakraborty, G.
2016-05-01
Rolling element bearings are widely used in rotating machines, and their faults can lead to excessive vibration levels and/or complete seizure of the machine. Under special operating conditions such as non-uniform or low-speed shaft rotation, the available fault diagnosis methods cannot be applied with full confidence. Fault symptoms in such operating conditions cannot be easily extracted through usual measurement and signal processing techniques. A typical example is a bearing in a heavy rolling mill with variable load and disturbance from other sources. In extremely slow-speed operation, variation in speed due to speed-controller transients or external disturbances (e.g., varying load) can be relatively high. To account for speed variation, instantaneous angular position instead of time is used as the base variable of signals for signal processing purposes. Even with time synchronous averaging (TSA) and well-established methods like envelope order analysis, rolling element faults in rolling element bearings cannot be easily identified during such operating conditions. In this article we propose to use order tracking on the envelope of the wavelet de-noised estimate of the short-duration angle-synchronous averaged signal to diagnose faults in rolling element bearings operating under the stated special conditions. The proposed four-stage sequential signal processing method eliminates uncorrelated content, avoids signal smearing, and exposes only the fault frequencies and their harmonics in the spectrum. We use experimental data.
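After angle-domain resampling, averaging, and de-noising, fault signatures are exposed via an envelope order spectrum. A minimal sketch of that final stage follows, omitting the wavelet-denoising step and using illustrative names; the input is assumed to be already resampled to uniform shaft angle.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_order_spectrum(sig_angle, samples_per_rev):
    """Envelope spectrum of an angle-domain (order-tracked) signal.

    sig_angle       : vibration signal resampled to uniform shaft angle,
                      e.g. the de-noised angle-synchronous average.
    samples_per_rev : angular sampling rate (samples per revolution).
    Returns (orders, amplitude); bearing defect signatures appear at
    characteristic, generally non-integer, orders of shaft rotation.
    """
    env = np.abs(hilbert(sig_angle - sig_angle.mean()))   # amplitude envelope
    spec = np.abs(np.fft.rfft(env - env.mean())) / env.size
    orders = np.fft.rfftfreq(env.size, d=1.0 / samples_per_rev)
    return orders, spec
```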
78 FR 37601 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-21
... the Commission receives 2 of the most time-consuming applications annually, 4 applications of medium... of $200,000 on average to engage in the exemptive order application process, including preparation... applications. See also Political Contributions by Certain Investment Advisers, Investment Advisers Act Release...
Robust estimation of event-related potentials via particle filter.
Fukami, Tadanori; Watanabe, Jun; Ishikawa, Fumito
2016-03-01
In clinical examinations and brain-computer interface (BCI) research, a short electroencephalogram (EEG) measurement time is ideal. The use of event-related potentials (ERPs) relies on both estimation accuracy and processing time. We tested a particle filter that uses a large number of particles to construct a probability distribution. We constructed a simple model for recording EEG comprising three components: ERPs approximated via a trend model, background waves constructed via an autoregressive model, and noise. We evaluated the performance of the particle filter based on mean squared error (MSE), P300 peak amplitude, and latency. We then compared our filter with the Kalman filter and a conventional simple averaging method. To confirm the efficacy of the filter, we used it to estimate ERP elicited by a P300 BCI speller. A 400-particle filter produced the best MSE. We found that the merit of the filter increased when the original waveform already had a low signal-to-noise ratio (SNR) (i.e., the power ratio between ERP and background EEG). We calculated the amount of averaging necessary after applying a particle filter that produced a result equivalent to that associated with conventional averaging, and determined that the particle filter yielded a maximum 42.8% reduction in measurement time. The particle filter performed better than both the Kalman filter and conventional averaging for a low SNR in terms of both MSE and P300 peak amplitude and latency. For EEG data produced by the P300 speller, we were able to use our filter to obtain ERP waveforms that were stable compared with averages produced by a conventional averaging method, irrespective of the amount of averaging. We confirmed that particle filters are efficacious in reducing the measurement time required during simulations with a low SNR. Additionally, particle filters can perform robust ERP estimation for EEG data produced via a P300 speller. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Variations in the OM/OC ratio of urban organic aerosol next to a major roadway.
Brown, Steven G; Lee, Taehyoung; Roberts, Paul T; Collett, Jeffrey L
2013-12-01
Understanding the organic matter/organic carbon (OM/OC) ratio in ambient particulate matter (PM) is critical to achieve mass closure in routine PM measurements, to assess the sources of organic aerosol particles and the degree of chemical processing they have undergone, and to relate ambient pollutant concentrations to health effects. Of particular interest is how the OM/OC ratio varies in the urban environment, where strong spatial and temporal gradients in source emissions are common. We provide results of near-roadway, high-time-resolution PM1 OM concentration and OM/OC ratio observations during January 2008 at Fyfe Elementary School in Las Vegas, NV, 18 m from the U.S. 95 freeway soundwall, measured with an Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-AMS). The average OM/OC ratio was 1.54 (± 0.20 standard deviation), typical of environments with a low amount of secondary aerosol formation. The 2-min average OM/OC ratios varied between 1.17 and 2.67, and daily average OM/OC ratios varied between 1.44 and 1.73. The ratios were highest during periods of low OM concentrations and generally low during periods of high OM concentrations. OM/OC ratios were low (1.52 ± 0.14, on average) during the morning rush hour (average OM = 2.4 μg/m³), when vehicular emissions dominate this near-road measurement site. The ratios were slightly lower (1.46 ± 0.10) in the evening (average OM = 6.3 μg/m³), when a combination of vehicular and fresh residential biomass-burning emissions was typically present during times with temperature inversions. The hourly averaged OM/OC ratio peaked at 1.66 at midday. OM concentrations were similar regardless of whether the monitoring site was downwind or upwind of the adjacent freeway throughout the day, though they were higher during stagnant conditions (wind speed < 0.5 m/sec).
Child Adjustment and Parent Functioning: Considering the Role of Child Driven Effects
Yan, Ni; Ansari, Arya
2016-01-01
Based on 13,694 mother-child dyads from the Early Childhood Longitudinal Kindergarten Cohort (ECLS-K), this study examined the bidirectional relations between parental and child functioning from kindergarten through third grade. Results from the cross-lagged models demonstrated that child-driven effects co-occurred with parental effects and that these effects were comparable in size. At the same time, however, results from the latent profile analysis revealed idiosyncratic patterns of parent and child functioning. Compared with children in the least optimal functioning profiles, those in the average and above-average profiles elicited greater improvements in parents' functioning over time. Although poor child academic performance in kindergarten appeared to precede harsh parenting in third grade, there was a threshold in the evolving strength of the overall child-driven effects. Taken together, the results from this study underscore the importance of considering reciprocal processes in the parent-child dynamic while also underscoring individual differences in these processes across the early to middle childhood years. PMID:26866838
The time course of saccadic decision making: dynamic field theory.
Wilimzig, Claudia; Schneider, Stefan; Schöner, Gregor
2006-10-01
Making a saccadic eye movement involves two decisions, the decision to initiate the saccade and the selection of the visual target of the saccade. Here we provide a theoretical account for the time-courses of these two processes, whose instabilities are the basis of decision making. We show how the cross-over from spatial averaging for fast saccades to selection for slow saccades arises from the balance between excitatory and inhibitory processes. Initiating a saccade involves overcoming fixation, as can be observed in the countermanding paradigm, which we model accounting both for the temporal evolution of the suppression probability and its dependence on fixation activity. The interaction between the two forms of decision making is demonstrated by predicting how the cross-over from averaging to selection depends on the fixation stimulus in gap-step-overlap paradigms. We discuss how the activation dynamics of our model may be mapped onto neuronal structures including the motor map and the fixation cells in superior colliculus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tweedie, A.; Doris, E.
Establishing interconnection to the grid is a recognized barrier to the deployment of distributed energy generation. This report compares interconnection processes for photovoltaic projects in California and Germany. This report summarizes the steps of the interconnection process for developers and utilities, the average length of time utilities take to process applications, and paperwork required of project developers. Based on a review of the available literature, this report finds that while the interconnection procedures and timelines are similar in California and Germany, differences in the legal and regulatory frameworks are substantial.
The stretch to stray on time: Resonant length of random walks in a transient
NASA Astrophysics Data System (ADS)
Falcke, Martin; Friedhoff, Victor Nicolai
2018-05-01
First-passage times in random walks have a vast number of diverse applications in physics, chemistry, biology, and finance. In general, environmental conditions for a stochastic process are not constant on the time scale of the average first-passage time or control might be applied to reduce noise. We investigate moments of the first-passage time distribution under an exponential transient describing relaxation of environmental conditions. We solve the Laplace-transformed (generalized) master equation analytically using a novel method that is applicable to general state schemes. The first-passage time from one end to the other of a linear chain of states is our application for the solutions. The dependence of its average on the relaxation rate obeys a power law for slow transients. The exponent ν depends on the chain length N like ν = -N/(N+1) to leading order. Slow transients substantially reduce the noise of first-passage times expressed as the coefficient of variation (CV), even if the average first-passage time is much longer than the transient. The CV has a pronounced minimum for some lengths, which we call resonant lengths. These results also suggest a simple and efficient noise control strategy and are closely related to the timing of repetitive excitations, coherence resonance, and information transmission by noisy excitable systems. A resonant number of steps from the inhibited state to the excitation threshold and slow recovery from negative feedback provide optimal timing noise reduction and information transmission.
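The mean and CV of first-passage times under a transient can also be probed numerically. The Python sketch below estimates both on a linear chain whose hopping rates relax exponentially; the rate form, all parameter values, and the small-time-step update are illustrative simplifications, not the paper's analytical master-equation solution.

```python
import numpy as np

def first_passage_times(n_states=5, k_fwd=1.0, k_back=0.5, relax_rate=0.1,
                        amp=5.0, n_runs=1000, dt=1e-2, seed=0):
    """Monte Carlo first-passage times 0 -> n_states on a linear chain with
    time-dependent rates k(t) = k_inf * (1 + amp * exp(-relax_rate * t))."""
    rng = np.random.default_rng(seed)
    fpts = np.empty(n_runs)
    for run in range(n_runs):
        state, t = 0, 0.0
        while state < n_states:
            scale = 1.0 + amp * np.exp(-relax_rate * t)   # exponential transient
            p_fwd = k_fwd * scale * dt
            p_back = k_back * scale * dt if state > 0 else 0.0
            u = rng.random()
            if u < p_fwd:
                state += 1
            elif u < p_fwd + p_back:
                state -= 1
            t += dt
        fpts[run] = t
    return fpts

fpt = first_passage_times()
print(f"mean FPT = {fpt.mean():.2f}, CV = {fpt.std() / fpt.mean():.2f}")
```

Sweeping `n_states` in such a simulation is one way to look for the resonant lengths at which the CV is minimized.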
Isaacson, Dylan; Ahmad, Tessnim; Metzler, Ian; Tzou, David T; Taguchi, Kazumi; Usawachintachit, Manint; Zetumer, Samuel; Sherer, Benjamin; Stoller, Marshall; Chi, Thomas
2017-10-01
Careful decontamination and sterilization of reusable flexible ureteroscopes used in ureterorenoscopy cases prevent the spread of infectious pathogens to patients and technicians. However, inefficient reprocessing and unavailability of ureteroscopes sent out for repair can contribute to expensive operating room (OR) delays. Time-driven activity-based costing (TDABC) was applied to describe the time and costs involved in reprocessing. Direct observation and timing were performed for all steps in reprocessing of reusable flexible ureteroscopes following operative procedures. Estimated times needed for each step by which damaged ureteroscopes identified during reprocessing are sent for repair were characterized through interviews with purchasing analyst staff. Process maps were created for reprocessing and repair, detailing individual step times and their variances. Cost data for labor and disposables were applied to calculate per-minute and average step costs. Ten ureteroscopes were followed through reprocessing. Reprocessing averaged 229.0 ± 74.4 minutes, whereas sending a ureteroscope for repair required an estimated 143 minutes per repair. Most steps demonstrated low variance between timed observations. Ureteroscope drying was the longest and highest-variance step at 126.5 ± 55.7 minutes and was highly dependent on manual air flushing through the ureteroscope working channel and on ureteroscope positioning in the drying cabinet. Reprocessing costs totaled $96.13 per episode, including labor and disposable items. Utilizing TDABC delineates the full spectrum of costs associated with ureteroscope reprocessing and identifies areas for process improvement to drive value-based care. At our institution, ureteroscope drying was one clearly identified target area. Implementing training in ureteroscope drying technique could save up to 2 hours per reprocessing event, potentially preventing expensive OR delays.
High-precision tracking of brownian boomerang colloidal particles confined in quasi two dimensions.
Chakrabarty, Ayan; Wang, Feng; Fan, Chun-Zhen; Sun, Kai; Wei, Qi-Huo
2013-11-26
In this article, we present a high-precision image-processing algorithm for tracking the translational and rotational Brownian motion of boomerang-shaped colloidal particles confined in quasi-two-dimensional geometry. By measuring mean square displacements of an immobilized particle, we demonstrate that the positional and angular precision of our imaging and image-processing system can achieve 13 nm and 0.004 rad, respectively. By analyzing computer-simulated images, we demonstrate that the positional and angular accuracies of our image-processing algorithm can achieve 32 nm and 0.006 rad. Because of zero correlations between the displacements in neighboring time intervals, trajectories of different videos of the same particle can be merged into a very long time trajectory, allowing for long-time averaging of different physical variables. We apply this image-processing algorithm to measure the diffusion coefficients of boomerang particles of three different apex angles and discuss the angle dependence of these diffusion coefficients.
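The localization precision quoted above is obtained from the mean square displacement (MSD) of an immobilized particle, whose MSD plateau is set by measurement error alone. A minimal Python sketch of the time-averaged MSD computation follows; the factor of 4 in the precision estimate is the standard result for 2-D data, not a value specific to this paper.

```python
import numpy as np

def time_averaged_msd(xy, max_lag):
    """Time-averaged MSD of a 2-D trajectory xy of shape (T, 2)."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = xy[lag:] - xy[:-lag]
        msd[lag - 1] = np.mean(np.sum(disp ** 2, axis=1))
    return msd

# For an immobilized particle the MSD plateaus near 4*sigma^2 (two coordinates,
# two independent noisy positions per displacement), so the localization
# precision per axis is sigma ≈ sqrt(plateau / 4).
```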
Melanson, Stacy E F; Goonan, Ellen M; Lobo, Margaret M; Baum, Jonathan M; Paredes, José D; Santos, Katherine S; Gustafson, Michael L; Tanasijevic, Milenko J
2009-12-01
Our goals were to improve the overall patient experience and optimize the blood collection process in outpatient phlebotomy using Lean principles. Elimination of non-value-added steps and modifications to operational processes resulted in increased capacity to handle workload during peak times without adding staff. The result was a reduction of average patient wait time from 21 to 5 minutes, with the goal of drawing blood samples within 10 minutes of arrival at the phlebotomy station met for 90% of patients. In addition, patient satisfaction increased noticeably as assessed by a 5-question survey. The results have been sustained for 10 months with staff continuing to make process improvements.
A Robust and Resilient Network Design Paradigm for Region-Based Faults Inflicted by WMD Attack
2016-04-01
We investigated big data processing of phasor measurement unit (PMU) measurements for grid monitoring and control against possible WMD attacks, based on big data processing and analytics of synchrophasor measurements collected from multiple locations of power grids.
Enhancing Quality of Orthotic Services with Process and Outcome Information
2017-10-01
AWARD NUMBER: W81XWH-16-1-0788
Fahmi, Fahmi; Nasution, Tigor H; Anggreiny, Anggreiny
2017-01-01
The use of medical imaging in diagnosing brain disease is growing. The challenges are related to the large size of the data and the complexity of the image processing. A high standard of hardware and software is demanded, which can only be provided in big hospitals. Our purpose was to provide a smart cloud system to help diagnose brain diseases for hospitals with limited infrastructure. The expertise of neurologists was first embedded in the cloud server to conduct an automatic diagnosis in real time using an image processing technique developed with the ITK library and a web service. Users upload images through a website and the result, in this case the size of the tumor, is sent back immediately. A specific image compression technique was developed for this purpose. The smart cloud system was able to measure the area and location of tumors, with an average size of 19.91 ± 2.38 cm² and an average response time of 7.0 ± 0.3 s. The capability of the server decreased when multiple clients accessed the system simultaneously: 14 ± 0 s (5 parallel clients) and 27 ± 0.2 s (10 parallel clients). The cloud system was successfully developed to process and analyze medical images for diagnosing brain diseases, in this case tumors.
Hockenberry, Marilyn J; Krull, Kevin R; Insel, Kathleen C; Harris, Lynnette L; Gundy, Patricia M; Adkins, Kristin B; Pasvogel, Alice E; Taylor, Olga A; Koerner, Kari M; Montgomery, David W; Ross, Adam K; Hill, Adam; Moore, Ida M
2015-09-01
To examine associations among oxidative stress, fine and visual-motor abilities, and behavioral adjustment in children receiving chemotherapy for acute lymphoblastic leukemia (ALL), a prospective, repeated-measures design was used at two pediatric oncology settings in the southwestern United States. 89 children with ALL were followed from diagnosis to the end of chemotherapy. Serial cerebrospinal fluid samples were collected during scheduled lumbar punctures and analyzed for oxidative stress biomarkers. Children completed fine motor dexterity, visual processing speed, and visual-motor integration measures at three time points. Parents completed child behavior ratings at the same times. The main variables were oxidative stress, fine motor dexterity, visual processing, visual-motor integration, and behavioral adjustment. Children with ALL had below-average fine motor dexterity, visual processing speed, and visual-motor integration following the induction phase of ALL therapy. By the end of therapy, visual processing speed had normalized, while fine motor dexterity and visual-motor integration remained below average. Oxidative stress measures correlated with fine motor dexterity and visual-motor integration. Decreased motor functioning was associated with increased hyperactivity and anxiety. Oxidative stress occurs following chemotherapy for childhood ALL and is related to impaired fine motor skills and visual symptoms. Early intervention should be considered to prevent fine motor and visual-spatial deficits, as well as behavioral problems.
Can Time of Implant Placement influence Bone Remodeling?
Rafael, Caroline F; Passoni, Bernardo; Araújo, Carlos; de Araújo, Maria A; Benfatti, César; Volpato, Claudia
2016-04-01
Because the alveolar process is tooth-dependent tissue, it atrophies to some degree during healing after extraction of the dental element; this atrophy can be reduced by the installation of immediate implants, aiming to maintain the original bone architecture. The aim of this study was to investigate the influence of the time of implant placement on bone formation around implants. Seven dogs were selected and randomly divided into two groups: group 1, where implants were placed immediately after extraction of two lower premolars without flap elevation, and group 2, where implants were placed 4 months after the extractions. Each group received 14 implants, and 4 months after the second surgery the samples were processed and analyzed histomorphometrically. Means were compared with the Kruskal-Wallis test (p < 0.05). The mean buccal bone-implant contact (BIC) was larger for immediate implants (42.61%) than for delayed implants (37.69%). Group 1 had statistically higher outcomes in bone formation and BIC on the buccal bone wall. It was concluded that placing immediate implants with the palatal approach technique and leaving a buccal gap enables a BIC rate and bone area higher than or at least equal to those of delayed implants. Patients and dentists want shorter treatments with satisfactory results, so it is necessary to understand whether different times of implant placement can influence the results and longevity of the treatment.
A straightforward frequency-estimation technique for GPS carrier-phase time transfer.
Hackman, Christine; Levine, Judah; Parker, Thomas E; Piester, Dirk; Becker, Jürgen
2006-09-01
Although Global Positioning System (GPS) carrier-phase time transfer (GPSCPTT) offers frequency stability approaching 10⁻¹⁵ at averaging times of 1 d, a discontinuity occurs in the time-transfer estimates between the end of one processing batch (1-3 d in length) and the beginning of the next. The average frequency over a multiday analysis period often has been computed by first estimating and removing these discontinuities, i.e., through concatenation. We present a new frequency-estimation technique in which frequencies are computed from the individual batches then averaged to obtain the mean frequency for a multiday period. This allows the frequency to be computed without the uncertainty associated with the removal of the discontinuities and requires fewer computational resources. The new technique was tested by comparing the fractional frequency-difference values it yields to those obtained using a GPSCPTT concatenation method and those obtained using two-way satellite time-and-frequency transfer (TWSTFT). The clocks studied were located in Braunschweig, Germany, and in Boulder, CO. The frequencies obtained from the GPSCPTT measurements using either method agreed with those obtained from TWSTFT at several parts in 10¹⁶. The frequency values obtained from the GPSCPTT data by use of the new method agreed with those obtained using the concatenation technique at 1-4 × 10⁻¹⁶.
Messer, C; Zander, A; Arnolds, I V; Nickel, S; Schuster, M
2015-12-01
In most hospitals the operating rooms (OR) are separated from the rest of the hospital by transfer rooms through which patients must pass for reasons of hygiene. In the OR transfer room patients are placed on the OR table before surgery and returned to the hospital bed after surgery. The number of patients who need to pass through a transfer room at a certain point in time can exceed the number of available transfer rooms. As a result the transfer rooms become a bottleneck where patients have to wait, which in turn may lead to delays in the OR suite. In this study the ability of a discrete event simulation to analyze the effect of the duration of surgery and the number of ORs on the number of OR transfer rooms needed was investigated. The study was based on a discrete event simulation model developed with the simulation software AnyLogic®. The model studied the effects of the number of OR transfer rooms on the processes in an OR suite of a community hospital by varying the number of ORs from one to eight and using different surgical portfolios. Probability distributions for the durations of induction, surgery, recovery and transfer room processes were calculated on the basis of real data from the community hospital studied. Furthermore, a generic simulation model was used to examine the effect of the average duration of surgery on the number of OR transfer rooms needed. The discrete event simulation model enabled the analysis of both quantitative and qualitative changes in the OR process and setting. Key performance indicators of the simulation model were patient throughput per day and the probability and duration of waiting in front of OR transfer rooms. In the case of a community hospital with 1 transfer room, the average proportion of patients waiting before entering the OR was 17.9% ± 9.7% with 3 ORs, 37.6% ± 9.7% with 5 ORs and 62.9% ± 9.1% with 8 ORs. The average waiting time of patients was 3.1 ± 2.7 min with 3 ORs, 5.0 ± 5.8 min with 5 ORs and 11.5 ± 12.5 min with 8 ORs. Based on this study the community hospital needs a second transfer room starting from 4 ORs so that there is no bottleneck for the subsequent OR processes. The average patient throughput in a setting with 4 ORs increased significantly, by 0.3 patients per day, when a second transfer room was available. The generic model showed a strong effect of the average duration of surgery on the number of transfer rooms needed. There was no linear correlation between the number of transfer rooms and the number of ORs. The shorter the average duration of surgery, the earlier an additional transfer room is required. Thus, hospitals with shorter durations of surgery and fewer ORs may need the same number of transfer rooms as, or more than, a hospital with longer durations of surgery and more ORs. However, for an economic analysis, the costs and benefits of installing additional OR transfer rooms need to be calculated using the profit margins of the specific hospital.
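The study used a commercial AnyLogic model; the toy discrete event simulation below (Python with the `simpy` package) reproduces the qualitative setup (patients competing for a single transfer room before surgery) with made-up arrival and transfer-time distributions, purely to illustrate how waiting probability and waiting time fall out of such a model.

```python
import random
import simpy

def patient(env, transfer_room, waits):
    arrive = env.now
    with transfer_room.request() as req:          # queue for a transfer room
        yield req
        waits.append(env.now - arrive)
        yield env.timeout(random.uniform(5, 15))  # transfer onto the OR table

def arrivals(env, transfer_room, waits, n_ors=5):
    while True:
        for _ in range(n_ors):                    # one patient per OR changeover
            env.process(patient(env, transfer_room, waits))
        yield env.timeout(random.expovariate(1 / 60))  # next changeover wave

random.seed(1)
waits = []
env = simpy.Environment()
room = simpy.Resource(env, capacity=1)            # raise capacity to test 2 rooms
env.process(arrivals(env, room, waits))
env.run(until=8 * 60)                             # one 8-hour OR day, in minutes
share_waiting = sum(w > 0 for w in waits) / len(waits)
print(f"P(wait) = {share_waiting:.0%}, mean wait = {sum(waits)/len(waits):.1f} min")
```

Re-running with `capacity=2` shows the drop in waiting probability that motivates the second transfer room in the study's 4-OR scenario.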
Development of an automated processing system for potential fishing zone forecast
NASA Astrophysics Data System (ADS)
Ardianto, R.; Setiawan, A.; Hidayat, J. J.; Zaky, A. R.
2017-01-01
The Institute for Marine Research and Observation (IMRO) - Ministry of Marine Affairs and Fisheries Republic of Indonesia (MMAF) has developed a potential fishing zone (PFZ) forecast using satellite data, called Peta Prakiraan Daerah Penangkapan Ikan (PPDPI). Since 2005, IMRO has disseminated daily PPDPI maps for fisheries marine ports and 3-day averages for national areas. The accuracy in determining the PFZ and the processing time of the maps depend largely on the experience of the operators creating them. This paper presents our research in developing an automated processing system for PPDPI in order to increase accuracy and shorten processing time. PFZ are identified by combining MODIS sea surface temperature (SST) and chlorophyll-a (CHL) data in order to detect the presence of upwelling, thermal fronts and biological productivity enhancement, the integration of these phenomena generally representing the PFZ. The whole process involves data download, map geoprocessing and layout, all carried out automatically by Python and ArcPy. The results showed that the automated processing system could be used to reduce dependence on the operator in determining PFZ and to speed up processing time.
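The PFZ logic described above, flagging areas where thermal fronts and chlorophyll-a enhancement coincide, can be expressed compactly. The sketch below uses plain NumPy rather than the ArcPy toolchain named in the abstract, and the gradient and chlorophyll thresholds are hypothetical placeholders.

```python
import numpy as np

def potential_fishing_zones(sst, chl, grad_thresh=0.5, chl_min=0.2):
    """Flag grid cells where a thermal front (strong local SST gradient)
    coincides with enhanced chlorophyll-a. sst, chl: 2-D arrays, same grid."""
    gy, gx = np.gradient(sst)                 # SST change per grid cell
    front = np.hypot(gx, gy) > grad_thresh    # thermal front proxy
    productive = chl > chl_min                # productivity enhancement proxy
    return front & productive

rng = np.random.default_rng(0)
sst = 28.0 + rng.random((100, 100))           # stand-ins for MODIS SST / CHL tiles
chl = 0.4 * rng.random((100, 100))
mask = potential_fishing_zones(sst, chl)
```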
Stochastic Kinetics on Networks: When Slow Is Fast
2015-01-01
Most chemical and biological processes can be viewed as reaction networks in which different pathways often compete kinetically for transformation of substrates into products. An enzymatic process is an example of such phenomena when biological catalysts create new routes for chemical reactions to proceed. It is typically assumed that the general process of product formation is governed by the pathway with the fastest kinetics at all time scales. In contrast to the expectation, here we show theoretically that at time scales sufficiently short, reactions are predominantly determined by the shortest pathway (in the number of intermediate states), regardless of the average turnover time associated with each pathway. This universal phenomenon is demonstrated by an explicit calculation for a system with two competing reversible (or irreversible) pathways. The time scales that characterize this regime and its relevance for single-molecule experimental studies are also discussed. PMID:25140607
NASA Astrophysics Data System (ADS)
Sun, M.; Yu, P. F.; Fu, J. X.; Ji, X. Q.; Jiang, T.
2017-08-01
The optimal process parameters and conditions for the treatment of slaughterhouse wastewater by a coagulation sedimentation-anaerobic filter (AF)-biological contact oxidation process were studied to address the problem of treating high-concentration organic wastewater from small and medium sized slaughter plants. Precipitation experiments were used to determine the suitable water temperature and the optimum reaction time, and the effects of filtration rate and reflux ratio on COD and SS in the anaerobic biological filter, and of biofilm thickness and gas-water ratio on NH3-N and COD in the biological contact oxidation tank, were studied. The results show that the optimum temperature is 16-24°C and the reaction time 20 min in coagulating sedimentation, and the optimum filtration rate is 0.6 m/h and the optimum reflux ratio 300% in the anaerobic biological filter reactor. The most suitable biofilm thickness range is 1.8-2.2 mm and the most suitable gas-water ratio 12:1-14:1 in the biological contact oxidation pool. During 80 days of continuous operation of the coupled process, the average effluent mass concentrations of COD, TP and TN were 15.57 mg/L, 40 mg/L and 0.63 mg/L, and the average removal rates were 98.93%, 86.10% and 88.95%, respectively. The coupled process operates stably with good effluent quality and is suitable for industrial application.
Kapur, Ajay; Adair, Nilda; O'Brien, Mildred; Naparstek, Nikoleta; Cangelosi, Thomas; Zuvic, Petrina; Joseph, Sherin; Meier, Jason; Bloom, Beatrice; Potters, Louis
Modern external beam radiation therapy treatment delivery processes potentially increase the number of tasks to be performed by therapists and thus opportunities for errors, yet the need to treat a large number of patients daily requires a balanced allocation of time per treatment slot. The goal of this work was to streamline the underlying workflow in such time-interval constrained processes to enhance both execution efficiency and active safety surveillance using a Kaizen approach. A Kaizen project was initiated by mapping the workflow within each treatment slot for 3 Varian TrueBeam linear accelerators. More than 90 steps were identified, and average execution times for each were measured. The time-consuming steps were stratified into a 2 × 2 matrix arranged by potential workflow improvement versus the level of corrective effort required. A work plan was created to launch initiatives with high potential for workflow improvement but modest effort to implement. Time spent on safety surveillance and average durations of treatment slots were used to assess corresponding workflow improvements. Three initiatives were implemented to mitigate unnecessary therapist motion, overprocessing of data, and wait time for data transfer defects, respectively. A fourth initiative was implemented to make the division of labor by treating therapists as well as peer review more explicit. The average duration of treatment slots reduced by 6.7% in the 9 months following implementation of the initiatives (P = .001). A reduction of 21% in duration of treatment slots was observed on 1 of the machines (P < .001). Time spent on safety reviews remained the same (20% of the allocated interval), but the peer review component increased. The Kaizen approach has the potential to improve operational efficiency and safety with quick turnaround in radiation therapy practice by addressing non-value-adding steps characteristic of individual department workflows. Higher effort opportunities are identified to guide continual downstream quality improvements. Copyright © 2017 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
Variable mass pendulum behaviour processed by wavelet analysis
NASA Astrophysics Data System (ADS)
Caccamo, M. T.; Magazù, S.
2017-01-01
The present work highlights how wavelet analysis can be an effective tool for characterizing the motion of a variable mass pendulum, furnishing information on the time evolution of the oscillation spectral content. In particular, the wavelet transform is applied to process the motion of a hung funnel that loses fine sand at an exponential rate; it is shown how, in contrast to the Fourier transform, which furnishes only an average frequency value for the motion, the wavelet approach makes it possible to perform a joint time-frequency analysis. The work is addressed to undergraduate and graduate students.
NASA Astrophysics Data System (ADS)
Tkáč, V.; Tarasenko, R.; Orendáčová, A.; Orendáč, M.; Sechovský, V.; Feher, A.
2018-05-01
An experimental and theoretical study of the magnetocaloric effect and magnetic relaxation of a powder sample of CsGd(MoO4)2 was performed. A large conventional magnetocaloric effect was found around 2 K, with -ΔSmax ≈ 26.5 J/(kg K) for B = 7 T. AC susceptibility measurements revealed magnetic relaxation on multiple time scales. The slowest relaxation was attributed to the direct process with a bottleneck effect; the two faster relaxation processes are effectively temperature independent, probably as a result of averaging in the powder sample.
Scheduling on the basis of the research of dependences among the construction process parameters
NASA Astrophysics Data System (ADS)
Romanovich, Marina; Ermakov, Alexander; Mukhamedzhanova, Olga
2017-10-01
The dependences among construction process parameters are investigated in this article: the average integrated qualification of the shift, the number of workers per shift and the average daily amount of completed work are considered on the basis of correlation coefficients. Basic data for the research of dependences among the above-stated parameters were collected during the construction of two standard objects A and B (monolithic houses) over four months of construction (October, November, December, January). A Cobb-Douglas production function confirmed correlation coefficients close to 1; the function is simple to use and well suited to describing the considered dependences. A development function, describing the relation among the considered parameters of the construction process, is derived. The development function makes it possible to select the optimal quantitative and qualitative (qualification) composition of the brigade link for work during the next period, according to a preset amount of work. A function of the optimized amounts of work, reflecting the interrelation of the key parameters of the construction process, is also developed; its values should be used as the average standard for scheduling the storming periods of construction.
Communication system with adaptive noise suppression
NASA Technical Reports Server (NTRS)
Kozel, David (Inventor); Devault, James A. (Inventor); Birr, Richard B. (Inventor)
2007-01-01
A signal-to-noise ratio dependent adaptive spectral subtraction process eliminates noise from noise-corrupted speech signals. The process first pre-emphasizes the frequency components of the input sound signal which contain the consonant information in human speech. Next, a signal-to-noise ratio is determined and a spectral subtraction proportion adjusted appropriately. After spectral subtraction, low amplitude signals can be squelched. A single microphone is used to obtain both the noise-corrupted speech and the average noise estimate. This is done by determining if the frame of data being sampled is a voiced or unvoiced frame. During unvoiced frames an estimate of the noise is obtained. A running average of the noise is used to approximate the expected value of the noise. Spectral subtraction may be performed on a composite noise-corrupted signal, or upon individual sub-bands of the noise-corrupted signal. Pre-averaging of the input signal's magnitude spectrum over multiple time frames may be performed to reduce musical noise.
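A bare-bones version of the spectral subtraction step, for one analysis frame, might look as follows in Python. Here `alpha` stands in for the SNR-dependent subtraction proportion of the invention and the spectral floor plays the role of the squelch; `noise_mag` would be the running average of magnitude spectra accumulated during unvoiced frames, as described above.

```python
import numpy as np

def spectral_subtract(frame, noise_mag, alpha=2.0, floor=0.02):
    """One frame of magnitude spectral subtraction.
    noise_mag: running-average noise magnitude spectrum from unvoiced frames
    (length len(frame)//2 + 1). alpha: subtraction proportion (would be
    adapted to the measured SNR). floor: simple squelch on low amplitudes."""
    spec = np.fft.rfft(frame * np.hanning(len(frame)))
    mag, phase = np.abs(spec), np.angle(spec)
    clean = np.maximum(mag - alpha * noise_mag, floor * mag)
    return np.fft.irfft(clean * np.exp(1j * phase), n=len(frame))
```

The same routine could be run per sub-band rather than on the composite spectrum, matching the sub-band variant mentioned in the abstract.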
Using a 10 year time-series dataset, we analyzed the effects of two severe droughts on water quality and ecosystem processes in a temperate, eutrophic estuary (Neuse River Estuary, North Carolina). During the droughts, dissolved inorganic nitrogen concentrations were on average 4...
Factors that affect the flow of patients through triage.
Lyons, Melinda; Brown, Ruth; Wears, Robert
2007-02-01
To use observational methods to objectively evaluate the organisation of triage and what issues may affect the effectiveness of the process. A two-phase study comprising observation of 16 h of triage in a London hospital emergency department and interviews with the triage staff to build a qualitative task analysis and study protocol for phase 2; observation and timing in triage for 1870 min including 257 patients and for 16 different members of the triage staff. No significant difference was found between grades of staff for the average triage time or the fraction of time absent from triage. In all, 67% of the time spent absent from triage was due to escorting patients into the department. The average time a patient waited in the reception before triage was 13 min 34 s; the average length of time to triage for a patient was 4 min 17 s. A significant increase in triage time was found when patients were triaged to a specialty, expected by a specialty, or were actively "seen and treated" in triage. Protocols to prioritise patients with potentially serious conditions to the front of the queue had a significantly positive effect on their waiting time. Supplementary tasks and distractions had varying effects on the timely assessment and triage of patients. The human factors method is applicable to the triage process and can identify key factors that affect the throughput at triage. Referring a patient to a specialty at triage affects significantly the triage workload; hence, alternative methods or management should be suggested. The decision to offer active treatment at triage increases the time taken, and should be based on clinical criteria and the workload determined by staffing levels. The proportion of time absent from triage could be markedly improved by support from porters or other non-qualified staff, as well as by proceduralised handovers from triage to the main clinical area. Triage productivity could be improved by all staff by becoming aware of the effect of the number of interruptions on the throughput of patients.
Method of detecting system function by measuring frequency response
Morrison, John L.; Morrison, William H.
2008-07-01
Real time battery impedance spectrum is acquired using one time record via Compensated Synchronous Detection (CSD). This parallel method enables battery diagnostics. The excitation current to a test battery is a sum of equal-amplitude sine waves at a few frequencies spread over the range of interest. The time profile of this signal has a duration of a few periods of the lowest frequency. The voltage response of the battery, with its average removed, gives the impedance of the battery in the time domain. Since the excitation frequencies are known, synchronous detection processes the time record and each component, both magnitude and phase, is obtained. For compensation, the components, except the one of interest, are reassembled in the time domain. The resulting signal is subtracted from the original signal and the component of interest is synchronously detected. This process is repeated for each component.
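Synchronous detection of a known multi-sine excitation reduces to correlating the response with quadrature references at each excitation frequency. A minimal Python sketch follows; it assumes the record spans an integer number of periods of every excitation frequency, and the compensation step (resynthesizing and subtracting the other components before re-detecting) is indicated only in the docstring.

```python
import numpy as np

def synchronous_detect(signal, freqs, fs):
    """Magnitude and phase of each known excitation frequency in one record.
    For CSD, the components other than the one of interest would then be
    resynthesized from these estimates, subtracted from the record, and the
    component of interest detected again."""
    t = np.arange(len(signal)) / fs
    comps = {}
    for f in freqs:
        i = 2.0 * np.mean(signal * np.sin(2 * np.pi * f * t))  # in-phase part
        q = 2.0 * np.mean(signal * np.cos(2 * np.pi * f * t))  # quadrature part
        comps[f] = (np.hypot(i, q), np.arctan2(q, i))
    return comps
```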
Amedee, Ronald G.; Maronge, Genevieve F.; Pinsky, William W.
2012-01-01
Background: Patient transfers from other hospitals within the Ochsner Health System to the main campus are coordinated through a Transfer Center that was established in fall 2008. We analyzed the transfer process to assess distinct opportunities to enhance the overall transition of patient care. Methods: We surveyed internal medicine residents and nocturnists to determine their satisfaction with transfers in terms of safety, efficiency, and usefulness of information provided at the time of transfer. After a kaizen event at which complementary goals for the institution and members of the study team were recognized and implemented, we resurveyed the group to evaluate improvement in the transfer process. Results: The preintervention average satisfaction score was 1.18 (SD=0.46), while the postintervention score was 3.7 (SD=1.01). A t test showed a significant difference in the average scores between the preintervention and postintervention surveys (P<0.0001). Conclusions: By including residents in the transfer calls (a result of the kaizen event), data were collected that facilitated fewer and higher quality handoffs that were performed in less time. In addition, the process resulted in increased awareness of the value of resident participation in institutional quality improvement projects. PMID:23267257
A theory of post-stall transients in axial compression systems. I - Development of equations
NASA Technical Reports Server (NTRS)
Moore, F. K.; Greitzer, E. M.
1985-01-01
An approximate theory is presented for post-stall transients in multistage axial compression systems. The theory leads to a set of three simultaneous nonlinear third-order partial differential equations for pressure rise, and average and disturbed values of flow coefficient, as functions of time and angle around the compressor. By a Galerkin procedure, angular dependence is averaged, and the equations become first order in time. These final equations are capable of describing the growth and possible decay of a rotating-stall cell during a compressor mass-flow transient. It is shown how rotating-stall-like and surgelike motions are coupled through these equations, and also how the instantaneous compressor pumping characteristic changes during the transient stall process.
Dominguez, Luis A.; Yildirim, Battalgazi; Husker, Allen L.; Cochran, Elizabeth S.; Christensen, Carl; Cruz-Atienza, Victor M.
2015-01-01
Each volunteer computer monitors ground motion and communicates using the Berkeley Open Infrastructure for Network Computing (BOINC; Anderson, 2004). Using a standard short-term average/long-term average (STA/LTA) algorithm (Earle and Shearer, 1994; Cochran, Lawrence, Christensen, Chung, 2009; Cochran, Lawrence, Christensen, and Jakka, 2009), volunteer computer and sensor systems detect abrupt changes in the acceleration recordings. Each time a possible trigger signal is declared, a small package of information containing sensor and ground-motion information is streamed to one of the QCN servers (Chung et al., 2011). Trigger signals, correlated in space and time, are then processed by the QCN server to look for potential earthquakes.
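A standard STA/LTA trigger of the kind cited above compares a short-term average of signal amplitude with a long-term average and declares a trigger when their ratio crosses a threshold. A compact, non-causal Python sketch follows; the window lengths and threshold are typical values, not QCN's.

```python
import numpy as np

def sta_lta_trigger(accel, fs, sta_win=1.0, lta_win=30.0, threshold=3.0):
    """Non-causal STA/LTA sketch: trigger where the short-term average of
    |acceleration| exceeds `threshold` times the long-term average."""
    x = np.abs(np.asarray(accel, dtype=float))
    sta_n, lta_n = int(sta_win * fs), int(lta_win * fs)
    sta = np.convolve(x, np.ones(sta_n) / sta_n, mode="same")
    lta = np.convolve(x, np.ones(lta_n) / lta_n, mode="same")
    ratio = sta / np.maximum(lta, 1e-12)      # guard against a silent channel
    return np.flatnonzero(ratio > threshold)  # candidate trigger samples
```

An operational detector would run causally and stream only the trigger metadata, as the QCN clients do.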
A cross-diffusion system derived from a Fokker-Planck equation with partial averaging
NASA Astrophysics Data System (ADS)
Jüngel, Ansgar; Zamponi, Nicola
2017-02-01
A cross-diffusion system for two components with a Laplacian structure is analyzed on the multi-dimensional torus. This system, which was recently suggested by P.-L. Lions, is formally derived from a Fokker-Planck equation for the probability density associated with a multi-dimensional Itō process, assuming that the diffusion coefficients depend on partial averages of the probability density with exponential weights. A main feature is that the diffusion matrix of the limiting cross-diffusion system is generally neither symmetric nor positive definite, but its structure allows for the use of entropy methods. The global-in-time existence of positive weak solutions is proved and, under a simplifying assumption, the large-time asymptotics is investigated.
Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D
2016-03-01
The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating hearing threshold and may help in diagnosing certain pathologies. Stimulation sequences used to record AEPs at fast rates must be designed with a certain jitter, i.e., they are not strictly periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, in which case the time-invariance assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariance assumption in jittered stimulation sequences. The proposed method [Split-IRSA] is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines of this technique are presented in this paper. The results of this study show that Split-IRSA presents adequate performance and that both fast and slow mechanisms of adaptation influence the evoked-response morphology; thus both mechanisms should be considered when time-invariance is assumed. The significance of these findings is discussed. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Dynamic Allocation of SPM Based on Time-Slotted Cache Conflict Graph for System Optimization
NASA Astrophysics Data System (ADS)
Wu, Jianping; Ling, Ming; Zhang, Yang; Mei, Chen; Wang, Huan
This paper proposes a novel dynamic Scratch-pad Memory allocation strategy to optimize the energy consumption of the memory sub-system. Firstly, the whole program execution process is sliced into several time slots along the temporal dimension; thereafter, a Time-Slotted Cache Conflict Graph (TSCCG) is introduced to model the behavior of Data Cache (D-Cache) conflicts within each time slot. Then, Integer Nonlinear Programming (INP) is applied, which avoids the time-consuming linearization process, to select the most profitable data pages. The Virtual Memory System (VMS) is adopted to remap those data pages which would cause severe cache conflicts within a time slot to SPM. In order to minimize the swapping overhead of dynamic SPM allocation, a novel SPM controller with a tightly coupled DMA is introduced to issue the swapping operations without the CPU's intervention. Last but not least, this paper quantitatively discusses the fluctuation of system energy profit for different MMU page sizes and time slot durations. According to our design space exploration, the proposed method can optimize all of the data segments, including global data, heap and stack data in general, and reduce the total energy consumption by 27.28% on average, up to 55.22%, with a marginal performance improvement. Compared to the conventional static CCG (Cache Conflict Graph), our approach can obtain 24.7% energy profit on average, up to 30.5%, with a slight boost in performance.
Seattle's minimum wage ordinance did not affect supermarket food prices by food processing category.
Spoden, Amanda L; Buszkiewicz, James H; Drewnowski, Adam; Long, Mark C; Otten, Jennifer J
2018-06-01
To examine the impacts of Seattle's minimum wage ordinance on food prices by food processing category. Supermarket food prices were collected for 106 items using a University of Washington Center for Public Health Nutrition market basket at affected and unaffected supermarket chain stores at three times: March 2015 (1-month pre-policy enactment), May 2015 (1-month post-policy enactment) and May 2016 (1-year post-policy enactment). Food items were categorized into four food processing groups, from minimally to ultra-processed. Data were analysed across time using a multilevel, linear difference-in-differences model at the store and price level stratified by level of food processing. Six large supermarket chain stores located in Seattle ('intervention') affected by the policy and six same-chain but unaffected stores in King County ('control'), Washington, USA. One hundred and six food and beverage items. The largest change in average price by food item was +$US 0·53 for 'processed foods' in King County between 1-month post-policy and 1-year post-policy enactment (P < 0·01). The smallest change was $US 0·00 for 'unprocessed or minimally processed foods' in Seattle between 1-month post-policy and 1-year post-policy enactment (P = 0·94). No significant changes in averaged chain prices were observed across food processing level strata in Seattle v. King County stores at 1-month or 1-year post-policy enactment. Supermarket food prices do not appear to be differentially impacted by Seattle's minimum wage ordinance by level of the food's processing. These results suggest that the early implementation of a city-level minimum wage policy does not alter supermarket food prices by level of food processing.
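The comparison design above is a difference-in-differences: affected (Seattle) versus unaffected (King County) stores, before versus after enactment. Stripped of the multilevel structure and the processing-level strata, the estimator reduces to the interaction coefficient in an OLS regression, as in this sketch with fabricated toy numbers.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated toy data: item prices by store group and survey wave.
df = pd.DataFrame({
    "price":   [2.00, 2.10, 1.95, 2.05, 3.10, 3.15, 3.05, 3.12],
    "seattle": [1, 1, 0, 0, 1, 1, 0, 0],   # store affected by the ordinance?
    "post":    [0, 1, 0, 1, 0, 1, 0, 1],   # observed after enactment?
})
model = smf.ols("price ~ seattle * post", data=df).fit()
# The seattle:post interaction is the difference-in-differences estimate.
print(model.params["seattle:post"])
```

Running this regression separately within each food processing stratum mirrors the stratified analysis the study reports.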
Improving a Dental School's Clinic Operations Using Lean Process Improvement.
Robinson, Fonda G; Cunningham, Larry L; Turner, Sharon P; Lindroth, John; Ray, Deborah; Khan, Talib; Yates, Audrey
2016-10-01
The term "lean production," also known as "Lean," describes a process of operations management pioneered at the Toyota Motor Company that contributed significantly to the success of the company. Although developed by Toyota, the Lean process has been implemented at many other organizations, including those in health care, and should be considered by dental schools in evaluating their clinical operations. Lean combines engineering principles with operations management and improvement tools to optimize business and operating processes. One of the core concepts is relentless elimination of waste (non-value-added components of a process). Another key concept is utilization of individuals closest to the actual work to analyze and improve the process. When the medical center of the University of Kentucky adopted the Lean process for improving clinical operations, members of the College of Dentistry trained in the process applied the techniques to improve inefficient operations at the Walk-In Dental Clinic. The purpose of this project was to reduce patients' average in-the-door-to-out-the-door time from over four hours to three hours within 90 days. Achievement of this goal was realized by streamlining patient flow and strategically relocating key phases of the process. This initiative resulted in patient benefits such as shortening average in-the-door-to-out-the-door time by over an hour, improving satisfaction by 21%, and reducing negative comments by 24%, as well as providing opportunity to implement the electronic health record, improving teamwork, and enhancing educational experiences for students. These benefits were achieved while maintaining high-quality patient care with zero adverse outcomes during and two years following the process improvement project.
NASA Astrophysics Data System (ADS)
Megawati, Jannah, Reni Ainun; Rahayuningtiyas, Indi
2017-01-01
This research compared white and blue silica gels as adsorbents for ethanol purification via adsorptive distillation (AD) at 1 atm pressure. The effect of process duration on the purification was also recorded and studied to evaluate the performance of the designed AD equipment. The experiment was conducted using a boiling flask covered with a heating mantle, with the temperature maintained at 78°C. The vapour flowed into the adsorbent column and was condensed using water as a cooling medium. The initial ethanol concentration was 90.8% v/v and the volume was 300 mL. The experiments show that the designed AD equipment can be used to purify ethanol. The average vapour velocity was about 39.29 and 45.91 m/s for white and blue silica gels, respectively, which is considered very high; therefore saturated adsorption could not be obtained. The highest ethanol concentration achieved using white silica gel was about 96.671% v/v after 50 minutes. Thus AD with white silica gel showed good performance and passed the azeotropic point. AD with blue silica gel showed a different result: its adsorption failed to break the azeotropic point. The average outlet water concentration for white and blue silica gels was 3.54 and 3.42 mol/L, respectively. Based on the weight ratio of adsorbed water per adsorbent at 55 minutes, this ratio for blue silica gel was about 0.053 g water/g adsorbent. The time required by the blue silica to achieve 0.5 w(water adsorbed)/w(water initial) was 45 minutes, with an average outlet water concentration of 3.42 mol/L; the time required by the white silica was 35 minutes, with an average outlet water concentration of 3.54 mol/L. Based on these results, the blue silica as an adsorbent for AD of the ethanol-water mixture is better than white silica gel.
Clinical time series prediction: towards a hierarchical dynamical system framework
Liu, Zitao; Hauskrecht, Milos
2014-01-01
Objective: Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effect of various patient management interventions and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods: Our hierarchical dynamical system framework for modeling clinical time series combines advantages of the two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results: We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion: A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671
Multiscale analysis of information dynamics for linear multivariate processes.
Faes, Luca; Montalto, Alessandro; Stramaglia, Sebastiano; Nollo, Giandomenico; Marinazzo, Daniele
2016-08-01
In the study of complex physical and physiological systems represented by multivariate time series, an issue of great interest is the description of the system dynamics over a range of different temporal scales. While information-theoretic approaches to the multiscale analysis of complex dynamics are being increasingly used, the theoretical properties of the applied measures are poorly understood. This study introduces for the first time a framework for the analytical computation of information dynamics for linear multivariate stochastic processes explored at different time scales. After showing that the multiscale processing of a vector autoregressive (VAR) process introduces a moving average (MA) component, we describe how to represent the resulting VARMA process using state-space (SS) models and how to exploit the SS model parameters to compute analytical measures of information storage and information transfer for the original and rescaled processes. The framework is then used to quantify multiscale information dynamics for simulated unidirectionally and bidirectionally coupled VAR processes, showing that rescaling may lead to insightful patterns of information storage and transfer but also to potentially misleading behaviors.
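The first step of the framework, the change of time scale, is just an average over non-overlapping windows of length tau followed by downsampling. The sketch below implements that step and, as a simplified univariate stand-in for the paper's analytical state-space computation, estimates information storage from the lag-1 autocorrelation, which is exact only for a Gaussian AR(1) process.

```python
import numpy as np

def rescale(x, tau):
    """Multiscale rescaling: average x over non-overlapping windows of length
    tau (the step that introduces the MA component), then downsample."""
    n = (len(x) // tau) * tau
    return x[:n].reshape(-1, tau).mean(axis=1)

def information_storage_ar1(y):
    """Information storage under a Gaussian AR(1) approximation:
    S = -0.5 * ln(1 - rho1^2), with rho1 the lag-1 autocorrelation."""
    y = y - y.mean()
    rho1 = np.dot(y[:-1], y[1:]) / np.dot(y, y)
    return -0.5 * np.log(1.0 - rho1 ** 2)

rng = np.random.default_rng(1)
x = np.empty(20000)                       # simulate an AR(1) process
x[0] = 0.0
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + rng.standard_normal()
for tau in (1, 2, 5, 10):
    print(tau, round(information_storage_ar1(rescale(x, tau)), 3))
```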
NASA Astrophysics Data System (ADS)
Kar, Siddhartha; Chakraborty, Sujoy; Dey, Vidyut; Ghosh, Subrata Kumar
2017-10-01
This paper investigates the application of the Taguchi method with fuzzy logic for multi-objective optimization of roughness parameters in the electro discharge coating process of Al-6351 alloy with a powder metallurgically compacted SiC/Cu tool. A Taguchi L16 orthogonal array was employed to investigate the roughness parameters by varying tool parameters such as composition and compaction load and electro discharge machining parameters such as pulse-on time and peak current. Crucial roughness parameters, namely centre line average roughness, average maximum height of the profile and mean spacing of local peaks of the profile, were measured on the coated specimens. The signal-to-noise ratios were fuzzified to optimize the roughness parameters through a single comprehensive output measure (COM). The best COM was obtained with lower values of compaction load, pulse-on time and current and a 30:70 (SiC:Cu) tool composition. Analysis of variance was carried out and a significant COM model was observed, with peak current yielding the highest contribution followed by pulse-on time, compaction load and composition. The deposited layer was characterized by X-ray diffraction analysis, which confirmed the presence of tool materials on the work piece surface.
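The Taguchi signal-to-noise ratio feeding the fuzzy unification is, for roughness responses where smaller is better, SN = -10·log10(mean(y²)). A one-function Python sketch, with hypothetical replicate readings:

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi S/N ratio (dB) for 'smaller is better' responses such as Ra."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Replicated roughness readings (hypothetical, in micrometres) for one trial:
print(sn_smaller_the_better([1.8, 2.1, 1.9]))
```

One such S/N value per roughness parameter and trial would then be fuzzified and combined into the single COM score described above.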
Locomotion of microorganisms near a no-slip boundary in a viscoelastic fluid
NASA Astrophysics Data System (ADS)
Yazdi, Shahrzad; Ardekani, Arezoo M.; Borhan, Ali
2014-10-01
Locomotion of microorganisms plays a vital role in most of their biological processes. In many of these processes, microorganisms are exposed to complex fluids while swimming in confined domains, such as spermatozoa in mucus of mammalian reproduction tracts or bacteria in extracellular polymeric matrices during biofilm formation. Thus, it is important to understand the kinematics of propulsion in a viscoelastic fluid near a no-slip boundary. We use a squirmer model with a time-reversible body motion to analytically investigate the swimming kinematics in an Oldroyd-B fluid near a wall. Analysis of the time-averaged motion of the swimmer shows that both pullers and pushers in a viscoelastic fluid swim towards the no-slip boundary if they are initially located within a small domain of "attraction" in the vicinity of the wall. In contrast, neutral swimmers always move towards the wall regardless of their initial distance from the wall. Outside the domain of attraction, pullers and pushers are both repelled from the no-slip boundary. Time-averaged locomotion is most pronounced at a Deborah number of unity. We examine the swimming trajectories of different types of swimmers as a function of their initial orientation and distance from the no-slip boundary.
Riera, Amalis; Ford, John K; Ross Chapman, N
2013-09-01
Killer whales in British Columbia are at risk, and little is known about their winter distribution. Passive acoustic monitoring of their year-round habitat is a valuable supplemental method to traditional visual and photographic surveys. However, long-term acoustic studies of odontocetes have some limitations, including the generation of large amounts of data that require highly time-consuming processing. There is a need to develop tools and protocols to maximize the efficiency of such studies. Here, two types of analysis, real-time analysis and long-term spectral averages (LTSAs), were compared to assess their performance at detecting killer whale calls in long-term acoustic recordings. In addition, two different duty cycles, 1/3 and 2/3, were tested. Both the use of long-term spectral averages and a lower duty cycle resulted in a decrease in call detection and positive pod identification, leading to underestimation of the amount of time the whales were present. The impact of these limitations should be considered in future killer whale acoustic surveys. A compromise between a lower-resolution data processing method and a higher duty cycle is suggested for maximum methodological efficiency.
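A long-term spectral average compresses a recording by averaging consecutive spectrogram columns into coarse time bins, which is exactly the trade of detection sensitivity for review speed examined above. A minimal Python sketch using SciPy follows; the FFT length and averaging interval are illustrative, not those of the study.

```python
import numpy as np
from scipy.signal import spectrogram

def long_term_spectral_average(x, fs, nperseg=1024, avg_s=60.0):
    """Compress a recording into an LTSA: average consecutive spectrogram
    columns over avg_s-second blocks."""
    f, t, sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    cols = max(1, int(avg_s / (t[1] - t[0])))      # columns per LTSA bin
    n = (sxx.shape[1] // cols) * cols
    ltsa = sxx[:, :n].reshape(sxx.shape[0], -1, cols).mean(axis=2)
    return f, ltsa
```

Short calls that occupy only a fraction of one averaging block are diluted in the LTSA, which is consistent with the reduced call detection reported above.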
A method of estimating in-stream residence time of water in rivers
NASA Astrophysics Data System (ADS)
Worrall, F.; Howden, N. J. K.; Burt, T. P.
2014-05-01
This study develops a method for estimating the average in-stream residence time of water in a river channel and across large catchments, i.e. the time between water entering a river and reaching a downstream monitoring point. The methodology uses river flow gauging data to integrate Manning's equation along a length of channel for different percentile flows. The method was developed and tested for the River Tees in northern England and then applied across the United Kingdom (UK). The study developed methods to predict channel width and main channel length from catchment area. For an 818 km2 catchment with a channel length of 79 km, the in-stream residence time at the 50% exceedance flow was 13.8 h. The method was applied to nine UK river basins and the results showed that in-stream residence time was related to the average slope of a basin and its average annual rainfall. For the UK as a whole, the discharge-weighted in-stream residence time was 26.7 h for the median flow. At median flow, 50% of the discharge-weighted in-stream residence time was due to only 6 of the 323 catchments considered. Since only a few large rivers dominate the in-stream residence time, these rivers will dominate key biogeochemical processes controlling export at the national scale. The implications of the results for biogeochemistry, especially the turnover of carbon in rivers, are discussed.
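The core of the method, integrating Manning's equation along the channel at a given percentile flow, reduces to summing reach traversal times. A minimal sketch under simplifying assumptions (wide rectangular channel so hydraulic radius ≈ depth; the reach lengths, widths, slopes, roughness n, and Q50 value are hypothetical, not the paper's calibrated relations):

```python
import numpy as np

def reach_velocity(Q, width, slope, n=0.035):
    """Solve Manning's equation V = (1/n) R^(2/3) S^(1/2) for a wide
    rectangular channel, where R ~ depth and continuity gives Q = V*width*depth."""
    depth = 1.0
    for _ in range(50):                      # fixed-point iteration on depth
        V = (1.0 / n) * depth ** (2.0 / 3.0) * np.sqrt(slope)
        depth = Q / (V * width)
    return V

# Hypothetical reaches: length (m), width (m), slope (-), all at the Q50 flow
reaches = [(20_000, 25.0, 0.002), (30_000, 40.0, 0.001), (29_000, 55.0, 0.0005)]
Q50 = 30.0                                   # m^3/s, hypothetical median flow
t = sum(L / reach_velocity(Q50, w, s) for L, w, s in reaches)
print(f"In-stream residence time ~ {t / 3600:.1f} h")
```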
Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing
NASA Technical Reports Server (NTRS)
Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.
1995-01-01
Initial research in this study using theoretical radiation transport models established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks on a simulated SSME background were produced and recorded from a visible wavelength video camera. These data were processed frame-by-frame over the time interval of interest using an image processor implementation of the leak detection algorithm. In addition, a 20 second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full frame mean value versus time verify the effectiveness of the system.
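The detection pipeline described (highpass filter, moving average, absolute value, threshold) is straightforward to sketch on a single pixel's time series. A minimal NumPy sketch, using a first difference as the highpass stage; the filter length and threshold are hypothetical placeholders, not the study's values:

```python
import numpy as np

def detect_leak(intensity, ma_len=15, threshold=8.0):
    """Temporal leak detection on one pixel's intensity sequence:
    highpass (first difference) -> moving average -> |.| -> threshold."""
    highpass = np.diff(intensity, prepend=intensity[0])
    kernel = np.ones(ma_len) / ma_len
    smoothed = np.convolve(highpass, kernel, mode="same")
    return np.abs(smoothed) > threshold        # binary leak/no-leak per frame

# Synthetic pixel trace: a sudden, sustained intensity jump at frame 100
frames = np.concatenate([np.full(100, 50.0), np.full(100, 200.0)])
frames += np.random.default_rng(0).normal(0, 2, frames.size)
print(np.flatnonzero(detect_leak(frames))[:5])  # first frames flagged
```

Averaging the thresholded (or pre-threshold) output over the full frame would give the single time-varying mean-value estimate the abstract mentions.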
Tian, Chao; Wang, Lixin; Novick, Kimberly A
2016-10-15
High-precision analysis of atmospheric water vapor isotope compositions, especially δ(17)O values, can be used to improve our understanding of multiple hydrological and meteorological processes (e.g., differentiate equilibrium or kinetic fractionation). This study focused on assessing, for the first time, how the accuracy and precision of vapor δ(17)O laser spectroscopy measurements depend on vapor concentration, delta range, and averaging-time. A Triple Water Vapor Isotope Analyzer (T-WVIA) was used to evaluate the accuracy and precision of δ(2)H, δ(18)O and δ(17)O measurements. The sensitivity of accuracy and precision to water vapor concentration was evaluated using two international standards (GISP and SLAP2). The sensitivity of precision to delta value was evaluated using four working standards spanning a large delta range. The sensitivity of precision to averaging-time was assessed by measuring one standard continuously for 24 hours. Overall, the accuracy and precision of the δ(2)H, δ(18)O and δ(17)O measurements were high. Across all vapor concentrations, the accuracy of δ(2)H, δ(18)O and δ(17)O observations ranged from 0.10‰ to 1.84‰, 0.08‰ to 0.86‰ and 0.06‰ to 0.62‰, respectively, and the precision ranged from 0.099‰ to 0.430‰, 0.009‰ to 0.080‰ and 0.022‰ to 0.054‰, respectively. The accuracy and precision of all isotope measurements were sensitive to concentration, with the higher accuracy and precision generally observed under moderate vapor concentrations (i.e., 10000-15000 ppm) for all isotopes. The precision was also sensitive to the range of delta values, although the effect was not as large compared with the sensitivity to concentration. The precision was much less sensitive to averaging-time than the concentration and delta range effects. The accuracy and precision performance of the T-WVIA depend on concentration but depend less on the delta value and averaging-time. The instrument can simultaneously and continuously measure δ(2)H, δ(18)O and δ(17)O values in water vapor, opening a new window to better understand ecological, hydrological and meteorological processes. Copyright © 2016 John Wiley & Sons, Ltd.
2013-01-01
Background Statistical process control (SPC), an initiative from the industrial sphere, has recently been applied in health care and public health surveillance. SPC methods assume independent observations, and process autocorrelation has been associated with an increase in false-alarm frequency. Methods Monthly mean raw mortality (at hospital discharge) time series, 1995–2009, at the individual Intensive Care Unit (ICU) level, were generated from the Australia and New Zealand Intensive Care Society adult patient database. Evidence for (i) series autocorrelation and seasonality was demonstrated using (partial) autocorrelation ((P)ACF) function displays and classical series decomposition, and (ii) “in-control” status was sought using risk-adjusted (RA) exponentially weighted moving average (EWMA) control limits (3 sigma). Risk adjustment was achieved using a random coefficient (intercept as ICU site and slope as APACHE III score) logistic regression model, generating an expected mortality series. Application of time-series methods to an exemplar complete ICU series (1995 to end-2009) was via Box-Jenkins methodology: autoregressive moving average (ARMA) and (G)ARCH ((Generalised) Autoregressive Conditional Heteroscedasticity) models, the latter addressing volatility of the series variance. Results The overall data set, 1995-2009, consisted of 491324 records from 137 ICU sites; average raw mortality was 14.07%; average (SD) raw and expected mortalities ranged from 0.012 (0.113) and 0.013 (0.045) to 0.296 (0.457) and 0.278 (0.247), respectively. For the raw mortality series: 71 sites had continuous data for assessment up to or beyond lag 40 and 35% had autocorrelation through to lag 40; and of 36 sites with continuous data for ≥ 72 months, all demonstrated marked seasonality. Similar numbers and percentages were seen with the expected series. Out-of-control signalling was evident for the raw mortality series with respect to RA-EWMA control limits; a seasonal ARMA model, with GARCH effects, displayed white-noise residuals which were in-control with respect to EWMA control limits and one-step prediction error limits (3SE). The expected series was modelled with a multiplicative seasonal autoregressive model. Conclusions The data generating process of monthly raw mortality series at the ICU level displayed autocorrelation, seasonality and volatility. False-positive signalling of the raw mortality series was evident with respect to RA-EWMA control limits. A time series approach using residual control charts resolved these issues. PMID:23705957
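The EWMA recursion and 3-sigma limits behind such signalling are easy to reproduce. A minimal sketch with hypothetical data, not the study's risk-adjusted model:

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """EWMA recursion z_t = lam*x_t + (1-lam)*z_{t-1} with
    steady-state control limits mu0 +/- L*sigma*sqrt(lam/(2-lam))."""
    mu0, sigma = np.mean(x), np.std(x, ddof=1)
    z, prev = np.empty(len(x)), mu0
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
    half_width = L * sigma * np.sqrt(lam / (2 - lam))
    return z, mu0 - half_width, mu0 + half_width

# Hypothetical monthly mortality proportions
series = np.random.default_rng(1).normal(0.14, 0.02, 120)
z, lcl, ucl = ewma_chart(series)
print("out-of-control months:", np.flatnonzero((z < lcl) | (z > ucl)))
```

Autocorrelated input inflates the false-alarm rate of exactly this kind of chart, which is the paper's point; fitting a seasonal ARMA/GARCH model first and charting the white-noise residuals is the remedy the authors apply.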
Nanotexturing of High-Performance Woven Fabrics for Novel Composite Applications
2006-11-29
biocompatibility, and adhesion of dyes. At the same time the glow discharge assists in the removal of a weak boundary layer (WBL) residing on the... polyimides, polyetherimides, carbon fibers, silk, cellulose, wool, cotton, linen, etc. PLASMA ON. Average process speed: 15 ft/min; web width: >20 inches
40 CFR 63.2270 - How do I monitor and collect data to demonstrate continuous compliance?
Code of Federal Regulations, 2011 CFR
2011-07-01
... appropriate, monitor malfunctions, associated repairs, and required quality assurance or control activities... monitoring in continuous operation at all times that the process unit is operating. For purposes of calculating data averages, you must not use data recorded during monitoring malfunctions, associated repairs...
40 CFR 63.2270 - How do I monitor and collect data to demonstrate continuous compliance?
Code of Federal Regulations, 2014 CFR
2014-07-01
... control activities (including, as applicable, calibration checks and required zero and span adjustments), you must conduct all monitoring in continuous operation at all times that the process unit is operating. For purposes of calculating data averages, you must not use data recorded during monitoring...
40 CFR 63.2270 - How do I monitor and collect data to demonstrate continuous compliance?
Code of Federal Regulations, 2010 CFR
2010-07-01
... appropriate, monitor malfunctions, associated repairs, and required quality assurance or control activities... monitoring in continuous operation at all times that the process unit is operating. For purposes of calculating data averages, you must not use data recorded during monitoring malfunctions, associated repairs...
40 CFR 63.2270 - How do I monitor and collect data to demonstrate continuous compliance?
Code of Federal Regulations, 2013 CFR
2013-07-01
... control activities (including, as applicable, calibration checks and required zero and span adjustments), you must conduct all monitoring in continuous operation at all times that the process unit is operating. For purposes of calculating data averages, you must not use data recorded during monitoring...
Use of Anthropogenic Radioisotopes to Estimate Rates of Soil Redistribution by Wind
USDA-ARS's Scientific Manuscript database
Wind erosion results in soil degradation and fugitive dust emissions. The temporal and spatial variability of aeolian processes makes local estimates of long-term average erosion costly and time-consuming. Atmospheric testing of nuclear weapons during the 1950s and 1960s resulted in previously non...
Gould, J L
1980-02-01
In both their navigation and dance communication, bees are able to compensate for the sun's movement. When foragers are prevented from seeing the sun for 2 hours, they compensate by extrapolation, using the sun's rate of movement when last observed. These and other data suggest a time-averaging processing strategy in honey bee orientation.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-18
... Office of Management and Budget, Office of Information and Regulatory Affairs, Attention Department of... 3,000 enrollments per year. The average response time for reading the directions for the Federal... = 100 hours. The entire process of reading the letter and completing both forms would take 15 minutes...
Yue, Xiu; Yu, Guangping; Liu, Zhuhan; Tang, Jiali; Liu, Jian
2018-04-01
The long start-up time of the completely autotrophic nitrogen removal over nitrite (CANON) process is one of the main disadvantages of this system. In this paper, the CANON process with a submerged aerated biological filter (SABF) was rapidly started up within 26 days. It gave an average ammonium nitrogen removal rate (ANR) of 94.2% and a total nitrogen removal rate (TNR) of 81.3%. The phyla Proteobacteria and Planctomycetes were confirmed as containing the ammonia-oxidizing bacteria (AOB) and anaerobic ammonium-oxidizing bacteria (AnAOB). The genus Candidatus Brocadia was the major contributor to nitrogen removal. pH and temperature affect the performance of the CANON process. The experimental results showed that the optimum pH and temperature were 8.0 and 30 °C, respectively, which gave the highest average ANR and TNR values of 94.6% and 85.1%, respectively. This research could promote the nitrogen removal ability of the CANON process in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.
Ros, M; Franke-Whittle, I H; Morales, A B; Insam, H; Ayuso, M; Pascual, J A
2013-05-01
This study evaluated the feasibility of obtaining methane in anaerobic digestion (AD) from the waste products generated by the processing of fruit and vegetables. During the first phase (0-55 d) of the AD using sludge from fruit and vegetable processing, an average value of 244±88 L kg(-1) dry matter d(-1) of biogas production was obtained, and methane content reached 65% of the biogas. Co-digestion with chopped fresh artichoke wastes in a second phase (55-71 d) enhanced biogas production, and resulted in an average value of 354±68 L kg(-1) dry matter d(-1), with higher methane content (more than 70%). The archaeal community involved in methane production was studied using the ANAEROCHIP microarray and real-time PCR. Results indicated that species of Methanosaeta and Methanosarcina were important during the AD process. Methanosarcina numbers increased after the addition of chopped fresh artichoke, while Methanosaeta numbers decreased. Copyright © 2013 Elsevier Ltd. All rights reserved.
Rapid diagnosis of pulmonary tuberculosis
Sarmiento, José Mauricio Hernández; Restrepo, Natalia Builes; Mejía, Gloria Isabel; Zapata, Elsa; Restrepo, Mary Alejandra; Robledo, Jaime
2014-01-01
Introduction The World Health Organization estimated 9.4 million tuberculosis cases in 2009, with 1.7 million deaths as a consequence of treatment and diagnosis failures. Improving diagnostic methods for the rapid and timely detection of tuberculosis patients is critical to controlling the disease. The aim of this study was to evaluate the accuracy of cord factor detection on solid Middlebrook 7H11 thin-layer agar, compared to Lowenstein Jensen medium, for rapid tuberculosis diagnosis. Methods Patients with suspected tuberculosis were enrolled and their sputum samples were processed for direct smear and culture on Lowenstein Jensen medium and BACTEC MGIT 960, from which positive tubes were subcultured on Middlebrook 7H11 thin-layer agar. Statistical analysis was performed comparing culture results from Lowenstein Jensen medium and the thin-layer agar, and their corresponding average times for detecting Mycobacterium tuberculosis. The performance of cord factor detection was evaluated by determining its sensitivity, specificity, and positive and negative predictive values. Results 111 out of 260 patients were positive for M. tuberculosis by Lowenstein Jensen medium, with an average time ± standard deviation for detection of 22.3 ± 8.5 days. 115 patients were positive by the MGIT system with cord factor identification on Middlebrook 7H11 thin-layer agar, for which the average time ± standard deviation was 5.5 ± 2.6 days. Conclusion Cord factor detection on Middlebrook 7H11 thin-layer agar allows early and accurate tuberculosis diagnosis within an average time of 5 days, making this rapid diagnosis particularly important in patients with negative sputum smear. PMID:25419279
Derivation of Poisson and Nernst-Planck equations in a bath and channel from a molecular model.
Schuss, Z; Nadler, B; Eisenberg, R S
2001-09-01
Permeation of ions from one electrolytic solution to another, through a protein channel, is a biological process of considerable importance. Permeation occurs on a time scale of micro- to milliseconds, far longer than the femtosecond time scales of atomic motion. Direct simulations of atomic dynamics are not yet possible for such long-time scales; thus, averaging is unavoidable. The question is what and how to average. In this paper, we average a Langevin model of ionic motion in a bulk solution and protein channel. The main result is a coupled system of averaged Poisson and Nernst-Planck equations (CPNP) involving conditional and unconditional charge densities and conditional potentials. The resulting NP equations contain the averaged force on a single ion, which is the sum of two components. The first component is the gradient of a conditional electric potential that is the solution of Poisson's equation with conditional and permanent charge densities and boundary conditions of the applied voltage. The second component is the self-induced force on an ion due to surface charges induced only by that ion at dielectric interfaces. The ion induces surface polarization charge that exerts a significant force on the ion itself, not present in earlier PNP equations. The proposed CPNP system is not complete, however, because the electric potential satisfies Poisson's equation with conditional charge densities, conditioned on the location of an ion, while the NP equations contain unconditional densities. The conditional densities are closely related to the well-studied pair-correlation functions of equilibrium statistical mechanics. We examine a specific closure relation, which on the one hand replaces the conditional charge densities by the unconditional ones in the Poisson equation, and on the other hand replaces the self-induced force in the NP equation by an effective self-induced force. This effective self-induced force is nearly zero in the baths but is approximately equal to the self-induced force in and near the channel. The charge densities in the NP equations are interpreted as time averages over long times of the motion of a quasiparticle that diffuses with the same diffusion coefficient as that of a real ion, but is driven by the averaged force. In this way, continuum equations with averaged charge densities and mean-fields can be used to describe permeation through a protein channel.
Blessing, William; Ootsuka, Youichirou
2016-01-01
Charles Darwin noted that natural selection applies even to the hourly organization of daily life. Indeed, in many species, the day is segmented into active periods when the animal searches for food, and inactive periods when the animal digests and rests. This episodic temporal patterning is conventionally referred to as ultradian (<24 hours) rhythmicity. The average time between ultradian events is approximately 1-2 hours, but the interval is highly variable. The ultradian pattern is stochastic, jaggy rather than smooth, so that although the next event is likely to occur within 1-2 hours, it is not possible to predict the precise timing. When models of circadian timing are applied to the ultradian temporal pattern, the underlying assumption of true periodicity (stationarity) has distorted the analyses, so that the ultradian pattern is frequently averaged away and ignored. Each active ultradian episode commences with an increase in hippocampal theta rhythm, indicating the switch of attention to the external environment. During each active episode, behavioral and physiological processes, including changes in body and brain temperature, occur in an integrated temporal order, confirming organization by programs endogenous to the central nervous system. We describe methods for analyzing episodic ultradian events, including the use of wavelet mathematics to determine their timing and amplitude, and the use of fractal-based procedures to determine their complexity.
The association between childhood obesity and tooth eruption.
Must, Aviva; Phillips, Sarah M; Tybor, David J; Lividini, Keith; Hayes, Catherine
2012-10-01
Obesity is a growth-promoting process, as evidenced by its effect on the timing of puberty. Although studies are limited, obesity has been shown to affect the timing of tooth eruption. Both the timing and sequence of tooth eruption are important to overall oral health. The purpose of this study was to examine the association between obesity and tooth eruption. Data were combined from three consecutive cycles (2001-2006) of the National Health and Nutrition Examination Survey (NHANES) and analyzed to examine associations between the number of teeth erupted (NET) and obesity status (BMI z-score >95th percentile relative to the Centers for Disease Control and Prevention (CDC) growth reference) among children 5 up to 14 years of age, controlling for potential confounding by age, gender, race, and socioeconomic status (SES). Obesity is significantly associated with a higher average NET during the mixed dentition period. On average, the teeth of obese children erupted earlier than those of nonobese children, with obese children having on average 1.44 more erupted teeth after adjusting for age, gender, and race/ethnicity (P < 0.0001). SES was not a confounder of the observed associations. Obese children, on average, have significantly more erupted teeth than nonobese children after adjusting for gender, age, and race. These findings may have clinical importance in dental and orthodontic medicine, both in terms of risk for dental caries, due to the extended length of time teeth are exposed in the oral cavity, and in terms of eruption sequencing, which may increase the likelihood of malocclusions.
A Geomagnetic Estimate of Mean Paleointensity
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
2004-01-01
To test a statistical hypothesis about Earth's magnetic field against paleomagnetism, the present field is used to estimate time-averaged paleointensity. The estimate uses the modern magnetic multipole spectrum R(n), which gives the mean square induction represented by spherical harmonics of degree n averaged over the sphere of radius a = 6371.2 km. The hypothesis asserts that low-degree multipole powers of the core-source field are distributed as chi-squared with 2n+1 degrees of freedom, with expectation values set by an amplitude K and a power of the ratio c/a, where c is the 3480 km radius of the Earth's core. (This is compatible with a field that is usually dominated by the geocentric axial dipole.) The amplitude K is estimated by fitting theoretical to observational spectra through degree 12. The resulting calibrated expectation spectrum is summed through degree 12 to estimate the expected square intensity F^2. The sum also estimates F^2 averaged over geologic time, insofar as the present magnetic spectrum is a fair sample of that generated in the past by core geodynamic processes. Additional information is included in the original extended abstract.
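In symbols, the estimate sums a calibrated expectation spectrum. A sketch in standard Lowes-Mauersberger notation; the (c/a)^(2n+4) downward-continuation scaling is the usual form for a core-source field and is supplied here as an assumption, not quoted from the abstract:

```latex
\begin{align}
  R_n &= (n+1)\sum_{m=0}^{n}\bigl[(g_n^m)^2 + (h_n^m)^2\bigr], &
  \langle R_n \rangle &= K\left(\frac{c}{a}\right)^{2n+4}, &
  \langle F^2 \rangle &\approx \sum_{n=1}^{12} \langle R_n \rangle,
\end{align}
```

where g_n^m and h_n^m are the Gauss coefficients at radius a, and K is fitted to the observed spectrum through degree 12.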
Motor ability and inhibitory processes in children with ADHD: a neuroelectric study.
Hung, Chiao-Ling; Chang, Yu-Kai; Chan, Yuan-Shuo; Shih, Chia-Hao; Huang, Chung-Ju; Hung, Tsung-Min
2013-06-01
The purpose of the current study was to examine the relationship between motor ability and response inhibition using behavioral and electrophysiological indices in children with ADHD. A total of 32 participants were recruited and underwent a motor ability assessment using the Basic Motor Ability Test-Revised (BMAT), together with a Go/No-Go task and simultaneous event-related potential (ERP) measurements. The results indicated that the BMAT scores were positively associated with the behavioral and ERP measures. Specifically, the BMAT average score was associated with a faster reaction time and higher accuracy, whereas higher BMAT subset scores predicted a shorter P3 latency in the Go condition. Although the association between the BMAT average score and the No-Go accuracy was limited, higher BMAT average and subset scores predicted a shorter N2 and P3 latency and a larger P3 amplitude in the No-Go condition. These findings suggest that motor abilities may play a beneficial role in the cognitive performance of children with ADHD.
TEMPORAL EVOLUTION AND SPATIAL DISTRIBUTION OF WHITE-LIGHT FLARE KERNELS IN A SOLAR FLARE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawate, T.; Ishii, T. T.; Nakatani, Y.
2016-12-10
On 2011 September 6, we observed an X2.1-class flare in continuum and Hα with a frame rate of about 30 Hz. After processing images of the event by using a speckle-masking image reconstruction, we identified white-light (WL) flare ribbons on opposite sides of the magnetic neutral line. We derive the light curve decay times of the WL flare kernels at each resolution element by assuming that the kernels consist of one or two components that decay exponentially, starting from the peak time. As a result, 42% of the pixels have two decay-time components with average decay times of 15.6 and 587 s, whereas the average decay time is 254 s for WL kernels with only one decay-time component. The peak intensities of the shorter decay-time component exhibit good spatial correlation with the WL intensity, whereas the peak intensities of the long decay-time components tend to be larger in the early phase of the flare at the inner part of the flare ribbons, close to the magnetic neutral line. The average intensity of the longer decay-time components is 1.78 times higher than that of the shorter decay-time components. If the shorter decay time is determined by either the chromospheric cooling time or the nonthermal ionization timescale and the longer decay time is attributed to the coronal cooling time, this result suggests that WL sources from both regions appear in 42% of the WL kernels and that WL emission of coronal origin is sometimes stronger than that of chromospheric origin.
Lin, Chitsan; Liou, Naiwei; Chang, Pao-Erh; Yang, Jen-Chin; Sun, Endy
2007-04-01
Although most coke oven research is focused on the emission of polycyclic aromatic hydrocarbons, well-known carcinogens, little has been done on the emission of volatile organic compounds, some of which are also thought to be hazardous to workers and the environment. To profile coke oven gas (COG) emissions, we set up an open-path Fourier transform infrared (OP-FTIR) system on top of a battery of coke ovens at a steel mill located in southern Taiwan and monitored average emissions in a coke processing area for 16.5 hr. Nine COGs were identified, including ammonia, CO, methane, ethane, ethylene, acetylene, propylene, cyclohexane, and o-xylene. Time series plots indicated that the type of pollutants differed over time, suggesting that different emission sources (e.g., coke pushing, quench tower, etc.) were involved at different times over the study period. This observation was confirmed by the low cross-correlation coefficients of the COGs. It was also found that, with the help of meteorological analysis, the data collected by the OP-FTIR system could be analyzed effectively to characterize differences in the location of sources. Although traditional single-point sampling of emissions, which involves sampling various sources in a coke processing area at several different times, provides a credible emission profile, our findings strongly suggest that it is not nearly as efficient or as cost-effective as the continuous line-averaged method used in this study. This method would make it easier and cheaper for engineers and health risk assessors to identify and control fugitive volatile organic compound emissions and to improve environmental health.
NASA Astrophysics Data System (ADS)
Li, Qingchen; Cao, Guangxi; Xu, Wei
2018-01-01
Based on a multifractal detrending moving average algorithm (MFDMA), this study uses the autoregressive fractionally integrated moving average (ARFIMA) process to demonstrate the effectiveness of MFDMA in the detection of auto-correlation at different sample lengths and to simulate artificial time series with the same length as the actual sample interval. We analyze the effect of predictable and unpredictable meteorological disasters on the US and Chinese stock markets and the degree of long memory in different sectors. Furthermore, we conduct a preliminary investigation to determine whether the fluctuations of financial markets caused by meteorological disasters derive from the normal evolution of the financial system itself. We also propose several reasonable recommendations.
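Simulating an ARFIMA(0, d, 0) series of matched length, of the kind used to benchmark long-memory detection, can be done with the fractional-differencing MA expansion. A minimal sketch; the d value and length are hypothetical:

```python
import numpy as np

def arfima_0d0(n, d, rng=None):
    """ARFIMA(0,d,0): x_t = sum_k psi_k * eps_{t-k}, with MA weights from
    the recursion psi_k = psi_{k-1} * (k - 1 + d) / k, psi_0 = 1."""
    rng = rng or np.random.default_rng()
    psi = np.empty(n)
    psi[0] = 1.0
    for k in range(1, n):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    eps = rng.standard_normal(n)
    return np.convolve(eps, psi)[:n]         # truncated infinite MA

x = arfima_0d0(4096, d=0.3)                  # long-memory series, H ~ d + 0.5
print(x[:5])
```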
Photophysical Behaviors of Single Fluorophores Localized on Zinc Oxide Nanostructures
Fu, Yi; Zhang, Jian; Lakowicz, Joseph R.
2012-01-01
Single-molecule fluorescence spectroscopy has now been widely used to investigate complex dynamic processes which would normally be obscured in an ensemble-averaged measurement. In this report we studied photophysical behaviors of single fluorophores in proximity to zinc oxide nanostructures by single-molecule fluorescence spectroscopy and time-correlated single-photon counting (TCSPC). Single fluorophores on ZnO surfaces showed enhanced fluorescence brightness to various extents compared with those on glass; the single-molecule time trajectories also illustrated pronounced fluctuations of emission intensities, with time periods distributed from milliseconds to seconds. We attribute fluorescence fluctuations to the interfacial electron transfer (ET) events. The fluorescence fluctuation dynamics were found to be inhomogeneous from molecule to molecule and from time to time, showing significant static and dynamic disorders in the interfacial electron transfer reaction processes. PMID:23109903
Load balancing for massively-parallel soft-real-time systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hailperin, M.
1988-09-01
Global load balancing, if practical, would allow the effective use of massively-parallel ensemble architectures for large soft-real-time problems. The challenge is to replace quick global communication, which is impractical in a massively-parallel system, with statistical techniques. In this vein, the author proposes a novel approach to decentralized load balancing based on statistical time-series analysis. Each site estimates the system-wide average load using information about past loads of individual sites and attempts to equal that average. This estimation process is practical because the soft-real-time systems of interest naturally exhibit loads that are periodic, in a statistical sense akin to seasonality in econometrics. It is shown how this load-characterization technique can be the foundation for a load-balancing system in an architecture employing cut-through routing and an efficient multicast protocol.
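The estimation step can be sketched simply: each site predicts the current system-wide average load from the periodic (seasonal) structure of past loads it has heard about, rather than from instantaneous global communication. A minimal illustration; the period, history handling, and load model are hypothetical, not the report's design:

```python
import numpy as np

def estimate_global_load(history, period):
    """Estimate the current system-wide average load from past samples by
    averaging previous observations at the same phase of the load cycle."""
    history = np.asarray(history, dtype=float)
    phase = len(history) % period              # phase of the cycle "now"
    same_phase = history[phase::period]        # seasonal slice of past loads
    return same_phase.mean()

# Hypothetical periodic load trace gossiped from other sites
t = np.arange(500)
loads = 10 + 4 * np.sin(2 * np.pi * t / 50)
loads += np.random.default_rng(2).normal(0, 0.5, t.size)
print(f"estimated average load now: {estimate_global_load(loads, period=50):.2f}")
```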
Investigation on the structural characterization of pulsed p-type porous silicon
NASA Astrophysics Data System (ADS)
Wahab, N. H. Abd; Rahim, A. F. Abd; Mahmood, A.; Yusof, Y.
2017-08-01
P-type porous silicon (PS) was successfully formed by using electrochemical pulse etching (PC) and conventional direct current (DC) etching techniques. The PS was etched in a hydrofluoric acid (HF) based solution at a current density of J = 10 mA/cm2 for 30 minutes from a crystalline silicon wafer with (100) orientation. For the PC process, the current was supplied through a pulse generator with a 14 ms cycle time (T), with an on time (Ton) of 10 ms and a pause time (Toff) of 4 ms. FESEM, EDX, AFM, and XRD were used to characterize the morphological properties of the PS. FESEM images showed that the pulsed PS (PPC) sample produced more uniform circular structures, with an estimated average pore size of 42.14 nm, compared to the DC porous (PDC) sample, with an estimated average pore size of 16.37 nm. The EDX spectra for both samples showed high Si content with minimal presence of oxide.
Ultrafast pulse lasers jump to macro applications
NASA Astrophysics Data System (ADS)
Griebel, Martin; Lutze, Walter; Scheller, Torsten
2016-03-01
Ultrafast lasers have been proven in several micro applications, e.g. stent cutting, for many years. In its application development, Jenoptik has started to use ultrafast lasers in macro applications in the automotive industry. The JenLas D2.fs laser with power output control via an acousto-optic modulator (AOM) is an ideal tool for closed-loop controlled material processing. Jenoptik has enhanced its well-established sensor-controlled laser weakening process for airbag covers to a new level. The patented process enables new materials to be used with this kind of technology. One of the most sensitive cover materials is genuine leather. As a natural product, it is extremely inhomogeneous and sensitive to any type of thermal load. The combination of femtosecond pulse ablation and closed-loop control by a multiple sensor array opens the door to a new quality level of defined weakening. Because the beam is directed by scanning equipment, the process can be split into multiple cycles, additionally reducing the local energy input. The development used the 5 W model as well as the latest 10 W release of the JenLas D2.fs and achieved processing speeds which directly fulfilled the requirements of the automotive industry. Bearing in mind that the average cycle time of automotive processes is about 60 s, trials were done processing weakening lines in genuine leather of 1.2 mm thickness. Parameters were about 15 cycles at 300 mm/s, resulting in an average speed of 20 mm/s and a cycle time below 60 s. First samples have already been submitted to functional and aging tests and passed successfully.
Operational Failures and Interruptions in Hospital Nursing
Tucker, Anita L; Spear, Steven J
2006-01-01
Objective To describe the work environment of hospital nurses with particular focus on the performance of work systems supplying information, materials, and equipment for patient care. Data Sources Primary observation, semistructured interviews, and surveys of hospital nurses. Study Design We sampled a cross-sectional group of six U.S. hospitals to examine the frequency of work system failures and their impact on nurse productivity. Data Collection We collected minute-by-minute data on the activities of 11 nurses. In addition, we conducted interviews with six of these nurses using questions related to obstacles to care. Finally, we created and administered two surveys in 48 nursing units, one for nurses and one for managers, asking about the frequency of specific work system failures. Principal Findings Nurses we observed experienced an average of 8.4 work system failures per 8-hour shift. The five most frequent types of failures, accounting for 6.4 of these obstacles, involved medications, orders, supplies, staffing, and equipment. Survey questions asking nurses how frequently they experienced these five categories of obstacles yielded similar frequencies. For an average 8-hour shift, the average task time was only 3.1 minutes, and in spite of this, nurses were interrupted mid-task an average of eight times per shift. Conclusions Our findings suggest that nurse effectiveness can be increased by creating improvement processes triggered by the occurrence of work system failures, with the goal of reducing future occurrences. Second, given that nursing work is fragmented and unpredictable, designing processes that are robust to interruption can help prevent errors. PMID:16704505
Effectiveness of bone cleaning process using chemical and entomology approaches: time and cost.
Lai, Poh Soon; Khoo, Lay See; Mohd Hilmi, Saidin; Ahmad Hafizam, Hasmi; Mohd Shah, Mahmood; Nurliza, Abdullah; Nazni, Wasi Ahmad
2015-08-01
Skeletal examination is an important aspect of forensic pathology practice, requiring effective bone cleaning with minimal artefact. This study was conducted to compare chemical and entomological methods of bone cleaning. Ten subjects between 20 and 40 years old who underwent uncomplicated medico-legal autopsies at the Institute of Forensic Medicine Malaysia were randomly chosen for this descriptive cross-sectional study. The sternum was divided into 4 parts, each part subjected to a different cleaning method: two chemical approaches, i.e. laundry detergent and a combination of 6% hydrogen peroxide and powdered sodium bicarbonate, and two entomological approaches using 2nd-instar maggots of Chrysomyia rufifacies and Ophyra spinigera. A scoring system for grading the outcome of cleaning was used. The effectiveness of the methods was evaluated based on average weight reduction per day and the median number of days to achieve an average score of less than 1.5 within 12 days of the bone cleaning process. Using maggots was the most time-effective and cost-effective method, achieving an average weight reduction of 1.4 g per day, a median of 11.3 days to achieve the desired score, and an average cost of MYR 4.10 per case to reach the desired score within 12 days. This conclusion was supported by blind validation by forensic specialists, with a 77.8% preference for maggots. Field emission scanning electron microscopy evaluation also revealed that maggots, especially Chrysomyia rufifacies, preserved the original condition of the bones better, allowing improved elucidation of bone injuries in future real cases.
Yang, Muer; Fry, Michael J; Raikhelkar, Jayashree; Chin, Cynthia; Anyanwu, Anelechi; Brand, Jordan; Scurlock, Corey
2013-02-01
To develop queuing and simulation-based models to understand the relationship between ICU bed availability and the operating room schedule, to maximize the use of critical care resources and minimize case cancellation while providing equity to patients and surgeons. Retrospective analysis of 6 months of unit admission data from a cohort of cardiothoracic surgical patients was used to create queuing and simulation-based models of ICU bed flow. Three different admission policies (the current admission policy, a shortest-processing-time policy, and a dynamic policy) were then analyzed using simulation models representing 10 yr worth of potential admissions. Important output data consisted of the "average waiting time," a proxy for unit efficiency, and the "maximum waiting time," a surrogate for patient equity. A cardiothoracic surgical ICU in a tertiary center in New York, NY. Six hundred thirty consecutive cardiothoracic surgical patients admitted to the cardiothoracic surgical ICU. None. Although the shortest-processing-time admission policy performed best in terms of unit efficiency (0.4612 days), it did so at the expense of patient equity, prolonging surgical waiting time by as much as 21 days. The current policy gives the greatest equity but causes inefficiency in unit bed flow (0.5033 days). The dynamic policy performs 8.3% worse than the shortest-processing-time policy in average waiting time (0.4997 days); however, it balances this with greater patient equity (the maximum waiting time could be shortened by 4 days compared to the current policy). Queuing theory and computer simulation can be used to model case flow through a cardiothoracic operating room and ICU. A dynamic admission policy that looks at current waiting time and expected ICU length of stay allows for increased equity between patients with only minimal losses of efficiency. This dynamic admission policy would seem to be superior in maximizing case flow. These results may be generalized to other surgical ICUs.
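The policy comparison lends itself to a compact discrete-event sketch: cases queue for a single ICU bed and the admission policy picks the next case from the backlog. A minimal illustration comparing first-come-first-served with shortest-processing-time; all arrival and length-of-stay numbers are hypothetical, not the study's data:

```python
import heapq, random

def simulate(policy, n_cases=1000, seed=7):
    """Single-bed queue; `policy` builds the priority key for the backlog."""
    rng = random.Random(seed)
    cases = [(rng.uniform(0, 2000), rng.expovariate(1 / 2.0))  # (arrival d, LOS d)
             for _ in range(n_cases)]
    cases.sort()                                   # by arrival time
    waits, backlog, free_at, i = [], [], 0.0, 0
    while i < len(cases) or backlog:
        while i < len(cases) and cases[i][0] <= free_at:
            heapq.heappush(backlog, policy(*cases[i])); i += 1
        if not backlog:                            # bed idle until next arrival
            free_at = cases[i][0]; continue
        _, arrival, los = heapq.heappop(backlog)
        waits.append(max(0.0, free_at - arrival))
        free_at = max(free_at, arrival) + los
    return sum(waits) / len(waits)

fcfs = lambda arr, los: (arr, arr, los)            # first come, first served
spt = lambda arr, los: (los, arr, los)             # shortest processing time
print(f"FCFS mean wait {simulate(fcfs):.3f} d, SPT mean wait {simulate(spt):.3f} d")
```

Tracking the maximum wait alongside the mean in this sketch reproduces the efficiency-versus-equity tension the study reports: SPT lowers the mean but lets long cases wait far longer.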
Bedrock river erosion measurements and modelling along a river of the Frontal Himalaya
NASA Astrophysics Data System (ADS)
Lave, Jerome; Dubille, Matthieu
2017-04-01
River incision is a key process in mountain denudation and therefore in landscape evolution models. Despite its importance, most incision models for mountain rivers rely on simplified or quite empirical relations, and generally only consider annual average values for water discharge and sediment flux. In contrast, very few studies consider mechanistic models at the timescale of a flood and try to bridge the gap between experimental or theoretical approaches and long-term river incision studies. In this contribution, we present observations made during 7 monsoon seasons of fluvial bedrock erosion along the Bakeya river across the Frontal Himalaya in Central Nepal. Along its lower gorge, this river incises an alternation of indurated sandstone and less resistant claystone, at Holocene rates larger than 10 mm/yr. More importantly, its upper drainage mostly runs through non-cohesive conglomerate, which, in this specific setting, allows the bedload characteristics and instantaneous fluxes to be estimated, a prerequisite for testing mechanistic models of fluvial erosion. During the study period, we monitored and documented channel bank erosion in order to understand the amplitude of the erosion processes and their occurrence in relation to hydrology, and to test time-integrated models of erosion. Besides hydrologic monitoring, erosion measurements were threefold: (1) at the scale of the whole monsoon, plucking and block removal by repeated photo surveys of a 400 m long channel reach; (2) detailed microtopographic surveys of channel bedrock elevation along a few sandstone bars to document their abrasion; (3) real-time measurement of fluvial bedrock wear to document erosion timing using a new erosion sensor. Results indicate that: 1. Erosion is highly dependent on rock resistance, but on average block detachment and removal is a more efficient process than bedrock attrition, and operates at a rate that permits channel bank downcutting to keep pace with the Holocene uplift rate. 2. Both block detachment and attrition processes clearly increase with fluvial shear stress, but non-linearly, in particular through the existence of a minimum threshold. As a result, bank erosion occurs during only a few hours per year, during short and very high flood events, which questions the use of average discharge (or drainage area) in many bedrock erosion models. We then propose a semi-physical model of sandstone bar abrasion based on discharge history (HEC-RAS modelling), a Rouse suspension model, and experimental measurements of the dependency of abrasion rate on impacting particle size. This model predicts well the timing and amplitude of both real-time and monsoon-averaged abrasion along the surveyed sandstone bars. This first validation of a model for bank erosion opens broad perspectives for future work on channel bottom incision modelling using physical models of erosion and their time- and gravel-size integration, with the objective of introducing more physical rules into landscape evolution models.
Attachment change processes in the early years of marriage.
Davila, J; Karney, B R; Bradbury, T N
1999-05-01
The authors examined 4 models of attachment change: a contextual model, a social-cognitive model, an individual-difference model, and a diathesis-stress model. Models were examined in a sample of newlyweds over the first 2 years of marriage, using growth curve analyses. Reciprocal processes, whereby attachment representations and interpersonal life circumstances affect one another over time, also were studied. On average, newlyweds became more secure over time. However, there was significant within-subject variability on attachment change that was predicted by intra- and interpersonal factors. Attachment representations changed in response to contextual, social-cognitive, and individual-difference factors. Reciprocal processes between attachment representations and marital variables emerged, suggesting that these factors influence one another in an ongoing way.
Xiaopeng, Q I; Liang, Wei; Barker, Laurie; Lekiachvili, Akaki; Xingyou, Zhang
Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly- or 30-day basis. Most reported temperature estimates were calculated using ArcGIS; relatively few used SAS. We compared the performance of geostatistical models to estimate population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of the models was assessed by comparing adjusted R2, mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias than cokriging in ArcGIS. County-level estimates produced by the two packages were positively correlated (adjusted R2 range = 0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both the ArcGIS and SAS methods are reliable for US county-level temperature estimates; however, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects.
An interplanetary magnetic field ensemble at 1 AU
NASA Technical Reports Server (NTRS)
Matthaeus, W. H.; Goldstein, M. L.; King, J. H.
1985-01-01
A method for calculating ensemble averages from magnetic field data is described. A data set comprising approximately 16 months of nearly continuous ISEE-3 magnetic field data is used in this study. Individual subintervals of these data, ranging from 15 hours to 15.6 days, comprise the ensemble. The sole condition for including each subinterval in the averages is the degree to which it represents a weakly time-stationary process. Averages obtained by this method are appropriate for a turbulence description of the interplanetary medium. The ensemble average correlation length obtained from all subintervals is found to be 4.9 x 10^11 cm. The average values of the variances of the magnetic field components are in the approximate ratio 8:9:10, where the third component is the local mean field direction. The correlation lengths and variances are found to have a systematic variation with subinterval duration, reflecting the important role of low-frequency fluctuations in the interplanetary medium.
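A correlation length of this kind can be estimated from a subinterval's autocorrelation function, for instance as the e-folding lag converted from time to distance with the solar wind speed (Taylor hypothesis). A minimal sketch with synthetic data; the spacecraft processing details and the stationarity screening are not reproduced:

```python
import numpy as np

def correlation_length(b, dt, v_sw=4.0e7):
    """e-folding correlation length from one field-component time series.
    dt in seconds; v_sw in cm/s converts lag time to distance."""
    b = b - b.mean()
    acf = np.correlate(b, b, mode="full")[b.size - 1:]
    acf /= acf[0]
    k = np.argmax(acf < 1.0 / np.e)           # first e-folding lag
    return k * dt * v_sw

# Synthetic correlated series (AR(1)) standing in for a field component
rng = np.random.default_rng(3)
b = np.empty(10000); b[0] = 0.0
for t in range(1, b.size):
    b[t] = 0.995 * b[t - 1] + rng.standard_normal()
print(f"lambda_c ~ {correlation_length(b, dt=60.0):.3e} cm")
```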
Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang
2015-05-01
Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to the wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In the method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function, and time averaging is used to reduce the instability in the iterative solving process. Since these solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing the instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.
Monitoring the hatch time of individual chicken embryos.
Romanini, C E B; Exadaktylos, V; Tong, Q; McGonnel, I; Demmers, T G M; Bergoug, H; Eterradossi, N; Roulston, N; Garain, P; Bahr, C; Berckmans, D
2013-02-01
This study investigated variations in eggshell temperature (T(egg)) during the hatching process of broiler eggs. Temperature sensors monitored embryo temperature by registering T(egg) every minute. Measurements carried out on a sample of 40 focal eggs revealed temperature drops of 2 to 6°C during the last 3 d of incubation. Video cameras recorded the hatching process and served as the gold-standard reference for manually labeling the hatch times of chicks. Comparison between T(egg) drops and the hatch times of individuals revealed a time synchronization with a 99% correlation coefficient and an absolute average time difference of up to 25 min. Our findings suggest that attaching temperature sensors to eggshells is a precise tool for monitoring the hatch time of individual chicks. Individual hatch monitoring registers the biological age of chicks and facilitates an accurate and reliable means to count hatching results and manage the hatch window.
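Flagging the hatch-associated drop in T(egg) amounts to detecting a sustained decrease against a recent baseline. A minimal sketch; the window lengths and the 2°C criterion are illustrative choices loosely based on the reported 2-6°C drops, not the study's algorithm:

```python
import numpy as np

def detect_hatch(t_egg, baseline_win=60, drop=2.0, sustain=30):
    """Return the first minute where T(egg) stays >= `drop` degC below the
    preceding `baseline_win`-minute average for `sustain` minutes."""
    t_egg = np.asarray(t_egg, dtype=float)
    for i in range(baseline_win, len(t_egg) - sustain):
        baseline = t_egg[i - baseline_win:i].mean()
        if np.all(t_egg[i:i + sustain] <= baseline - drop):
            return i
    return None

# Synthetic 1-min T(egg) trace with a 3 degC drop at minute 300
trace = np.concatenate([np.full(300, 37.8), np.full(200, 34.8)])
trace += np.random.default_rng(4).normal(0, 0.1, trace.size)
print("hatch detected at minute", detect_hatch(trace))
```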
Petrides, Athena K; Bixho, Ida; Goonan, Ellen M; Bates, David W; Shaykevich, Shimon; Lipsitz, Stuart R; Landman, Adam B; Tanasijevic, Milenko J; Melanson, Stacy E F
2017-03-01
A recent government regulation incentivizes implementation of an electronic health record (EHR) with computerized order entry and structured results display. Many institutions have also chosen to interface their EHR with their laboratory information system (LIS). The objective was to determine the impact of an interfaced EHR-LIS on laboratory processes. We analyzed several different processes before and after implementation of an interfaced EHR-LIS: the turnaround time, the number of stat specimens received, venipunctures per patient per day, preanalytic errors in phlebotomy, the number of add-on tests using a new electronic process, and the number of wrong test codes ordered. Data were gathered through the LIS and/or EHR. The turnaround time for potassium and hematocrit decreased significantly (P = .047 and P = .004, respectively). The number of stat orders also decreased significantly, from 40% to 7% for potassium and hematocrit, respectively (P < .001 for both). Even though the average number of inpatient venipunctures per day increased from 1.38 to 1.62 (P < .001), the average number of preanalytic errors per month decreased from 2.24 to 0.16 per 1000 specimens (P < .001). Overall there was a 16% increase in add-on tests. The number of wrong test codes ordered was high and it was challenging for providers to correctly order some common tests. An interfaced EHR-LIS significantly improved within-laboratory turnaround time and decreased stat requests and preanalytic phlebotomy errors. Despite increasing the number of add-on requests, an electronic add-on process increased efficiency and improved provider satisfaction. Laboratories implementing an interfaced EHR-LIS should be cautious of its effects on test ordering and patient venipunctures per day.
Pellizzer, Giuseppe; Zesiger, Pascal
2009-03-01
Children from 8 to 12 years of age drew figure-eights and ellipses at a self-chosen tempo on a digitizing tablet. Global aspects (perimeter and average speed) and local aspects (the relation between instantaneous speed and curvature) of performance were analyzed across age groups and types of figures. We tested the predictions of the transformation model, which is based on the hypothesis that changing the intended direction of movement is a time-consuming process that affects the evolution in time of the movement trajectory, and compared how well it fitted the data relative to the power law. We found that the relation between speed and curvature was typically better described by the transformation model than by the power law. However, the power law provided a better description when ellipses were drawn at a fast speed. The analyses of the parameters of the transformation model indicate that processing speed increased linearly with age. In addition, the results suggest that the effects of the spring-like properties of the arm were noticeable when ellipses were drawn at a fast speed. This study indicates that both biomechanical properties and central processes have an effect on the kinematics of continuous movements, and particularly on the relation between speed and curvature. However, their relative importance varies with the type of figure and average movement speed. In conclusion, the results support the hypothesis that a time-consuming process of transformation of the intended direction of movement is operating during the production of continuous movements and that this process increases in speed between 8 and 12 years of age.
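The power law referred to in drawing kinematics is the classical two-thirds power law, v(t) = K·κ(t)^(-1/3), relating instantaneous speed to curvature. Its prediction for an ellipse is easy to generate for comparison against tablet data. A minimal sketch; the semi-axes and gain K are arbitrary illustrative values:

```python
import numpy as np

# Ellipse trajectory sampled uniformly in the parameter s
s = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
a_ax, b_ax = 2.0, 1.0                       # semi-axes (cm), hypothetical
x, y = a_ax * np.cos(s), b_ax * np.sin(s)

# Curvature of a parametric curve: |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
dx, dy = np.gradient(x, s), np.gradient(y, s)
ddx, ddy = np.gradient(dx, s), np.gradient(dy, s)
kappa = np.abs(dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

K = 5.0                                     # velocity gain, arbitrary units
v_power_law = K * kappa ** (-1.0 / 3.0)     # two-thirds power law prediction
print(f"predicted speed range: {v_power_law.min():.2f} to {v_power_law.max():.2f}")
```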
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisterson, D. L.
2011-02-01
Individual raw datastreams from instrumentation at the Atmospheric Radiation Measurement (ARM) Climate Research Facility fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near-real time. Raw and processed data are then sent approximately daily to the ARM Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of processed data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual datastream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the first quarter of FY2010 for the Southern Great Plains (SGP) site is 2097.60 hours (0.95 x 2208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1987.20 hours (0.90 x 2208) and for the Tropical Western Pacific (TWP) locale is 1876.80 hours (0.85 x 2208). The first ARM Mobile Facility (AMF1) deployment in Graciosa Island, the Azores, Portugal, continued through this quarter, so the OPSMAX time this quarter is 2097.60 hours (0.95 x 2208). The second ARM Mobile Facility (AMF2) began deployment this quarter to Steamboat Springs, Colorado. The experiment officially began November 15, but most of the instruments were up and running by November 1. Therefore, the OPSMAX time for the AMF2 was 1390.80 hours (0.95 x 1464 hours) for November and December (61 days). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or datastream. Data availability reported here refers to the average of the individual, continuous datastreams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. Summary. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period October 1-December 31, 2010, for the fixed sites. Because the AMFs operate episodically, the AMF statistics are reported separately and not included in the aggregate average with the fixed sites. This first quarter comprises a total of 2,208 possible hours for the fixed sites and the AMF1 and 1,464 possible hours for the AMF2. The average of the fixed sites exceeded our goal this quarter. The AMF1 has essentially completed its mission and is shutting down to pack up for its next deployment to India. Although all the raw data from the operational instruments are in the Archive for the AMF2, only the processed data are tabulated.
Approximately half of the AMF2 instruments have data that were fully processed, resulting in 46% of all possible data being made available to users through the Archive for this first quarter. Typically, raw data are not made available to users unless specifically requested.
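The DOE reporting quantities above follow directly from the stated definitions. A minimal sketch of the arithmetic, using the quoted SGP first-quarter figures (the ACTUAL value below is hypothetical, and the function name is ours):

```python
def ops_report(actual_hours, possible_hours, uptime_goal):
    """DOE time-based operating metrics as defined in the report:
    OPSMAX = goal * possible hours; VARIANCE = 1 - ACTUAL/OPSMAX."""
    opsmax = uptime_goal * possible_hours        # planned-uptime goal
    variance = 1.0 - actual_hours / opsmax       # unplanned downtime fraction
    return opsmax, variance

# SGP, first quarter of FY2010: 0.95 x 2208 = 2097.60 hours
opsmax, variance = ops_report(actual_hours=2000.0,   # hypothetical ACTUAL
                              possible_hours=2208, uptime_goal=0.95)
print(opsmax)     # 2097.6
```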
Electronic Structure Methods Based on Density Functional Theory
2010-01-01
[SF-298 form residue and garbled extraction; recoverable fragments: published as a chapter in the ASM Handbook, Volume 22A: Fundamentals of Modeling for Metals Processing, 2010 (PAO Case Number: 88ABW-2009-3258; Clearance Date: 16 Jul); "... are represented using a linear combination, or basis, of plane waves. Over time several methods were developed to avoid the large number of plane waves."]
Evolution of Microstructure in a Nickel-based Superalloy as a Function of Ageing Time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei-Ren; Smith, Gregory Scott; Porcar, L.
2011-01-01
An experimental investigation, combining synchrotron X-ray powder diffraction, small-angle neutron scattering, and transmission electron microscopy, has been undertaken to study the microstructure of nanoprecipitates in a nickel-based superalloy. Upon increasing the ageing time during a heat-treatment process, the average size of the precipitates first decreases before transitioning to a monotonic growth stage. Possible reasons for this observed structural evolution, which is predicted thermodynamically, are suggested.
Plate tectonics and continental basaltic geochemistry throughout Earth history
NASA Astrophysics Data System (ADS)
Keller, Brenhin; Schoene, Blair
2018-01-01
Basaltic magmas constitute the primary mass flux from Earth's mantle to its crust, carrying information about the conditions of mantle melting through which they were generated. As such, changes in the average basaltic geochemistry through time reflect changes in underlying parameters such as mantle potential temperature and the geodynamic setting of mantle melting. However, sampling bias, preservation bias, and geological heterogeneity complicate the calculation of representative average compositions. Here we use weighted bootstrap resampling to minimize sampling bias over the heterogeneous rock record and obtain maximally representative average basaltic compositions through time. Over the approximately 4 Ga of the continental rock record, the average composition of preserved continental basalts has evolved along a generally continuous trajectory, with decreasing compatible element concentrations and increasing incompatible element concentrations, punctuated by a comparatively rapid transition in some variables such as La/Yb ratios and Zr, Nb, and Ti abundances approximately 2.5 Ga ago. Geochemical modeling of mantle melting systematics and trace element partitioning suggests that these observations can be explained by discontinuous changes in the mineralogy of mantle partial melting driven by a gradual decrease in mantle potential temperature, without appealing to any change in tectonic process. This interpretation is supported by the geochemical record of slab fluid input to continental basalts, which indicates no long-term change in the global proportion of arc versus non-arc basaltic magmatism at any time in the preserved rock record.
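To make the resampling step concrete, here is a minimal sketch of weighted bootstrap resampling of geochemical analyses, with weights chosen (for example) inversely proportional to local sample density so that heavily sampled regions or ages do not dominate the average. The names and weighting choice are illustrative, not the authors' implementation.

```python
import numpy as np

def weighted_bootstrap_means(values, weights, n_boot=1000, seed=0):
    """Draw bootstrap samples with probability proportional to `weights`
    (e.g., inverse local sample density, so over-represented regions and
    ages do not dominate) and return the distribution of resampled means."""
    rng = np.random.default_rng(seed)
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()
    values = np.asarray(values, dtype=float)
    idx = rng.choice(len(values), size=(n_boot, len(values)), p=p)
    return values[idx].mean(axis=1)   # spread gives the uncertainty band
```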
Factors affecting pregnancy length and phases of parturition in Martina Franca jennies.
Carluccio, Augusto; Gloria, Alessia; Veronesi, Maria Cristina; De Amicis, Ippolito; Noto, Federico; Contri, Alberto
2015-09-01
Knowledge of normal pregnancy length, the duration of parturition stages, and early neonatal adaptation is mandatory for the rational management of birth, especially in monotocous species with long gestations. This study reports data obtained from a large number of Martina Franca jennies with normal healthy pregnancies and spontaneous eutocic delivery of a mature, healthy, and viable donkey foal. Pregnancy lasts, on average, 371 days, and only fetal gender significantly determines pregnancy length, with longer gestations observed in jennies bearing male fetuses. Other factors such as the year of foaling, month of ovulation, month of parturition, birth weight of the foal, and age of the jenny did not influence pregnancy length. The first stage of foaling lasted on average 65 minutes, the second stage 19 minutes, and the third stage 58 minutes. The umbilical cord ruptured on average within 16 minutes after birth; the foal stood up in 61 minutes and suckled the colostrum for the first time within 10 minutes after birth and again 143 minutes after birth; meconium passage occurred, on average, 86 minutes after birth. Although the times reported for the process of foaling are similar to data reported for the horse, the times for early neonatal adaptation of the donkey foal are longer compared with the horse foal. Copyright © 2015 Elsevier Inc. All rights reserved.
On the non-stationary generalized Langevin equation
NASA Astrophysics Data System (ADS)
Meyer, Hugues; Voigtmann, Thomas; Schilling, Tanja
2017-12-01
In molecular dynamics simulations and single molecule experiments, observables are usually measured along dynamic trajectories and then averaged over an ensemble ("bundle") of trajectories. Under stationary conditions, the time-evolution of such averages is described by the generalized Langevin equation. By contrast, if the dynamics is not stationary, it is not a priori clear which form the equation of motion for an averaged observable has. We employ the formalism of time-dependent projection operator techniques to derive the equation of motion for a non-equilibrium trajectory-averaged observable as well as for its non-stationary auto-correlation function. The equation is similar in structure to the generalized Langevin equation but exhibits a time-dependent memory kernel as well as a fluctuating force that implicitly depends on the initial conditions of the process. We also derive a relation between this memory kernel and the autocorrelation function of the fluctuating force that has a structure similar to a fluctuation-dissipation relation. In addition, we show how the choice of the projection operator allows us to relate the Taylor expansion of the memory kernel to data that are accessible in MD simulations and experiments, thus allowing us to construct the equation of motion. As a numerical example, the procedure is applied to Brownian motion initialized in non-equilibrium conditions and is shown to be consistent with direct measurements from simulations.
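Schematically, the result can be written as a non-stationary analogue of the generalized Langevin equation; the notation below is ours, and the precise form in the paper may differ:

```latex
\frac{\mathrm{d}}{\mathrm{d}t}\,\langle A(t)\rangle
  \;=\; \omega(t)\,\langle A(t)\rangle
  \;+\; \int_{0}^{t} K(t,\tau)\,\langle A(\tau)\rangle\,\mathrm{d}\tau
  \;+\; \eta(t),
```

where K(t, tau) is the time-dependent memory kernel and eta(t) the fluctuating force that implicitly depends on the initial conditions; a relation of fluctuation-dissipation type connects K to the autocorrelation function of eta.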
Saathoff, April M; MacDonald, Ryan; Krenzischek, Erundina
2018-03-01
The objective of this study was to evaluate the impact of specimen collection technology implementation featuring computerized provider order entry, positive patient identification, bedside specimen label printing, and barcode scanning on the reduction of mislabeled specimens and collection turnaround times in the emergency, medical-surgical, critical care, and maternal child health departments at a community teaching hospital. A quantitative analysis of a nonrandomized, pre-post intervention study design evaluated the statistical significance of reduction of mislabeled specimen percentages and collection turnaround times affected by the implementation of specimen collection technology. Mislabeled specimen percentages in all areas decreased from an average of 0.020% preimplementation to an average of 0.003% postimplementation, with a P < .001. Collection turnaround times longer than 60 minutes decreased after the implementation of specimen collection technology by an average of 27%, with a P < .001. Specimen collection and identification errors are a significant problem in healthcare, contributing to incorrect diagnoses, delayed care, lack of essential treatments, and patient injury or death. Collection errors can also contribute to an increased length of stay, increased healthcare costs, and decreased patient satisfaction. Specimen collection technology has structures in place to prevent collection errors and improve the overall efficiency of the specimen collection process.
NASA Astrophysics Data System (ADS)
Sharma, Vikas; Parey, Anand
2017-02-01
In the purview of fluctuating speeds, gear fault diagnosis is challenging due to the dynamic behavior of forces. Many industrial applications employ gearboxes that operate under fluctuating speed conditions. For gearbox diagnostics, various vibration-based signal processing techniques, such as the FFT, time synchronous averaging, and time-frequency wavelet transforms, are commonly employed. Often, assumptions about the data or computational complexity limit the use of these methods. In order to perform fault diagnosis of a gearbox under fluctuating speeds, frequency domain averaging (FDA) of intrinsic mode functions (IMFs) after their dynamic time warping (DTW) is performed in this paper. This not only attenuates the effect of fluctuating speeds but also extracts the weak fault features masked in the vibration signal. Signals were acquired experimentally from a Drivetrain Diagnostic Simulator for different gear health conditions, i.e., healthy pinion, pinion with tooth crack, chipped tooth, and missing tooth, and were analyzed for different fluctuating speed profiles. Kurtosis was calculated for the IMFs of the acquired vibration signals before and after DTW. The subsequent application of FDA highlights the fault frequencies present in the FFT of the faulty gears. The results suggest that the proposed approach is more effective for fault diagnosis under fluctuating speed.
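A rough sketch of the core idea, assuming the IMFs have already been extracted (e.g., by empirical mode decomposition): warp each IMF onto a common time base with DTW to compensate for speed fluctuations, then average the FFT magnitudes so that fault frequencies reinforce. The implementation details here (plain O(N*M) DTW, averaging of duplicated indices) are our choices, not the paper's.

```python
import numpy as np

def dtw_path(x, y):
    """Classic O(N*M) dynamic time warping; returns the optimal alignment path."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:                      # backtrack the cheapest path
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def warp_to_reference(ref, sig):
    """Warp `sig` onto the time base of `ref`, averaging duplicated indices."""
    warped = np.zeros(len(ref))
    counts = np.zeros(len(ref))
    for i, j in dtw_path(ref, sig):
        warped[i] += sig[j]
        counts[i] += 1
    return warped / np.maximum(counts, 1)

def frequency_domain_average(imfs):
    """Average FFT magnitudes of DTW-warped IMFs; peaks mark fault frequencies."""
    ref = np.asarray(imfs[0], dtype=float)
    spectra = [np.abs(np.fft.rfft(warp_to_reference(ref, imf))) for imf in imfs]
    return np.mean(spectra, axis=0)
```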
Poissant, Lise; Pereira, Jennifer; Tamblyn, Robyn; Kawasumi, Yuko
2005-01-01
A systematic review of the literature was performed to examine the impact of electronic health records (EHRs) on documentation time of physicians and nurses and to identify factors that may explain efficiency differences across studies. In total, 23 papers met our inclusion criteria; five were randomized controlled trials, six were posttest control studies, and 12 were one-group pretest-posttest designs. Most studies (58%) collected data using a time and motion methodology in comparison to work sampling (33%) and self-report/survey methods (8%). A weighted average approach was used to combine results from the studies. The use of bedside terminals and central station desktops saved nurses, respectively, 24.5% and 23.5% of their overall time spent documenting during a shift. Using bedside or point-of-care systems increased documentation time of physicians by 17.5%. In comparison, the use of central station desktops for computerized provider order entry (CPOE) was found to be inefficient, increasing the work time from 98.1% to 328.6% of physician's time per working shift (weighted average of CPOE-oriented studies, 238.4%). Studies that conducted their evaluation process relatively soon after implementation of the EHR tended to demonstrate a reduction in documentation time in comparison to the increases observed with those that had a longer time period between implementation and the evaluation process. This review highlighted that a goal of decreased documentation time in an EHR project is not likely to be realized. It also identified how the selection of bedside or central station desktop EHRs may influence documentation time for the two main user groups, physicians and nurses.
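The "weighted average approach" used to pool the studies can be pictured with a small sketch; weighting by sample size is our assumption, since the abstract does not state the exact weighting scheme.

```python
import numpy as np

def pooled_effect(effects_pct, weights):
    """Weighted average of per-study effects (e.g., % change in documentation time)."""
    w = np.asarray(weights, dtype=float)
    return np.sum(np.asarray(effects_pct) * w) / np.sum(w)

# hypothetical CPOE-oriented studies: % increase in physician time, study sizes
print(pooled_effect([98.1, 328.6, 240.0], [30, 25, 40]))
```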
Tomberg, C; Desmedt, J E
1999-07-29
Brain mechanisms involved in selective attention in humans can be studied by measures of regional blood flow and metabolism (by positron emission tomography) which help identify the various locations with enhanced activities over a period of time of seconds. The physiological measures provided by scalp-recorded brain electrical potentials have a better resolution (milliseconds) and can reveal the actual sequences of distinct neural events and their precise timing. We studied selective attention to sensory inputs from fingers because the brain somatic representations are deployed over the brain convexity under the scalp thereby making it possible to assess distinct stages of cortical processing and representation through their characteristic scalp topographies. In the electrical response to a finger input attended by the subject, the well-known P300 manifests a widespread inhibitory mechanism which is released after a target stimulus has been identified. P300 is preceded by distinct cognitive electrogeneses such as P40, P100 and N140 which can be differentiated from the control (obligatory) profile by superimposition or electronic subtraction. The first cortical response N20 is stable across conditions, suggesting that the first afferent thalamocortical volley is not affected by selective attention. At the next stage of modality-specific cortex in which the sensory features are processed and represented, responses were enhanced (cognitive P40) only a very few milliseconds after arrival of the afferent volley at the cortex, thus documenting a remarkable precocity of attention gain control in the somatic modality. The physiology of selective attention also provides useful cues in relation to non-target inputs which the subject must differentiate in order to perform the task. When having to tell fingers apart, the brain strategy for non-target fingers is not to inhibit or filter them out, but rather to submit their input to several processing operations that are actually enhanced when the discrimination from targets becomes more difficult. While resolving a number of such issues, averaged data cannot disclose the flexibility of brain mechanisms nor the detailed features of cognitive electrogeneses because response variations along time have been ironed out by the bulk treatment. We attempted to address the remarkable versatility of humans in dealing with their sensory environment under ecological conditions by studying single non-averaged responses. We identified distinct cognitive P40, P100, N140 and P300 electrogeneses in spite of the noise by numerically assessing their characteristic scalp topography signatures. Single-trial data suggest reconsiderations of current psychophysiological issues. The study of non-averaged responses can clarify issues raised by averaging studies as illustrated by our recent study of cognitive brain potentials for finger stimuli which remain outside the subject's awareness. This has to do with the physiological basis of the 'cognitive unconscious', that is, current mental processes lying on the fringe or outside of phenomenal awareness and voluntary control, but which can influence ongoing behaviour. Averaged data suggest that, in selective auditory attention, the subject may not notice mild concomitant finger inputs. The study of non-averaged responses documents the optional and independent occurrence of the cognitive P40, P100 and N140 (but not P300) electrogeneses while the finger inputs remain outside phenomenal awareness. 
These results suggest that the subject unconsciously assigns limited cognitive resources to distinct somatic cortical areas thereby submitting finger inputs to an intermittent curtailed surveillance which can remain on the fringe or outside consciousness. The study of cognitive electrogeneses in single non-averaged responses is making possible a neurophysiology of cognition in real time.
Liu, Xiaoqian; Tong, Yan; Wang, Jinyu; Wang, Ruizhen; Zhang, Yanxia; Wang, Zhimin
2011-11-01
Fufang Kushen injection was selected as the model drug to optimize its alcohol-purification process, characterize the particle sedimentation process, and investigate the feasibility of using process analytical technology (PAT) in traditional Chinese medicine (TCM) manufacturing. Total alkaloids (calculated as matrine, oxymatrine, sophoridine, and oxysophoridine) and macrozamin were selected as quality evaluation markers to optimize the alcohol purification of Fufang Kushen injection. Process parameters of the particulates formed during alcohol purification, such as their number, density, and sedimentation velocity, were also determined to define the sedimentation time and better understand the process. The optimized purification process adds alcohol to the concentrated extract solution (drug material) twice, depositing the alcohol solution each time: first at 60% alcohol for 36 hours, followed by filtration, and then at 80%-90% alcohol for 6 hours. The content of total alkaloids decreased slightly during the depositing process. The average settling times of particles with diameters of 10 and 25 microns were 157.7 and 25.2 h in the first alcohol-purification step, and 84.2 and 13.5 h in the second, respectively. The optimized alcohol-purification process retains the marker components better and, compared with the initial process, saves time and cost. The manufacturing quality of TCM injections can thus be controlled through the process itself. A PAT scheme must be designed on the basis of a thorough understanding of the TCM production process.
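The reported settling times are of the order expected from simple Stokes-law sedimentation. A back-of-the-envelope sketch follows; all physical parameter values below (density contrast, viscosity, drop height) are assumptions for illustration, not the paper's measured values.

```python
def stokes_settling_time(d_m, drop_height_m, delta_rho=50.0, mu=2.0e-3, g=9.81):
    """Hours for a particle of diameter d_m (m) to settle through drop_height_m,
    using the Stokes terminal velocity v = g d^2 (rho_p - rho_f) / (18 mu).
    delta_rho (kg/m^3) and mu (Pa s) are assumed values for plant particulates
    in a water-ethanol mixture."""
    v = g * d_m**2 * delta_rho / (18.0 * mu)
    return drop_height_m / v / 3600.0

print(stokes_settling_time(10e-6, 0.5))   # ~1e2 h, same order as reported
print(stokes_settling_time(25e-6, 0.5))   # ~6x faster (time scales as 1/d^2)
```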
Substorm Occurrence and Intensity Associated With Three Types of Solar Wind Structure
NASA Astrophysics Data System (ADS)
Liou, Kan; Sotirelis, Thomas; Richardson, Ian
2018-01-01
This paper presents the results of a study of the characteristics of substorms that occurred during three distinct types of solar wind: coronal mass ejection (CME) associated, high-speed streams (HSS), and slow solar wind (SSW). A total of 53,468 geomagnetic substorm onsets from 1983 to 2009 is used and sorted by the three solar wind types. It is found that the probability density function (PDF) of the intersubstorm time can be fitted by the combination of a dominant power law with an exponential cutoff component and a minor lognormal component, implying that substorms are associated with two distinctly different dynamical processes corresponding, perhaps, to the "externally driven" and "internally driven" processes, respectively. We compare substorm frequency and intensity associated with the three types of solar wind. It is found that the intersubstorm time is the longest during SSW and shortest during CME intervals. The average intersubstorm time for the internally driven substorms is 3.13, 3.15, and 7.96 h for CME, HSS, and SSW, respectively. The substorm intensity PDFs, as represented by the peak value of |
NASA Astrophysics Data System (ADS)
Cohen, J. B.; Xi, X.; Wang, C.
2012-12-01
Black Carbon (BC) and other absorbing aerosols uniquely impact the climate system by both scattering and absorbing solar radiation, leading to simultaneous heating and cooling of the climate system. A critical understanding of the emissions, processing, transport, and removal of these aerosols is necessary to increase our understanding of their impacts on the climate system. However, BC is tricky to model: it has a mostly anthropogenic origin that is highly variable in both space and time. Furthermore, its atmospheric chemical and physical processing involves interaction with third-party chemical species. Finally, there is a strong correlation between uncertainty in prediction of the primary removal mechanism, precipitation, and those regions having the highest emissions, such as the Monsoon regions of Asia. Recent work using a coupled climate/radiation/aerosol/urbanization model, data on BC concentrations and remotely sensed AAODs from more than 100 different sites, and a Kalman Filter has led to an estimate of the average BC emissions and their uncertainty range. These estimates ranged from about 200% to 300% of the emissions currently used by the IPCC, AEROCOM, and GFED. The differences in the modeled concentrations, AAODs, radiative forcings, and climate response between these different annual-average emission levels, as well as the error bounds associated with the Kalman Filter emissions, have been explored and will be summarized. Additionally, since absorbing aerosols are regionally and temporally non-uniform, an improved comparison between these differences will be highlighted using an additional data source, MISR AOD, and a new analysis technique to mathematically constrain and identify unique temporally and spatially varying properties. These new constraints will be further combined with model runs under the different emissions scenarios to test the impacts of both annual averages as well as more realistic cases of large-scale, season-to-season, and year-to-year variations. These results will be displayed, compared against measurements, and the influence of the time-varying component quantified both globally and over two regions exhibiting such an influence. It is hoped that such quantification can lead to further improvement of the emissions estimates and their impact on the climate system.
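The emissions estimation step can be pictured as a standard Kalman update on an emissions scaling factor. The scalar sketch below is generic textbook Kalman filtering under our own notation, not the study's actual multi-site implementation.

```python
def kalman_update(x_prior, P_prior, y_obs, H, R):
    """One scalar Kalman measurement update for an emissions scaling factor.
    x_prior: prior estimate; P_prior: prior variance;
    y_obs: observed concentration or AAOD; H: modeled response per unit
    emission; R: observation error variance (all names are ours)."""
    S = H * P_prior * H + R            # innovation variance
    K = P_prior * H / S                # Kalman gain
    x_post = x_prior + K * (y_obs - H * x_prior)
    P_post = (1.0 - K * H) * P_prior   # reduced posterior uncertainty
    return x_post, P_post
```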
Highly-resolved numerical simulations of bed-load transport in a turbulent open-channel flow
NASA Astrophysics Data System (ADS)
Vowinckel, Bernhard; Kempe, Tobias; Nikora, Vladimir; Jain, Ramandeep; Fröhlich, Jochen
2015-11-01
The study presents the analysis of phase-resolving Direct Numerical Simulations of a horizontal turbulent open-channel flow laden with a large number of spherical particles. These particles have a mobility close to their threshold of incipient motion and are transported in bed-load mode. The coupling of the fluid phase with the particles is realized by an Immersed Boundary Method. The Double-Averaging Methodology is applied for the first time, convoluting the data into a handy set of quantities averaged in time and space to describe the most prominent flow features. In addition, a systematic study elucidates the impact of mobility and sediment supply on the pattern formation of particle clusters in a very large computational domain. A detailed description of fluid quantities links the developed particle patterns to the enhancement of turbulence and to a modified hydraulic resistance. Conditional averaging is applied to erosion events, providing the processes involved in incipient particle motion. Furthermore, the detection of moving particle clusters as well as their surrounding flow field is addressed by a moving-frame analysis. Funded by German Research Foundation (DFG), project FR 1593/5-2; computational time provided by ZIH Dresden, Germany, and JSC Juelich, Germany.
NASA Astrophysics Data System (ADS)
Basbug, Salur; Papadakis, George; Vassilicos, Christos
2015-11-01
The flow field inside a stirred tank is obtained by means of direct numerical simulation based on the finite volume method at Re = 500. Two different types of four-bladed radial impellers are considered: the first is a regular type with rectangular blades, and the second is a modified version of the former with irregular blade edges, having the same thickness and surface area. The shaft power is averaged over more than sixty revolutions, and the comparison between the two cases shows that the impeller with irregular blades has lower energy consumption. Moreover, a passive scalar is injected into the vessel for a quarter period of revolution and the scalar transport equation is solved to investigate the mixing times. The coefficient of variation of the passive scalar is averaged over the whole volume in order to obtain a quantitative indicator of the mixing progress. The homogenization curves depend on the instantaneous flow conditions due to the transient nature of the mixing process; therefore, multiple curves are averaged to obtain a representative result. There are indications that irregular blades can decrease mixing time with respect to regular ones.
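The mixing indicator described here is straightforward to compute from scalar snapshots. A minimal sketch (variable names are ours), where mixing is considered complete as the CoV approaches zero:

```python
import numpy as np

def coefficient_of_variation(c, cell_volumes):
    """Volume-weighted CoV of a scalar concentration field; the homogenization
    curve is this value tracked over successive snapshots."""
    mean = np.average(c, weights=cell_volumes)
    var = np.average((c - mean) ** 2, weights=cell_volumes)
    return np.sqrt(var) / mean
```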
Zhang, Zhenwei; VanSwearingen, Jessie; Brach, Jennifer S.; Perera, Subashan
2016-01-01
Human gait is a complex interaction of many nonlinear systems, and stride intervals exhibit self-similarity over long time scales that can be modeled as a fractal process. The scaling exponent represents the fractal degree and can be interpreted as a biomarker of related diseases. A previous study showed that the average wavelet method provides the most accurate results for estimating this scaling exponent when applied to stride interval time series. The purpose of this paper is to determine the most suitable mother wavelet for the average wavelet method. This paper presents a comparative numerical analysis of sixteen mother wavelets using simulated and real fractal signals. Simulated fractal signals were generated under varying signal lengths and scaling exponents that indicate a range of physiologically conceivable fractal signals. From these sixteen, five candidate mother wavelets were chosen due to their good performance on the mean square error test for both short and long signals. Next, we comparatively analyzed these five mother wavelets for physiologically relevant stride time series lengths. Our analysis showed that the symlet 2 mother wavelet provides a low mean square error and low variance for long time intervals and relatively low errors for short signal lengths. It can be considered the most suitable mother function without the burden of considering the signal length. PMID:27960102
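A minimal sketch of the average wavelet method with the recommended symlet 2 mother wavelet, using PyWavelets. The exact mapping from the fitted slope to the scaling exponent depends on the signal class (for fBm-like signals the slope is roughly H + 1/2), so treat that relation as an assumption rather than the paper's calibration.

```python
import numpy as np
import pywt

def scaling_exponent_slope(x, wavelet="sym2"):
    """Average wavelet method (sketch): regress log2 of the mean absolute
    detail coefficient against the dyadic scale index; the slope estimates
    the fractal scaling (≈ H + 1/2 for fBm-like signals, an assumption)."""
    coeffs = pywt.wavedec(np.asarray(x, dtype=float), wavelet)
    details = coeffs[1:]                  # cD_n (coarsest) ... cD_1 (finest)
    scales = np.arange(len(details), 0, -1)   # scale index j per detail level
    avg = np.array([np.mean(np.abs(d)) for d in details])
    slope, _ = np.polyfit(scales, np.log2(avg), 1)
    return slope
```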
3D Printed Surgical Instruments: The Design and Fabrication Process.
George, Mitchell; Aroom, Kevin R; Hawes, Harvey G; Gill, Brijesh S; Love, Joseph
2017-01-01
3D printing is an additive manufacturing process allowing the creation of solid objects directly from a digital file. We believe recent advances in additive manufacturing may be applicable to surgical instrument design. This study investigates the feasibility, design, and fabrication process of usable 3D printed surgical instruments. The computer-aided design package SolidWorks (Dassault Systemes SolidWorks Corp., Waltham MA) was used to design a surgical set including hemostats, needle driver, scalpel handle, retractors and forceps. These designs were then printed on a selective laser sintering (SLS) Sinterstation HiQ (3D Systems, Rock Hill SC) using DuraForm EX plastic. The final printed products were evaluated by practicing general surgeons for ergonomic functionality and performance, including simulated surgery and inguinal hernia repairs on human cadavers. Improvements were identified and addressed by adjusting design and build metrics. Repeated manufacturing processes and redesigns led to the creation of multiple functional and fully reproducible surgical sets utilizing the feedback of the surgeon users. Iterative cycles including design, production and testing took an average of 3 days. Each surgical set was built using the SLS Sinterstation HiQ with an average build time of 6 h per set. Functional 3D printed surgical instruments are feasible. Advantages compared to traditional manufacturing methods include no increase in cost for increased complexity, accelerated design-to-production times, and surgeon-specific modifications.
Enhancement of biogas production by co-digestion of potato pulp with cow manure in a CSTR system.
Sanaei-Moghadam, Akbar; Abbaspour-Fard, Mohammad Hossein; Aghel, Hasan; Aghkhani, Mohammad Hossein; Abedini-Torghabeh, Javad
2014-08-01
The anaerobic digestion (AD) process is a well-established method of generating energy from organic wastes, attractive from both environmental and economic perspectives. The purpose of the present study is to evaluate energy production from potato wastes by incorporating cow manure into the process. First, a laboratory pilot for one-stage biogas production was designed and built as a continuously stirred tank reactor (CSTR) system. The setup was able to automatically control the environmental conditions of the process, including temperature, duration, and rate of stirring. The AD experiment was performed exclusively on co-digestion of potato peel (PP) and cow manure (CM) at three mixing ratios, 20:80, 50:50, and 80:20 (PP:CM), with 0:100 as the control treatment, based on volatile solid (VS) weight and without adding initial inoculum. After a hydraulic retention time (HRT) of 50 days, on average 193, 256, 348, and 149 normal liters (LN) of methane per kg VS were produced for the respective mixing ratios. Statistical analysis shows that these gas productions are significantly different. The average energy determined from the produced methane was about 2.8 kWh (kg VS)(-1), implying a significant energy production potential. The average chemical oxygen demand (COD) removal across treatments was about 61%, showing that feedstock with a high organic matter content can be substantially treated by the employed pilot. The energy efficiency of 92% also indicates that the pilot controlled the process well.
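The reported specific energy can be sanity-checked from the methane yields, assuming a lower heating value for methane of about 9.97 kWh per normal cubic meter (our assumption; the paper's figure will also reflect the conversion efficiency and which treatments enter the average).

```python
# back-of-the-envelope conversion from methane yield to specific energy
LHV_CH4 = 9.97                        # kWh per Nm^3, assumed lower heating value
yields_ln_per_kg_vs = [193, 256, 348, 149]   # LN CH4 / kg VS, the four ratios
energy = [y / 1000.0 * LHV_CH4 for y in yields_ln_per_kg_vs]   # kWh / kg VS
print(energy)   # roughly 1.5 to 3.5 kWh/(kg VS); the reported 2.8 lies in range
```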
Palaeoflood records of the last three centuries from the Pyeongchang and Dong rivers, South Korea
NASA Astrophysics Data System (ADS)
Kim, Song-Hyun; Tanaka, Yukiya
2017-08-01
Slackwater deposits are paleostage indicators in paleoflood hydrology that have commonly been used in numerous studies to estimate the magnitude and frequency of flood events and to reconstruct paleoenvironments and paleohydrology. In this study, individual flood events along the Pyeongchang (PC) and Dong (D) rivers of South Korea were differentiated on the basis of changes in deposit color, grain size, organic content, and the existence of laminations. Based on 137Cs and 14C chronological data, 19 flood events have occurred at site PC since 1720, while 17 flood events have occurred at site D since 1815. At the PC study site, the average time interval between flood events is 15.4 years and the average sediment accumulation rate is 9.7 mm/y. At study site D, the average time interval is 15 years and the average sediment accumulation rate is 11.6 mm/y. These high sediment accumulation rates are consistent with those in humid areas (e.g., Japan) and explain how slackwater deposits can be preserved despite erosion, bioturbation, and pedogenic processes. Based on the results, the record was divided into three periods: (1) a relatively wet period (1720-1810 CE), (2) a dry period (1810-1960 CE), and (3) a wet period (1960 CE-present). The flood time intervals of the eighteenth century were shorter, and the average sediment accumulation rates higher, than those of the nineteenth century. This suggests that on the Korean Peninsula, the paleoclimate of the Little Ice Age (LIA) was wetter than that from the nineteenth century to the early twentieth century.
Dias, Philipe A; Dunkel, Thiemo; Fajado, Diego A S; Gallegos, Erika de León; Denecke, Martin; Wiedemann, Philipp; Schneider, Fabio K; Suhr, Hajo
2016-06-11
In the activated sludge process, problems of filamentous bulking and foaming can occur due to overgrowth of certain filamentous bacteria. Nowadays, these microorganisms are typically monitored by means of light microscopy, commonly combined with staining techniques. As drawbacks, these methods are susceptible to human error and subjectivity and are limited by the use of discontinuous microscopy. The in situ microscope appears to be a suitable tool for continuous monitoring of filamentous bacteria, providing real-time examination and automated analysis while eliminating sampling, preparation, and transport of samples. In this context, a proper image processing algorithm is proposed for automated recognition and measurement of filamentous objects. This work introduces a method for real-time evaluation of images without any staining, phase-contrast, or dilution techniques, unlike previous studies in the literature. Moreover, we introduce an algorithm which estimates the total extended filament length based on geodesic distance calculation. For a period of twelve months, samples from an industrial activated sludge plant were collected weekly and imaged without any prior conditioning, replicating real environment conditions. Trends of filament growth rate, the most important parameter for decision making, are correctly identified. For reference images whose filaments were marked by specialists, the algorithm correctly recognized 72% of the filament pixels, with a false positive rate of at most 14%. An average execution time of 0.7 s per image was achieved. Experiments have shown that the designed algorithm provides a suitable quantification of filaments when compared with human perception and standard methods. The algorithm's average execution time proves its suitability for being optimally mapped onto a computational architecture to provide real-time monitoring.
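The paper estimates total extended filament length via a geodesic distance calculation; as a simpler stand-in, the sketch below approximates total length by skeletonizing the binary filament mask and counting skeleton pixels. This is our illustrative proxy, not the published algorithm.

```python
from skimage.morphology import skeletonize

def total_filament_length(binary_mask, pixel_size_um=1.0):
    """Rough proxy for total extended filament length: reduce the segmented
    filament mask to a one-pixel-wide skeleton and scale the pixel count by
    the (assumed) physical pixel size."""
    skeleton = skeletonize(binary_mask.astype(bool))
    return skeleton.sum() * pixel_size_um
```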
Tsang, Hamilton C; Garcia, Adam; Scott, Robert; Lancaster, David; Geary, Dianne; Nguyen, Anh-Thu; Shankar, Raina; Buchanan, Leslie; Pham, Tho D
2018-05-16
The ordering process at Stanford Health Care involved twice-daily shipments predicated upon current stock levels from the blood center to the hospital transfusion service. Manual census determination is time consuming and error prone. We aimed to enhance inventory management by developing an informatics platform to streamline the ordering process and reallocate staff productivity. The general inventory accounts for more than 50 product categories based on characteristics including component, blood type, irradiation status, and cytomegalovirus serology status. Over a 5-month calibration period, inventory levels were determined algorithmically and electronically. An in-house software program was created to determine inventory levels, optimize the electronic ordering process, and reduce labor time. A 3-month pilot period was implemented using this program. This system showed noninferiority while saving labor time. The average weekly transfused:stocked ratios for cryoprecipitate, plasma, and red blood cells, respectively, were 1.03, 1.21, and 1.48 before the pilot period, compared with 0.88, 1.17, and 1.40 during (p = 0.28). There were 27 (before) and 31 (during) average STAT units ordered per week (p = 0.86). The number of monthly wasted products due to expiration was 226 (before) and 196 (during) units, respectively (p = 0.28). An estimated 7 hours per week of technologist time was reallocated to other tasks. An in-house electronic ordering system can enhance information fidelity, reallocate and optimize valuable staff productivity, and further standardize ordering. This system showed noninferiority to the labor-intensive manual system while freeing up over 360 hours of staff time per year. © 2018 AABB.
Silva, Filipa V M; Martins, Rui C; Silva, Cristina L M
2003-01-01
Cupuaçu (Theobroma grandiflorum) is an Amazonian tropical fruit with great economic potential. Pasteurization by a hot-filling technique was suggested for the preservation of this fruit pulp at room temperature. The process was implemented with local communities in Brazil. The process was modeled, and a computer program was written in Turbo Pascal. The relative importance of the pasteurization process variables (initial product temperature, heating rate, holding temperature and time, container volume and shape, cooling medium type and temperature) for the microbial target and quality was investigated by performing simulations according to a screening factorial design. Afterward, simulations of the different processing conditions were carried out. The holding temperature (TF) and time (thold) affected the pasteurization value (P), and the container volume (V) largely influenced the quality parameters. The process was optimized for retail (1 L) and industrial (100 L) size containers by maximizing volume-average quality in terms of color lightness and sensory "fresh notes" and minimizing volume-average total color difference and sensory "cooked notes". Equivalent processes were designed and simulated (P at 91 °C = 4.6 min, based on Alicyclobacillus acidoterrestris spores) and the final quality (color, flavor, and aroma attributes) was evaluated. Color was slightly affected by the pasteurization processes, and few differences were observed between the six equivalent treatments designed (TF between 80 and 97 °C). TF ≥ 91 °C minimized "cooked notes" and maximized "fresh notes" of cupuaçu pulp aroma and flavor for the 1 L container. For the 100 L size, the development of "cooked notes" can be minimized with TF ≥ 91 °C, but overall the quality was greatly degraded as a result of the long cooling times. A more efficient method to speed up the cooling phase was recommended, especially for industrial-size containers.
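The pasteurization value P used as the process target integrates lethality over the temperature-time profile, P = integral of 10^((T - T_ref)/z) dt. A small sketch with T_ref = 91 °C as in the paper; the z-value for Alicyclobacillus acidoterrestris spores is an assumed placeholder.

```python
import numpy as np

def pasteurization_value(t_min, temp_c, t_ref=91.0, z=9.0):
    """P (equivalent minutes at t_ref) for a temperature-time history.
    t_min: sample times in minutes; temp_c: product temperatures in deg C.
    z = 9 C is an assumed z-value, not the paper's fitted parameter."""
    t = np.asarray(t_min, dtype=float)
    L = 10.0 ** ((np.asarray(temp_c, dtype=float) - t_ref) / z)
    return np.sum(0.5 * (L[1:] + L[:-1]) * np.diff(t))   # trapezoidal rule
```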
7 CFR Exhibit B to Subpart I of... - Evaluation Report of Self-Help Technical Assistance (TA) Grants
Code of Federal Regulations, 2011 CFR
2011-01-01
...) f. Number of TA employees: (16) 6. a. Average time needed to construct a single house: (17) b... following adversely affect the Grantee's ability to accomplish program objectives? YES NO TA Staff Turnover ____ ____ FmHA Staff Turnover ____ ____ Bad Weather ____ ____ Loan Processing Delays ____ ____ Site Acquisition...
Assessing Competence in ESL: Reading.
ERIC Educational Resources Information Center
Oller, John W., Jr.
Results from research with eye movement photography (EMP) are discussed with a view to defining differences between native-speaker and non-native reading processes. The greatest contrast is in terms of the duration of eye fixations; non-native speakers at the college level require about as much time for a fixation as an average native-speaker at…
7 CFR Exhibit B to Subpart I of... - Evaluation Report of Self-Help Technical Assistance (TA) Grants
Code of Federal Regulations, 2010 CFR
2010-01-01
...) f. Number of TA employees: (16) 6. a. Average time needed to construct a single house: (17) b... following adversely affect the Grantee's ability to accomplish program objectives? YES NO TA Staff Turnover ____ ____ FmHA Staff Turnover ____ ____ Bad Weather ____ ____ Loan Processing Delays ____ ____ Site Acquisition...
40 CFR 463.24 - New source performance standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... usage flow rate for cleaning processes at a new source times the following pollutant concentrations (Subpart B, cleaning water; concentrations used to calculate NSPS, maximum for any 1 day / maximum for monthly average, in mg/l): BOD5 49 / 22; Oil and Grease 71 / 17; TSS 117 / 36; pH...
40 CFR 463.24 - New source performance standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... usage flow rate for cleaning processes at a new source times the following pollutant concentrations (Subpart B, cleaning water; concentrations used to calculate NSPS, maximum for any 1 day / maximum for monthly average, in mg/l): BOD5 49 / 22; Oil and Grease 71 / 17; TSS 117 / 36; pH...
NASA Technical Reports Server (NTRS)
Bertelrud, Arild; delaTova, Geva; Hamory, Philip J.; Young, Ronald; Noffz, Gregory K.; Dodson, Michael; Graves, Sharon S.; Diamond, John K.; Bartlett, James E.; Noack, Robert;
2000-01-01
In a recent flight experiment to study hypersonic crossflow transition, boundary layer characteristics were documented. A smooth steel glove was mounted on the first stage delta wing of Orbital Sciences Corporation's Pegasus (R) launch vehicle and was flown at speeds of up to Mach 8 and altitudes of up to 250,000 ft. The wing-glove experiment was flown as a secondary payload off the coast of Florida in October 1998. This paper describes the measurement system developed. Samples of the results obtained for different parts of the trajectory are included to show the characteristics and quality of the data. Thermocouples and pressure sensors (including Preston tubes, Stanton tubes, and a "probeless" pressure rake showing boundary layer profiles) measured the time-averaged flow. Surface hot-films and high-frequency pressure transducers measured flow dynamics. Because the vehicle was not recoverable, it was necessary to design a system for real-time onboard processing and transmission. Onboard processing included spectral averaging. The quality and consistency of data obtained was good and met the experiment requirements.
Optimization of Turbine Rim Seals
NASA Technical Reports Server (NTRS)
Wagner, J. H.; Tew, D. E.; Stetson, G. M.; Sabnis, J. S.
2006-01-01
Experiments are being conducted to gain an understanding of the physics of rim seal cavity ingestion in a turbine stage with the high-work, single-stage characteristics envisioned for Advanced Subsonic Transport (AST) aircraft gas turbine engines of the early 21st century. Initial experimental measurements to be presented include time-averaged turbine rim cavity and main gas path static pressure measurements for rim seal coolant to main gas path mass flow ratios between 0 and 0.02. The ultimate objective of this work is to develop improved rim seal design concepts for use in modern high-work, single-stage turbines in order to minimize the use of secondary coolant flow. Toward this objective, the time-averaged and unsteady data to be obtained in these experiments will be used to (1) quantify the impact of the rim cavity cooling air on the ingestion process, (2) quantify the film cooling benefits of the rim cavity purge flow in the main gas path, (3) quantify the impact of the cooling air on turbine efficiency, and (4) develop and evaluate both 3D CFD and analytical models of the ingestion/cooling process.
NASA Astrophysics Data System (ADS)
Moritz, R. E.
2005-12-01
The properties, distribution, and temporal variation of sea ice are reviewed for application to problems of ice-atmosphere chemical processes. Typical vertical structure of sea ice is presented for different ice types, including young ice, first-year ice, and multi-year ice, emphasizing factors relevant to surface chemistry and gas exchange. Time-averaged annual cycles of large-scale variables are presented, including ice concentration, ice extent, ice thickness, and ice age. Spatial and temporal variability of these large-scale quantities is considered on time scales of 1-50 years, emphasizing recent and projected changes in the Arctic pack ice. The amount and time evolution of open water and thin ice are important factors that influence ocean-ice-atmosphere chemical processes. Observations and modeling of the sea-ice thickness distribution function are presented to characterize the range of variability in open water and thin ice.
Implementation of lean manufacturing for frozen fish process at PT. XYZ
NASA Astrophysics Data System (ADS)
Setiyawan, D. T.; Pertiwijaya, H. R.; Effendi, U.
2018-03-01
PT. XYZ is a company specializing in the processing of fishery products, particularly frozen fish fillets. The purpose of this research was to identify the types of waste and determine recommendations for minimizing them. A lean manufacturing approach was used to identify waste by drawing the Value Stream Mapping (VSM) and selecting tools with the Value Stream Analysis Tools (VALSAT). The results of this research showed that the largest waste generated was the defect of leaking packaging on fillet products, averaging 1.21%. In addition to defects, other wastes were found, such as unnecessary motion, unnecessary overhead, and waiting time. Recommended improvements include reducing the time of several stages of the process, making production schedules, and conducting regular machine maintenance. The VSM analysis shows lead time reduced from 582.04 minutes to 572.01 minutes.
NASA Astrophysics Data System (ADS)
E Alekseev, A.; Tezadov, Ya A.; Potapov, V. T.
2017-05-01
In the present paper we perform, for the first time, an analysis of the average intensity noise power level at the output of a coherent phase-sensitive optical time-domain reflectometer (phase-OTDR) with a semiconductor laser source. The origin of the considered intensity noise lies in random phase fluctuations of the semiconductor laser source field. These phase fluctuations are converted to intensity noise in the process of interference of backscattered light. This intensity noise inevitably emerges in every phase-OTDR spatial channel and limits its sensitivity to external phase actions. The analysis of intensity noise in a phase-OTDR was based on the study of a fiber scattered-light interferometer (FSLI), which is treated as a constituent part of the OTDR. When considered independently, the FSLI has a broad intensity noise spectrum at its output; when the FSLI is treated as part of a phase-OTDR, due to the aliasing effect, the wide FSLI noise spectrum is folded into the spectral band determined by the probe pulse repetition frequency. In the analysis, one of the conventional phase-OTDR schemes with a rectangular dual-pulse probe signal was considered; the FSLI corresponding to this OTDR scheme has two scattering fiber segments with an additional time delay introduced between the backscattered fields. The average intensity noise power and the resulting noise spectrum at the output of this FSLI are determined by the degree of coherence of the semiconductor laser source, the length of the scattering fiber segments, and the additional time delay between the scattering segments. The average intensity noise characteristics at the output of the corresponding phase-OTDR are determined by the analogous parameters: the source coherence, the lengths of the parts constituting the dual-pulse, and the time interval separating the parts of the dual-pulse. In the paper, the expression for the average noise power spectral density (NPSD) at the output of the FSLI is theoretically derived and experimentally verified. Based on the derived average NPSD of the FSLI, a simple relation connecting the phase-OTDR parameters and the limiting level of the full average intensity noise power at its output was obtained. This relation was verified by experimental measurement of the average noise power at the output of the phase-OTDR. The limiting noise level considered in the paper determines the fundamental noise floor for a phase-OTDR with given parameters of source coherence, probe pulse length, and time delay between the two pulses constituting the dual-pulse.
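The folding of the broadband FSLI noise spectrum into the phase-OTDR band can be stated compactly; the notation below is ours:

```latex
S_{\mathrm{OTDR}}(f) \;=\; \sum_{k=-\infty}^{\infty}
  S_{\mathrm{FSLI}}\!\left(f + k\, f_{\mathrm{rep}}\right),
\qquad 0 \le f \le \frac{f_{\mathrm{rep}}}{2},
```

so the total average intensity noise power within the band [0, f_rep/2] collects essentially the full broadband FSLI noise power, which is why the limiting noise floor depends only on the source coherence, the pulse lengths, and the dual-pulse separation.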
PERIODIC AUTOREGRESSIVE-MOVING AVERAGE (PARMA) MODELING WITH APPLICATIONS TO WATER RESOURCES.
Vecchia, A.V.
1985-01-01
Results involving correlation properties and parameter estimation for autoregressive-moving average models with periodic parameters are presented. A multivariate representation of the PARMA model is used to derive parameter space restrictions and difference equations for the periodic autocorrelations. Close approximation of the likelihood function for Gaussian PARMA processes results in efficient maximum-likelihood estimation procedures. Terms in the Fourier expansion of the parameters are included sequentially, and a selection criterion is given for determining the optimal number of harmonics to be included. Application of the techniques is demonstrated through analysis of a monthly streamflow time series.
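For intuition, a periodic AR(1), the simplest PARMA special case, can be simulated directly; the seasonal parameter shapes below are hypothetical, chosen only to mimic month-dependent persistence and variance in streamflow.

```python
import numpy as np

def simulate_par1(phi, sigma, n_years, period=12, seed=0):
    """Simulate a periodic AR(1): x[t] = phi[m] x[t-1] + sigma[m] e[t],
    with m = t mod period; phi and sigma hold one value per season."""
    rng = np.random.default_rng(seed)
    n = n_years * period
    x = np.zeros(n)
    for t in range(1, n):
        m = t % period
        x[t] = phi[m] * x[t - 1] + sigma[m] * rng.standard_normal()
    return x

# hypothetical monthly parameters: stronger persistence/variance in spring
months = np.arange(12)
phi = 0.5 + 0.3 * np.cos(2 * np.pi * months / 12)
sigma = 1.0 + 0.5 * np.sin(2 * np.pi * months / 12)
flow = simulate_par1(phi, sigma, n_years=50)
```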
Conversion of cardiac performance data in analog form for digital computer entry
NASA Technical Reports Server (NTRS)
Miller, R. L.
1972-01-01
A system is presented which will reduce analog cardiac performance data and convert the results to digital form for direct entry into a commercial time-shared computer. Circuits are discussed which perform the measurement and digital conversion of instantaneous systolic and diastolic parameters from the analog blood pressure waveform. Digital averaging over a selected number of heart cycles is performed on these measurements, as well as those of flow and heart rate. The determination of average cardiac output and peripheral resistance, including trends, is the end result after processing by digital computer.
NASA Astrophysics Data System (ADS)
Kandel, D. D.; Western, A. W.; Grayson, R. B.
2004-12-01
Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
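The core of the approach, propagating a within-day intensity distribution through a nonlinear threshold process rather than using the daily mean, can be sketched for infiltration-excess runoff; the bins, weights, and infiltration capacity below are hypothetical.

```python
import numpy as np

def daily_runoff_from_cdf(intensities, weights, f_c):
    """Expected infiltration-excess runoff for one day (sketch).
    intensities: discretized rainfall-intensity values (mm/h);
    weights: probability mass of each value (the within-day intensity cdf);
    f_c: infiltration capacity (mm/h). Runoff = E[max(i - f_c, 0)] * 24 h."""
    excess = np.maximum(np.asarray(intensities) - f_c, 0.0)
    return 24.0 * np.sum(excess * np.asarray(weights))

# a day whose mean intensity (~1 mm/h) is below f_c still produces runoff
# through its high-intensity tail; a daily-average model would predict none
i = np.array([0.0, 1.0, 5.0, 20.0])      # mm/h (hypothetical bins)
w = np.array([0.70, 0.20, 0.08, 0.02])   # fraction of the day in each bin
print(daily_runoff_from_cdf(i, w, f_c=4.0))   # > 0
print(np.sum(i * w))                          # daily mean below f_c
```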
Morabito, Francesco Carlo; Campolo, Maurizio; Mammone, Nadia; Versaci, Mario; Franceschetti, Silvana; Tagliavini, Fabrizio; Sofia, Vito; Fatuzzo, Daniela; Gambardella, Antonio; Labate, Angelo; Mumoli, Laura; Tripodi, Giovanbattista Gaspare; Gasparini, Sara; Cianci, Vittoria; Sueri, Chiara; Ferlazzo, Edoardo; Aguglia, Umberto
2017-03-01
A novel technique of quantitative EEG for differentiating patients with early-stage Creutzfeldt-Jakob disease (CJD) from other forms of rapidly progressive dementia (RPD) is proposed. The discrimination is based on the extraction of suitable features from the time-frequency representation of the EEG signals through continuous wavelet transform (CWT). An average measure of complexity of the EEG signal obtained by permutation entropy (PE) is also included. The dimensionality of the feature space is reduced through a multilayer processing system based on the recently emerged deep learning (DL) concept. The DL processor includes a stacked auto-encoder, trained by unsupervised learning techniques, and a classifier whose parameters are determined in a supervised way by associating the known category labels to the reduced vector of high-level features generated by the previous processing blocks. The supervised learning step is carried out by using either support vector machines (SVM) or multilayer neural networks (MLP-NN). A subset of EEG from patients suffering from Alzheimer's Disease (AD) and healthy controls (HC) is considered for differentiating CJD patients. When fine-tuning the parameters of the global processing system by a supervised learning procedure, the proposed system is able to achieve an average accuracy of 89%, an average sensitivity of 92%, and an average specificity of 89% in differentiating CJD from RPD. Similar results are obtained for CJD versus AD and CJD versus HC.
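Of the features listed, permutation entropy is the most self-contained; a minimal sketch follows (the embedding order and delay are common defaults, not necessarily the study's settings).

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy in [0, 1] of a 1-D signal: count
    ordinal patterns of embedded windows and take the Shannon entropy."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i : i + order * delay : delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return -np.sum(p * np.log(p)) / np.log(factorial(order))
```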
Implementation of ICARE learning model using visualization animation on biotechnology course
NASA Astrophysics Data System (ADS)
Hidayat, Habibi
2017-12-01
ICARE is a learning model that directly ensures students actively participate in the learning process using animated visualization media. ICARE comprises five key elements of the learning experience for children and adults: introduction, connection, application, reflection, and extension. The ICARE system ensures that participants have the opportunity to apply what they have learned, so that the message delivered by the lecturer can be understood and retained by students for a long time. This learning model was deemed capable of improving learning outcomes and interest in learning when the Biotechnology course was taught by applying the ICARE learning model with animated visualization. The model motivated students to participate in the learning process, and learning outcomes increased compared with before. Applying the ICARE learning model with animated visualization in the Biotechnology course improved student results: the average score rose from 70.98 on the midterm test (75%) to 71.57 on the final test (68.63%). Students' interest in learning also increased, as seen from student activity observations in each cycle: the first cycle obtained an average value of 33.5 (adequate category), the second cycle 36.5 (good category), and the third cycle 36.5 (good category).
Mao, Ke-ya; Wang, Yan; Xiao, Song-hua; Zhang, Yong-gang; Liu, Bao-wei; Wang, Zheng; Zhang, Xi-Feng; Cui, Geng; Zhang, Xue-song; Li, Peng; Mao, Ke-zheng
2013-08-01
To investigate the feasibility of minimally invasive transforaminal lumbar interbody fusion (MIS-TLIF) using hybrid internal fixation of pedicle screws and a translaminar facet screw for recurrent lumbar disc herniation. From January 2010 to December 2011, 16 patients with recurrent lumbar disc herniation (10 male and 6 female, average age 45 years, range 35-68 years) were treated with unilateral-incision MIS-TLIF through a working channel. After decompression, interbody fusion, and fixation using unilateral pedicle screws, a translaminar facet screw was inserted from the same incision through the spinous process and lamina to the contralateral facet joint. Perioperative parameters, radiographic images, and clinical outcomes were assessed. Repeated-measures analysis of variance was applied to the visual analogue scale (VAS) and Oswestry disability index (ODI) scores. MIS-TLIF was accomplished in all patients through the working channel, including decompression, interbody fusion, and hybrid fixation, without any neural complications. The average operative time was (148 ± 75) minutes, the average operative blood loss was (186 ± 226) ml, the average postoperative ambulation time was (32 ± 15) hours, and the average hospitalization time was (6 ± 4) days. The average length of the incision was (29 ± 4) mm, and the average length of the translaminar facet screw was (52 ± 6) mm. The mean follow-up was 16.5 months, with a range of 12-24 months. Postoperative X-ray and CT images showed good position of the hybrid internal fixation, and all facet screws penetrated the facet joints. Significant improvement was found in back pain VAS, leg pain VAS, and ODI scores between 1 day preoperatively and all postoperative follow-up time points (back pain VAS: F = 52.845, P = 0.000; leg pain VAS: F = 113.480, P = 0.000; ODI: F = 36.665, P = 0.000). Recurrent lumbar disc herniation can be treated with MIS-TLIF using hybrid fixation through a unilateral incision, with advantages including less invasiveness and quicker recovery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisterson, D. L.
2009-01-15
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, the ratio of the actual number of data records received daily at the Archive to the expected number of data records is calculated. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The US Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1-(ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the first quarter of FY 2009 for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 x 2,208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 x 2,208), and for the Tropical Western Pacific (TWP) locale it is 1,876.80 hours (0.85 x 2,208). The OPSMAX time for the ARM Mobile Facility (AMF) is not reported this quarter because the data have not yet been released from China to the DMF for processing. The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period October 1-December 31, 2008, for the fixed sites. The AMF has been deployed to China, but the data have not yet been released. The first quarter comprises a total of 2,208 hours. On average, the fixed sites exceeded their operational goal this quarter.
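The OPSMAX and VARIANCE bookkeeping above is simple enough to express directly. A minimal sketch follows, using the formula quoted in the text; the example ACTUAL value is invented for illustration and is not from the report.

```python
# Sketch of the DOE time-based operating metrics described above.
# VARIANCE = 1 - (ACTUAL / OPSMAX) is quoted from the text; the SGP
# numbers are the ones given for Q1 FY 2009.

def opsmax(hours_in_quarter: float, uptime_goal: float) -> float:
    """Estimated maximum operation time, accounting for planned downtime."""
    return uptime_goal * hours_in_quarter

def variance(actual_hours: float, opsmax_hours: float) -> float:
    """Unplanned-downtime fraction: 1 - (ACTUAL / OPSMAX)."""
    return 1.0 - actual_hours / opsmax_hours

sgp_opsmax = opsmax(2208.0, 0.95)    # 2097.6 hours, as quoted for SGP
print(sgp_opsmax)
# With a hypothetical ACTUAL of 2000 hours (not from the report):
print(variance(2000.0, sgp_opsmax))  # ~0.047 unplanned downtime
```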
Application of Ozone MBBR Process in Refinery Wastewater Treatment
NASA Astrophysics Data System (ADS)
Lin, Wang
2018-01-01
Moving Bed Biofilm Reactor (MBBR) is a sewage treatment technology based on the fluidized bed; it can also be regarded as an efficient new reactor type positioned between the activated sludge method and the biofilm method. The application of an ozone MBBR process in refinery wastewater treatment is the main subject of this study, the key point being the design of a combined ozone + MBBR process based on the MBBR process. The ozone + MBBR process is used to treat the COD of the reverse-osmosis concentrate discharged from the refinery wastewater treatment plant. The experimental results show that the average removal rate of COD is 46.0%-67.3% in the treatment of reverse-osmosis concentrate by the ozone MBBR process, and the effluent meets the relevant standard requirements. Compared with the traditional process, the ozone MBBR process is more flexible. The main investment in this process is the ozone generator, blower and similar equipment; these items are relatively inexpensive, and their cost can be offset by the savings relative to the investment in traditional activated sludge processes. At the same time, the ozone MBBR process has obvious advantages in water quality, stability and other aspects.
Multifractal detrending moving-average cross-correlation analysis
NASA Astrophysics Data System (ADS)
Jiang, Zhi-Qiang; Zhou, Wei-Xing
2011-07-01
There are a number of situations in which several signals are simultaneously recorded in complex systems, which exhibit long-term power-law cross correlations. The multifractal detrended cross-correlation analysis (MFDCCA) approaches can be used to quantify such cross correlations, such as the MFDCCA based on the detrended fluctuation analysis (MFXDFA) method. We develop in this work a class of MFDCCA algorithms based on the detrending moving-average analysis, called MFXDMA. The performances of the proposed MFXDMA algorithms are compared with the MFXDFA method by extensive numerical experiments on pairs of time series generated from bivariate fractional Brownian motions, two-component autoregressive fractionally integrated moving-average processes, and binomial measures, which have theoretical expressions for their multifractal nature. In all cases, the scaling exponents hxy extracted from the MFXDMA and MFXDFA algorithms are very close to the theoretical values. For bivariate fractional Brownian motions, the scaling exponent of the cross correlation is independent of the cross-correlation coefficient between two time series, and the MFXDFA and centered MFXDMA algorithms have comparable performances, which outperform the forward and backward MFXDMA algorithms. For two-component autoregressive fractionally integrated moving-average processes, we also find that the MFXDFA and centered MFXDMA algorithms have comparable performances, while the forward and backward MFXDMA algorithms perform slightly worse. For binomial measures, the forward MFXDMA algorithm exhibits the best performance, the centered MFXDMA algorithm performs worst, and the backward MFXDMA algorithm outperforms the MFXDFA algorithm when the moment order q<0 and underperforms when q>0. We apply these algorithms to the return time series of two stock market indexes and to their volatilities. For the returns, the centered MFXDMA algorithm gives the best estimates of hxy(q) since its hxy(2) is closest to 0.5, as expected, and the MFXDFA algorithm has the second best performance. For the volatilities, the forward and backward MFXDMA algorithms give similar results, while the centered MFXDMA and the MFXDFA algorithms fail to extract a rational multifractal nature.
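To make the algorithm class concrete, here is a compact sketch of a moving-average-detrended cross-correlation fluctuation function with the backward/centered/forward window positions mentioned above. It is my own condensed reading of the MFXDMA idea, not the authors' reference implementation, and the window-alignment convention is an assumption.

```python
import numpy as np

def mfxdma(x, y, scales, q=2.0, theta=0.0):
    """Fluctuation function F_q(n) for MFXDMA-style analysis (q != 0).
    theta = 0 backward, 0.5 centered, 1 forward moving-average window.
    h_xy(q) is then the slope of log F_q(n) versus log n."""
    X, Y = np.cumsum(x - np.mean(x)), np.cumsum(y - np.mean(y))
    Fq = []
    for n in scales:
        kernel = np.ones(n) / n
        Xs = np.convolve(X, kernel, mode="valid")   # moving average of profile
        Ys = np.convolve(Y, kernel, mode="valid")
        offset = int((n - 1) * (1 - theta))         # align profile with its MA
        rx = X[offset:offset + len(Xs)] - Xs        # detrended residuals
        ry = Y[offset:offset + len(Ys)] - Ys
        f2 = np.abs(rx * ry)                        # local cross fluctuation
        Fq.append(np.mean(f2 ** (q / 2.0)) ** (1.0 / q))
    return np.array(Fq)
```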
Three-dimensional numerical and experimental studies on transient ignition of hybrid rocket motor
NASA Astrophysics Data System (ADS)
Tian, Hui; Yu, Ruipeng; Zhu, Hao; Wu, Junfeng; Cai, Guobiao
2017-11-01
This paper presents transient simulations and experimental studies of the ignition process of hybrid rocket motors (HRMs) using 90% hydrogen peroxide (HP) as the oxidizer and polymethyl methacrylate (PMMA) and polyethylene (PE) as fuels. A fluid-solid coupled numerical method is established based on the conserved form of the three-dimensional unsteady Navier-Stokes (N-S) equations, considering gas flow with chemical reactions and heat transfer between the fluid and solid regions. Experiments are subsequently conducted using a high-speed camera to record the ignition process. The flame propagation, chamber pressurizing process and average fuel regression rate of the numerical simulation results show good agreement with the experimental ones, which demonstrates the validity of the simulations in this study. The results also indicate that the flame propagation time is mainly governed by fluid dynamics and increases with increasing grain port area. The chamber pressurizing process begins when flame propagation through the grain port is complete. Furthermore, the chamber pressurizing time is about 4 times longer than the flame propagation time.
Fang, Wai-Chi; Huang, Kuan-Ju; Chou, Chia-Ching; Chang, Jui-Chung; Cauwenberghs, Gert; Jung, Tzyy-Ping
2014-01-01
This paper proposes an efficient very-large-scale integration (VLSI) design: a 16-channel on-line recursive independent component analysis (ORICA) processor ASIC for real-time EEG systems, implemented in TSMC 40 nm CMOS technology. ORICA is well suited to real-time EEG systems for separating artifacts because of its highly efficient, real-time processing. The proposed ORICA processor is composed of an ORICA processing unit and a singular value decomposition (SVD) processing unit. Compared with previous work [1], the proposed ORICA processor achieves enhanced effectiveness and reduced hardware complexity by utilizing a deeper pipeline architecture, a shared arithmetic processing unit, and shared registers. Sixteen-channel random signals containing 8-channel super-Gaussian and 8-channel sub-Gaussian components were used to analyze the independence of the source components, and the average correlation coefficient between the original source signals and the ORICA-extracted signals was 0.95452. Finally, the proposed ORICA processor ASIC, implemented in TSMC 40 nm CMOS technology, consumes 15.72 mW at a 100 MHz operating frequency.
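The 0.95452 figure is an average correlation between sources and extracted components. A hedged sketch of that evaluation step follows; variable names and the best-match convention are my assumptions (ICA recovers components only up to permutation and sign), not details from the paper.

```python
import numpy as np

def average_match_correlation(sources, extracted):
    """sources, extracted: arrays of shape (n_channels, n_samples).
    Mean absolute Pearson correlation of each source with its
    best-matching extracted component."""
    n = sources.shape[0]
    corr = np.corrcoef(np.vstack([sources, extracted]))[:n, n:]
    return np.mean(np.max(np.abs(corr), axis=1))

rng = np.random.default_rng(0)
s = rng.laplace(size=(16, 1000))           # stand-in super-Gaussian sources
x = s + 0.05 * rng.normal(size=s.shape)    # stand-in "extracted" signals
print(average_match_correlation(s, x))     # close to 1 for good separation
```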
Breakup phenomena of a coaxial jet in the non-dilute region using real-time X-ray radiography
NASA Astrophysics Data System (ADS)
Cheung, F. B.; Kuo, K. K.; Woodward, R. D.; Garner, K. N.
1990-07-01
An innovative approach to the investigation of liquid jet breakup processes in the near-injector region has been developed to overcome the experimental difficulties associated with optically opaque, dense sprays. Real-time X-ray radiography (RTR) has been employed to observe the inner structure and breakup phenomena of coaxial jets. In the atomizing regime, droplets much smaller than the exit diameter are formed beginning essentially at the injector exit. Through the use of RTR, the instantaneous contour of the liquid core was visualized. Experimental results consist of controlled-exposure digital video images of the liquid jet breakup process. Time-averaged video images have also been recorded for comparison. A digital image processing system is used to analyze the recorded images by creating radiance level distributions of the jet. A rudimentary method for deducing intact-liquid-core length has been suggested. The technique of real-time X-ray radiography has been shown to be a viable approach to the study of the breakup processes of high-speed liquid jets.
Brigati, Jennifer R; Swann, Jerilyn M
2015-05-01
Incorporating peer-review steps in the laboratory report writing process provides benefits to students, but it also can create additional work for laboratory instructors. The laboratory report writing process described here allows the instructor to grade only one lab report for every two to four students, while giving the students the benefits of peer review and prompt feedback on their laboratory reports. Here we present the application of this process to a sophomore level genetics course and a freshman level cellular biology course, including information regarding class time spent on student preparation activities, instructor preparation, prerequisite student knowledge, suggested learning outcomes, procedure, materials, student instructions, faculty instructions, assessment tools, and sample data. T-tests comparing individual and group grading of the introductory cell biology lab reports yielded average scores that were not significantly different from each other (p = 0.13, n = 23 for individual grading, n = 6 for group grading). T-tests also demonstrated that average laboratory report grades of students using the peer-review process were not significantly different from those of students working alone (p = 0.98, n = 9 for individual grading, n = 6 for pair grading). While the grading process described here does not lead to statistically significant gains (or reductions) in student learning, it allows student learning to be maintained while decreasing instructor workload. This reduction in workload could allow the instructor time to pursue other high-impact practices that have been shown to increase student learning. Finally, we suggest possible modifications to the procedure for application in a variety of settings.
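The comparisons above are independent two-sample t-tests. A minimal sketch follows; the score arrays are invented for illustration (mirroring the n = 9 versus n = 6 comparison), and the Welch variant is my assumption since the abstract does not state which form was used.

```python
import numpy as np
from scipy import stats

# Illustrative grades only -- not data from the study.
individual = np.array([82, 75, 90, 68, 77, 85, 73, 88, 80])  # n = 9
paired     = np.array([81, 79, 84, 74, 86, 78])              # n = 6

t, p = stats.ttest_ind(individual, paired, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2f}")  # p > 0.05 -> no significant difference
```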
NASA Astrophysics Data System (ADS)
Faes, Luca; Nollo, Giandomenico; Stramaglia, Sebastiano; Marinazzo, Daniele
2017-10-01
In the study of complex physical and biological systems represented by multivariate stochastic processes, an issue of great relevance is the description of the system dynamics spanning multiple temporal scales. While methods to assess the dynamic complexity of individual processes at different time scales are well established, multiscale analysis of directed interactions has never been formalized theoretically, and empirical evaluations are complicated by practical issues such as filtering and downsampling. Here we extend the very popular measure of Granger causality (GC), a prominent tool for assessing directed lagged interactions between joint processes, to quantify information transfer across multiple time scales. We show that the multiscale processing of a vector autoregressive (AR) process introduces a moving average (MA) component, and describe how to represent the resulting ARMA process using state space (SS) models and to combine the SS model parameters for computing exact GC values at arbitrarily large time scales. We exploit the theoretical formulation to identify peculiar features of multiscale GC in basic AR processes, and demonstrate with numerical simulations the much larger estimation accuracy of the SS approach compared to pure AR modeling of filtered and downsampled data. The improved computational reliability is exploited to disclose meaningful multiscale patterns of information transfer between global temperature and carbon dioxide concentration time series, both in paleoclimate and in recent years.
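For reference, the single-scale GC measure that the multiscale extension builds on is conventionally defined as a log ratio of prediction-error variances; the display below is the standard textbook form, stated here as background rather than quoted from the paper.

```latex
GC_{y \to x} \;=\; \ln \frac{\operatorname{var}\!\left(x_t - \mathbb{E}[x_t \mid x_{t-1}, x_{t-2}, \ldots]\right)}
{\operatorname{var}\!\left(x_t - \mathbb{E}[x_t \mid x_{t-1}, \ldots, y_{t-1}, \ldots]\right)}
```

At scale tau the same ratio is computed from the state space representation of the filtered and downsampled ARMA process rather than from an AR model refitted to the filtered data, which is what gives the SS approach its reported accuracy advantage.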
Ellerbe, Laura S; Manfredi, Luisa; Gupta, Shalini; Phelps, Tyler E; Bowe, Thomas R; Rubinsky, Anna D; Burden, Jennifer L; Harris, Alex H S
2017-04-04
In the U.S. Department of Veterans Affairs (VA), residential treatment programs are an important part of the continuum of care for patients with a substance use disorder (SUD). However, a limited number of program-specific measures to identify quality gaps in SUD residential programs exist. This study aimed to: (1) Develop metrics for two pre-admission processes: Wait Time and Engagement While Waiting, and (2) Interview program management and staff about program structures and processes that may contribute to performance on these metrics. The first aim sought to supplement the VA's existing facility-level performance metrics with SUD program-level metrics in order to identify high-value targets for quality improvement. The second aim recognized that not all key processes are reflected in the administrative data, and even when they are, new insight may be gained from viewing these data in the context of day-to-day clinical practice. VA administrative data from fiscal year 2012 were used to calculate pre-admission metrics for 97 programs (63 SUD Residential Rehabilitation Treatment Programs (SUD RRTPs); 34 Mental Health Residential Rehabilitation Treatment Programs (MH RRTPs) with a SUD track). Interviews were then conducted with management and front-line staff to learn what factors may have contributed to high or low performance, relative to the national average for their program type. We hypothesized that speaking directly to residential program staff may reveal innovative practices, areas for improvement, and factors that may explain system-wide variability in performance. Average wait time for admission was 16 days (SUD RRTPs: 17 days; MH RRTPs with a SUD track: 11 days), with 60% of Veterans waiting longer than 7 days. For these Veterans, engagement while waiting occurred in an average of 54% of the waiting weeks (range 3-100% across programs). Fifty-nine interviews representing 44 programs revealed factors perceived to potentially impact performance in these domains. Efficient screening processes, effective patient flow, and available beds were perceived to facilitate shorter wait times, while lack of beds, poor staffing levels, and lengths of stay of existing patients were thought to lengthen wait times. Accessible outpatient services, strong patient outreach, and strong encouragement of pre-admission outpatient treatment emerged as facilitators of engagement while waiting; poor staffing levels, socioeconomic barriers, and low patient motivation were viewed as barriers. Metrics for pre-admission processes can be helpful for monitoring residential SUD treatment programs. Interviewing program management and staff about drivers of performance metrics can play a complementary role by identifying innovative and other strong practices, as well as high-value targets for quality improvement. Key facilitators of high-performing facilities may offer programs with lower performance useful strategies to improve specific pre-admission processes.
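The two pre-admission metrics are simple enough to state in code. The sketch below is illustrative only: the field names and exact definitions (days from referral to admission; share of waiting weeks with any outpatient contact) are my reading of the text, not the study's specification.

```python
from datetime import date

def wait_time_days(referral: date, admission: date) -> int:
    """Wait Time: days from referral/screening to admission (assumed)."""
    return (admission - referral).days

def engagement_rate(weeks_waiting: int, weeks_with_contact: int) -> float:
    """Engagement While Waiting: fraction of waiting weeks with any
    outpatient contact (assumed definition)."""
    return weeks_with_contact / weeks_waiting if weeks_waiting else 1.0

print(wait_time_days(date(2012, 3, 1), date(2012, 3, 17)))  # 16 days
print(engagement_rate(3, 2))                                # ~0.67
```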
Wei, L; Chen, H; Zhou, Y S; Sun, Y C; Pan, S X
2017-02-18
To compare the technician fabrication time and clinical working time of custom trays fabricated using two different methods, three-dimensional printed custom trays and conventional custom trays, and to demonstrate the feasibility of computer-aided design/computer-aided manufacturing (CAD/CAM) custom trays in clinical use from the perspective of clinical time cost. Twenty edentulous patients were recruited into this prospective, single-blind, randomized self-controlled clinical trial. Two custom trays were fabricated for each participant: one fabricated with the functional suitable denture (FSD) system through a CAD/CAM process, and the other fabricated manually using conventional methods. Final impressions were then taken using both custom trays, and the final impressions were used to fabricate complete dentures respectively. The technician production time of the custom trays and the clinical working time of taking the final impression were recorded. The average time spent fabricating the three-dimensional printed custom trays using the FSD system and fabricating the conventional custom trays manually was (28.6±2.9) min and (31.1±5.7) min, respectively. The average time spent making the final impression with the three-dimensional printed custom trays and with the conventional custom trays was (23.4±11.5) min and (25.4±13.0) min, respectively. There was a significant difference in both the technician fabrication time and the clinical working time between the three-dimensional printed custom trays and the conventional custom trays fabricated manually (P<0.05). The average times spent fabricating the three-dimensional printed custom trays using the FSD system and making the final impression with them are less than those of the conventional custom trays fabricated manually, which shows that the FSD three-dimensional printed custom tray is less time-consuming in both the clinical and laboratory process than the conventional custom tray. In addition, when custom trays are manufactured by three-dimensional printing, there is no need to pour a preliminary cast after taking the primary impression, thus saving impression material and model material. For complete denture restoration, manufacturing custom trays using the FSD system is worth popularizing.
A Stochastic Fractional Dynamics Model of Space-time Variability of Rain
NASA Technical Reports Server (NTRS)
Kundu, Prasun K.; Travis, James E.
2013-01-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, that allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida and in Kwajalein Atoll, Marshall Islands in the tropical Pacific. We estimate the parameters by tuning them to the second moment statistics of radar data. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well without any further adjustment.
Graph transformation method for calculating waiting times in Markov chains.
Trygubenko, Semen A; Wales, David J
2006-06-21
We describe an exact approach for calculating transition probabilities and waiting times in finite-state discrete-time Markov processes. All the states and the rules for transitions between them must be known in advance. We can then calculate averages over a given ensemble of paths for both additive and multiplicative properties in a nonstochastic and noniterative fashion. In particular, we can calculate the mean first-passage time between arbitrary groups of stationary points for discrete path sampling databases, and hence extract phenomenological rate constants. We present a number of examples to demonstrate the efficiency and robustness of this approach.
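For orientation, the central quantity (mean first-passage time in a finite-state discrete-time Markov chain) can also be obtained by a direct linear solve, as sketched below. This illustrates the quantity itself; the paper's graph transformation method computes it by successive node removal instead, which is what gives it its efficiency on large databases.

```python
import numpy as np

def mean_first_passage(P, targets):
    """P: (n, n) row-stochastic transition matrix.
    Returns E[steps to hit `targets`] from every state, via
    (I - Q) t = 1 restricted to the non-target states."""
    n = P.shape[0]
    others = [i for i in range(n) if i not in set(targets)]
    Q = P[np.ix_(others, others)]            # transitions among non-targets
    t = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    out = np.zeros(n)
    out[others] = t                          # targets have hitting time 0
    return out

P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.5, 0.2],
              [0.0, 0.0, 1.0]])              # state 2 absorbing
print(mean_first_passage(P, targets=[2]))   # ~[6.9, 6.2, 0.]
```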
NASA Astrophysics Data System (ADS)
Yamazaki, T.; Katayama, I.; Uwamino, Y.
1993-02-01
The possibility of a crossed beam facility of slow neutrons capturing unstable nuclei is examined in connection with the Japanese Hadron Project. With a pulsed proton beam of 50 Hz repetition and with a 100 μA average beam current, one obtains a spallation neutron source of 2.4 × 10^8 thermal neutrons/cm^3/spill over a 60 cm length with a 3 ms average duration time by using a D2O moderator. By confining 10^9 radioactive ions in a beam circulation ring of 0.3 MHz revolution frequency, so that the nuclei pass through the neutron source, one obtains a collision luminosity of 3.9 × 10^24/cm^2/s. A new research domain aimed at studying rapid processes in nuclear genetics in a laboratory will be created.
Acephate and buprofezin residues in olives and olive oil.
Cabras, P; Angioni, A; Garau, V L; Pirisi, F M; Cabitza, F; Pala, M
2000-10-01
Field trials were carried out to study the persistence of acephate and buprofezin on olives. Two cultivars, pizz'e carroga and pendolino, with very large and small fruits respectively were used. After treatment, no difference was found between the two pesticide deposits on the olives. The disappearance rates, calculated as pseudo first order kinetics, were similar for both pesticides (on average 12 days). Methamidophos, the acephate metabolite, was always present on all olives, and in some pendolino samples it showed higher residues than the maximum residue limit (MRL). During washing, the first step of olive processing, the residue level of both pesticides on the olives did not decrease. After processing of the olives into oil, no residues of acephate or methamidophos were found in the olive oil, while the residues of buprofezin were on average four times higher than on olives.
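For readers outside residue chemistry, pseudo-first-order disappearance has the standard form below. This is textbook kinetics, not an equation quoted from the paper, and whether the reported 12 days is a half-life or a 1/e time is not stated in the abstract.

```latex
C(t) = C_0\, e^{-kt}, \qquad t_{1/2} = \frac{\ln 2}{k}
```

If the 12 days is read as a half-life, the rate constant would be k = ln 2 / 12 ≈ 0.058 per day.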
Talvitie, Julia; Heinonen, Mari; Pääkkönen, Jari-Pekka; Vahtera, Emil; Mikola, Anna; Setälä, Outi; Vahala, Riku
2015-01-01
This study on the removal of microplastics during different wastewater treatment unit processes was carried out at the Viikinmäki wastewater treatment plant (WWTP). The amount of microplastics in the influent was high, but it decreased significantly during the treatment process. Most of the fibres were removed already in primary sedimentation, whereas synthetic particles settled mostly in secondary sedimentation. Biological filtration further improved the removal. A proportion of the microplastic load nevertheless passed through the treatment and was found in the effluent, entering the receiving water body. After the treatment process, an average of 4.9 (±1.4) fibres and 8.6 (±2.5) particles were found per litre of wastewater. The total textile fibre concentration in the samples collected from the surface waters in the Helsinki archipelago varied between 0.01 and 0.65 fibres per litre, while the synthetic particle concentration varied between 0.5 and 9.4 particles per litre. The average fibre concentration was 25 times higher and the particle concentration three times higher in the effluent than in the receiving body of water. This indicates that WWTPs may act as a route for microplastics entering the sea.
Liao, Xiaolei; Zhao, Juanjuan; Jiao, Cheng; Lei, Lei; Qiang, Yan; Cui, Qiang
2016-01-01
Background Lung parenchyma segmentation is often performed as an important pre-processing step in the computer-aided diagnosis of lung nodules based on CT image sequences. However, existing lung parenchyma image segmentation methods cannot fully segment all lung parenchyma images and have a slow processing speed, particularly for images in the top and bottom of the lung and the images that contain lung nodules. Method Our proposed method first uses the position of the lung parenchyma image features to obtain lung parenchyma ROI image sequences. A gradient and sequential linear iterative clustering algorithm (GSLIC) for sequence image segmentation is then proposed to segment the ROI image sequences and obtain superpixel samples. The SGNF, which is optimized by a genetic algorithm (GA), is then utilized for superpixel clustering. Finally, the grey and geometric features of the superpixel samples are used to identify and segment all of the lung parenchyma image sequences. Results Our proposed method achieves higher segmentation precision and greater accuracy in less time. It has an average processing time of 42.21 seconds for each dataset and an average volume pixel overlap ratio of 92.22 ± 4.02% for four types of lung parenchyma image sequences. PMID:27532214
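The GSLIC step above builds on SLIC-style superpixels. As a hedged stand-in (the paper's GSLIC adds gradient information and works on image sequences, which is not reproduced here), standard SLIC from scikit-image looks like this; parameters are illustrative and assume a recent scikit-image version.

```python
import numpy as np
from skimage.segmentation import slic

image = np.random.rand(128, 128)        # stand-in grayscale CT slice
labels = slic(image, n_segments=200, compactness=0.1, channel_axis=None)
print(labels.shape, labels.max() + 1)   # superpixel label map and count
```

Each superpixel can then be summarized by grey-level and geometric features and clustered, as the pipeline above describes.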
Escape and finite-size scaling in diffusion-controlled annihilation
Ben-Naim, Eli; Krapivsky, Paul L.
2016-12-16
In this paper, we study diffusion-controlled single-species annihilation with a finite number of particles. In this reaction-diffusion process, each particle undergoes ordinary diffusion, and when two particles meet, they annihilate. We focus on spatial dimensions d > 2, where a finite number of particles typically survive the annihilation process. Using scaling techniques we investigate the average number of surviving particles, M, as a function of the initial number of particles, N. In three dimensions, for instance, we find the scaling law M ~ N^(1/3) in the asymptotic regime N >> 1. We show that two time scales govern the reaction kinetics: the diffusion time scale, T ~ N^(2/3), and the escape time scale, tau ~ N^(4/3). The vast majority of annihilation events occur on the diffusion time scale, while no annihilation events occur beyond the escape time scale.
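A toy Monte Carlo makes the process concrete. The sketch below puts N random walkers on a periodic lattice and annihilates pairs that land on the same site; note the paper treats unbounded space, where the M ~ N^(1/3) survivors escape to infinity, so this finite box only loosely illustrates the kinetics and is not a reproduction of the result.

```python
import numpy as np

def surviving(N=500, L=16, steps=5000, seed=1):
    """Count walkers left after same-site pairwise annihilation in d = 3."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, L, size=(N, 3))
    alive = np.ones(N, dtype=bool)
    for _ in range(steps):
        idx = np.flatnonzero(alive)
        if len(idx) < 2:
            break
        axes = rng.integers(0, 3, size=len(idx))       # random axis per walker
        moves = rng.choice((-1, 1), size=len(idx))     # random direction
        pos[idx, axes] = (pos[idx, axes] + moves) % L  # periodic boundaries
        occupied = {}
        for i in idx:
            site = tuple(pos[i])
            j = occupied.get(site)
            if j is not None and alive[j]:
                alive[i] = alive[j] = False            # pairwise annihilation
            else:
                occupied[site] = i
    return int(alive.sum())

print(surviving())
```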
Smart Projectiles: Design Guidelines and Development Process Keys to Success
2010-10-01
...program the authors have worked on, this is always incorrect. Once again there is an adage that "there is never enough time or money to do it right, but...there is always enough time and money to do it over." At least one program the authors worked on was terminated because it "did it over" one too many
NASA Astrophysics Data System (ADS)
Agustin, Eny Widhia; Hangga, Arimaz; Fahrian, Muhammad Iqbal; Azhari, Anis Fikri
2018-03-01
The implementation of a monitoring system in facial acupressure learning media can increase students' proficiency. However, common learning media still do not implement a monitoring system in the learning process. This research implemented a monitoring system in a mannequin-head prototype serving as a learning medium for facial acupressure, using Bluetooth, wireless and Ethernet. The results showed differences in the delay times between Bluetooth and wireless or Ethernet, but no difference in the average delay time between using Bluetooth with wireless and using Bluetooth with Ethernet in the monitoring system of the facial acupressure learning media. Of all the facial acupressure points, the forehead point had the longest delay time, 11.93 seconds. The average delay time across all 3 classrooms was 1.96 seconds; the use of Bluetooth, wireless and Ethernet is therefore highly recommended in the monitoring system for facial acupressure.
Free Oscillations of the Facula Node at the Stage of Slow Dissipation
NASA Astrophysics Data System (ADS)
Solov'ev, A. A.; Kirichek, E. A.; Efremov, V. I.
2017-12-01
Solar faculae having the appearance of quite long-lived magnetic nodes can perform free oscillations (as can sunspots, chromospheric filaments, and coronal loops), i.e., they can oscillate about a stable equilibrium position as a single whole, quasi-periodically changing the magnetic field averaged over their cross-section with periods from 1 to 4 hours. Kolotkov et al. (2017) described a case in which the average magnetic field strength of a facula node decreased considerably over 13 hours of SDO magnetogram observations while its oscillations acquired a specific character: the fundamental mode of free oscillations of the facula grew considerably in amplitude (approximately twofold), while the period of oscillations increased threefold. At the end of the process, the system dissipated. In this work, we present the exact solution of the equation of small-amplitude oscillations of a system with time-variable rigidity, describing an oscillation regime in which the elasticity of the system decreases with time while the period and amplitude of oscillations grow.
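A generic way to write such small-amplitude free oscillations with slowly decreasing rigidity is the textbook form below (my notation, not the authors' exact equation):

```latex
\ddot{\xi} + \omega^{2}(t)\,\xi = 0, \qquad \omega^{2}(t) = \frac{k(t)}{m_{\mathrm{eff}}}
```

In the adiabatic (WKB) limit the action E/omega is conserved, so as the rigidity k(t) decays the amplitude grows as omega(t)^(-1/2) while the period 2*pi/omega(t) lengthens, qualitatively matching the reported twofold amplitude and threefold period growth before dissipation.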
Recursive processes in self-affirmation: intervening to close the minority achievement gap.
Cohen, Geoffrey L; Garcia, Julio; Purdie-Vaughns, Valerie; Apfel, Nancy; Brzustoski, Patricia
2009-04-17
A 2-year follow-up of a randomized field experiment previously reported in Science is presented. A subtle intervention to lessen minority students' psychological threat related to being negatively stereotyped in school was tested in an experiment conducted three times with three independent cohorts (N = 133, 149, and 134). The intervention, a series of brief but structured writing assignments focusing students on a self-affirming value, reduced the racial achievement gap. Over 2 years, the grade point average (GPA) of African Americans was, on average, raised by 0.24 grade points. Low-achieving African Americans were particularly benefited. Their GPA improved, on average, 0.41 points, and their rate of remediation or grade repetition was less (5% versus 18%). Additionally, treated students' self-perceptions showed long-term benefits. Findings suggest that because initial psychological states and performance determine later outcomes by providing a baseline and initial trajectory for a recursive process, apparently small but early alterations in trajectory can have long-term effects. Implications for psychological theory and educational practice are discussed.
Atmospheric Diabatic Heating in Different Weather States and the General Circulation
NASA Technical Reports Server (NTRS)
Rossow, William B.; Zhang, Yuanchong; Tselioudis, George
2016-01-01
Analysis of multiple global satellite products identifies distinctive weather states of the atmosphere from the mesoscale pattern of cloud properties and quantifies the associated diabatic heating/cooling by radiative flux divergence, precipitation, and surface sensible heat flux. The results show that the forcing for the atmospheric general circulation is a very dynamic process, varying strongly at weather space-time scales, comprising relatively infrequent, strong heating events by ''stormy'' weather and more nearly continuous, weak cooling by ''fair'' weather. Such behavior undercuts the value of analyses of time-averaged energy exchanges in observations or numerical models. It is proposed that an analysis of the joint time-related variations of the global weather states and the general circulation on weather space-time scales might be used to establish useful ''feedback like'' relationships between cloud processes and the large-scale circulation.
[Cleavage time for a hydrogen bond under a load].
Bespalov, S V; Tolpygo, K B
1993-01-01
Statistics of hydrogen bond formation and breaking in a bundle of actin and myosin filaments, which produces the attractive force in the sarcomere of a muscle, is studied. The purely mechanical problem of attractive-force formation and of the motion of myosin heads and actin globules under their action is supplemented by accounting for irreversible processes: (1) thermal de-excitation of the latter in the chain of hydrogen bonds during the elementary act of ATP energy use, resulting in fixation of the extended actin filament; and (2) breaking of the hydrogen bonds that realize this fixation, due to thermal fluctuations, over a time tau. The average lifetime turns out to be of the order of the time necessary for the z-membrane of the sarcomere to move by the actin filament extension delta l, which is necessary for the process of muscle contraction to continue.
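A standard hedged expression for the mean lifetime of a bond that breaks thermally under a constant load F is the Bell-Kramers form below; this is a generic model added for context, not the formula derived in the paper.

```latex
\tau(F) = \tau_0 \exp\!\left( \frac{E_0 - F\,\Delta x}{k_B T} \right)
```

Here tau_0 is an attempt time, E_0 the zero-load barrier, and Delta x the distance to the transition state; loading lowers the effective barrier and shortens the average lifetime tau.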
Establishment and assessment of an integrated citric acid-methane production process.
Xu, Jian; Chen, Yang-Qiu; Zhang, Hong-Jian; Bao, Jia-Wei; Tang, Lei; Wang, Ke; Zhang, Jian-Hua; Chen, Xu-Sheng; Mao, Zhong-Gui
2015-01-01
To solve the problem of extraction wastewater in industrial citric acid production, an improved integrated citric acid-methane production process was established in this study. Extraction wastewater was treated by anaerobic digestion, and the anaerobic digestion effluent (ADE) was then stripped with air to remove ammonia. After solid-liquid separation to remove metal ion precipitates, the supernatant was recycled for the next batch of citric acid fermentation, thus eliminating wastewater discharge and reducing water consumption. Glucoamylase (130 U/g) was added to the medium after inoculation, and the recycling process was run for 10 batches. Fermentation time decreased by 20% with recycling, and the average citric acid production (batches 2-10) was 145.9±3.4 g/L, only 2.5% lower than that with tap water (149.6 g/L). The average methane production was 292.3±25.1 mL/g COD removed and was stable in operation. An excessive Na(+) concentration in the ADE was confirmed to be the major challenge for the proposed process. Copyright © 2014 Elsevier Ltd. All rights reserved.
Effects of developer exhaustion on DFL Contrast FV-58 and Kodak Insight dental films.
de Carvalho, Fabiano Pachêco; da Silveira, M M F; Frazão, M A G; de Santana, S T; dos Anjos Pontual, M L
2011-09-01
The aim of this study was to compare the properties of the DFL Contrast FV-58 F-speed film (DFL Co., Rio de Janeiro, Brazil) with the Kodak Insight E/F-speed film (Eastman Kodak, Rochester, NY) in fresh and exhausted processing solutions. The parameters studied were speed, average gradient and latitude. Five samples of each type of film were exposed under standardized conditions over 5 weeks. The films were developed in fresh and progressively exhausted processing solutions. Characteristic curves were constructed from values of optical density and radiation dose and were used to calculate the parameters. An analysis of variance was performed separately for film type and time. DFL Contrast FV-58 film has a speed and average gradient significantly higher than Insight film, whereas its latitude values are lower. Exhaustion of the processing solutions had no significant effect on the parameters studied. DFL Contrast FV-58 film has stable properties when exhausted manual processing solutions are used and can be recommended for use in dental practice, contributing to dose reduction.
Method and apparatus for improved wire saw slurry
Costantini, Michael A.; Talbott, Jonathan A.; Chandra, Mohan; Prasad, Vishwanath; Caster, Allison; Gupta, Kedar P.; Leyvraz, Philippe
2000-09-05
A slurry recycle process for use in free-abrasive machining operations such as for wire saws used in wafer slicing of ingots, where the used slurry is separated into kerf-rich and abrasive-rich components, and the abrasive-rich component is reconstituted into a makeup slurry. During the process, the average particle size of the makeup slurry is controlled by monitoring the condition of the kerf and abrasive components and making necessary adjustments to the separating force and dwell time of the separator apparatus. Related pre-separator and post separator treatments, and feedback of one or the other separator slurry output components for mixing with incoming used slurry and recirculation through the separator, provide further effectiveness and additional control points in the process. The kerf-rich component is eventually or continually removed; the abrasive-rich component is reconstituted into a makeup slurry with a controlled, average particle size such that the products of the free-abrasive machining method using the recycled slurry process of the invention are of consistent high quality with less TTV deviation from cycle to cycle for a prolonged period or series of machining operations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cavazos-Cadena, Rolando, E-mail: rcavazos@uaaan.m; Salem-Silva, Francisco, E-mail: frsalem@uv.m
2010-04-15
This note concerns discrete-time controlled Markov chains with Borel state and action spaces. Given a nonnegative cost function, the performance of a control policy is measured by the superior limit risk-sensitive average criterion associated with a constant and positive risk sensitivity coefficient. Within such a framework, the discounted approach is used (a) to establish the existence of solutions for the corresponding optimality inequality, and (b) to show that, under mild conditions on the cost function, the optimal value functions corresponding to the superior and inferior limit average criteria coincide on a certain subset of the state space. The approach of the paper relies on standard dynamic programming ideas and on a simple analytical derivation of a Tauberian relation.
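For orientation, with risk-sensitivity coefficient lambda > 0 the superior-limit risk-sensitive average cost of a policy pi from state x is usually written as below; this is the standard form and notation, assumed here rather than copied from the note.

```latex
J(\pi, x) \;=\; \limsup_{n \to \infty} \; \frac{1}{\lambda n}
\log \mathbb{E}_x^{\pi}\!\left[ \exp\!\left( \lambda \sum_{t=0}^{n-1} c(X_t, A_t) \right) \right]
```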
Multifractal analysis of time series generated by discrete Ito equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Telesca, Luciano; Czechowski, Zbigniew; Lovallo, Michele
2015-06-15
In this study, we show that discrete Ito equations with short-tail Gaussian marginal distribution function generate multifractal time series. The multifractality is due to the nonlinear correlations, which are hidden in Markov processes and are generated by the interrelation between the drift and the multiplicative stochastic forces in the Ito equation. A link between the range of the generalized Hurst exponents and the mean of the squares of all averaged net forces is suggested.
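A discrete Ito equation of the kind discussed is easy to simulate. The sketch below uses a linear drift and a state-dependent (multiplicative) noise amplitude purely for illustration; the paper's specific coefficients yielding a short-tail Gaussian marginal are not reproduced here.

```python
import numpy as np

def discrete_ito(n=100000, dt=0.01, seed=0):
    """x_{k+1} = x_k + a(x_k) dt + b(x_k) sqrt(dt) xi_k, illustrative a, b."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = 0.0
    for i in range(n - 1):
        a = -x[i]                       # linear restoring drift
        b = 1.0 + 0.5 * np.tanh(x[i])   # multiplicative stochastic force
        x[i + 1] = x[i] + a * dt + b * np.sqrt(dt) * rng.normal()
    return x

series = discrete_ito()  # feed into an MFDFA routine to probe multifractality
```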
Large zeolites - Why and how to grow in space
NASA Technical Reports Server (NTRS)
Sacco, Albert, Jr.
1991-01-01
The growth of zeolite crystals, which are considered among the most valuable catalytic and adsorbent materials in the chemical processing industry, is discussed. It is proposed to use triethanolamine as a nucleation control agent to control the time release of Al in a zeolite A solution and to increase the average and maximum crystal size by 25-50 times. Large zeolites could be used to make membranes for reactors/separators, substantially increasing their efficiency.
Exposure of welders and other metal workers to ELF magnetic fields.
Skotte, J H; Hjøllund, H I
1997-01-01
This study assessed exposure to extremely low frequency (ELF) magnetic fields of welders and other metal workers and compared exposure from different welding processes. Exposure to ELF magnetic fields was measured for 50 workers selected from a nationwide cohort of metal workers and 15 nonrandomly selected full-time welders in a shipyard. The measurements were carried out with personal exposure meters during 3 days of work for the metal workers and 1 day of work for the shipyard welders. To record a large dynamic range of ELF magnetic field values, the measurements were carried out with "high/low" pairs of personal exposure meters. Additional measurements of static magnetic fields at fixed positions close to welding installations were done with a Hall-effect fluxmeter. The total time of measurement was 1273 hours. The metal workers reported welding activity for 5.8% of the time, and the median of the work-period mean exposure to ELF magnetic fields was 0.18 microT. DC metal inert or active gas welding (MIG/MAG) was used 80% of the time for welding, and AC manual metal arc welding (MMA) was used 10% of the time. The shipyard welders reported welding activity for 56% of the time, and the median and maximum of the workday mean exposure to ELF magnetic fields were 4.70 and 27.5 microT, respectively. For full-shift welders the average workday mean was 21.2 microT for MMA welders and 2.3 microT for MIG/MAG welders. The average exposure during the effective time of welding was estimated to be 65 microT for the MMA welding process and 7 microT for the MIG/MAG welding process. The time of exposure above 1 microT was found to be a useful measure of the effective time of welding. Large differences in exposure to ELF magnetic fields were found between different groups of welders, depending on the welding process and effective time of welding. MMA (AC) welding caused roughly 10 times higher exposure to ELF magnetic fields compared with MIG/MAG (DC) welding. The measurements of static fields suggest that the combined exposure to static and ELF fields of MIG/MAG (DC) welders and the exposure to ELF fields of MMA (AC) welders are roughly of the same level.
Code of Federal Regulations, 2010 CFR
2010-07-01
40 CFR § 60.2943 (Protection of Environment; Operator Training and Qualification: Monitoring): How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.2975 to calculate...
Code of Federal Regulations, 2012 CFR
2012-07-01
40 CFR § 60.2943 (Protection of Environment; Operator Training and Qualification: Monitoring): How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.2975 to calculate...
Code of Federal Regulations, 2011 CFR
2011-07-01
40 CFR § 60.2943 (Protection of Environment; Operator Training and Qualification: Monitoring): How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.2975 to calculate...
The Association Between Childhood Obesity and Tooth Eruption
Must, Aviva; Phillips, Sarah M.; Tybor, David J.; Lividini, Keith; Hayes, Catherine
2013-01-01
Obesity is a growth-promoting process as evidenced by its effect on the timing of puberty. Although studies are limited, obesity has been shown to affect the timing of tooth eruption. Both the timing and sequence of tooth eruption are important to overall oral health. The purpose of this study was to examine the association between obesity and tooth eruption. Data were combined from three consecutive cycles (2001–2006) of the National Health and Nutrition Examination Survey (NHANES) and analyzed to examine associations between the number of teeth erupted (NET) and obesity status (BMI >95th percentile relative to the Centers for Disease Control and Prevention (CDC) growth reference) among children 5 up to 14 years of age, controlling for potential confounding by age, gender, race, and socioeconomic status (SES). Obesity is significantly associated with having a higher average NET during the mixed dentition period. On average, teeth of obese children erupted earlier than nonobese children, with obese children having on average 1.44 more teeth erupted than nonobese children, after adjusting for age, gender, and race/ethnicity (P < 0.0001). SES was not a confounder of the observed associations. Obese children, on average, have significantly more teeth erupted than nonobese children after adjusting for gender, age, and race. These findings may have clinical importance in the area of dental and orthodontic medicine both in terms of risk for dental caries due to extended length of time exposed in the oral cavity and sequencing which may increase the likelihood of malocclusions. PMID:22310231
Chadsuthi, Sudarat; Iamsirithaworn, Sopon; Triampo, Wannapong; Modchang, Charin
2015-01-01
Influenza is a worldwide respiratory infectious disease that easily spreads from one person to another. Previous research has found that the influenza transmission process is often associated with climate variables. In this study, we used autocorrelation and partial autocorrelation plots to determine the appropriate autoregressive integrated moving average (ARIMA) model for influenza transmission in the central and southern regions of Thailand. The relationships between reported influenza cases and the climate data, such as the amount of rainfall, average temperature, average maximum relative humidity, average minimum relative humidity, and average relative humidity, were evaluated using cross-correlation function. Based on the available data of suspected influenza cases and climate variables, the most appropriate ARIMA(X) model for each region was obtained. We found that the average temperature correlated with influenza cases in both central and southern regions, but average minimum relative humidity played an important role only in the southern region. The ARIMAX model that includes the average temperature with a 4-month lag and the minimum relative humidity with a 2-month lag is the appropriate model for the central region, whereas including the minimum relative humidity with a 4-month lag results in the best model for the southern region.
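An ARIMAX fit of this kind is straightforward with statsmodels. The sketch below is hedged: the data are synthetic, and the (p, d, q) order and the 4-month lag are placeholders standing in for the choices the paper made from ACF/PACF plots and cross-correlations.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
n = 120                                    # 10 years of synthetic monthly data
temp = 25 + 3 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 1, n)
cases = 100 + 5 * np.roll(temp, 4) + rng.normal(0, 10, n)   # built-in 4-month lag

exog = pd.DataFrame({"temp_lag4": pd.Series(temp).shift(4)}).dropna()
endog = pd.Series(cases).iloc[exog.index]

model = SARIMAX(endog, exog=exog, order=(1, 0, 1))  # placeholder ARMA order
fit = model.fit(disp=False)
print(fit.params)                          # exogenous coefficient + ARMA terms
```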
Averaging Theory for Description of Environmental Problems: What Have We Learned?
Miller, Cass T.; Schrefler, Bernhard A.
2012-01-01
Advances in Water Resources has been a prime archival source for implementation of averaging theories in changing the scale at which processes of importance in environmental modeling are described. Thus in celebration of the 35th year of this journal, it seems appropriate to assess what has been learned about these theories and about their utility in describing systems of interest. We review advances in understanding and use of averaging theories to describe porous medium flow and transport at the macroscale, an averaged scale that models spatial variability, and at the megascale, an integral scale that only considers time variation of system properties. We detail physical insights gained from the development and application of averaging theory for flow through porous medium systems and for the behavior of solids at the macroscale. We show the relationship between standard models that are typically applied and more rigorous models that are derived using modern averaging theory. We discuss how the results derived from averaging theory that are available can be built upon and applied broadly within the community. We highlight opportunities and needs that exist for collaborations among theorists, numerical analysts, and experimentalists to advance the new classes of models that have been derived. Lastly, we comment on averaging developments for rivers, estuaries, and watersheds. PMID:23393409
NASA Astrophysics Data System (ADS)
Lenoir, Guillaume; Crucifix, Michel
2018-03-01
Geophysical time series are sometimes sampled irregularly along the time axis. The situation is particularly frequent in palaeoclimatology. Yet, there is so far no general framework for handling the continuous wavelet transform when the time sampling is irregular. Here we provide such a framework. To this end, we define the scalogram as the continuous-wavelet-transform equivalent of the extended Lomb-Scargle periodogram defined in Part 1 of this study (Lenoir and Crucifix, 2018). The signal being analysed is modelled as the sum of a locally periodic component in the time-frequency plane, a polynomial trend, and a background noise. The mother wavelet adopted here is the Morlet wavelet classically used in geophysical applications. The background noise model is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, which is more general than the traditional Gaussian white and red noise processes. The scalogram is smoothed by averaging over neighbouring times in order to reduce its variance. The Shannon-Nyquist exclusion zone is however defined as the area corrupted by local aliasing issues. The local amplitude in the time-frequency plane is then estimated with least-squares methods. We also derive an approximate formula linking the squared amplitude and the scalogram. Based on this property, we define a new analysis tool: the weighted smoothed scalogram, which we recommend for most analyses. The estimated signal amplitude also gives access to band and ridge filtering. Finally, we design a test of significance for the weighted smoothed scalogram against the stationary Gaussian CARMA background noise, and provide algorithms for computing confidence levels, either analytically or with Monte Carlo Markov chain methods. All the analysis tools presented in this article are available to the reader in the Python package WAVEPAL.
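For a feel of spectral analysis under irregular sampling, the setting this work addresses, here is a minimal example using SciPy's classical Lomb-Scargle periodogram as a simple stand-in; the paper's WAVEPAL package implements the extended periodogram/scalogram and the CARMA noise tests, which are not reproduced here.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 100, 300))            # irregular sampling times
y = np.sin(2 * np.pi * t / 8.0) + 0.5 * rng.normal(size=t.size)

freqs = np.linspace(0.01, 1.0, 500) * 2 * np.pi  # angular frequencies
power = lombscargle(t, y - y.mean(), freqs, normalize=True)
print(freqs[np.argmax(power)] / (2 * np.pi))     # ~1/8 cycles per time unit
```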
Development of a Real-Time Pulse Processing Algorithm for TES-Based X-Ray Microcalorimeters
NASA Technical Reports Server (NTRS)
Tan, Hui; Hennig, Wolfgang; Warburton, William K.; Doriese, W. Bertrand; Kilbourne, Caroline A.
2011-01-01
We report here a real-time pulse processing algorithm for superconducting transition-edge sensor (TES) based x-ray microcalorimeters. TES-based microcalorimeters offer ultra-high energy resolutions, but the small volume of each pixel requires that large arrays of identical microcalorimeter pixels be built to achieve sufficient detection efficiency. That in turn requires that as much pulse processing as possible be performed at the front end of the readout electronics to avoid transferring large amounts of data to a host computer for post-processing. Therefore, a real-time pulse processing algorithm that can be implemented in the readout electronics while achieving satisfactory energy resolution is desired. We have developed an algorithm that can be easily implemented in hardware. We then tested the algorithm offline using several data sets acquired with an 8 x 8 Goddard TES x-ray calorimeter array and a 2 x 16 NIST time-division SQUID multiplexer. We obtained an average energy resolution of close to 3.0 eV at 6 keV for the multiplexed pixels while preserving over 99% of the events in the data sets.
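The abstract does not spell out the algorithm itself, so as background here is a generic frequency-domain optimal-filter pulse-height estimate, a common approach for TES microcalorimeters; it is a hedged illustration, not the authors' method, and the template, noise spectrum, and numbers are invented.

```python
import numpy as np

def optimal_filter_energy(pulse, template, noise_psd):
    """Amplitude estimate by noise-weighted least squares in the
    frequency domain (generic optimal filtering)."""
    P = np.fft.rfft(pulse)
    S = np.fft.rfft(template)
    w = 1.0 / noise_psd                     # weight by inverse noise power
    return np.real(np.sum(w * np.conj(S) * P) / np.sum(w * np.abs(S) ** 2))

n = 1024
t = np.arange(n)
template = np.exp(-t / 200.0) - np.exp(-t / 20.0)  # stand-in pulse shape
noise_psd = np.ones(n // 2 + 1)                    # white-noise assumption
pulse = 3.0 * template + 0.01 * np.random.default_rng(3).normal(size=n)
print(optimal_filter_energy(pulse, template, noise_psd))  # ~3.0
```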
Production of zinc and manganese oxide particles by pyrolysis of alkaline and Zn-C battery waste.
Ebin, Burçak; Petranikova, Martina; Steenari, Britt-Marie; Ekberg, Christian
2016-05-01
Production of zinc and manganese oxide particles from alkaline and zinc-carbon battery black mass was studied in a pyrolysis process at 850-950°C with various residence times under a 1 L/min N2(g) flow rate, without using any additive. The particle and chemical properties of the battery waste were characterized to investigate the possible reactions and their effects on the properties of the reaction products. The thermodynamics of the pyrolysis process were studied using the HSC Chemistry 5.11 software. The carbothermic reduction reaction of the battery black mass takes place and makes it possible to produce fine zinc particles by rapid condensation after the evaporation of zinc from the pyrolysis batch. The amount of zinc that can be separated from the black mass increases with both pyrolysis temperature and residence time. A zinc recovery of 97% was achieved at 950°C and 1 h residence time using the proposed alkaline battery recycling process. The pyrolysis residue is mainly MnO powder with small amounts of zinc, iron and potassium impurities and has an average particle size of 2.9 μm. The obtained zinc particles have an average particle size of about 860 nm and consist of hexagonal crystals around 110 nm in size. The morphology of the zinc particles changes from a hexagonal shape to a spherical morphology with increasing pyrolysis temperature. Copyright © 2015 Elsevier Ltd. All rights reserved.
Zhang, Yan; Wang, Yong-sheng; Bai, Yu-hua; Chen, Chen; Lü, Jian; Zhang, Jie
2007-10-01
A swimming bed combined with aerobic granular sludge was developed as a novel technology for wastewater treatment, building on the biofilm process and the activated sludge process, and the results demonstrated notable high-efficiency treatment capability and sludge reduction. Even when the hydraulic retention time (HRT) was only 3.2 h, with an average COD volumetric loading of 2.03 kg/(m3 x d) and NH4(+)-N loading of 0.52 kg/(m3 x d), an average COD removal rate of 90.9% and an NH4(+)-N removal rate of 98.3% were achieved. Aerobic granular sludge with spherical or rod shape appeared after 16 days of operation. The mixed liquor suspended solids (MLSS) concentration in the reactor reached 5,640 mg/L at its highest during the operation period, and the average ratio of mixed liquor volatile suspended solids (MLVSS) to MLSS was 0.87. Furthermore, microscopic observation of the biofilm and aerobic granules revealed abundant protozoa and metazoa on the biofilm and suspended sludge, and this long food chain can contribute to sludge reduction. A sludge yield (MLSS/CODremoved) of only 0.1755 was obtained in the experiment, about 50% of that of conventional aerobic processes.
Operating efficiency of an emergency Burns theatre: An eight month analysis.
Mohan, Arvind; Lutterodt, Christopher; Leon-Villapalos, Jorge
2017-11-01
The efficient use of operating theatres is important to ensure optimum cost-benefit for the hospital. We used the emergency Burns theatre as a model to assess theatre efficiency at our institution. Data were collected retrospectively on every operation performed in the Burns theatre between 01/04/15 and 30/11/15. Each component of the operating theatre process was considered and integrated to calculate values for surgical/anaesthetic time, changeover time and, ultimately, theatre efficiency. A total of 426 operations were carried out over 887 h of allocated theatre time (ATT). Actual operating time represented 67.7%, anaesthetic time 8.8% and changeover time 14.2% of ATT. The average changeover time between patients was 30.1 min. Lists started on average 27.7 min late each day. There were a total of 5.8 h of overruns and 9.6 h of no useful activity. Operating theatre efficiency was 69.3% for the 8-month period. Our study highlights areas where theatre efficiency can be improved, and we suggest various strategies that may be applied universally. Copyright © 2017 Elsevier Ltd and ISBI. All rights reserved.
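As a minimal sketch of the bookkeeping above, the reported component fractions of ATT can be re-expressed in hours; the abstract does not spell out the exact efficiency definition, so only the quoted figures are used here.

```python
# Re-express the reported fractions of allocated theatre time (ATT) in hours.
att_hours = 887.0
components = {
    "operating": 0.677,    # 67.7% of ATT
    "anaesthetic": 0.088,  # 8.8% of ATT
    "changeover": 0.142,   # 14.2% of ATT
}
for name, fraction in components.items():
    print(f"{name}: {fraction * att_hours:.0f} h")
# Reported overall theatre efficiency for the period: 69.3% of ATT.
```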
Development of a New Time-Resolved Laser-Induced Fluorescence Technique
NASA Astrophysics Data System (ADS)
Durot, Christopher; Gallimore, Alec
2012-10-01
We are developing a time-resolved laser-induced fluorescence (LIF) technique to interrogate the ion velocity distribution function (VDF) of EP thruster plumes down to the microsecond time scale. Better measurements of dynamic plasma processes will lead to improvements in simulation and prediction of thruster operation and erosion. We present the development of the new technique and results of initial tests. Signal-to-noise ratio (SNR) is often a challenge for LIF studies, and it is even more challenging for time-resolved measurements since a lock-in amplifier cannot be used with a long time constant. The new system uses laser modulation on the order of MHz, which enables the use of electronic filtering and phase-sensitive detection to improve SNR while preserving time-resolved information. Statistical averaging over many cycles to further improve SNR is done in the frequency domain. This technique can have significant advantages, including (1) larger spatial maps enabled by shorter data acquisition time and (2) the ability to average data without creating a phase reference by modifying the thruster operating condition with a periodic cutoff in discharge current, which would otherwise modify the ion velocity distribution.
Land processes lead to surprising patterns in atmospheric residence time
NASA Astrophysics Data System (ADS)
van der Ent, R.; Tuinenburg, O.
2017-12-01
Our research using atmospheric moisture tracking methods shows that the global average atmospheric residence time of evaporation is 8-10 days. This residence time appears to be Gamma distributed, with a higher probability of shorter-than-average residence times and a long tail; as a consequence, the median residence time is around 5 days. In some places in the world, moisture has little chance to precipitate again during the first few hours or days after evaporation, which is reflected in a Gamma distribution with a shape parameter below 1. In this study we present global maps of this parameter using different datasets (GLDAS and ERA-Interim). The shape parameter is thus also a measure of the land-atmosphere coupling strength along the path of an atmospheric water particle. We also find that the different evaporation components (canopy interception, soil evaporation and transpiration) appear to have different residence time distributions. We find a daily cycle in the residence time distribution over land, which is not present over the oceans. In this paper we show which of the evaporation components is mainly responsible for this daily pattern and thus exhibits the largest daily cycle of land-atmosphere coupling strength.
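As a rough illustration of why a Gamma-distributed residence time with a shape parameter below 1 pushes the median well below the mean, the following sketch compares the two; the shape value and mean are assumed placeholders consistent with the ranges quoted above, not fitted values.

```python
# Illustrative sketch, not the authors' analysis: a Gamma residence-time
# distribution with shape < 1 has a long tail, so the median sits well
# below the mean. Parameter values here are assumptions.
from scipy.stats import gamma

k = 0.85                                 # assumed shape parameter (< 1, as found over land)
mean_days = 9.0                          # assumed global mean residence time (8-10 days)
dist = gamma(a=k, scale=mean_days / k)   # mean = shape * scale

print(f"mean   = {dist.mean():.1f} days")     # 9.0
print(f"median = {dist.median():.1f} days")   # noticeably smaller
print(f"P(< 1 day) = {dist.cdf(1.0):.2f}")    # heavy weight at short times
```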
NASA Astrophysics Data System (ADS)
Yin, Lucy; Andrews, Jennifer; Heaton, Thomas
2018-05-01
Earthquake parameter estimation using nearest-neighbor searching in a large database of observations can yield reliable predictions. However, in the real-time application of Earthquake Early Warning (EEW) systems, the accuracy gained from a large database is penalized by a significant delay in processing time. We propose to use a multidimensional binary search tree (KD tree) data structure to organize large seismic databases and so reduce the nearest-neighbor search time for predictions. We evaluated the performance of the KD tree on the Gutenberg Algorithm, a database-searching algorithm for EEW. We constructed an offline test to predict peak ground motions using a database with feature sets of waveform filter-bank characteristics, and compared the results with the observed seismic parameters. We concluded that a large database provides more accurate predictions of ground motion information, such as peak ground acceleration, velocity, and displacement (PGA, PGV, PGD), than of source parameters, such as hypocenter distance. Applying the KD tree search to organize the database reduced the average search time by 85% relative to the exhaustive method, making the approach feasible for real-time implementation. The algorithm is straightforward, and the results will reduce the overall time of warning delivery for EEW.
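As a minimal sketch of the data structure in question (synthetic features, not the Gutenberg Algorithm's actual database or labels), a KD tree built once offline answers nearest-neighbor queries in logarithmic average time:

```python
# Sketch: KD-tree nearest-neighbor lookup over a synthetic seismic feature set.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
database = rng.normal(size=(100_000, 9))   # hypothetical filter-bank features
tree = cKDTree(database)                   # built once, offline

query = rng.normal(size=9)                 # features of an incoming waveform
dist, idx = tree.query(query, k=10)        # 10 nearest records, O(log N) on average
# A prediction would then combine the ground-motion labels of records `idx`.
```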
Vatankhah, Hamed; Zamindar, Nafiseh; Shahedi Baghekhandan, Mohammad
2015-10-01
A mixed computational strategy was used to simulate and optimize the thermal processing of Haleem, an ancient eastern food, in semi-rigid aluminum containers. Average temperature values from the experiments showed no significant difference (α = 0.05) from the predicted temperatures at the same positions. According to the model, the slowest-heating zone was located at the geometric center of the container, where F0 was estimated to be 23.8 min. A 19 min decrease in the holding time of the treatment was estimated to optimize the heating operation, since the preferred F0 of starch- or meat-based fluid foods is about 4.8-7.5 min.
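For context, F0 is the standard sterilization value obtained by integrating lethality over the process; a minimal sketch of that textbook calculation follows, with an assumed cold-spot temperature history rather than the paper's simulated one.

```python
# Sketch of the standard F0 calculation: F0 = sum 10**((T - 121.1)/z) * dt,
# with reference temperature 121.1 degC and z = 10 degC. The heating curve
# below is an assumed placeholder, not the model output from the paper.
import numpy as np

dt = 0.5                                         # minutes per sample
t = np.arange(0, 60, dt)
T = 121.1 - 40.0 * np.exp(-t / 12.0)             # hypothetical cold-spot curve, degC

F0 = np.sum(10.0 ** ((T - 121.1) / 10.0) * dt)   # equivalent minutes at 121.1 degC
print(f"F0 = {F0:.1f} min")
```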
Steiber, Nadia
2015-01-01
This paper uses individual-level data from the German Socio-Economic Panel to model trends in population health in terms of cognition, physical fitness, and mental health between 2006 and 2012. The focus is on the population aged 50–90. We use a repeated population-based cross-sectional design. As outcome measures, we use SF-12 measures of physical and mental health and the Symbol-Digit Test (SDT) that captures cognitive processing speed. In line with previous research we find a highly significant Flynn effect on cognition; i.e., SDT scores are higher among those who were tested more recently (at the same age). This result holds for men and women, all age groups, and across all levels of education. While we observe a secular improvement in terms of cognitive functioning, at the same time, average physical and mental health has declined. The decline in average physical health is shown to be stronger for men than for women and found to be strongest for low-educated, young-old men aged 50–64: the decline over the 6-year interval in average physical health is estimated to amount to about 0.37 SD, whereas average fluid cognition improved by about 0.29 SD. This pattern of results at the population-level (trends in average population health) stands in interesting contrast to the positive association of physical health and cognitive functioning at the individual-level. The findings underscore the multi-dimensionality of health and the aging process. PMID:26323093
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisterson, D. L.
2009-10-15
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near-real time. Raw and processed data are then sent approximately daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the fourth quarter of FY 2009 for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 × 2,208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 × 2,208) and for the Tropical Western Pacific (TWP) locale is 1,876.8 hours (0.85 × 2,208). The ARM Mobile Facility (AMF) was officially operational May 1 in Graciosa Island, the Azores, Portugal, so the OPSMAX time this quarter is 2,097.60 hours (0.95 × 2,208). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive result from downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for the period July 1 - September 30, 2009, for the fixed sites. Because the AMF operates episodically, the AMF statistics are reported separately and not included in the aggregate average with the fixed sites. The fourth quarter comprises a total of 2,208 hours for the fixed and mobile sites. The average of the fixed sites well exceeded our goal this quarter. The AMF data statistic requires explanation. Since the AMF radar data ingest software is being modified, the data are being stored in the DMF for data processing. Hence, the data are not at the Archive; they are anticipated to become available by the next report.
Photoinduced diffusion molecular transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozenbaum, Viktor M., E-mail: vik-roz@mail.ru, E-mail: litrakh@gmail.com; Dekhtyar, Marina L.; Lin, Sheng Hsien
2016-08-14
We consider a Brownian photomotor, namely, the directed motion of a nanoparticle in an asymmetric periodic potential under the action of periodic rectangular resonant laser pulses which cause charge redistribution in the particle. Based on the kinetics of the photoinduced electron redistribution between two or three energy levels of the particle, the time dependence of its potential energy is derived and the average directed velocity is calculated in the high-temperature approximation (when the spatial amplitude of potential energy fluctuations is small relative to the thermal energy). The theory of photoinduced molecular transport thus developed appears applicable not only to conventional dichotomous Brownian motors (with only two possible potential profiles) but also to a much wider variety of molecular nanomachines. The distinction between the realistic time dependence of the potential energy and that of a dichotomous process (a step function) is represented in terms of relaxation times (which can differ between the intervals of the dichotomous process). As shown, a Brownian photomotor has the maximum average directed velocity at (i) large laser pulse intensities (resulting in short relaxation times in laser-on intervals) and (ii) excited-state lifetimes long enough to permit efficient photoexcitation but still much shorter than laser-off intervals. A Brownian photomotor with optimized parameters is exemplified by a cylindrically shaped semiconductor nanocluster which moves directly along a polar substrate due to a periodically photoinduced dipole moment (caused by the repetitive excited-electron transitions to a non-resonant level of the nanocylinder surface impurity).
Automated Quantification of Pneumothorax in CT
Do, Synho; Salvaggio, Kristen; Gupta, Supriya; Kalra, Mannudeep; Ali, Nabeel U.; Pien, Homer
2012-01-01
An automated, computer-aided diagnosis (CAD) algorithm for the quantification of pneumothoraces from Multidetector Computed Tomography (MDCT) images has been developed. Algorithm performance was evaluated through comparison to manual segmentation by expert radiologists. A combination of two-dimensional and three-dimensional processing techniques was incorporated to reduce required processing time by two-thirds (as compared to similar techniques). Volumetric measurements on relative pneumothorax size were obtained and the overall performance of the automated method shows an average error of just below 1%. PMID:23082091
Tracer-Test Planning Using the Efficient Hydrologic Tracer ...
Hydrological tracer testing is the most reliable diagnostic technique available for establishing flow trajectories and hydrologic connections and for determining basic hydraulic and geometric parameters necessary for establishing operative solute-transport processes. Tracer-test design can be difficult because of a lack of prior knowledge of the basic hydraulic and geometric parameters desired and the appropriate tracer mass to release. A new efficient hydrologic tracer-test design (EHTD) methodology has been developed that combines basic measured field parameters (e.g., discharge, distance, cross-sectional area) in functional relationships that describe solute-transport processes related to flow velocity and time of travel. The new method applies these initial estimates for time of travel and velocity to a hypothetical continuously stirred tank reactor as an analog for the hydrologic flow system to develop initial estimates for tracer concentration and axial dispersion, based on a preset average tracer concentration. Root determination of the one-dimensional advection-dispersion equation (ADE) using the preset average tracer concentration then provides a theoretical basis for an estimate of necessary tracer mass. Application of the predicted tracer mass with the hydraulic and geometric parameters in the ADE allows for an approximation of initial sample-collection time and subsequent sample-collection frequency where a maximum of 65 samples were determined to be
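A minimal sketch of the root-determination step as described (the instantaneous-injection formulation and all parameter values below are our assumptions for illustration): the one-dimensional ADE solution is C(x,t) = M/(A√(4πDt)) · exp(−(x−vt)²/(4Dt)), and the tracer mass M is chosen so the concentration at the sampling station matches the preset average concentration.

```python
# Sketch: solve for tracer mass M such that the 1-D ADE peak concentration at
# the sampling station equals a preset target. All values are hypothetical.
import numpy as np
from scipy.optimize import brentq

v, D, A, x = 0.05, 0.5, 2.0, 500.0    # velocity (m/s), dispersion (m2/s), area (m2), distance (m)
C_set = 10.0                          # preset average tracer concentration (g/m3)

def peak_conc(M):
    t = x / v                         # travel time of the tracer center of mass
    return M / (A * np.sqrt(4.0 * np.pi * D * t))

M = brentq(lambda M: peak_conc(M) - C_set, 1e-6, 1e6)   # root determination
print(f"required tracer mass ~ {M / 1000:.1f} kg")
```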
Code of Federal Regulations, 2012 CFR
2012-07-01
... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...
Code of Federal Regulations, 2010 CFR
2010-07-01
... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...
Code of Federal Regulations, 2013 CFR
2013-07-01
... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...
Code of Federal Regulations, 2011 CFR
2011-07-01
... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...
Code of Federal Regulations, 2014 CFR
2014-07-01
... averages into the appropriate averaging times and units? 60.3042 Section 60.3042 Protection of Environment... Construction On or Before December 9, 2004 Model Rule-Monitoring § 60.3042 How do I convert my 1-hour arithmetic averages into the appropriate averaging times and units? (a) Use Equation 1 in § 60.3076 to...
NASA Astrophysics Data System (ADS)
Tang, Dunbing; Dai, Min
2015-09-01
The traditional production planning and scheduling problem considers performance indicators like time, cost and quality as optimization objectives in manufacturing processes. However, environmentally friendly factors like the energy consumption of production have not been fully taken into consideration. Against this background, this paper addresses an approach to modify a given schedule generated by a production planning and scheduling system in a job shop floor, where machine tools can work at different cutting speeds. It adjusts the cutting speeds of the operations while keeping the original assignment and processing sequence of operations of each job fixed in order to obtain energy savings. First, the proposed approach, based on a mixed integer programming mathematical model, changes the total idle time of the given schedule to minimize energy consumption in the job shop floor while accepting the optimal solution of the scheduling objective, makespan. Then, a genetic-simulated annealing algorithm is used to explore the optimal solution, since the problem is strongly NP-hard. Finally, the effectiveness of the approach is demonstrated on small- and large-size instances, respectively. The experimental results show that the approach can save 5%-10% of the average energy consumption while accepting the optimal makespan in small-size instances, and the average maximum energy-saving ratio can reach 13%. It can save approximately 1%-4% of the average energy consumption and approximately 2.4% of the average maximum energy while accepting a near-optimal makespan in large-size instances. The proposed research provides an interesting starting point for energy-aware schedule optimization in traditional production planning and scheduling.
Scale-invariant Green-Kubo relation for time-averaged diffusivity
NASA Astrophysics Data System (ADS)
Meyer, Philipp; Barkai, Eli; Kantz, Holger
2017-12-01
In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacement are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between the time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by $\langle \overline{\delta^2} \rangle \simeq 2 D_\nu t^{\beta} \Delta^{\nu-\beta}$, where $t$ is the total measurement time and $\Delta$ is the lag time. Here $\nu$ is the anomalous diffusion exponent obtained from ensemble-averaged measurements $\langle x^2 \rangle \sim t^{\nu}$, while $\beta \ge -1$ marks the growth or decline of the kinetic energy $\langle v^2 \rangle \sim t^{\beta}$. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function, and similarly for the transport constant $D_\nu$. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, $\beta = 0$, the time scalings of $\langle \overline{\delta^2} \rangle$ and $\langle x^2 \rangle$ are identical; however, the time-averaged transport coefficient $D_\nu$ is not identical to the corresponding ensemble-averaged diffusion constant.
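A small simulation sketch (using scaled Brownian motion as a convenient stand-in for a scale-invariant nonstationary process; the exponent and sizes are placeholders) shows how the two averages are computed from trajectories:

```python
# Sketch: ensemble-averaged vs time-averaged MSD for scaled Brownian motion,
# a simple nonstationary process with <x^2> ~ t**nu. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
nu, N, T = 0.5, 200, 10_000                   # subdiffusive exponent, paths, steps
t = np.arange(1, T + 1, dtype=float)
# increments whose variance equals the increment of t**nu
x = np.cumsum(rng.normal(size=(N, T)) * np.sqrt(np.diff(t**nu, prepend=0.0)), axis=1)

ens_msd = np.mean(x[:, -1] ** 2)              # ensemble average at time T (~ T**nu)

def time_avg_msd(traj, lag):
    d = traj[lag:] - traj[:-lag]              # sliding-window displacement
    return np.mean(d ** 2)

lag = 100
ta_msd = np.mean([time_avg_msd(xi, lag) for xi in x])
print(ens_msd, T**nu, ta_msd)                 # the two averages differ markedly
```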
Roy, Debjit; Mandal, Saptarshi; De, Chayan K; Kumar, Kaushalendra; Mandal, Prasun K
2018-04-18
CdSe-based core/gradient alloy shell/shell semiconductor quantum dots (CGASS QDs) have been shown to be optically quite superior to core-shell QDs. However, very little is known about CGASS QDs at the single-particle level. Photoluminescence blinking dynamics of four differently emitting (blue (λem = 510 nm), green (λem = 532 nm), orange (λem = 591 nm), and red (λem = 619 nm)) single CGASS QDs having average sizes <∼7 nm have been probed in our home-built total internal reflection fluorescence (TIRF) microscope. All four samples possess an average ON-fraction of 0.70-0.85, which hints at nearly suppressed PL blinking in these gradiently alloyed systems. Suppression of blinking has so far been achieved only with QDs having sizes greater than 10 nm and mostly emitting in the red region (λem > 600 nm). In this manuscript, we report nearly suppressed PL blinking behaviour of CGASS QDs with average sizes <∼7 nm emitting across the entire visible spectrum, i.e. from blue to green to orange to red. The probability density distributions of both ON- and OFF-event durations for all of these CGASS QDs could be fitted well with a modified inverse truncated power law with an additional exponential term. Unlike most literature reports, the power-law exponent for OFF-event durations is greater than that for ON-event durations for all four samples. This suggests that relatively long ON-event durations are interrupted by comparatively short OFF-event durations, which in turn is indicative of a suppressed non-radiative Auger recombination process in these CGASS systems. However, across the four samples the ON-event truncation time varies inversely with the OFF-event truncation time, which hints that both truncation processes are dictated by some common factor. We employed 2D joint probability distribution analysis to probe the correlation between event durations and found that residual memory exists in both the ON- and OFF-event durations. Positively correlated successive ON-ON and OFF-OFF event durations and negatively correlated (anti-correlated) ON-OFF event durations suggest the involvement of more than one type of trapping process within the blinking framework. The timescale corresponding to the additional exponential term has been assigned to hole trapping for the ON-event duration statistics; similarly, for the OFF-event duration statistics, this component suggests hole detrapping. We found that the average duration of the exponential process for the ON-event durations is an order of magnitude longer than that for the OFF-event durations, indicating that holes are trapped for a significantly long time. When electron trapping is followed by such hole trapping, long ON-event durations result; we observed ON-event durations as long as 50 s. The competing charge-tunnelling model has been used to account for the observed blinking behaviour in these CGASS QDs. Quite interestingly, the PLQY of these differently emitting QDs (an ensemble-level property) could be correlated with the truncation time (a single-particle-level property): the concomitant increase-decrease of ON-OFF event truncation times with increasing PLQY is also indicative of a varying degree of suppression of the Auger recombination processes in these four different CGASS QDs.
Tiernan, Joan E.
1990-01-01
Highly concentrated and toxic petroleum-based and synthetic-fuel wastewaters, such as oil shale retort water, are treated in a unit treatment process by electrolysis in a reactor containing oleophilic, ionized, open-celled polyurethane foams, under mixing and laminar flow conditions at an average detention time of six hours. Both the polyurethane foams and the foam regenerate solution are re-used. The treatment is a cost-effective process for wastewaters which are not treatable, or not cost-effectively treatable, by conventional process series.
NASA Astrophysics Data System (ADS)
Princz, S.; Wenzel, U.; Miller, R.; Hessling, M.
2014-11-01
One aerobic and four anaerobic batch fermentations of the yeast Saccharomyces cerevisiae were conducted in a stirred bioreactor and monitored inline by NIR spectroscopy with a transflectance dip probe. From the acquired NIR spectra, chemometric partial least squares regression (PLSR) models for predicting biomass, glucose and ethanol were constructed. The spectra were measured directly in the fermentation broth and successfully inspected for adulteration using our novel data pre-processing method. These adulterations manifest as strong fluctuations in the shape and offset of the absorption spectra; they result from cells, cell clusters, or gas bubbles intercepting the optical path of the dip probe. In the proposed data pre-processing method, adulterated signals are removed by passing the time-scanned, non-averaged spectra through two filter algorithms with a 5% quantile cutoff, after which the remaining spectra containing meaningful data are averaged. A second step checks whether the whole time scan is analyzable; if true, the average is calculated and used to prepare the PLSR models. This new method distinctly improved the prediction results. To dissociate possible correlations between analyte concentrations, such as glucose and ethanol, the feed analytes were alternately supplied at different concentrations (spiking) at the end of the four anaerobic fermentations. This procedure yielded low-error (anaerobic) PLSR models, with prediction errors of 0.31 g/l for biomass, 3.41 g/l for glucose, and 2.17 g/l for ethanol; the maximum concentrations were 14 g/l biomass, 167 g/l glucose, and 80 g/l ethanol. Data from the aerobic fermentation, carried out under high agitation and high aeration, were incorporated to build combined PLSR models, which to our knowledge have not been reported previously.
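A hedged sketch of that pre-processing step follows; the two filter statistics and the rejection criterion below are our assumptions, since the description specifies only the 5% quantile cutoff:

```python
# Sketch: reject adulterated spectra by 5% quantile cutoffs on offset- and
# shape-deviation statistics, then average the surviving spectra of a scan.
import numpy as np

def filter_and_average(scans):
    """scans: (n_spectra, n_wavelengths) array from one time scan."""
    offset = scans.mean(axis=1)                                 # baseline-offset proxy
    shape = np.linalg.norm(scans - scans.mean(axis=0), axis=1)  # shape-deviation proxy
    keep = np.ones(len(scans), dtype=bool)
    for stat in (offset, shape):
        lo, hi = np.quantile(stat, [0.05, 0.95])
        keep &= (stat >= lo) & (stat <= hi)
    if keep.sum() < 0.5 * len(scans):   # assumed criterion: scan not analyzable
        return None
    return scans[keep].mean(axis=0)     # averaged spectrum used for PLSR modeling
```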
NASA Astrophysics Data System (ADS)
Managave, S. R.; Jani, R. A.; Narayana Rao, T.; Sunilkumar, K.; Satheeshkumar, S.; Ramesh, R.
2016-08-01
Evaporation of rain is known to contribute water vapor, a potent greenhouse gas, to the atmosphere. Stable oxygen and hydrogen isotopic compositions (δ18O and δD, respectively) of precipitation, usually measured and presented as values integrated over rain events or as monthly means, are important tools for detecting evaporation effects. The slope of ~8 of the linear relationship between such time-averaged values of δD and δ18O (called the meteoric water line) is widely accepted as proof of condensation under isotopic equilibrium and of the absence of evaporation of rain during atmospheric fall. Here, through a simultaneous investigation of the isotopic and drop size distributions of seventeen rain events sampled on an intra-event scale at Gadanki (13.5°N, 79.2°E), southern India, we demonstrate that evaporation effects, not evident in the time-averaged data, are significantly manifested in the sub-samples of individual rain events. We detect this through (1) slopes significantly less than 8 for the δD-δ18O relation on the intra-event scale and (2) significant positive correlations between deuterium excess (d-excess = δD - 8·δ18O; lower values in rain indicate evaporation) and the mass-weighted mean diameter of the raindrops (Dm). An estimated ~44% of rain is influenced by evaporation. This study also reveals a signature of isotopic equilibration of rain with the cloud-base vapor, a process important for modeling the isotopic composition of precipitation. d-excess values of rain are modified by post-condensation processes, and the present approach offers a way to identify the d-excess values least affected by such processes. Isotope-enabled global circulation models could be improved by incorporating intra-event isotopic data and raindrop-size-dependent isotopic effects.
Xiaopeng, QI; Liang, WEI; BARKER, Laurie; LEKIACHVILI, Akaki; Xingyou, ZHANG
2015-01-01
Temperature changes are known to have significant impacts on human health. Accurate estimates of population-weighted average monthly air temperature for US counties are needed to evaluate temperature's association with health behaviours and disease, which are sampled or reported at the county level and measured on a monthly (30-day) basis. Most reported temperature estimates were calculated using ArcGIS; relatively few used SAS. We compared the performance of geostatistical models for estimating population-weighted average temperature in each month for counties in 48 states using ArcGIS v9.3 and SAS v9.2 on a CITGO platform. Monthly average temperature for Jan-Dec 2007 and elevation from 5,435 weather stations were used to estimate the temperature at county population centroids. County estimates were produced with elevation as a covariate. Performance of the models was assessed by comparing adjusted R2, mean squared error, root mean squared error, and processing time. Prediction accuracy for split validation was above 90% for 11 months in ArcGIS and all 12 months in SAS. Cokriging in SAS achieved higher prediction accuracy and lower estimation bias than cokriging in ArcGIS. County-level estimates produced by the two packages were positively correlated (adjusted R2 range = 0.95 to 0.99); accuracy and precision improved with elevation as a covariate. Both the ArcGIS and SAS methods are reliable for U.S. county-level temperature estimates; however, ArcGIS's merits in spatial data pre-processing and processing time may be important considerations for software selection, especially for multi-year or multi-state projects. PMID:26167169
NASA Astrophysics Data System (ADS)
Tohidnia, S.; Tohidi, G.
2018-02-01
The current paper develops three different ways to measure multi-period global cost efficiency for homogeneous networks of processes when the prices of the exogenous inputs are known at all time periods. A multi-period network data envelopment analysis model is presented to measure the minimum cost of the network system based on the global production possibility set. We show that there is a relationship between the multi-period global cost efficiency of the network system and that of its subsystems and processes. The proposed model is applied to compute the global cost Malmquist productivity index, which measures the productivity change of the network system and of each of its processes between two time periods; this index is circular. Furthermore, we show that the productivity change of the network system can be expressed as a weighted average of the process productivity changes. Finally, a numerical example illustrates the proposed approach.
The Path to Presidency: Tips for Teaching Elementary Students about the Election Process
ERIC Educational Resources Information Center
Conrad, Marika
2016-01-01
Teaching about presidential elections at the elementary level can seem a bit daunting at times. Students are quick to share their strong opinions on the current candidates running for office. These opinions often involve repeating feelings and phrases shared by parents around the dinner table the night before. For the average seven- or…
Turbulence production near walls: The role of flow structures with spanwise asymmetry
NASA Technical Reports Server (NTRS)
Alfredsson, P. Henrik; Johansson, Arne V.; Kim, John
1988-01-01
Space-time evolution of near wall flow structures is described by conditional sampling methods, in which conditional averages are formed at various stages of development of shear layer structures. The development of spanwise asymmetry of the structures was found to be important in the creation of the structures and for the process of turbulence production.
Code of Federal Regulations, 2013 CFR
2013-07-01
... for the finishing water processes at a point source times the following pollutant concentrations: Subpart C [Finishing water] Concentration used to calculate BPT effluent limitations Pollutant or pollutant property Maximum for any 1 day (mg/l) Maximum for monthly average (mg/l) TSS 130 37 pH (1) (1) 1...
Code of Federal Regulations, 2011 CFR
2011-07-01
... finishing water processes at a point source times the following pollutant concentrations: Subpart C [Finishing water] Concentration used to calculate BPT effluent limitations Pollutant or pollutant property Maximum for any 1 day (mg/l) Maximum for monthly average (mg/l) TSS 130 37 pH (1) (1) 1 Within the range...
Code of Federal Regulations, 2012 CFR
2012-07-01
... for the finishing water processes at a point source times the following pollutant concentrations: Subpart C [Finishing water] Concentration used to calculate BPT effluent limitations Pollutant or pollutant property Maximum for any 1 day (mg/l) Maximum for monthly average (mg/l) TSS 130 37 pH (1) (1) 1...
Code of Federal Regulations, 2014 CFR
2014-07-01
... for the finishing water processes at a point source times the following pollutant concentrations: Subpart C [Finishing water] Concentration used to calculate BPT effluent limitations Pollutant or pollutant property Maximum for any 1 day (mg/l) Maximum for monthly average (mg/l) TSS 130 37 pH (1) (1) 1...
A Geomagnetic Estimate of Mean Paleointensity
NASA Technical Reports Server (NTRS)
Voorhies, Coerte
2004-01-01
To test a statistical hypothesis about Earth's magnetic field against paleomagnetism, the present field is used to estimate time-averaged paleointensity. The estimate uses the modern magnetic multipole spectrum R(n), which gives the mean square induction represented by spherical harmonics of degree n averaged over the sphere of radius a = 6371.2 km. The hypothesis asserts that the low-degree multipole powers of the core-source field are distributed as chi-squared with 2n+1 degrees of freedom and expectation values $\{R(n)\} = K\,\frac{n+1/2}{n(n+1)}\,(c/a)^{2n+4}$, where c is the 3480 km radius of Earth's core. (This is compatible with a usually mainly geocentric axial dipolar field.) The amplitude K is estimated by fitting theoretical to observational spectra through degree 12. The resulting calibrated expectation spectrum is summed through degree 12 to estimate the expected square intensity $\{F^2\}$. The sum also estimates $\{F^2\}$ averaged over geologic time, insofar as the present magnetic spectrum is a fair sample of that generated in the past by core geodynamic processes.
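A compact sketch of the summation step (the amplitude K below is a hypothetical placeholder chosen only to give a geomagnetically plausible order of magnitude, not the paper's fitted value):

```python
# Sketch: sum the calibrated expectation spectrum {R(n)} through degree 12 to
# estimate the expected square intensity {F^2} at Earth's surface.
import numpy as np

a, c = 6371.2, 3480.0                        # Earth and core radii, km
n = np.arange(1, 13)

def expected_R(n, K):
    return K * (n + 0.5) / (n * (n + 1)) * (c / a) ** (2 * n + 4)

K = 7.5e10                                   # hypothetical amplitude, nT^2
F2 = expected_R(n, K).sum()                  # expected square intensity, nT^2
print(f"sqrt({{F^2}}) ~ {np.sqrt(F2) / 1000:.0f} uT")   # tens of microtesla
```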
A Lagrangian stochastic model for aerial spray transport above an oak forest
Wang, Yansen; Miller, David R.; Anderson, Dean E.; McManus, Michael L.
1995-01-01
A transport model for aerial spray droplets has been developed by applying recent advances in Lagrangian stochastic simulation of heavy particles. A two-dimensional Lagrangian stochastic model was adopted to simulate spray droplet dispersion in atmospheric turbulence by adjusting the Lagrangian integral time scale along the drop trajectory. The other major physical processes affecting the transport of spray droplets above a forest canopy, the aircraft wingtip vortices and droplet evaporation, were also included in each time step of the droplets' transport. The model was evaluated using data from an aerial spray field experiment. In generally neutral stability conditions, the accuracy of the model predictions varied from run to run, as expected. The average root-mean-square error was 24.61 IU cm−2, and the average relative error was 15%. The model prediction was adequate in two-dimensional steady wind conditions, but less accurate in variable wind conditions. The results indicate that the model can successfully simulate the ensemble-average transport of aerial spray droplets under neutral, steady atmospheric wind conditions.
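For orientation, the core of such a Lagrangian stochastic model is a Langevin-type velocity update with the integral time scale T_L adjusted along the trajectory; the following sketch (our notation, illustrative constants, no vortex or evaporation terms) steps one droplet down through turbulence:

```python
# Sketch: Ornstein-Uhlenbeck update of the fluctuating vertical velocity of a
# heavy droplet, plus gravitational settling. Constants are placeholders.
import numpy as np

rng = np.random.default_rng(2)

def ou_step(w, T_L, sigma_w, dt):
    """One Langevin update of the velocity fluctuation w (m/s)."""
    a = np.exp(-dt / T_L)                 # velocity memory over one step
    return a * w + np.sqrt(1.0 - a * a) * sigma_w * rng.normal()

w, z, v_s = 0.0, 50.0, 0.3                # fluctuation, release height (m), settling speed (m/s)
dt, T_L, sigma_w = 0.05, 2.0, 0.5         # step (s), integral time scale (s), turbulence level (m/s)
while z > 0.0:
    w = ou_step(w, T_L, sigma_w, dt)      # T_L would be adjusted along the trajectory
    z += (w - v_s) * dt                   # droplet descends through the canopy flow
print("droplet reached the ground")
```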
Doucette, Margaret R; Kurth, Salome; Chevalier, Nicolas; Munakata, Yuko; LeBourgeois, Monique K
2015-11-04
Cognitive development is influenced by maturational changes in processing speed, a construct reflecting the rapidity of executing cognitive operations. Although cognitive ability and processing speed are linked to spindles and sigma power in the sleep electroencephalogram (EEG), little is known about such associations in early childhood, a time of major neuronal refinement. We calculated EEG power for slow (10-13 Hz) and fast (13.25-17 Hz) sigma power from all-night high-density electroencephalography (EEG) in a cross-sectional sample of healthy preschool children (n = 10, 4.3 ± 1.0 years). Processing speed was assessed as simple reaction time. On average, reaction time was 1409 ± 251 ms; slow sigma power was 4.0 ± 1.5 μV²; and fast sigma power was 0.9 ± 0.2 μV². Both slow and fast sigma power predominated over central areas. Only slow sigma power was correlated with processing speed in a large parietal electrode cluster (p < 0.05, r ranging from -0.6 to -0.8), such that greater power predicted faster reaction time. Our findings indicate regional correlates between sigma power and processing speed that are specific to early childhood and provide novel insights into the neurobiological features of the EEG that may underlie developing cognitive abilities.
Fixed-point image orthorectification algorithms for reduced computational cost
NASA Astrophysics Data System (ADS)
French, Joseph Clinton
Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower-cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to the floating-point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. The first is projection using fixed-point arithmetic, which removes the floating-point operations and reduces the processing time by operating only on integers. The second is replacement of the division inherent in projection with a multiplication by the inverse; since computing the inverse would itself require iteration, the inverse is replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3 with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing, and by more than a factor of 4 with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse-function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing an integer multiplication to be used in place of the traditional floating-point division. This method increases the throughput of the orthorectification operation by 38% compared to floating-point processing. Additionally, it improves the accuracy of the existing integer-based orthorectification algorithms in terms of average pixel distance, increasing the accuracy of the algorithm by more than 5x. The quadratic function reduces the pixel position error to 2% and is still 2.8x faster than the 128-bit floating-point algorithm.
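A hedged sketch of the underlying trick (our own minimal version, not the dissertation's code): normalize the divisor, approximate its reciprocal with a linear polynomial, optionally refine, and keep everything in integer (Q16) arithmetic.

```python
# Sketch: replace x/d with x * approx(1/d) in Q16 fixed point. The linear seed
# 1/d ~ 48/17 - (32/17) d is valid for d normalized into [0.5, 1).
FRAC_BITS = 16
ONE = 1 << FRAC_BITS                          # fixed-point 1.0

C0 = int(48 / 17 * ONE)
C1 = int(32 / 17 * ONE)

def recip_q16(d):
    """Integer-only approximation of 1/d (Q16) for ONE//2 <= d < ONE."""
    r = C0 - ((C1 * d) >> FRAC_BITS)          # linear approximation of 1/d
    # optional refinement step, still integer-only: r <- r * (2 - d*r)
    r = (r * (2 * ONE - ((d * r) >> FRAC_BITS))) >> FRAC_BITS
    return r

d = int(0.685 * ONE)
print(recip_q16(d) / ONE, 1 / 0.685)          # ~1.456 vs 1.4599
```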
Investigating the Group-Level Impact of Advanced Dual-Echo fMRI Combinations
Kettinger, Ádám; Hill, Christopher; Vidnyánszky, Zoltán; Windischberger, Christian; Nagy, Zoltán
2016-01-01
Multi-echo fMRI data acquisition has been widely investigated and suggested to optimize sensitivity for detecting the BOLD signal. Several methods have also been proposed for combining data acquired at different echo times. The aim of the present study was to investigate whether these advanced echo combination methods provide advantages over simple averaging of the echoes when state-of-the-art group-level random-effects analyses are performed. Both resting-state and task-based dual-echo fMRI data were collected from 27 healthy adult individuals (14 male, mean age = 25.75 years) using standard echo-planar acquisition at 3T, and both datasets were subjected to a standard image pre-processing pipeline. Subsequently, the two echoes were combined as a weighted average, using four different strategies for calculating the weights: (1) simple arithmetic averaging, (2) BOLD-sensitivity weighting, (3) temporal-signal-to-noise-ratio weighting and (4) temporal BOLD-sensitivity weighting. Our results clearly show that simple averaging of the different echoes is sufficient. Advanced echo combination methods may provide advantages at the single-subject level, but for random-effects group-level statistics they provide no benefit in sensitivity (i.e., group-level t-values) compared to the simple echo-averaging approach. One possible reason for the lack of a clear advantage is that, apart from increasing the average BOLD sensitivity at the single-subject level, the advanced weighted-averaging methods also inflate the inter-subject variance. As the echo combination methods provide very similar results, the recommendation is to choose between them depending on the availability of time for collecting additional resting-state data or on whether subject-level or group-level analyses are planned. PMID:28018165
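A sketch of the weighted combination being compared follows; the weight formulas are our assumptions following common definitions of tSNR- and BOLD-sensitivity weighting, not necessarily the exact ones used in the study:

```python
# Sketch: combine two echoes S1, S2 (4-D arrays: x, y, z, time) as a weighted
# average, with voxelwise weights chosen according to the selected scheme.
import numpy as np

def combine_echoes(S1, S2, TE1, TE2, mode="average"):
    if mode == "average":                          # (1) simple arithmetic mean
        w1 = w2 = np.float64(1.0)
    elif mode == "tsnr":                           # (3) temporal-SNR weighting
        w1 = (S1.mean(-1) / S1.std(-1))[..., None]
        w2 = (S2.mean(-1) / S2.std(-1))[..., None]
    elif mode == "bold":                           # (2) BOLD sensitivity ~ TE * tSNR
        w1 = TE1 * (S1.mean(-1) / S1.std(-1))[..., None]
        w2 = TE2 * (S2.mean(-1) / S2.std(-1))[..., None]
    else:
        raise ValueError(mode)
    return (w1 * S1 + w2 * S2) / (w1 + w2)

# usage with synthetic data
rng = np.random.default_rng(3)
S1 = 100 + rng.normal(size=(4, 4, 4, 50))
S2 = 80 + rng.normal(size=(4, 4, 4, 50))
combined = combine_echoes(S1, S2, TE1=12.0, TE2=35.0, mode="tsnr")
```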
Intracellular applications of fluorescence correlation spectroscopy: prospects for neuroscience.
Kim, Sally A; Schwille, Petra
2003-10-01
Based on time-averaging fluctuation analysis of small fluorescent molecular ensembles in equilibrium, fluorescence correlation spectroscopy has recently been applied to investigate processes in the intracellular milieu. The exquisite sensitivity of fluorescence correlation spectroscopy provides access to a multitude of measurement parameters (rates of diffusion, local concentration, states of aggregation and molecular interactions) in real time with fast temporal and high spatial resolution. The introduction of dual-color cross-correlation, imaging, two-photon excitation, and coincidence analysis coupled with fluorescence correlation spectroscopy has expanded the utility of the technique to encompass a wide range of promising applications in living cells that may provide unprecedented insight into understanding the molecular mechanisms of intracellular neurobiological processes.
Relativistic electron plasma oscillations in an inhomogeneous ion background
NASA Astrophysics Data System (ADS)
Karmakar, Mithun; Maity, Chandan; Chakrabarti, Nikhil
2018-06-01
The combined effect of relativistic electron mass variation and background ion inhomogeneity on the phase mixing process of large-amplitude electron oscillations in cold plasmas has been analyzed using Lagrangian coordinates. The inhomogeneity in the ion density is assumed to be time-independent but spatially periodic, and a periodic perturbation in the electron density is considered as well. An approximate space-time-dependent solution is obtained in the weakly relativistic limit by employing the Bogolyubov-Krylov method of averaging. It is shown that the phase mixing of relativistically corrected electron oscillations is strongly influenced by the presence of a pre-existing ion density ripple in the plasma background.
[Analysis of main risk factors causing foodborne diseases in food catering business].
Fan, Yong-xiang; Liu, Xiu-mei; Bao, Yi-dan
2011-06-01
To study the main risk factors that cause foodborne diseases in food catering businesses, data from the literature and from investigations conducted in food catering units were used to establish models based on @Risk 4.5 with the Monte Carlo method, following the food handling practice model (FHPM), to assess the risk factors of food contamination in catering units. Beta-Poisson dose-response models for Salmonella (developed by WHO/FAO and the United States Department of Agriculture) and Vibrio parahaemolyticus (developed by the US FDA) were used to analyze the dose-response relationships of the pathogens. The average probability of food poisoning from consuming Salmonella-contaminated cooked meat stored under refrigeration was 1.96 × 10⁻⁴, which was 1/2800 of that for meat stored without refrigeration (average probability 0.35 at a room temperature of 25°C). The average probability from consuming meat stored for 6 hours at room temperature was 0.11, about 16 times that for 2-hour storage (6.79 × 10⁻³). The average probability from consuming contaminated meat without full cooking was 1.71 × 10⁻⁴, about 100 times that for fully cooked meat (1.88 × 10⁻⁶). The probability of food poisoning from consuming Vibrio parahaemolyticus-contaminated fresh seafood grew in proportion to the contamination level and prevalence. The initial contamination level, storage temperature and time, cooking process and cross-contamination are the important factors in catering food safety.
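For reference, the Beta-Poisson dose-response form used in such assessments is P(ill | dose) = 1 − (1 + dose/β)^(−α); a minimal sketch follows, where the α and β values are illustrative placeholders, not the WHO/FAO or FDA fitted parameters:

```python
# Sketch: Beta-Poisson dose-response curve with placeholder parameters.
def beta_poisson(dose, alpha, beta):
    """Probability of illness given an ingested dose of organisms."""
    return 1.0 - (1.0 + dose / beta) ** (-alpha)

alpha, beta = 0.21, 50.0                 # hypothetical Salmonella-like values
for dose in (1, 10, 100, 1000, 10000):
    print(f"dose {dose:>6}: P(ill) = {beta_poisson(dose, alpha, beta):.3f}")
```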
Dimension reduction method for SPH equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, Alexandre M.; Scheibe, Timothy D.
2011-08-26
A Smoothed Particle Hydrodynamics (SPH) model of a complex multiscale process often results in a system of ODEs with an enormous number of unknowns. Furthermore, time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude, so a direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for the evolution of average variables (e.g., average velocity, concentration and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of effective equations; the closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation, with an SPH model used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the method. It significantly accelerates SPH simulations while providing an accurate approximation of the solution and an accurate prediction of the average behavior of the system.
Single-shot Monitoring of Ultrafast Processes via X-ray Streaking at a Free Electron Laser.
Buzzi, Michele; Makita, Mikako; Howald, Ludovic; Kleibert, Armin; Vodungbo, Boris; Maldonado, Pablo; Raabe, Jörg; Jaouen, Nicolas; Redlin, Harald; Tiedtke, Kai; Oppeneer, Peter M; David, Christian; Nolting, Frithjof; Lüning, Jan
2017-08-03
The advent of x-ray free electron lasers has extended the unique capabilities of resonant x-ray spectroscopy techniques to ultrafast time scales. Here, we report on a novel experimental method that allows retrieving, with a single x-ray pulse, the time evolution of an ultrafast process, not only at a few discrete time delays but continuously over an extended time window. We used a single x-ray pulse to resolve the laser-induced ultrafast demagnetisation dynamics in a thin cobalt film over a time window of about 1.6 ps with an excellent signal-to-noise ratio. From one representative single-shot measurement we extract a spin relaxation time of (130 ± 30) fs, with an average value, based on 193 single-shot events, of (113 ± 20) fs. These results are limited by the achieved experimental time resolution of 120 fs, and both values are in excellent agreement with previous results and theoretical modelling. More generally, this new experimental approach to ultrafast x-ray spectroscopy paves the way to the study of non-repetitive processes that cannot be investigated using traditional repetitive pump-probe schemes.
NASA Astrophysics Data System (ADS)
van Broekhuizen, Pieter; van Broekhuizen, Fleur; Cornelissen, Ralf; Reijnders, Lucas
2012-03-01
Nano reference values (NRVs) for the occupational use of nanomaterials were tested as a provisional substitute for Occupational Exposure Limits (OELs). NRVs can be used as provisional limit values until health-based OELs or derived no-effect levels (DNELs) become available. NRVs were defined for 8-h periods (time-weighted average) and for short-term exposure periods (15-min time-weighted average). To assess the usefulness of these NRVs, airborne number concentrations of nanoparticles (NPs) in the workplace environment were measured during paint manufacturing, electroplating, light equipment manufacturing, non-reflective glass production, production of pigment concentrates and car refinishing. The activities monitored were handling of solid engineered NPs (ENPs), abrasion, spraying and heating during occupational use of nanomaterials (containing ENPs), and machining of nanosurfaces. The measured concentrations are often presumed to contain ENPs as well as process-generated NPs (PGNPs). The PGNPs are found to be a significant source of potential exposure and cannot be ignored in risk assessment. Levels of NPs identified in workplace air were up to several million nanoparticles/cm3. Conventional components in paint manufacturing, like CaCO3 and talc, may contain a substantial amount of nanosized particulates giving rise to airborne nanoparticle concentrations. It is argued that risk assessments carried out for, e.g., paint manufacturing processes using conventional non-nano components should take potential nanoparticle emissions into account as well. The concentrations measured were compared with particle-based NRVs and with mass-based values that have also been proposed for worker protection. It is concluded that NRVs can be used for risk management of the handling or processing of nanomaterials at workplaces, provided that the scope of NRVs is not limited to ENPs only but extended to exposure to process-generated NPs as well.
Therapeutic plasma exchange: a paired comparison of Fresenius AS104 vs. COBE Spectra.
Burgstaler, E A; Pineda, A A
2001-01-01
For therapeutic plasma exchange (TPE), continuous-flow separators such as the Fresenius AS104 and the COBE Spectra are known to be efficient. The AS104 uses an interface monitoring system in the centrifuge during TPE, whereas the Spectra uses computer algorithms to establish the plasma-cell interface. To determine the plasma collection efficiency (PLCE), anticoagulant (AC) volumes used, and platelet (PLT) loss of the AS104 and the Spectra, we performed a prospective paired comparison of 20 TPE procedures on each machine. The study included 17 patients, 1.3 plasma-volume exchanges (without AC), equal inlet rates, and an AC ratio of 13:1. Processing times did not include the reinfuse mode. Platelet loss was determined by sampling the collection bags. Inlet rates were between 60-110 ml/min. Diagnoses included peripheral neuropathies, TTP and cryoglobulinemia. The AS104 had significantly (P<0.0001) lower average whole blood processed (F: 6,601 vs. S: 8,584 ml), AC volume (F: 532 vs. S: 719 ml), and processing time (F: 80 vs. S: 102 minutes) than the Spectra. The AS104 had significantly (P<0.0001) higher average plasma flow rates (F: 53 vs. S: 44 ml/minute), plasma collection efficiency (F: 90 vs. S: 69%), and platelet loss (F: 2.0 vs. S: 0.14 × 10¹¹ plt) than the Spectra. Platelet loss correlated with inlet flow rate for the AS104 but not for the Spectra. The AS104 has a significantly higher collection efficiency than the Spectra, allowing it to remove the same amount of plasma in significantly less time, by processing significantly less blood and using significantly less AC, but it removes significantly more platelets than the Spectra. Copyright 2001 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Alboabidallah, Ahmed; Martin, John; Lavender, Samantha; Abbott, Victor
2017-09-01
Terrestrial Laser Scanning (TLS) processing for biomass mapping involves large data volumes and often includes relatively slow 3D object-fitting steps that increase the processing time. This study aimed to test new features that can reduce the overall processing time. A new type of 3D voxel is used, in which the horizontal layers are parallel to the Digital Terrain Model. This voxel type allows procedures to extract tree diameters using just one layer, while still giving direct tree-height estimations. Layer intersection is used to emphasize the trunks as upright standing objects, which are detected in the spatially segmented intersection of the breast-height voxels and then extended upwards and downwards. The diameters were calculated by fitting elliptical cylinders to the laser points in the detected trunk segments. Non-trunk segments, used in sub-tree structures, were found using the parent-child relationships between successive layers. The branches were reconstructed by skeletonizing each sub-tree branch, and the biomass was distributed statistically amongst the weighted skeletons. The procedure was applied to nine plots within the UK. The average correlation coefficients between reconstructed and directly measured tree diameters, heights and branches were R2 = 0.92, 0.97 and 0.59, compared to 0.91, 0.95 and 0.63 when cylindrical fitting was used. The average processing time per plot fell from 5 h 18 min with the conventional methods to 2 h 24 min when the same hardware and software libraries were used with the 3D voxels. These results indicate that this 3D voxel method can produce results of similar accuracy much more quickly, improving efficiency when applied to projects with large-volume TLS datasets.
NASA Astrophysics Data System (ADS)
Lee, Minhee; Wang, Sookyun; Kim, Seyoon; Park, Jinyoung
2015-04-01
Lab-scale experiments were performed to investigate the property changes of sandstone slabs and cores resulting from the scCO2-rock-groundwater reaction for 180 days under CO2 sequestration conditions (100 bar and 50 °C). The geochemical reactions, including the surface roughness change of minerals in the slab, resulting from dissolution and secondary mineral precipitation in the sandstone reservoir of the Gyeongsang basin, Korea, were reproduced in laboratory-scale experiments, and the relationship between the geochemical reaction and the physical rock property change was derived for the assessment of successful subsurface CO2 sequestration. The use of the surface roughness (SRrms) change rate and the physical property change rate to quantify the scCO2-rock-groundwater reaction is a novel approach in this field. From SPM (Scanning Probe Microscope) analyses, the SRrms of each sandstone slab was calculated at different reaction times. The average SRrms increased more than 3.5 times during the first 90 days of reaction and remained steady thereafter, suggesting that the surface weathering of sandstone occurs early after CO2 injection into the subsurface reservoir. The average porosity of the sandstone cores increased by 8.8% and the average density decreased by 0.5% during 90 days of reaction, and these values changed only slightly after 90 days. The average P- and S-wave velocities of the sandstone cores also decreased by 10% during 90 days of reaction. The physical rock properties changed logarithmically with reaction time, correlated with the logarithmic increase in SRrms, suggesting that the physical property change of reservoir rocks caused by scCO2 injection derives directly from the geochemical reaction process. The results suggest that long-term estimation of the physical property changes of reservoir rocks at a CO2 injection site is possible by extrapolating the SRrms and rock property change rates acquired from laboratory-scale experiments. This will also be useful for selecting favorable CO2 injection sites from the viewpoint of safety.
Continuous Record of Permeability inside the Wenchuan Earthquake Fault Zone
NASA Astrophysics Data System (ADS)
Xue, Lian; Li, Haibing; Brodsky, Emily
2013-04-01
Faults are complex hydrogeological structures which include a highly permeable damage zone with fracture-dominated permeability. Since fractures are generated by earthquakes, we would expect that in the aftermath of a large earthquake the permeability would be transiently high in a fault zone. Over time, the permeability may recover due to a combination of chemical and mechanical processes. However, in situ fault zone hydrological properties are difficult to measure and have never been directly constrained on a fault zone immediately after a large earthquake. In this work, we use the water level response to solid Earth tides to constrain the hydraulic properties inside the Wenchuan Earthquake Fault Zone. The transmissivity and storage determine the phase and amplitude response of the water level to the tidal loading. By measuring the phase and amplitude response, we can constrain the average hydraulic properties of the damage zone at 800-1200 m below the surface (~200-600 m from the principal slip zone). We use Markov chain Monte Carlo methods to evaluate the phase and amplitude responses and the corresponding errors for the largest semidiurnal Earth tide, M2, in the time domain. The average phase lag is ~30°, and the average amplitude response is 6×10-7 strain/m. Assuming an isotropic, homogeneous and laterally extensive aquifer, the measured phase and amplitude response give an average storage coefficient S of 2×10-4 and an average transmissivity T of 6×10-7 m2. Calculating the hydraulic diffusivity as D = T/S yields D = 3×10-3 m2/s, which is two orders of magnitude larger than pump-test values on the Chelungpu Fault, the site of the Mw 7.6 Chi-Chi earthquake. If this value is representative of the fault zone, hydrologic processes should have an effect on the earthquake rupture process. Because the measurement is based on continuous monitoring, we can track the evolution of the hydraulic properties after the Wenchuan earthquake: we observe the permeability to decrease by 35% per year. For comparison, we convert the permeability measurements into equivalent seismic velocities; the corresponding range of seismic wave velocity increase is 0.03%-0.8% per year. Our results are comparable to those of previous hydraulic and seismic studies after earthquakes. This temporal decrease of permeability may reflect the healing process after the Wenchuan earthquake.
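The phase lag and amplitude response at the M2 frequency can be estimated with an ordinary least-squares harmonic fit of each series; a minimal sketch follows, with synthetic numbers chosen only to mimic the reported ~30° lag and ~6×10-7 strain/m response (the paper itself uses Markov chain Monte Carlo, not this simple fit).

```python
import numpy as np

M2_PERIOD_HOURS = 12.4206012

def m2_phasor(t_hours, series):
    # Least-squares fit of A*cos(w t + phi) plus an offset; returns A*exp(i*phi)
    w = 2.0 * np.pi / M2_PERIOD_HOURS
    design = np.column_stack([np.cos(w * t_hours), np.sin(w * t_hours),
                              np.ones_like(t_hours)])
    c, s, _ = np.linalg.lstsq(design, series, rcond=None)[0]
    return complex(c, -s)

# Synthetic example: water level lags the imposed tidal strain by 30 degrees
t = np.arange(0.0, 24 * 30, 0.5)                  # 30 days, half-hourly
w = 2.0 * np.pi / M2_PERIOD_HOURS
strain = 1e-8 * np.cos(w * t)                     # dimensionless tidal strain
level = 0.017 * np.cos(w * t - np.deg2rad(30.0))  # metres of water level

z_strain, z_level = m2_phasor(t, strain), m2_phasor(t, level)
phase_lag_deg = -np.degrees(np.angle(z_level / z_strain))  # ~30
amplitude_response = abs(z_strain) / abs(z_level)          # strain per metre
print(phase_lag_deg, amplitude_response)
```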
Continuous Record of Permeability inside the Wenchuan Earthquake Fault Zone
NASA Astrophysics Data System (ADS)
Xue, L.; Li, H.; Brodsky, E. E.; Wang, H.; Pei, J.
2012-12-01
Faults are complex hydrogeological structures which include a highly permeable damage zone with fracture-dominated permeability. Since fractures are generated by earthquakes, we would expect that in the aftermath of a large earthquake the permeability would be transiently high in a fault zone. Over time, the permeability may recover due to a combination of chemical and mechanical processes. However, in situ fault zone hydrological properties are difficult to measure and have never been directly constrained on a fault zone immediately after a large earthquake. In this work, we use the water level response to solid Earth tides to constrain the hydraulic properties inside the Wenchuan Earthquake Fault Zone. The transmissivity and storage determine the phase and amplitude response of the water level to the tidal loading. By measuring the phase and amplitude response, we can constrain the average hydraulic properties of the damage zone at 800-1200 m below the surface (˜200-600 m from the principal slip zone). We use Markov chain Monte Carlo methods to evaluate the phase and amplitude responses and the corresponding errors for the largest semidiurnal Earth tide, M2, in the time domain. The average phase lag is ˜30°, and the average amplitude response is 6×10-7 strain/m. Assuming an isotropic, homogeneous and laterally extensive aquifer, the measured phase and amplitude response give an average storage coefficient S of 2×10-4 and an average transmissivity T of 6×10-7 m2. Calculating the hydraulic diffusivity as D = T/S yields D = 3×10-3 m2/s, which is two orders of magnitude larger than pump-test values on the Chelungpu Fault, the site of the Mw 7.6 Chi-Chi earthquake. If this value is representative of the fault zone, hydrologic processes should have an effect on the earthquake rupture process. Because the measurement is based on continuous monitoring, we can track the evolution of the hydraulic properties after the Wenchuan earthquake: we observe the permeability to decrease by 35% per year. For comparison, we convert the permeability measurements into equivalent seismic velocities; the corresponding range of seismic wave velocity increase is 0.03%-0.8% per year. Our results are comparable to those of previous hydraulic and seismic studies after earthquakes. This temporal decrease of permeability may reflect the healing process after the Wenchuan earthquake.
Detonation Reaction Zones in Condensed Explosives
NASA Astrophysics Data System (ADS)
Tarver, Craig M.
2006-07-01
Experimental measurements using nanosecond time-resolved embedded gauges and laser interferometric techniques, combined with Non-Equilibrium Zeldovich-von Neumann-Döring (NEZND) theory and Ignition and Growth reactive flow hydrodynamic modeling, have revealed the average pressure/particle velocity states attained in reaction zones of self-sustaining detonation waves in several solid and liquid explosives. The time durations of these reaction zone processes are discussed for explosives based on pentaerythritol tetranitrate (PETN), nitromethane, octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX), triaminotrinitrobenzene (TATB) and trinitrotoluene (TNT).
Universal approach to analysis of cavitation and liquid-impingement erosion data
NASA Technical Reports Server (NTRS)
Rao, P. V.; Young, S. G.
1982-01-01
Experimental cavitation erosion data were analyzed using normalization and curve-fitting techniques. Data were taken from experiments on several materials tested in both a rotating-disk device and a magnetostriction apparatus. Cumulative average volume loss rate and time data were normalized relative to the peak erosion rate and the time to peak erosion rate, respectively. From this process a universal approach was derived that can incorporate data on specific materials from different test devices in liquid-impingement and cavitation erosion studies.
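The normalization step is straightforward to sketch: take the erosion rate as the derivative of the cumulative loss, then rescale both axes by the peak-rate point. The S-shaped test curve below is a placeholder, not measured erosion data.

```python
import numpy as np

def normalize_erosion(time, cum_volume_loss):
    # Rate as the finite-difference derivative of cumulative volume loss
    rate = np.gradient(cum_volume_loss, time)
    i_peak = np.argmax(rate)
    # Rescale by peak rate and time-to-peak, as in the universal-curve approach
    return time / time[i_peak], rate / rate[i_peak]

t = np.linspace(0.1, 10.0, 50)
loss = t**3 / (1.0 + t**2)          # illustrative cumulative loss curve
t_norm, rate_norm = normalize_erosion(t, loss)
```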
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xi, P. W.; Lawrence Livermore National Laboratory, Livermore, California 94550; Xu, X. Q.
We demonstrate that the occurrence of Edge-Localized-Mode (ELM) crashes does not depend only on the linear peeling-ballooning threshold, but also relies on nonlinear processes. Wave-wave interaction constrains the growth time of a mode, thus inducing a shift in the criterion for triggering an ELM crash. An ELM crash requires the P-B growth rate to exceed a critical value γ > γc, where γc is set by 1/τ̄c, and τ̄c is the averaged mode phase coherence time. For 0
A Metric to Quantify Shared Visual Attention in Two-Person Teams
NASA Technical Reports Server (NTRS)
Gontar, Patrick; Mulligan, Jeffrey B.
2015-01-01
1) Introduction: Critical tasks in high-risk environments are often performed by teams, the members of which must work together efficiently. In some situations, the team members may have to work together to solve a particular problem, while in others it may be better for them to divide the work into separate tasks that can be completed in parallel. We hypothesize that these two team strategies can be differentiated on the basis of shared visual attention, measured by gaze tracking. 2) Methods: Gaze recordings were obtained for two-person flight crews flying a high-fidelity simulator (Gontar & Hoermann, 2014). Gaze was categorized with respect to 12 areas of interest (AOIs). We used these data to construct time series of 12-dimensional vectors, with each vector component representing one of the AOIs. At each time step, each vector component was set to 0, except for the one corresponding to the currently fixated AOI, which was set to 1. This time series could then be averaged in time, with the averaging window time (t) as a variable parameter. For example, when we average with a t of one minute, each vector component represents the proportion of time that the corresponding AOI was fixated within the corresponding one-minute interval. We then computed the Pearson product-moment correlation coefficient between the gaze proportion vectors for each of the two crew members, at each point in time, resulting in a signal representing the time-varying correlation between gaze behaviors. We determined criteria for concluding correlated gaze behavior using two methods: first, a permutation test was applied to the subjects' data. When one crew member's gaze proportion vector is correlated with a random time sample from the other crew member's data, a distribution of correlation values is obtained that differs markedly from the distribution obtained from temporally aligned samples. In addition to validating that the gaze tracker was functioning reasonably well, this also allows us to compute probabilities of coordinated behavior for each value of the correlation. As an alternative, we also tabulated distributions of correlation coefficients for synthetic data sets, in which the behavior was modeled as a first-order Markov process, and compared correlation distributions for identical processes with those for disparate processes, allowing us to choose criteria and estimate error rates. 3) Discussion: Our method of gaze correlation is able to measure shared visual attention, and can distinguish between activities involving different instruments. We plan to analyze whether pilots' strategies of sharing visual attention can predict performance. Possible measurements of performance include expert ratings from instructors, fuel consumption, total task time, and failure rate. While developed for two-person crews, our approach can be applied to larger groups, using intra-class correlation coefficients instead of the Pearson product-moment correlation.
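A compact sketch of the metric as described: one-hot AOI series, windowed gaze-proportion vectors, and their Pearson correlation per window. The window size, AOI count, and synthetic gaze streams are illustrative assumptions.

```python
import numpy as np

def gaze_correlation(aoi_a, aoi_b, n_aois=12, window=60):
    # One-hot encode each AOI stream: (T, n_aois) indicator series
    one_hot = lambda s: np.eye(n_aois)[np.asarray(s)]
    a, b = one_hot(aoi_a), one_hot(aoi_b)
    corrs = []
    for start in range(0, len(aoi_a) - window + 1, window):
        pa = a[start:start + window].mean(axis=0)   # gaze proportion per AOI
        pb = b[start:start + window].mean(axis=0)
        corrs.append(np.corrcoef(pa, pb)[0, 1])     # Pearson correlation
    return np.array(corrs)

# Synthetic crews: shared attention vs. independent scanning
rng = np.random.default_rng(0)
shared = rng.integers(0, 3, 6000)                   # both fixate AOIs 0-2
print(gaze_correlation(shared, shared).mean())                        # ~1
print(gaze_correlation(shared, rng.integers(0, 12, 6000)).mean())     # ~0
```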
Li, Bing-zhi; Xu, Xiang-yang; Zhu, Liang
2010-01-01
A treatability study of industrial wastewater containing chlorinated nitroaromatic compounds (CNACs) by a catalytic ozonation process (COP) with a modified Mn/Co ceramic catalyst and an aerobic sequencing batch reactor (SBR) was investigated. A preliminary attempt to treat the diluted wastewater with a single SBR resulted in ineffective removal of color, ammonia, total organic carbon (TOC) and chemical oxygen demand (COD). COP was therefore applied as a pretreatment in order to obtain a bio-compatible wastewater for SBR treatment in a second step. The effectiveness of the COP pretreatment was assessed by evaluating the enhancement of wastewater biodegradability (the ratio of biochemical oxygen demand after 5 d (BOD5) to COD), as well as by monitoring the evolution of TOC, carbon oxidation state (COS), average oxidation state (AOS), color, and major pollutant concentrations with reaction time. In the COP, the catalyst preserved its catalytic properties even after 70 reuse cycles, exhibiting good durability and stability. The performance of the SBR in treating COP effluent was also examined. At an organic loading rate of 2.0 kg COD/(m3·d), with a hydraulic retention time (HRT) of 10 h and a temperature of (30±2) °C, the average removal efficiencies of NH3-N, COD, BOD5, TOC, and color in the coupled COP/SBR process were about 80%, 95.8%, 93.8%, 97.6% and 99.3%, respectively, with average effluent values of 10 mg/L, 128 mg/L, 27.5 mg/L, 25.0 mg/L, and 20 dilution multiples, respectively, all consistent with the national standards for secondary discharge of industrial wastewater into a public sewerage system (GB 8978-1996). The results indicated that coupling COP with a biological process is a technically and economically effective method for treating industrial wastewater containing recalcitrant CNACs. PMID:20205304
Depth-averaged instantaneous currents in a tidally dominated shelf sea from glider observations
NASA Astrophysics Data System (ADS)
Merckelbach, Lucas
2016-12-01
Ocean gliders have become ubiquitous observation platforms in the ocean in recent years. They are also increasingly used in coastal environments. The coastal observatory system COSYNA has pioneered the use of gliders in the North Sea, a shallow, tidally energetic shelf sea. For operational reasons, the gliders operated in the North Sea are programmed to resurface every 3-5 h. The glider's dead-reckoning algorithm yields depth-averaged currents, averaged in time over each subsurface interval. Under operational conditions these averaged currents are a poor approximation of the instantaneous tidal current. In this work an algorithm is developed that estimates the instantaneous current (tidal and residual) from glider observations only. The algorithm uses a first-order Butterworth low-pass filter to estimate the residual current component, and a Kalman filter based on the linear shallow water equations for the tidal component. A comparison of data from a glider experiment with current data from acoustic Doppler current profilers deployed nearby shows that the standard deviations for the east and north current components are better than 7 cm s-1 in near-real-time mode and improve to better than 6 cm s-1 in delayed mode, where the filters can be run forward and backward. In the near-real-time mode the algorithm provides estimates of the currents that the glider is expected to encounter during its next few dives. Combined with a behavioural and dynamic model of the glider, this yields predicted trajectories, the information of which is incorporated in warning messages issued to ships by the (German) authorities. In delayed mode the algorithm produces useful estimates of the depth-averaged currents, which can be used in (process-based) analyses in case no other source of measured current information is available.
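The residual/tidal split with a first-order Butterworth low pass is easy to sketch; the forward-backward filtfilt call stands in for the delayed mode described above. The 48 h cutoff, 4 h surfacing interval, and synthetic current record are assumptions for illustration only (the paper's tidal component additionally uses a shallow-water Kalman filter).

```python
import numpy as np
from scipy.signal import butter, filtfilt

def split_residual_tidal(t_hours, u, cutoff_h=48.0):
    # First-order Butterworth low pass keeps the slow residual;
    # the remainder is attributed to the tide.
    dt = float(np.median(np.diff(t_hours)))          # sampling step, hours
    wn = (1.0 / cutoff_h) / (0.5 / dt)               # cutoff / Nyquist
    b, a = butter(1, wn)
    residual = filtfilt(b, a, u)                     # forward-backward pass
    return residual, u - residual

# Synthetic record: M2 tide plus a slow drift, sampled every 4 h for 20 days
t = np.arange(0.0, 24 * 20, 4.0)
u = 0.3 * np.cos(2 * np.pi * t / 12.42) + 0.05 * np.sin(2 * np.pi * t / 240.0)
residual, tidal = split_residual_tidal(t, u)
```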
Chen, Ruifeng; Zhu, Lijun; Lv, Lihuo; Yao, Su; Li, Bin; Qian, Junqing
2017-06-01
The extraction and purification of the compatible solute ectoine from Halomonas elongata fermentation was optimized in laboratory tests for a large-scale commercial production project. After culturing H. elongata cells in the developed medium at 28 °C for 23-30 h, we obtained an average ectoine yield of 15.9 g/L and a biomass of 92.9 (OD600). Cell lysis was performed with acid treatment at moderately high temperature (60-70 °C). The downstream processing operations were designed as follows: filtration, desalination, cation exchange, extraction of crude product, and three rounds of refining. Cation exchange and crude-product extraction achieved high average recovery rates of 95% and 96%, whereas substantial losses of 19% and 15% occurred during filtration and desalination, respectively. Including the ectoine recovered from the mother liquor of the three refining rounds, the average overall yield (relative to the amount of ectoine synthesized in the cells) was 43%, and the purity of the final product exceeded 98%. The key factor limiting production efficiency was not yield but the time required for crude-product extraction, in particular the crystallization step from water, which took 24-72 h depending on the production scale. Although the method described here cannot compete with other investigations in laboratory-scale productivity and simplicity, it yielded ectoine of higher purity and provides downstream processes capable of operating at industrial scale.
Naming Game with Multiple Hearers
NASA Astrophysics Data System (ADS)
Li, Bing; Chen, Guanrong; Chow, Tommy W. S.
2013-05-01
A new model called Naming Game with Multiple Hearers (NGMH) is proposed in this paper. A naming game over a population of individuals aims to reach consensus on the name of an object through pair-wise local interactions among all the individuals. The proposed NGMH model describes the learning process of a new word, in a population with one speaker and multiple hearers at each interaction, towards convergence. The characteristics of NGMH are examined on three types of network topologies, namely the ER random-graph network, the WS small-world network, and the BA scale-free network. Comparative analysis of the convergence time is performed, revealing that a topology with a larger average (node) degree can reach consensus faster than the others over the same population. It is found that, for a homogeneous network, the average degree is the limiting value of the number of hearers, which reduces the individuals' ability to learn new words and consequently decreases the convergence time; for a scale-free network, this limiting value is the deviation of the average degree. It is also found that a network with a larger clustering coefficient takes longer to converge; in particular, a small-world network with the smallest rewiring probability takes the longest to reach consensus. As new nodes are added to scale-free networks with different degree distributions, the convergence time appears robust against network-size variation. Most of the new findings reported in this paper differ from those of the single-speaker/single-hearer naming games documented in the literature. A toy implementation of one multi-hearer interaction is sketched below.
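A minimal sketch of a single NGMH interaction on a fully mixed population, assuming the convention that the speaker collapses its vocabulary only when every hearer succeeds; the success rule, population size, and hearer count are illustrative assumptions, not the paper's exact protocol or network topologies.

```python
import random

def ngmh_step(vocab, speaker, hearers):
    # Speaker invents a word if its vocabulary is empty, then utters one
    if not vocab[speaker]:
        vocab[speaker].add(f"word{random.randrange(10**6)}")
    word = random.choice(sorted(vocab[speaker]))
    successes = [word in vocab[h] for h in hearers]
    for h, ok in zip(hearers, successes):
        vocab[h] = {word} if ok else vocab[h] | {word}
    if all(successes):                 # speaker collapses only on full success
        vocab[speaker] = {word}

def converged(vocab):
    vocabs = {frozenset(v) for v in vocab.values()}
    return len(vocabs) == 1 and len(next(iter(vocabs))) == 1

agents = list(range(50))
vocab = {a: set() for a in agents}
steps = 0
while not converged(vocab):
    speaker, *hearers = random.sample(agents, 4)   # three hearers per round
    ngmh_step(vocab, speaker, hearers)
    steps += 1
print(steps)   # interactions until global consensus
```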
2015-01-01
Several competing aetiologies of developmental dyslexia suggest that the problems with acquiring literacy skills are causally entailed by low-level auditory and/or speech perception processes. The purpose of this study is to evaluate the diverging claims about the specific deficient perceptual processes under conditions of strong inference. Theoretically relevant acoustic features were extracted from a set of artificial speech stimuli that lie on a /bAk/-/dAk/ continuum. The features were tested on their ability to enable a simple classifier (Quadratic Discriminant Analysis, QDA) to reproduce the observed classification performance of average and dyslexic readers in a speech perception experiment. The 'classical' features examined were based on component-process accounts of developmental dyslexia, such as the supposed deficit in Envelope Rise Time detection and the deficit in the detection of rapid changes in the distribution of energy in the frequency spectrum (formant transitions). Studies examining these temporal processing deficit hypotheses do not employ measures that quantify the temporal dynamics of stimuli. It is shown that measures based on quantification of the dynamics of complex, interaction-dominant systems (Recurrence Quantification Analysis and the multifractal spectrum) enable QDA to classify the stimuli almost identically to the classifications observed in dyslexic and average-reading participants. It seems unlikely that participants used any of the features that are traditionally associated with accounts of (impaired) speech perception. The nature of the variables quantifying the temporal dynamics of the speech stimuli implies that the classification of speech stimuli cannot be regarded as a linear aggregate of component processes that each parse the acoustic signal independently of one another, as is assumed by the 'classical' aetiologies of developmental dyslexia. It is suggested that the results imply that the differences in speech perception performance between average and dyslexic readers represent a scaled continuum rather than being caused by a specific deficient component. PMID:25834769
Brigati, Jennifer R.; Swann, Jerilyn M.
2015-01-01
Incorporating peer-review steps in the laboratory report writing process provides benefits to students, but it also can create additional work for laboratory instructors. The laboratory report writing process described here allows the instructor to grade only one lab report for every two to four students, while giving the students the benefits of peer review and prompt feedback on their laboratory reports. Here we present the application of this process to a sophomore level genetics course and a freshman level cellular biology course, including information regarding class time spent on student preparation activities, instructor preparation, prerequisite student knowledge, suggested learning outcomes, procedure, materials, student instructions, faculty instructions, assessment tools, and sample data. T-tests comparing individual and group grading of the introductory cell biology lab reports yielded average scores that were not significantly different from each other (p = 0.13, n = 23 for individual grading, n = 6 for group grading). T-tests also demonstrated that average laboratory report grades of students using the peer-review process were not significantly different from those of students working alone (p = 0.98, n = 9 for individual grading, n = 6 for pair grading). While the grading process described here does not lead to statistically significant gains (or reductions) in student learning, it allows student learning to be maintained while decreasing instructor workload. This reduction in workload could allow the instructor time to pursue other high-impact practices that have been shown to increase student learning. Finally, we suggest possible modifications to the procedure for application in a variety of settings. PMID:25949758
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisterson, DL
2008-09-30
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near real-time. Raw and processed data are then sent daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the Archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 – (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the fourth quarter of FY 2008 for the Southern Great Plains (SGP) site is 2,097.60 hours (0.95 x 2,208 hours this quarter). The OPSMAX for the North Slope Alaska (NSA) locale is 1,987.20 hours (0.90 x 2,208), and for the Tropical Western Pacific (TWP) locale is 1,876.80 hours (0.85 x 2,208). The OPSMAX time for the ARM Mobile Facility (AMF) is not reported this quarter because the data have not yet been released from China to the DMF for processing. The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 92 days for this quarter) the instruments were operating this quarter.
NASA Technical Reports Server (NTRS)
Kelecy, Tom; Payne, Tim; Thurston, Robin; Stansbery, Gene
2007-01-01
A population of deep space objects is thought to be high area-to-mass ratio (AMR) debris having origins from sources in the geosynchronous orbit (GEO) belt. Typical AMR values have been observed to range from 1's to 10's of m²/kg; hence, higher-than-average solar radiation pressure effects result in long-term migration of eccentricity (0.1-0.6) and inclination over time. However, the orientation-dependent dynamics of the debris also result in time-varying solar radiation forces about the average, which complicate short-term orbit determination processing. The orbit determination results are presented for several of these debris objects and highlight their unique and varied dynamic attributes. Estimating the solar pressure dynamics over time scales suitable for resolving the shorter-term dynamics improves the orbit estimation, and hence the orbit predictions needed to conduct follow-up observations.
Jeon, Jae-Hyung; Metzler, Ralf
2010-02-01
Motivated by subdiffusive motion of biomolecules observed in living cells, we study the stochastic properties of a non-Brownian particle whose motion is governed by either fractional Brownian motion or the fractional Langevin equation and restricted to a finite domain. We investigate by analytic calculations and simulations how time-averaged observables (e.g., the time-averaged mean-squared displacement and displacement correlation) are affected by spatial confinement and dimensionality. In particular, we study the degree of weak ergodicity breaking and scatter between different single trajectories for this confined motion in the subdiffusive domain. The general trend is that deviations from ergodicity are decreased with decreasing size of the movement volume and with increasing dimensionality. We define the displacement correlation function and find that this quantity shows distinct features for fractional Brownian motion, fractional Langevin equation, and continuous time subdiffusion, such that it appears an efficient measure to distinguish these different processes based on single-particle trajectory data.
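The central observable here, the time-averaged mean squared displacement of a single trajectory, is simple to compute; a sketch follows, demonstrated on ordinary Brownian motion, for which the TAMSD of different trajectories collapses (weakly non-ergodic processes would instead show amplitude scatter). Trajectory length and lags are illustrative choices.

```python
import numpy as np

def time_averaged_msd(x, lags):
    # TAMSD: time average of [x(t + lag) - x(t)]^2 along a single trajectory
    x = np.asarray(x)
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

rng = np.random.default_rng(1)
lags = np.array([1, 2, 5, 10, 20, 50])
for _ in range(3):
    traj = np.cumsum(rng.standard_normal(10_000))   # unit-variance random walk
    print(np.round(time_averaged_msd(traj, lags) / lags, 2))  # ~1 at every lag
```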
A Low-Cost System Based on Image Analysis for Monitoring the Crystal Growth Process.
Venâncio, Fabrício; Rosário, Francisca F do; Cajaiba, João
2017-05-31
Many techniques are used to monitor one or more of the phenomena involved in the crystallization process. One of the challenges in crystal growth monitoring is finding techniques that allow direct interpretation of the data. The present study used a low-cost system, composed of a commercial webcam and a simple white LED (Light Emitting Diode) illuminator, to follow the calcium carbonate crystal growth process. The experiments were followed with focused beam reflectance measurement (FBRM), a common technique for obtaining information about the formation and growth of crystals. The images obtained in real time were treated with the red, green, and blue (RGB) system. The results showed a qualitative response of the system to the crystal formation and growth processes, with an observed decrease in the signal as growth proceeded. Crystal growth was controlled by increasing the viscosity of the test solution through the addition of monoethylene glycol (MEG) at 30% and 70% in a mass-to-mass relationship, providing different profiles of the average RGB curves. The decrease in the average RGB value became slower as the concentration of MEG was increased, reflecting a lag in the growth process that was confirmed by the FBRM.
Charging a capacitor from an external fluctuating potential using a single conical nanopore.
Gomez, Vicente; Ramirez, Patricio; Cervera, Javier; Nasir, Saima; Ali, Mubarak; Ensinger, Wolfgang; Mafe, Salvador
2015-04-01
We explore the electrical rectification of large amplitude fluctuating signals by an asymmetric nanostructure operating in aqueous solution. We show experimentally and theoretically that a load capacitor can be charged to voltages close to 1 V within a few minutes by converting zero time-average potentials of amplitudes in the range 0.5-3 V into average net currents using a single conical nanopore. This process suggests that significant energy conversion and storage from an electrically fluctuating environment is feasible with a nanoscale pore immersed in a liquid electrolyte solution, a system characteristic of bioelectronics interfaces, electrochemical cells, and nanoporous membranes.
Charging a Capacitor from an External Fluctuating Potential using a Single Conical Nanopore
Gomez, Vicente; Ramirez, Patricio; Cervera, Javier; Nasir, Saima; Ali, Mubarak; Ensinger, Wolfgang; Mafe, Salvador
2015-01-01
We explore the electrical rectification of large amplitude fluctuating signals by an asymmetric nanostructure operating in aqueous solution. We show experimentally and theoretically that a load capacitor can be charged to voltages close to 1 V within a few minutes by converting zero time-average potentials of amplitudes in the range 0.5–3 V into average net currents using a single conical nanopore. This process suggests that significant energy conversion and storage from an electrically fluctuating environment is feasible with a nanoscale pore immersed in a liquid electrolyte solution, a system characteristic of bioelectronics interfaces, electrochemical cells, and nanoporous membranes. PMID:25830563
Maximum likelihood estimation for periodic autoregressive moving average models
Vecchia, A.V.
1985-01-01
A useful class of models for seasonal time series that cannot be filtered or standardized to achieve second-order stationarity is that of periodic autoregressive moving average (PARMA) models, which are extensions of ARMA models that allow periodic (seasonal) parameters. An approximation to the exact likelihood for Gaussian PARMA processes is developed, and a straightforward algorithm for its maximization is presented. The algorithm is tested on several periodic ARMA(1, 1) models through simulation studies and is compared to moment estimation via the seasonal Yule-Walker equations. Applicability of the technique is demonstrated through an analysis of a seasonal stream-flow series from the Rio Caroni River in Venezuela.
Plaie, Thierry; Thomas, Delphine
2008-06-01
Our study specifies the contributions of the image generation and image maintenance processes occurring at the time of imaginal coding of verbal information in memory during normal aging. The memory capacities of 19 young adults (average age 24 years) and 19 older adults (average age 75 years) were assessed using recall tasks according to the imagery value of the stimuli to be learned. Mental visual imagery capacities were assessed using tasks of image generation and temporary storage of mental images. The analysis of variance indicates a greater decrease with age in the concreteness effect. The major contribution of our study is the finding that the decline with age of dual coding of verbal information in memory would result primarily from the decline of image maintenance capacities and from a slowdown in image generation. (PsycINFO Database Record (c) 2008 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
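For comparison with the MINLP solvers described above, the brute-force baseline is easy to sketch: enumerate candidate ARMA orders, fit each by maximum likelihood (statsmodels uses Kalman-filter recursions internally), and keep the lowest AIC. The synthetic ARMA(2,1) data and the order bounds are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic stationary ARMA(2,1) series standing in for real data
y = ArmaProcess(ar=[1, -0.6, 0.2], ma=[1, 0.4]).generate_sample(
    nsample=500, distrvs=rng.standard_normal)

best = None
for p in range(4):              # brute-force enumeration over (p, q)
    for q in range(4):
        try:
            aic = ARIMA(y, order=(p, 0, q)).fit().aic
        except Exception:
            continue            # skip non-convergent / invalid fits
        if best is None or aic < best[0]:
            best = (aic, p, q)
print(best)                     # lowest-AIC ARMA order
```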
A collaborative approach to lean laboratory workstation design reduces wasted technologist travel.
Yerian, Lisa M; Seestadt, Joseph A; Gomez, Erron R; Marchant, Kandice K
2012-08-01
Lean methodologies have been applied in many industries to reduce waste. We applied Lean techniques to redesign laboratory workstations with the aim of reducing the number of times employees must leave their workstations to complete their tasks. At baseline in 68 workflows (aggregates or sequence of process steps) studied, 251 (38%) of 664 tasks required workers to walk away from their workstations. After analysis and redesign, only 59 (9%) of the 664 tasks required technologists to leave their workstations to complete these tasks. On average, 3.4 travel events were removed for each workstation. Time studies in a single laboratory section demonstrated that workers spend 8 to 70 seconds in travel each time they step away from the workstation. The redesigned workstations will allow employees to spend less time travelling around the laboratory. Additional benefits include employee training in waste identification, improved overall laboratory layout, and identification of other process improvement opportunities in our laboratory.
Mean first passage time of active Brownian particle in one dimension
NASA Astrophysics Data System (ADS)
Scacchi, A.; Sharma, A.
2018-02-01
We investigate the mean first passage time of an active Brownian particle in one dimension using numerical simulations. The activity in one dimension is modelled as a two-state process: the particle moves with a constant propulsion strength, but its orientation switches from one state to the other as in a random telegraph process. We study the influence of a finite resetting rate r on the mean first passage time to a fixed target of a single free active Brownian particle and map this result onto an effective diffusion process. As in the case of a passive Brownian particle, we find an optimal resetting rate r* for an active Brownian particle at which the target is found in the minimum average time. In the presence of an external potential, we find good agreement between the theory and numerical simulations using an effective potential approach.
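A hedged simulation sketch of the setup: a 1D telegraphic particle with translational noise and Poissonian resetting to the origin, with the first passage time recorded when it reaches a fixed target. All parameter values are illustrative, not those of the paper, and the convention that a reset leaves the orientation unchanged is an assumption.

```python
import numpy as np

def first_passage_time(target=3.0, v=1.0, D=0.5, flip_rate=1.0,
                       reset_rate=0.2, dt=1e-3, rng=None):
    # 1D active (telegraphic) particle with stochastic resetting to the origin
    rng = rng or np.random.default_rng()
    x, sigma, t = 0.0, rng.choice([-1.0, 1.0]), 0.0
    while x < target:
        t += dt
        if rng.random() < reset_rate * dt:      # resetting event
            x = 0.0
            continue
        if rng.random() < flip_rate * dt:       # telegraphic orientation flip
            sigma = -sigma
        x += sigma * v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    return t

rng = np.random.default_rng(2)
print(np.mean([first_passage_time(rng=rng) for _ in range(100)]))  # MFPT estimate
```

Sweeping reset_rate and repeating the average traces out the non-monotonic curve whose minimum defines the optimal rate r*.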
Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm
NASA Astrophysics Data System (ADS)
Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.
2014-08-01
This study introduces a novel identification method for recognizing nuclear power plant (NPP) transients by combining the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP-based identifier is adopted to distinguish the plant's normal states from faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary data. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast the time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to the modular identifier, which has been developed using the latest advances of the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to the reference one, rather than on the values of the input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduced false identification, dependence of identification solely on the sign of each output signal, selection of the plant variables for transient training independently of one another, and extensibility to the identification of more transients without unfavorable effects are other merits of the proposed identifier.
Dose-dependent model of caffeine effects on human vigilance during total sleep deprivation.
Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Wesensten, Nancy J; Kamimori, Gary H; Balkin, Thomas J; Reifman, Jaques
2014-10-07
Caffeine is the most widely consumed stimulant to counter sleep-loss effects. While the pharmacokinetics of caffeine in the body is well-understood, its alertness-restoring effects are still not well characterized. In fact, mathematical models capable of predicting the effects of varying doses of caffeine on objective measures of vigilance are not available. In this paper, we describe a phenomenological model of the dose-dependent effects of caffeine on psychomotor vigilance task (PVT) performance of sleep-deprived subjects. We used the two-process model of sleep regulation to quantify performance during sleep loss in the absence of caffeine and a dose-dependent multiplier factor derived from the Hill equation to model the effects of single and repeated caffeine doses. We developed and validated the model fits and predictions on PVT lapse (number of reaction times exceeding 500 ms) data from two separate laboratory studies. At the population-average level, the model captured the effects of a range of caffeine doses (50-300 mg), yielding up to a 90% improvement over the two-process model. Individual-specific caffeine models, on average, predicted the effects up to 23% better than population-average caffeine models. The proposed model serves as a useful tool for predicting the dose-dependent effects of caffeine on the PVT performance of sleep-deprived subjects and, therefore, can be used for determining caffeine doses that optimize the timing and duration of peak performance. Published by Elsevier Ltd.
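A hedged sketch of the model's structure: a dose-dependent multiplier, Hill-type in the effective (exponentially eliminated) caffeine dose, scales the impairment predicted by the two-process model. The half-life, d50, Hill coefficient of 1, and baseline lapse count are illustrative assumptions, not the fitted parameters of the study.

```python
import numpy as np

def caffeine_factor(dose_mg, t_hours, d50=100.0, half_life_h=5.0):
    # Effective dose decays exponentially after intake (first-order elimination);
    # a Hill-type saturation (n = 1) converts it into a performance multiplier.
    effective = dose_mg * 0.5 ** (t_hours / half_life_h)
    return 1.0 / (1.0 + effective / d50)

baseline_lapses = 20.0   # hypothetical two-process-model prediction, no caffeine
for dose in (50, 100, 200, 300):
    print(dose, round(baseline_lapses * caffeine_factor(dose, t_hours=2.0), 1))
```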
NASA Astrophysics Data System (ADS)
Schrön, Martin; Köhli, Markus; Scheiffele, Lena; Iwema, Joost; Bogena, Heye R.; Lv, Ling; Martini, Edoardo; Baroni, Gabriele; Rosolem, Rafael; Weimar, Jannis; Mai, Juliane; Cuntz, Matthias; Rebmann, Corinna; Oswald, Sascha E.; Dietrich, Peter; Schmidt, Ulrich; Zacharias, Steffen
2017-10-01
In the last few years the method of cosmic-ray neutron sensing (CRNS) has gained popularity among hydrologists, physicists, and land-surface modelers. The sensor provides continuous soil moisture data, averaged over several hectares and tens of decimeters in depth. However, the signal still may contain unidentified features of hydrological processes, and many calibration datasets are often required in order to find reliable relations between neutron intensity and water dynamics. Recent insights into environmental neutrons accurately described the spatial sensitivity of the sensor and thus allowed one to quantify the contribution of individual sample locations to the CRNS signal. Consequently, data points of calibration and validation datasets are suggested to be averaged using a more physically based weighting approach. In this work, a revised sensitivity function is used to calculate weighted averages of point data. The function is different from the simple exponential convention by the extraordinary sensitivity to the first few meters around the probe, and by dependencies on air pressure, air humidity, soil moisture, and vegetation. The approach is extensively tested at six distinct monitoring sites: two sites with multiple calibration datasets and four sites with continuous time series datasets. In all cases, the revised averaging method improved the performance of the CRNS products. The revised approach further helped to reveal hidden hydrological processes which otherwise remained unexplained in the data or were lost in the process of overcalibration. The presented weighting approach increases the overall accuracy of CRNS products and will have an impact on all their applications in agriculture, hydrology, and modeling.
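The weighting idea reduces to replacing an arithmetic mean of calibration samples with a sensitivity-weighted mean. The sketch below uses a simple exponential radial weight as a stand-in for the revised sensitivity function (which additionally depends on air pressure, humidity, soil moisture, and vegetation); distances, moisture values, and the e-folding length are placeholders.

```python
import numpy as np

def weighted_field_average(r_m, theta, weight_fn):
    # Sensor-weighted average of point soil-moisture samples
    w = weight_fn(np.asarray(r_m, dtype=float))
    return float(np.sum(w * theta) / np.sum(w))

r = np.array([1.0, 5.0, 25.0, 75.0, 150.0])        # sample distances, m
theta = np.array([0.22, 0.25, 0.30, 0.33, 0.35])   # point soil moisture
# Exponential radial weight with a ~130 m e-folding length (placeholder
# for the revised, environment-dependent sensitivity function)
print(weighted_field_average(r, theta, lambda rr: np.exp(-rr / 130.0)))
```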
A digital boxcar integrator for IMS spectra
NASA Technical Reports Server (NTRS)
Cohen, Martin J.; Stimac, Robert M.; Wernlund, Roger F.; Parker, Donald C.
1995-01-01
When trying to detect or quantify a signal at or near the limit of detectability, it is invariably embedded in the noise. This statement is true for nearly all detectors of any physical phenomenon, and the limit of detectability, hopefully, occurs at very low signal-to-noise levels. This is particularly true of IMS (Ion Mobility Spectrometer) spectra, due to the low vapor pressure of several chemical compounds of great interest and the small currents associated with the ionic detection process. Gated Integrators and Boxcar Integrators or Averagers are designed to recover fast, repetitive analog signals. In a typical application, a time 'Gate' or 'Window' is generated, characterized by a set delay from a trigger or gate pulse and a certain width. A Gated Integrator amplifies and integrates the signal that is present during the time the gate is open, ignoring noise and interference that may be present at other times. Boxcar Integration refers to the practice of averaging the output of the Gated Integrator over many sweeps of the detector. Since any signal present during the gate will add linearly, while noise will add in a 'random walk' fashion as the square root of the number of sweeps, averaging N sweeps will improve the signal-to-noise ratio by a factor of the square root of N.
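A minimal digital version of the gate-and-average operation, demonstrated on a synthetic peak buried in noise; the gate position, peak shape, and sweep count are illustrative assumptions. The running average stabilizes as N grows, consistent with the sqrt(N) signal-to-noise improvement described above.

```python
import numpy as np

def boxcar_average(sweeps, gate_start, gate_width):
    # Integrate each sweep only inside the gate, then average over sweeps
    gated = sweeps[:, gate_start:gate_start + gate_width].sum(axis=1)
    return np.cumsum(gated) / np.arange(1, len(gated) + 1)

rng = np.random.default_rng(3)
t = np.arange(500)
peak = 0.2 * np.exp(-0.5 * ((t - 250) / 5.0) ** 2)   # weak synthetic IMS peak
sweeps = peak + rng.standard_normal((1000, t.size))  # noise dominates each sweep
running = boxcar_average(sweeps, gate_start=240, gate_width=20)
print(running[[0, 9, 99, 999]])   # estimate converges as N increases
```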
NASA Astrophysics Data System (ADS)
Wada, Hiroshi; Ando, Masayoshi; Takeuchi, Masataka; Sugawara, Hironori; Koike, Takuji; Kobayashi, Toshimitsu; Hozawa, Koji; Gemma, Takashi; Nara, Makoto
2002-05-01
"Time-averaged holography" and "holographic interferometry" enable recording of the complete vibration pattern of a surface within several seconds. The results appear in the form of fringes. Vibration amplitudes smaller than 100 nm are not readily measurable by these techniques, because such small amplitudes produce variations in gray level, but not fringes. In practice, to obtain clear fringes in these measurements, stimulus sound pressures higher than 100 dB SPL must be used. The phase of motion is also not obtainable from such fringe techniques. In this study, a sinusoidal phase modulation technique is described, which allows detection of both small amplitudes of motion and their phase from time-averaged speckle pattern interferometry. In this technique, the laser injection current is modulated and digital image processing is used to analyze the measured patterns. When the sound-pressure level of stimuli is between 70 and 85 dB SPL, this system is applied to measure the vibratory response of the tympanic membrane (TM) of guinea pig temporal bones at frequencies up to 4 kHz where complicated vibration modes are observed. The effect of the bulla on TM displacements is also quantified. Results indicate that this system is capable of measuring the nanometer displacements of the TM, produced by stimuli of 70 dB SPL.
Vector-averaged gravity does not alter acetylcholine receptor single channel properties
NASA Technical Reports Server (NTRS)
Reitstetter, R.; Gruener, R.
1994-01-01
To examine the physiological sensitivity of membrane receptors to altered gravity, we examined the single-channel properties of the acetylcholine receptor (AChR), in co-cultures of Xenopus myocytes and neurons, under vector-averaged gravity in the clinostat. This experimental paradigm produces an environment in which, from the cell's perspective, the gravitational vector is "nulled" by continuous averaging. In that respect, the clinostat simulates one aspect of space microgravity, where the gravity force is greatly reduced. After clinorotation, the AChR channel mean open-time and conductance were statistically not different from control values but showed a rotation-dependent trend that suggests a process of cellular adaptation to clinorotation. These findings therefore suggest that AChR channel function may not be affected in the microgravity of space despite changes in the receptor's cellular organization.
Holder, Simon J; Achilleos, Mariliz; Jones, Richard G
2006-09-27
In this communication, we demonstrate that polymerization in a chiral solvent can affect the molecular weight distribution of the product by perturbing the balance of the P and M helical screw senses of the growing chains. Specifically, for the Wurtz-type synthesis of polymethylphenylsilane (PMPS) in either (R)- or (S)-limonene, the weight-average molecular weight of the products (average Mw = 80 000) was twice that of PMPS synthesized in (R/S)-limonene (average Mw = 39 200). Perturbation of the helical segmentation along the polymer chains leads to a reduction in the rate of occurrence of a key termination step. This is the first time that a chiral solvent has been demonstrated to have such an effect on a polymerization process, affecting molecular weight parameters rather than tacticity.
Lin, Tracy Kuo; Teymourian, Yasmin; Tursini, Maitri Shila
2018-04-14
Studies find that economic, political, and social globalization - as well as trade liberalization specifically - influence the prevalence of overweight and obesity in countries through increasing the availability and affordability of unhealthful food. However, what are the mechanisms that connect globalization, trade liberalization, and rising average body mass index (BMI)? We suggest that the various sub-components of globalization interact, leading individuals in countries that experience higher levels of globalization to prefer, import, and consume more imported sugar and processed food products than individuals in countries that experience lower levels of globalization. This study codes the amount of sugar and processed food imports in 172 countries from 1995 to 2010 using the United Nations Comtrade dataset. We employ country-specific fixed-effects (FE) models, with robust standard errors, to examine the relationship between sugar and processed food imports, globalization, and average BMI. To highlight further the relationship between sugar and processed food imports and average BMI, we employ a synthetic control method to calculate a counterfactual average BMI in Fiji. We find that sugar and processed food imports are part of the explanation for increasing average BMI in countries; after controlling for globalization and general imports and exports, sugar and processed food imports have a statistically and substantively significant effect in increasing average BMI. In the case of Fiji, the increased prevalence of obesity is associated with trade agreements and increased imports of sugar and processed food. The counterfactual estimates suggest that sugar and processed food imports are associated with a 0.5 increase in average BMI in Fiji.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Rooyen, Isabella Johanna; Demkowicz, Paul Andrew; Riesterer, Jessica Lori
2012-12-01
The electron microscopic examination of selected irradiated TRISO-coated particles from AGR-1 experiment fuel compact 6-3-2 is presented in this report. Compact 6-3-2 refers to the compact in Capsule 6 at level 3 of Stack 2. The fuel used in the Capsule 6 compacts is called the "baseline" fuel, as it was fabricated with the refined coating process conditions used to fabricate historic German fuel, which showed excellent irradiation performance with UO2 kernels. The AGR-1 fuel, however, is made of low-enriched uranium oxycarbide (UCO). Kernel diameters are approximately 350 µm with a U-235 enrichment of approximately 19.7%. Compact 6-3-2 has been irradiated to 11.3% FIMA compact-average burnup, with a time-average, volume-average temperature of 1070.2°C and a compact-average fast fluence of 2.38E21 n/cm²
NASA Astrophysics Data System (ADS)
Sahu, Anshuman Kumar; Chatterjee, Suman; Nayak, Praveen Kumar; Sankar Mahapatra, Siba
2018-03-01
Electrical discharge machining (EDM) is a non-traditional machining process which is widely used in machining difficult-to-machine materials. The EDM process can produce complex and intricately shaped components made of difficult-to-machine materials, and is widely applied in the aerospace, biomedical, and die- and mold-making industries. To meet the required applications, EDMed components need to possess high accuracy and excellent surface finish. In this work, the EDM process is performed using Nitinol as the workpiece material and AlSiMg prepared by selective laser sintering (SLS) as the tool electrode, along with conventional copper and graphite electrodes. SLS is a rapid prototyping (RP) method for producing complex metallic parts by additive manufacturing (AM). Experiments have been carried out varying different process parameters, namely open-circuit voltage (V), discharge current (Ip), duty cycle (τ), pulse-on-time (Ton), and tool material. Surface roughness parameters, namely average roughness (Ra), maximum height of the profile (Rt), and average height of the profile (Rz), are measured using a surface roughness measuring instrument (Talysurf). To reduce the number of experiments, a design-of-experiments (DOE) approach, Taguchi's L27 orthogonal array, has been chosen. The surface properties of the EDMed specimens are optimized by the desirability function approach, and the best parametric setting is reported for the EDM process. Tool type is the most significant parameter, followed by the interaction of tool type and duty cycle, duty cycle, discharge current, and voltage. A better surface finish of the EDMed specimen can be obtained with low values of voltage (V), discharge current (Ip), duty cycle (τ), and pulse-on-time (Ton), along with the use of the AlSiMg RP electrode.
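The desirability function approach mentioned above combines several responses into one score to optimize; a minimal sketch for "smaller is better" responses such as roughness follows. The Derringer-style form, the response values, and the bounds are generic illustrations, not the study's data.

```python
import numpy as np

def desirability_smaller_is_better(y, y_min, y_max, weight=1.0):
    # Derringer-type individual desirability: 1 at y_min, 0 at y_max
    d = (y_max - y) / (y_max - y_min)
    return np.clip(d, 0.0, 1.0) ** weight

# Composite desirability over Ra, Rt, Rz for one parameter setting
# (response value, lower bound, upper bound) - illustrative numbers
responses = {"Ra": (2.1, 1.5, 4.0), "Rt": (12.0, 8.0, 20.0), "Rz": (9.5, 6.0, 15.0)}
d_vals = [desirability_smaller_is_better(y, lo, hi)
          for y, lo, hi in responses.values()]
print(np.prod(d_vals) ** (1.0 / len(d_vals)))   # geometric-mean composite score
```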
Idle waves in high-performance computing
NASA Astrophysics Data System (ADS)
Markidis, Stefano; Vencels, Juris; Peng, Ivy Bo; Akhmetova, Dana; Laure, Erwin; Henri, Pierre
2015-01-01
The vast majority of parallel scientific applications distributes computation among processes that are in a busy state when computing and in an idle state when waiting for information from other processes. We identify the propagation of idle waves through the processes of scientific applications that use local information exchange between neighboring processes. Idle waves are nondispersive and have a phase velocity inversely proportional to the average busy time. The physical mechanism enabling the propagation of idle waves is the local synchronization between two processes due to remote data dependency. This study describes the large number of processes in parallel scientific applications as a continuous medium. This work is also a step towards an understanding of how localized idle periods can affect remote processes, leading to the degradation of global performance in parallel scientific applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sisterson, D. L.
2009-07-14
Individual raw data streams from instrumentation at the Atmospheric Radiation Measurement (ARM) Program Climate Research Facility (ACRF) fixed and mobile sites are collected and sent to the Data Management Facility (DMF) at Pacific Northwest National Laboratory (PNNL) for processing in near-real time. Raw and processed data are then sent approximately daily to the ACRF Archive, where they are made available to users. For each instrument, we calculate the ratio of the actual number of data records received daily at the archive to the expected number of data records. The results are tabulated by (1) individual data stream, site, and month for the current year and (2) site and fiscal year (FY) dating back to 1998. The U.S. Department of Energy (DOE) requires national user facilities to report time-based operating data. The requirements concern the actual hours of operation (ACTUAL); the estimated maximum operation or uptime goal (OPSMAX), which accounts for planned downtime; and the VARIANCE [1 - (ACTUAL/OPSMAX)], which accounts for unplanned downtime. The OPSMAX time for the third quarter of FY 2009 for the Southern Great Plains (SGP) site is 2,074.80 hours (0.95 x 2,184 hours this quarter); for the North Slope Alaska (NSA) locale it is 1,965.60 hours (0.90 x 2,184); and for the Tropical Western Pacific (TWP) locale it is 1,856.40 hours (0.85 x 2,184). The ARM Mobile Facility (AMF) was officially operational May 1 in Graciosa Island, the Azores, Portugal, so the OPSMAX time this quarter is 1,390.80 hours (0.95 x 1,464). The differences in OPSMAX performance reflect the complexity of local logistics and the frequency of extreme weather events. It is impractical to measure OPSMAX for each instrument or data stream. Data availability reported here refers to the average of the individual, continuous data streams that have been received by the Archive. Data not at the Archive are caused by downtime (scheduled or unplanned) of the individual instruments. Therefore, data availability is directly related to individual instrument uptime. Thus, the average percentage of data in the Archive represents the average percentage of the time (24 hours per day, 91 days for this quarter) the instruments were operating this quarter. Table 1 shows the accumulated maximum operation time (planned uptime), actual hours of operation, and variance (unplanned downtime) for April 1 - June 30, 2009, for the fixed sites. Because the AMF operates episodically, the AMF statistics are reported separately and are not included in the aggregate average with the fixed sites. The AMF statistics for this reporting period were not available at the time of this report. The third quarter comprises a total of 2,184 hours for the fixed sites. The average data availability well exceeded our goal this quarter.
Xiao, Jianbo
2015-01-01
Segmenting visual scenes into distinct objects and surfaces is a fundamental visual function. To better understand the underlying neural mechanism, we investigated how neurons in the middle temporal cortex (MT) of macaque monkeys represent overlapping random-dot stimuli moving transparently in slightly different directions. It has been shown that the neuronal response elicited by two stimuli approximately follows the average of the responses elicited by the constituent stimulus components presented alone. In this scheme of response pooling, the ability to segment two simultaneously presented motion directions is limited by the width of the tuning curve to motion in a single direction. We found that, although the population-averaged neuronal tuning showed response averaging, subgroups of neurons showed distinct patterns of response tuning and were capable of representing component directions that were separated by a small angle—less than the tuning width to unidirectional stimuli. One group of neurons preferentially represented the component direction at a specific side of the bidirectional stimuli, weighting one stimulus component more strongly than the other. Another group of neurons pooled the component responses nonlinearly and showed two separate peaks in their tuning curves even when the average of the component responses was unimodal. We also show for the first time that the direction tuning of MT neurons evolved from initially representing the vector-averaged direction of slightly different stimuli to gradually representing the component directions. Our results reveal important neural processes underlying image segmentation and suggest that information about slightly different stimulus components is computed dynamically and distributed across neurons. SIGNIFICANCE STATEMENT Natural scenes often contain multiple entities. The ability to segment visual scenes into distinct objects and surfaces is fundamental to sensory processing and is crucial for generating the perception of our environment. Because cortical neurons are broadly tuned to a given visual feature, segmenting two stimuli that differ only slightly is a challenge for the visual system. In this study, we discovered that many neurons in the visual cortex are capable of representing individual components of slightly different stimuli by selectively and nonlinearly pooling the responses elicited by the stimulus components. We also show for the first time that the neural representation of individual stimulus components developed over a period of ∼70–100 ms, revealing a dynamic process of image segmentation. PMID:26658869
Time averaging, ageing and delay analysis of financial time series
NASA Astrophysics Data System (ADS)
Cherstvy, Andrey G.; Vinod, Deepak; Aghion, Erez; Chechkin, Aleksei V.; Metzler, Ralf
2017-06-01
We introduce three strategies for the analysis of financial time series based on time averaged observables. These comprise the time averaged mean squared displacement (MSD) as well as the ageing and delay time methods for varying fractions of the financial time series. We explore these concepts via statistical analysis of historic time series for several Dow Jones Industrial indices for the period from the 1960s to 2015. Remarkably, we discover a simple universal law for the delay time averaged MSD. The observed features of the financial time series dynamics agree well with our analytical results for the time averaged measurables for geometric Brownian motion, underlying the famed Black-Scholes-Merton model. The concepts we promote here are shown to be useful for financial data analysis and enable one to unveil new universal features of stock market dynamics.
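A minimal sketch of the central observable, assuming a geometric Brownian motion surrogate for the price series as in the Black-Scholes-Merton setting; the drift, volatility, and sampling grid are illustrative, not fitted to Dow Jones data.

```python
import numpy as np

# Time averaged MSD (TAMSD) of a geometric Brownian motion log-price series.
rng = np.random.default_rng(0)
mu, sigma, dt = 0.05, 0.2, 1.0 / 252   # assumed annualized drift/volatility, daily steps
n = 252 * 50                            # fifty "years" of daily log prices
increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
log_price = np.cumsum(increments)

def tamsd(x, lag):
    """Time averaged MSD at a given lag: mean of [x(t + lag) - x(t)]^2 over the trace."""
    return np.mean((x[lag:] - x[:-lag]) ** 2)

for lag in (1, 5, 21, 252):
    print(lag, tamsd(log_price, lag))   # grows roughly linearly in lag for GBM
```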
Rock-weathering rates as functions of time
Colman, Steven M.
1981-01-01
The scarcity of documented numerical relations between rock weathering and time has led to a common assumption that rates of weathering are linear. This assumption has been strengthened by studies that have calculated long-term average rates. However, little theoretical or empirical evidence exists to support linear rates for most chemical-weathering processes, with the exception of congruent dissolution processes. The few previous studies of rock-weathering rates that contain quantitative documentation of the relation between chemical weathering and time suggest that the rates of most weathering processes decrease with time. Recent studies of weathering rinds on basaltic and andesitic stones in glacial deposits in the western United States also clearly demonstrate that rock-weathering processes slow with time. Some weathering processes appear to conform to power-law functions of time, such as the square-root time law for hydration of volcanic glass, which matches the theoretical predictions of diffusion kinetics. However, weathering of mineralogically heterogeneous rocks involves complex physical and chemical processes that generally can be expressed only empirically, commonly by way of logarithmic time functions. Incongruent dissolution and other weathering processes produce residues, which are commonly used as measures of weathering. These residues appear to slow movement of water to unaltered material and impede chemical transport away from it. If weathering residues impede weathering, then rates of weathering and of residue production are inversely proportional to some function of the residue thickness. This results in simple mathematical analogs for weathering that imply nonlinear time functions. The rate of weathering becomes constant only when an equilibrium thickness of the residue is reached. Because weathering residues are relatively stable chemically, and because physical removal of residues below the ground surface is slight, many weathering features require considerable time to reach constant rates of change. For weathering rinds on volcanic stones in the western United States, this time is at least 0.5 m.y. © 1981.
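As a one-line version of the residue argument, assume the weathering rate is inversely proportional to the residue (rind) thickness D. This minimal analog, a sketch rather than the paper's calibrated model, already yields the nonlinear, decelerating behavior described above:

```latex
% Minimal residue-limited analog (illustrative, not Colman's fitted model):
\frac{dD}{dt} = \frac{k}{D}
\quad\Longrightarrow\quad
D(t) = \sqrt{2kt},
\qquad
\frac{dD}{dt} = \sqrt{\frac{k}{2t}} \longrightarrow 0 \ \text{as}\ t \to \infty .
```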
Real time processor for array speckle interferometry
NASA Astrophysics Data System (ADS)
Chin, Gordon; Florez, Jose; Borelli, Renan; Fong, Wai; Miko, Joseph; Trujillo, Carlos
1989-02-01
The authors are constructing a real-time processor to acquire image frames, perform array flat-fielding, execute a 64 x 64 element two-dimensional complex FFT (fast Fourier transform) and average the power spectrum, all within the 25 ms coherence time for speckles at near-IR (infrared) wavelength. The processor will be a compact unit controlled by a PC with real-time display and data storage capability. This will provide the ability to optimize observations and obtain results on the telescope rather than waiting several weeks before the data can be analyzed and viewed with offline methods. The image acquisition and processing, design criteria, and processor architecture are described.
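A conceptual sketch of the per-frame pipeline follows. The 64 x 64 frame size and the flat-field / FFT / power-spectrum steps come from the text; the dark-frame subtraction, function names, and NumPy implementation are assumptions made for illustration, since the actual system is dedicated real-time hardware meeting the ~25 ms per-frame budget.

```python
import numpy as np

N = 64  # frame size from the text

def average_power_spectrum(frames, flat, dark):
    """Accumulate the average power spectrum over a run of short-exposure frames."""
    power = np.zeros((N, N))
    for frame in frames:
        corrected = (frame - dark) / flat      # array flat-fielding
        spectrum = np.fft.fft2(corrected)      # 64 x 64 complex FFT
        power += np.abs(spectrum) ** 2         # accumulate |F|^2
    return power / len(frames)

# Example with synthetic photon-noise frames:
rng = np.random.default_rng(3)
frames = rng.poisson(100.0, size=(50, N, N)).astype(float)
result = average_power_spectrum(frames, flat=np.ones((N, N)), dark=np.zeros((N, N)))
print(result.shape)
```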
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
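The sketch below illustrates the core mechanism under stated assumptions: a noisy firing-rate proxy ramps linearly on average toward a fixed threshold, with the drift set to threshold divided by the target interval (the one-shot learning rule) and with noise whose variance scales with the drift, Poisson-fashion; all parameter values are illustrative. Because the noise scales this way, the coefficient of variation of the hitting times is roughly independent of the target duration, which is the scale invariance the abstract describes.

```python
import numpy as np

# Ramp-to-threshold timing model (illustrative parameters, not fitted values).
rng = np.random.default_rng(1)

def timed_responses(target, n_trials=1000, dt=0.002, theta=1.0, c=0.15):
    drift = theta / target            # learned so the mean hitting time = target
    times = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while x < theta:
            # linear-on-average ramp plus drift-scaled (Poisson-like) noise
            x += drift * dt + c * np.sqrt(drift * dt) * rng.standard_normal()
            t += dt
        times[i] = t
    return times

for target in (0.5, 1.0, 2.0):
    rt = timed_responses(target)
    # mean tracks the target; CV stays ~constant across targets (scale invariance)
    print(target, round(rt.mean(), 3), round(rt.std() / rt.mean(), 3))
```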
A stochastic fractional dynamics model of space-time variability of rain
NASA Astrophysics Data System (ADS)
Kundu, Prasun K.; Travis, James E.
2013-09-01
Rainfall varies in space and time in a highly irregular manner and is described naturally in terms of a stochastic process. A characteristic feature of rainfall statistics is that they depend strongly on the space-time scales over which rain data are averaged. A spectral model of precipitation has been developed based on a stochastic differential equation of fractional order for the point rain rate, which allows a concise description of the second moment statistics of rain at any prescribed space-time averaging scale. The model is thus capable of providing a unified description of the statistics of both radar and rain gauge data. The underlying dynamical equation can be expressed in terms of space-time derivatives of fractional orders that are adjusted together with other model parameters to fit the data. The form of the resulting spectrum gives the model adequate flexibility to capture the subtle interplay between the spatial and temporal scales of variability of rain but strongly constrains the predicted statistical behavior as a function of the averaging length and time scales. We test the model with radar and gauge data collected contemporaneously at the NASA TRMM ground validation sites located near Melbourne, Florida, and on the Kwajalein Atoll, Marshall Islands, in the tropical Pacific. We estimate the parameters by tuning them to fit the second moment statistics of radar data at the smaller spatiotemporal scales. The model predictions are then found to fit the second moment statistics of the gauge data reasonably well at these scales without any further adjustment.
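As a generic illustration of the averaging-scale dependence emphasized above, and not the authors' fractional-order model, the sketch below block-averages a correlated surrogate "rain rate" and shows how the variance of the averages depends on the averaging window; the AR(1) surrogate and all parameter values are assumptions.

```python
import numpy as np

# Scale-dependent statistics of a correlated surrogate point process.
rng = np.random.default_rng(2)
n, phi = 200_000, 0.98            # assumed lag-1 correlation; correlation time ~ 50 steps

x = np.empty(n)
x[0] = 0.0
for t in range(1, n):             # AR(1) surrogate for the point "rain rate"
    x[t] = phi * x[t - 1] + rng.standard_normal()
rain = np.maximum(x, 0.0)         # crude non-negativity, rain-like intermittency

for window in (1, 10, 100, 1000):
    blocks = rain[: n - n % window].reshape(-1, window).mean(axis=1)
    # variance decays far more slowly than 1/window until the window exceeds
    # the correlation time, mirroring the scale dependence of rain statistics
    print(window, round(blocks.var(), 4))
```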