Eliciting interval beliefs: An experimental study
Peeters, Ronald; Wolk, Leonard
2017-01-01
In this paper we study the interval scoring rule as a mechanism to elicit subjective beliefs under varying degrees of uncertainty. In our experiment, subjects forecast the termination time of a time series to be generated from a given but unknown stochastic process. Subjects gradually learn more about the underlying process over time and hence the true distribution over termination times. We conduct two treatments, one with a high and one with a low volatility process. We find that elicited intervals are better when subjects are facing a low volatility process. In this treatment, participants learn to position their intervals almost optimally over the course of the experiment. This is in contrast with the high volatility treatment, where subjects, over the course of the experiment, learn to optimize the location of their intervals but fail to provide the optimal length. PMID:28380020
Kim, Tae Kyung; Kim, Hyung Wook; Kim, Su Jin; Ha, Jong Kun; Jang, Hyung Ha; Hong, Young Mi; Park, Su Bum; Choi, Cheol Woong; Kang, Dae Hwan
2014-01-01
Background/Aims The quality of bowel preparation (QBP) is the important factor in performing a successful colonoscopy. Several factors influencing QBP have been reported; however, some factors, such as the optimal preparation-to-colonoscopy time interval, remain controversial. This study aimed to determine the factors influencing QBP and the optimal time interval for full-dose polyethylene glycol (PEG) preparation. Methods A total of 165 patients who underwent colonoscopy from June 2012 to August 2012 were prospectively evaluated. The QBP was assessed using the Ottawa Bowel Preparation Scale (Ottawa) score according to several factors influencing the QBP were analyzed. Results Colonoscopies with a time interval of 5 to 6 hours had the best Ottawa score in all parts of the colon. Patients with time intervals of 6 hours or less had the better QBP than those with time intervals of more than 6 hours (p=0.046). In the multivariate analysis, the time interval (odds ratio, 1.897; 95% confidence interval, 1.006 to 3.577; p=0.048) was the only significant contributor to a satisfactory bowel preparation. Conclusions The optimal time was 5 to 6 hours for the full-dose PEG method, and the time interval was the only significant contributor to a satisfactory bowel preparation. PMID:25368750
NASA Astrophysics Data System (ADS)
Sandhu, Amit
A sequential quadratic programming method is proposed for solving nonlinear optimal control problems subject to general path constraints including mixed state-control and state only constraints. The proposed algorithm further develops on the approach proposed in [1] with objective to eliminate the use of a high number of time intervals for arriving at an optimal solution. This is done by introducing an adaptive time discretization to allow formation of a desirable control profile without utilizing a lot of intervals. The use of fewer time intervals reduces the computation time considerably. This algorithm is further used in this thesis to solve a trajectory planning problem for higher elevation Mars landing.
State transformations and Hamiltonian structures for optimal control in discrete systems
NASA Astrophysics Data System (ADS)
Sieniutycz, S.
2006-04-01
Preserving usual definition of Hamiltonian H as the scalar product of rates and generalized momenta we investigate two basic classes of discrete optimal control processes governed by the difference rather than differential equations for the state transformation. The first class, linear in the time interval θ, secures the constancy of optimal H and satisfies a discrete Hamilton-Jacobi equation. The second class, nonlinear in θ, does not assure the constancy of optimal H and satisfies only a relationship that may be regarded as an equation of Hamilton-Jacobi type. The basic question asked is if and when Hamilton's canonical structures emerge in optimal discrete systems. For a constrained discrete control, general optimization algorithms are derived that constitute powerful theoretical and computational tools when evaluating extremum properties of constrained physical systems. The mathematical basis is Bellman's method of dynamic programming (DP) and its extension in the form of the so-called Carathéodory-Boltyanski (CB) stage optimality criterion which allows a variation of the terminal state that is otherwise fixed in Bellman's method. For systems with unconstrained intervals of the holdup time θ two powerful optimization algorithms are obtained: an unconventional discrete algorithm with a constant H and its counterpart for models nonlinear in θ. We also present the time-interval-constrained extension of the second algorithm. The results are general; namely, one arrives at: discrete canonical equations of Hamilton, maximum principles, and (at the continuous limit of processes with free intervals of time) the classical Hamilton-Jacobi theory, along with basic results of variational calculus. A vast spectrum of applications and an example are briefly discussed with particular attention paid to models nonlinear in the time interval θ.
Modified dwell time optimization model and its applications in subaperture polishing.
Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen
2014-05-20
The optimization of dwell time is an important procedure in deterministic subaperture polishing. We present a modified optimization model of dwell time by iterative and numerical method, assisted by extended surface forms and tool paths for suppressing the edge effect. Compared with discrete convolution and linear equation models, the proposed model has essential compatibility with arbitrary tool paths, multiple tool influence functions (TIFs) in one optimization, and asymmetric TIFs. The emulational fabrication of a Φ200 mm workpiece by the proposed model yields a smooth, continuous, and non-negative dwell time map with a root-mean-square (RMS) convergence rate of 99.6%, and the optimization costs much less time. By the proposed model, influences of TIF size and path interval to convergence rate and polishing time are optimized, respectively, for typical low and middle spatial-frequency errors. Results show that (1) the TIF size is nonlinear inversely proportional to convergence rate and polishing time. A TIF size of ~1/7 workpiece size is preferred; (2) the polishing time is less sensitive to path interval, but increasing the interval markedly reduces the convergence rate. A path interval of ~1/8-1/10 of the TIF size is deemed to be appropriate. The proposed model is deployed on a JR-1800 and MRF-180 machine. Figuring results of Φ920 mm Zerodur paraboloid and Φ100 mm Zerodur plane by them yield RMS of 0.016λ and 0.013λ (λ=632.8 nm), respectively, and thereby validate the feasibility of proposed dwell time model used for subaperture polishing.
NASA Technical Reports Server (NTRS)
Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.
1989-01-01
A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.
Optimal regulation in systems with stochastic time sampling
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1980-01-01
An optimal control theory that accounts for stochastic variable time sampling in a distributed microprocessor based flight control system is presented. The theory is developed by using a linear process model for the airplane dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved for the control law that minimizes the expected value of a quadratic cost function. The optimal cost obtained with a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained with a known and uniform information update interval.
Optimal control of lift/drag ratios on a rotating cylinder
NASA Technical Reports Server (NTRS)
Ou, Yuh-Roung; Burns, John A.
1992-01-01
We present the numerical solution to a problem of maximizing the lift to drag ratio by rotating a circular cylinder in a two-dimensional viscous incompressible flow. This problem is viewed as a test case for the newly developing theoretical and computational methods for control of fluid dynamic systems. We show that the time averaged lift to drag ratio for a fixed finite-time interval achieves its maximum value at an optimal rotation rate that depends on the time interval.
van Gelder, Berry M; Meijer, Albert; Bracke, Frank A
2008-09-01
We compared the calculated optimal V-V interval derived from intracardiac electrograms (IEGM) with the optimized V-V interval determined by invasive measurement of LVdP/dt(MAX). Thirty-two patients with heart failure (six females, ages 68 +/- 7.8 years) had a CRT device implanted. After implantation of the atrial, right and a left ventricular lead, the optimal V-V interval was calculated using the QuickOpt formula (St. Jude Medical, Sylmar, CA, USA) applied to the respective IEGM recordings (V-V(IEGM)), and also determined by invasive measurement of LVdP/dt(MAX) (V-V(dP/dt)). The optimal V-V(IEGM) and V-V(dP/dt) intervals were 52.7 +/- 18 ms and 24.0 +/- 33 ms, respectively (P = 0.017), without correlation between the two. The baseline LVdP/dt(MAX) was 748 +/- 191 mmHg/s. The mean value of LVdP/dt(MAX) at invasive optimization was 947 +/- 198 mmHg/s, and at the calculated optimal V-V(IEGM) interval 920 +/- 191 mmHg/s (P < 0.0001). In spite of this significant difference, there was a good correlation between both methods (R = 0.991, P < 0.0001). However, a similarly good correlation existed between the maximum value of LVdP/dt(MAX) and LVdP/dt(MAX) at a fixed V-V interval of 0 ms (R = 0.993, P < 0.0001), or LVdP/dt(MAX) at a randomly selected V-V interval between 0 and +80 ms (R = 0.991, P < 0.0001). Optimizing the V-V interval with the IEGM method does not yield better hemodynamic results than simultaneous BiV pacing. Although a good correlation between LVdP/dt(MAX) determined with V-V(IEGM) and V-V(dP/dt) can be constructed, there is no correlation with the optimal settings of V-V interval in the individual patient.
Information distribution in distributed microprocessor based flight control systems
NASA Technical Reports Server (NTRS)
Montgomery, R. C.; Lee, P. S.
1977-01-01
This paper presents an optimal control theory that accounts for variable time intervals in the information distribution to control effectors in a distributed microprocessor based flight control system. The theory is developed using a linear process model for the aircraft dynamics and the information distribution process is modeled as a variable time increment process where, at the time that information is supplied to the control effectors, the control effectors know the time of the next information update only in a stochastic sense. An optimal control problem is formulated and solved that provides the control law that minimizes the expected value of a quadratic cost function. An example is presented where the theory is applied to the control of the longitudinal motions of the F8-DFBW aircraft. Theoretical and simulation results indicate that, for the example problem, the optimal cost obtained using a variable time increment Markov information update process where the control effectors know only the past information update intervals and the Markov transition mechanism is almost identical to that obtained using a known uniform information update interval.
NASA Astrophysics Data System (ADS)
Muratore-Ginanneschi, Paolo
2005-05-01
Investment strategies in multiplicative Markovian market models with transaction costs are defined using growth optimal criteria. The optimal strategy is shown to consist in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon. The inclusion of financial derivatives in the models is also considered. All the results presented in this contributions were previously derived in collaboration with E. Aurell.
Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.
Ćwik, Michał; Józefczyk, Jerzy
2018-01-01
An uncertain version of the permutation flow-shop with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as such calculation is NP-hard problem itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop. The constructive heuristic algorithm is applied for the relaxed optimization problem. The algorithm is compared with previously elaborated other heuristic algorithms basing on the evolutionary and the middle interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regards to both the criterion and the time of computations. The Wilcoxon paired-rank statistical test confirmed this conclusion.
Fuel optimal maneuvers for spacecraft with fixed thrusters
NASA Technical Reports Server (NTRS)
Carter, T. C.
1982-01-01
Several mathematical models, including a minimum integral square criterion problem, were used for the qualitative investigation of fuel optimal maneuvers for spacecraft with fixed thrusters. The solutions consist of intervals of "full thrust" and "coast" indicating that thrusters do not need to be designed as "throttleable" for fuel optimal performance. For the primary model considered, singular solutions occur only if the optimal solution is "pure translation". "Time optimal" singular solutions can be found which consist of intervals of "coast" and "full thrust". The shape of the optimal fuel consumption curve as a function of flight time was found to depend on whether or not the initial state is in the region admitting singular solutions. Comparisons of fuel optimal maneuvers in deep space with those relative to a point in circular orbit indicate that qualitative differences in the solutions can occur. Computation of fuel consumption for certain "pure translation" cases indicates that considerable savings in fuel can result from the fuel optimal maneuvers.
Consideration of computer limitations in implementing on-line controls. M.S. Thesis
NASA Technical Reports Server (NTRS)
Roberts, G. K.
1976-01-01
A formal statement of the optimal control problem which includes the interval of dicretization as an optimization parameter, and extend this to include selection of a control algorithm as part of the optimization procedure, is formulated. The performance of the scalar linear system depends on the discretization interval. Discrete-time versions of the output feedback regulator and an optimal compensator, and the use of these results in presenting an example of a system for which fast partial-state-feedback control better minimizes a quadratic cost than either a full-state feedback control or a compensator, are developed.
Optimizing some 3-stage W-methods for the time integration of PDEs
NASA Astrophysics Data System (ADS)
Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.
2017-07-01
The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1] several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. Besides, the optimization of several specific methods for PDEs when the Approximate Matrix Factorization Splitting (AMF) is used to define the approximate Jacobian matrix (W ≈ fy(yn)) was carried out. Also, some convergence and stability properties were presented [2]. The derived methods were optimized on the base that the underlying explicit Runge-Kutta method is the one having the largest Monotonicity interval among the thee-stage order three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing some additional order condition [7] to keep order three for parabolic PDE problems [6] but at the price of reducing substantially the length of the nonlinear Monotonicity interval of the underlying explicit Runge-Kutta method.
Morimoto, Akemi; Nagao, Shoji; Kogiku, Ai; Yamamoto, Kasumi; Miwa, Maiko; Wakahashi, Senn; Ichida, Kotaro; Sudo, Tamotsu; Yamaguchi, Satoshi; Fujiwara, Kiyoshi
2016-06-01
The purpose of this study is to investigate the clinical characteristics to determine the optimal timing of interval debulking surgery following neoadjuvant chemotherapy in patients with advanced epithelial ovarian cancer. We reviewed the charts of women with advanced epithelial ovarian cancer, fallopian tube cancer or primary peritoneal cancer who underwent interval debulking surgery following neoadjuvant chemotherapy at our cancer center from April 2006 to April 2014. There were 139 patients, including 91 with ovarian cancer [International Federation of Gynecology and Obstetrics (FIGO) Stage IIIc in 56 and IV in 35], two with fallopian tube cancers (FIGO Stage IV, both) and 46 with primary peritoneal cancer (FIGO Stage IIIc in 27 and IV in 19). After 3-6 cycles (median, 4 cycles) of platinum-based chemotherapy, interval debulking surgery was performed. Sixty-seven patients (48.2%) achieved complete resection of all macroscopic disease, while 72 did not. More patients with cancer antigen 125 levels ≤25.8 mg/dl at pre-interval debulking surgery achieved complete resection than those with higher cancer antigen 125 levels (84.7 vs. 21.3%; P< 0.0001). Patients with no ascites at pre-interval debulking surgery also achieved a higher complete resection rate (63.5 vs. 34.1%; P< 0.0001). Moreover, most patients (86.7%) with cancer antigen 125 levels ≤25.8 mg/dl and no ascites at pre-interval debulking surgery achieved complete resection. A low cancer antigen 125 level of ≤25.8 mg/dl and the absence of ascites at pre-interval debulking surgery are major predictive factors for complete resection during interval debulking surgery and present useful criteria to determine the optimal timing of interval debulking surgery. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Santana, Victor M; Alday, Josu G; Lee, HyoHyeMi; Allen, Katherine A; Marrs, Rob H
2016-01-01
A present challenge in fire ecology is to optimize management techniques so that ecological services are maximized and C emissions minimized. Here, we modeled the effects of different prescribed-burning rotation intervals and wildfires on carbon emissions (present and future) in British moorlands. Biomass-accumulation curves from four Calluna-dominated ecosystems along a north-south gradient in Great Britain were calculated and used within a matrix-model based on Markov Chains to calculate above-ground biomass-loads and annual C emissions under different prescribed-burning rotation intervals. Additionally, we assessed the interaction of these parameters with a decreasing wildfire return intervals. We observed that litter accumulation patterns varied between sites. Northern sites (colder and wetter) accumulated lower amounts of litter with time than southern sites (hotter and drier). The accumulation patterns of the living vegetation dominated by Calluna were determined by site-specific conditions. The optimal prescribed-burning rotation interval for minimizing annual carbon emissions also differed between sites: the optimal rotation interval for northern sites was between 30 and 50 years, whereas for southern sites a hump-backed relationship was found with the optimal interval either between 8 to 10 years or between 30 to 50 years. Increasing wildfire frequency interacted with prescribed-burning rotation intervals by both increasing C emissions and modifying the optimum prescribed-burning interval for minimum C emission. This highlights the importance of studying site-specific biomass accumulation patterns with respect to environmental conditions for identifying suitable fire-rotation intervals to minimize C emissions.
van Oostrum, Jeroen M; Van Houdenhoven, Mark; Vrielink, Manon M J; Klein, Jan; Hans, Erwin W; Klimek, Markus; Wullink, Gerhard; Steyerberg, Ewout W; Kazemier, Geert
2008-11-01
Hospitals that perform emergency surgery during the night (e.g., from 11:00 pm to 7:30 am) face decisions on optimal operating room (OR) staffing. Emergency patients need to be operated on within a predefined safety window to decrease morbidity and improve their chances of full recovery. We developed a process to determine the optimal OR team composition during the night, such that staffing costs are minimized, while providing adequate resources to start surgery within the safety interval. A discrete event simulation in combination with modeling of safety intervals was applied. Emergency surgery was allowed to be postponed safely. The model was tested using data from the main OR of Erasmus University Medical Center (Erasmus MC). Two outcome measures were calculated: violation of safety intervals and frequency with which OR and anesthesia nurses were called in from home. We used the following input data from Erasmus MC to estimate distributions of all relevant parameters in our model: arrival times of emergency patients, durations of surgical cases, length of stay in the postanesthesia care unit, and transportation times. In addition, surgeons and OR staff of Erasmus MC specified safety intervals. Reducing in-house team members from 9 to 5 increased the fraction of patients treated too late by 2.5% as compared to the baseline scenario. Substantially more OR and anesthesia nurses were called in from home when needed. The use of safety intervals benefits OR management during nights. Modeling of safety intervals substantially influences the number of emergency patients treated on time. Our case study showed that by modeling safety intervals and applying computer simulation, an OR can reduce its staff on call without jeopardizing patient safety.
Sui, Yuanyuan; Ou, Yang; Yan, Baixing; Xu, Xiaohong; Rousseau, Alain N; Zhang, Yu
2016-01-01
Micro-basin tillage is a soil and water conservation practice that requires building individual earth blocks along furrows. In this study, plot experiments were conducted to assess the efficiency of micro-basin tillage on sloping croplands between 2012 and 2013 (5°and 7°). The conceptual, optimal, block interval model was used to design micro-basins which are meant to capture the maximum amount of water per unit area. Results indicated that when compared to the up-down slope tillage, micro-basin tillage could increase soil water content and maize yield by about 45% and 17%, and reduce runoff, sediment and nutrients loads by about 63%, 96% and 86%, respectively. Meanwhile, micro-basin tillage could reduce the peak runoff rates and delay the initial runoff-yielding time. In addition, micro-basin tillage with the optimal block interval proved to be the best one among all treatments with different intervals. Compared with treatments of other block intervals, the optimal block interval treatments increased soil moisture by around 10% and reduced runoff rate by around 15%. In general, micro-basin tillage with optimal block interval represents an effective soil and water conservation practice for sloping farmland of the black soil region.
Sui, Yuanyuan; Ou, Yang; Yan, Baixing; Xu, Xiaohong; Rousseau, Alain N.; Zhang, Yu
2016-01-01
Micro-basin tillage is a soil and water conservation practice that requires building individual earth blocks along furrows. In this study, plot experiments were conducted to assess the efficiency of micro-basin tillage on sloping croplands between 2012 and 2013 (5°and 7°). The conceptual, optimal, block interval model was used to design micro-basins which are meant to capture the maximum amount of water per unit area. Results indicated that when compared to the up-down slope tillage, micro-basin tillage could increase soil water content and maize yield by about 45% and 17%, and reduce runoff, sediment and nutrients loads by about 63%, 96% and 86%, respectively. Meanwhile, micro-basin tillage could reduce the peak runoff rates and delay the initial runoff-yielding time. In addition, micro-basin tillage with the optimal block interval proved to be the best one among all treatments with different intervals. Compared with treatments of other block intervals, the optimal block interval treatments increased soil moisture by around 10% and reduced runoff rate by around 15%. In general, micro-basin tillage with optimal block interval represents an effective soil and water conservation practice for sloping farmland of the black soil region. PMID:27031339
Assessing and minimizing contamination in time of flight based validation data
NASA Astrophysics Data System (ADS)
Lennox, Kristin P.; Rosenfield, Paul; Blair, Brenton; Kaplan, Alan; Ruz, Jaime; Glenn, Andrew; Wurtz, Ronald
2017-10-01
Time of flight experiments are the gold standard method for generating labeled training and testing data for the neutron/gamma pulse shape discrimination problem. As the popularity of supervised classification methods increases in this field, there will also be increasing reliance on time of flight data for algorithm development and evaluation. However, time of flight experiments are subject to various sources of contamination that lead to neutron and gamma pulses being mislabeled. Such labeling errors have a detrimental effect on classification algorithm training and testing, and should therefore be minimized. This paper presents a method for identifying minimally contaminated data sets from time of flight experiments and estimating the residual contamination rate. This method leverages statistical models describing neutron and gamma travel time distributions and is easily implemented using existing statistical software. The method produces a set of optimal intervals that balance the trade-off between interval size and nuisance particle contamination, and its use is demonstrated on a time of flight data set for Cf-252. The particular properties of the optimal intervals for the demonstration data are explored in detail.
Viana, Duarte S; Santamaría, Luis; Figuerola, Jordi
2016-02-01
Propagule retention time is a key factor in determining propagule dispersal distance and the shape of "seed shadows". Propagules dispersed by animal vectors are either ingested and retained in the gut until defecation or attached externally to the body until detachment. Retention time is a continuous variable, but it is commonly measured at discrete time points, according to pre-established sampling time-intervals. Although parametric continuous distributions have been widely fitted to these interval-censored data, the performance of different fitting methods has not been evaluated. To investigate the performance of five different fitting methods, we fitted parametric probability distributions to typical discretized retention-time data with known distribution using as data-points either the lower, mid or upper bounds of sampling intervals, as well as the cumulative distribution of observed values (using either maximum likelihood or non-linear least squares for parameter estimation); then compared the estimated and original distributions to assess the accuracy of each method. We also assessed the robustness of these methods to variations in the sampling procedure (sample size and length of sampling time-intervals). Fittings to the cumulative distribution performed better for all types of parametric distributions (lognormal, gamma and Weibull distributions) and were more robust to variations in sample size and sampling time-intervals. These estimated distributions had negligible deviations of up to 0.045 in cumulative probability of retention times (according to the Kolmogorov-Smirnov statistic) in relation to original distributions from which propagule retention time was simulated, supporting the overall accuracy of this fitting method. In contrast, fitting the sampling-interval bounds resulted in greater deviations that ranged from 0.058 to 0.273 in cumulative probability of retention times, which may introduce considerable biases in parameter estimates. We recommend the use of cumulative probability to fit parametric probability distributions to propagule retention time, specifically using maximum likelihood for parameter estimation. Furthermore, the experimental design for an optimal characterization of unimodal propagule retention time should contemplate at least 500 recovered propagules and sampling time-intervals not larger than the time peak of propagule retrieval, except in the tail of the distribution where broader sampling time-intervals may also produce accurate fits.
Santana, Victor M.; Alday, Josu G.; Lee, HyoHyeMi; Allen, Katherine A.; Marrs, Rob H.
2016-01-01
A present challenge in fire ecology is to optimize management techniques so that ecological services are maximized and C emissions minimized. Here, we modeled the effects of different prescribed-burning rotation intervals and wildfires on carbon emissions (present and future) in British moorlands. Biomass-accumulation curves from four Calluna-dominated ecosystems along a north-south gradient in Great Britain were calculated and used within a matrix-model based on Markov Chains to calculate above-ground biomass-loads and annual C emissions under different prescribed-burning rotation intervals. Additionally, we assessed the interaction of these parameters with a decreasing wildfire return intervals. We observed that litter accumulation patterns varied between sites. Northern sites (colder and wetter) accumulated lower amounts of litter with time than southern sites (hotter and drier). The accumulation patterns of the living vegetation dominated by Calluna were determined by site-specific conditions. The optimal prescribed-burning rotation interval for minimizing annual carbon emissions also differed between sites: the optimal rotation interval for northern sites was between 30 and 50 years, whereas for southern sites a hump-backed relationship was found with the optimal interval either between 8 to 10 years or between 30 to 50 years. Increasing wildfire frequency interacted with prescribed-burning rotation intervals by both increasing C emissions and modifying the optimum prescribed-burning interval for minimum C emission. This highlights the importance of studying site-specific biomass accumulation patterns with respect to environmental conditions for identifying suitable fire-rotation intervals to minimize C emissions. PMID:27880840
Study on transfer optimization of urban rail transit and conventional public transport
NASA Astrophysics Data System (ADS)
Wang, Jie; Sun, Quan Xin; Mao, Bao Hua
2018-04-01
This paper mainly studies the time optimization of feeder connection between rail transit and conventional bus in a shopping center. In order to achieve the goal of connecting rail transportation effectively and optimizing the convergence between the two transportations, the things had to be done are optimizing the departure intervals, shorting the passenger transfer time and improving the service level of public transit. Based on the goal that has the minimum of total waiting time of passengers and the number of start of classes, establish the optimizing model of bus connecting of departure time. This model has some constrains such as transfer time, load factor, and the convergence of public transportation grid spacing. It solves the problems by using genetic algorithms.
NASA Astrophysics Data System (ADS)
Niakan, F.; Vahdani, B.; Mohammadi, M.
2015-12-01
This article proposes a multi-objective mixed-integer model to optimize the location of hubs within a hub network design problem under uncertainty. The considered objectives include minimizing the maximum accumulated travel time, minimizing the total costs including transportation, fuel consumption and greenhouse emissions costs, and finally maximizing the minimum service reliability. In the proposed model, it is assumed that for connecting two nodes, there are several types of arc in which their capacity, transportation mode, travel time, and transportation and construction costs are different. Moreover, in this model, determining the capacity of the hubs is part of the decision-making procedure and balancing requirements are imposed on the network. To solve the model, a hybrid solution approach is utilized based on inexact programming, interval-valued fuzzy programming and rough interval programming. Furthermore, a hybrid multi-objective metaheuristic algorithm, namely multi-objective invasive weed optimization (MOIWO), is developed for the given problem. Finally, various computational experiments are carried out to assess the proposed model and solution approaches.
Seo, Eun Hee; Kim, Tae Oh; Park, Min Jae; Joo, Hee Rin; Heo, Nae Yun; Park, Jongha; Park, Seung Ha; Yang, Sung Yeon; Moon, Young Soo
2012-03-01
Several factors influence bowel preparation quality. Recent studies have indicated that the time interval between bowel preparation and the start of colonoscopy is also important in determining bowel preparation quality. To evaluate the influence of the preparation-to-colonoscopy (PC) interval (the interval of time between the last polyethylene glycol dose ingestion and the start of the colonoscopy) on bowel preparation quality in the split-dose method for colonoscopy. Prospective observational study. University medical center. A total of 366 consecutive outpatients undergoing colonoscopy. Split-dose bowel preparation and colonoscopy. The quality of bowel preparation was assessed by using the Ottawa Bowel Preparation Scale according to the PC interval, and other factors that might influence bowel preparation quality were analyzed. Colonoscopies with a PC interval of 3 to 5 hours had the best bowel preparation quality score in the whole, right, mid, and rectosigmoid colon according to the Ottawa Bowel Preparation Scale. In multivariate analysis, the PC interval (odds ratio [OR] 1.85; 95% CI, 1.18-2.86), the amount of PEG ingested (OR 4.34; 95% CI, 1.08-16.66), and compliance with diet instructions (OR 2.22l 95% CI, 1.33-3.70) were significant contributors to satisfactory bowel preparation. Nonrandomized controlled, single-center trial. The optimal time interval between the last dose of the agent and the start of colonoscopy is one of the important factors to determine satisfactory bowel preparation quality in split-dose polyethylene glycol bowel preparation. Copyright © 2012 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.
Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.
Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel
2017-10-01
This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions.
Optimizing preventive maintenance policy: A data-driven application for a light rail braking system
Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel
2017-01-01
This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of the three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and get further insights by a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals to improve on the key performance indicators. We show the potential of data-driven modelling to determine optimal maintenance policy: same system availability and reliability can be achieved with 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions. PMID:29278245
Tang, Zhongwen
2015-01-01
An analytical way to compute predictive probability of success (PPOS) together with credible interval at interim analysis (IA) is developed for big clinical trials with time-to-event endpoints. The method takes account of the fixed data up to IA, the amount of uncertainty in future data, and uncertainty about parameters. Predictive power is a special type of PPOS. The result is confirmed by simulation. An optimal design is proposed by finding optimal combination of analysis time and futility cutoff based on some PPOS criteria.
NASA Astrophysics Data System (ADS)
Vu, Duy-Duc; Monies, Frédéric; Rubio, Walter
2018-05-01
A large number of studies, based on 3-axis end milling of free-form surfaces, seek to optimize tool path planning. Approaches try to optimize the machining time by reducing the total tool path length while respecting the criterion of the maximum scallop height. Theoretically, the tool path trajectories that remove the most material follow the directions in which the machined width is the largest. The free-form surface is often considered as a single machining area. Therefore, the optimization on the entire surface is limited. Indeed, it is difficult to define tool trajectories with optimal feed directions which generate largest machined widths. Another limiting point of previous approaches for effectively reduce machining time is the inadequate choice of the tool. Researchers use generally a spherical tool on the entire surface. However, the gains proposed by these different methods developed with these tools lead to relatively small time savings. Therefore, this study proposes a new method, using toroidal milling tools, for generating toolpaths in different regions on the machining surface. The surface is divided into several regions based on machining intervals. These intervals ensure that the effective radius of the tool, at each cutter-contact points on the surface, is always greater than the radius of the tool in an optimized feed direction. A parallel plane strategy is then used on the sub-surfaces with an optimal specific feed direction for each sub-surface. This method allows one to mill the entire surface with efficiency greater than with the use of a spherical tool. The proposed method is calculated and modeled using Maple software to find optimal regions and feed directions in each region. This new method is tested on a free-form surface. A comparison is made with a spherical cutter to show the significant gains obtained with a toroidal milling cutter. Comparisons with CAM software and experimental validations are also done. The results show the efficiency of the method.
The right time to learn: mechanisms and optimization of spaced learning
Smolen, Paul; Zhang, Yili; Byrne, John H.
2016-01-01
For many types of learning, spaced training, which involves repeated long inter-trial intervals, leads to more robust memory formation than does massed training, which involves short or no intervals. Several cognitive theories have been proposed to explain this superiority, but only recently have data begun to delineate the underlying cellular and molecular mechanisms of spaced training, and we review these theories and data here. Computational models of the implicated signalling cascades have predicted that spaced training with irregular inter-trial intervals can enhance learning. This strategy of using models to predict optimal spaced training protocols, combined with pharmacotherapy, suggests novel ways to rescue impaired synaptic plasticity and learning. PMID:26806627
Feeding Intervals in Premature Infants ≤1750 g: An Integrative Review.
Binchy, Áine; Moore, Zena; Patton, Declan
2018-06-01
The timely establishment of enteral feeds and a reduction in the number of feeding interruptions are key to achieving optimal nutrition in premature infants. Nutritional guidelines vary widely regarding feeding regimens and there is not a widely accepted consensus on the optimal feeding interval. To critically examine the evidence to determine whether there is a relationship to feeding intervals and feeding outcomes in premature infants. A systematic review of the literature in the following databases: PubMed, CINAHL, Embase and the Cochrane Library. The search strategy used the terms infant premature, low birth weight, enteral feeding, feed tolerance and feed intervals. Search results yielded 10 studies involving 1269 infants (birth weight ≤1750 g). No significant differences in feed intolerance, growth, or incidence of necrotizing enterocolitis were observed. Evidence suggests that infants fed at 2 hourly intervals reached full feeds faster than at 3 hourly intervals, had fewer days on parenteral nutrition, and fewer days in which feedings were withheld. Decrease in the volume of gastric residuals and feeding interruptions were observed in the infants fed at 3 hourly intervals than those who were continuously fed. Reducing the feed interval from 3 to 2 hourly increases nurse workload, yet may improve feeding outcomes by reducing the time to achieve full enteral feeding. Studies varied greatly in the definition and management of feeding intolerance and in how outcomes were measured, analyzed, and reported. The term "intermittent" is used widely but can refer to a 2 or 3 hourly interval.
NASA Astrophysics Data System (ADS)
Trifonenkov, A. V.; Trifonenkov, V. P.
2017-01-01
This article deals with a feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets. The operation of a nuclear reactor during threatened period is considered. The optimal control search problem is analysed. The xenon poisoning causes limitations on the variety of statements of the problem of calculating time-average characteristics of a set of optimal reactor power off controls. The level of xenon poisoning is limited. There is a problem of choosing an appropriate segment of the time axis to ensure that optimal control problem is consistent. Two procedures of estimation of the duration of this segment are considered. Two estimations as functions of the xenon limitation were plot. Boundaries of the interval of averaging are defined more precisely.
Timescale- and Sensory Modality-Dependency of the Central Tendency of Time Perception.
Murai, Yuki; Yotsumoto, Yuko
2016-01-01
When individuals are asked to reproduce intervals of stimuli that are intermixedly presented at various times, longer intervals are often underestimated and shorter intervals overestimated. This phenomenon may be attributed to the central tendency of time perception, and suggests that our brain optimally encodes a stimulus interval based on current stimulus input and prior knowledge of the distribution of stimulus intervals. Two distinct systems are thought to be recruited in the perception of sub- and supra-second intervals. Sub-second timing is subject to local sensory processing, whereas supra-second timing depends on more centralized mechanisms. To clarify the factors that influence time perception, the present study investigated how both sensory modality and timescale affect the central tendency. In Experiment 1, participants were asked to reproduce sub- or supra-second intervals, defined by visual or auditory stimuli. In the sub-second range, the magnitude of the central tendency was significantly larger for visual intervals compared to auditory intervals, while visual and auditory intervals exhibited a correlated and comparable central tendency in the supra-second range. In Experiment 2, the ability to discriminate sub-second intervals in the reproduction task was controlled across modalities by using an interval discrimination task. Even when the ability to discriminate intervals was controlled, visual intervals exhibited a larger central tendency than auditory intervals in the sub-second range. In addition, the magnitude of the central tendency for visual and auditory sub-second intervals was significantly correlated. These results suggest that a common modality-independent mechanism is responsible for the supra-second central tendency, and that both the modality-dependent and modality-independent components of the timing system contribute to the central tendency in the sub-second range.
Choudhuri, Indrajit; MacCarter, Dean; Shaw, Rachael; Anderson, Steve; St Cyr, John; Niazi, Imran
2014-11-01
One-third of eligible patients fail to respond to cardiac resynchronization therapy (CRT). Current methods to "optimize" the atrio-ventricular (A-V) interval are performed at rest, which may limit its efficacy during daily activities. We hypothesized that low-intensity cardiopulmonary exercise testing (CPX) could identify the most favorable physiologic combination of specific gas exchange parameters reflecting pulmonary blood flow or cardiac output, stroke volume, and left atrial pressure to guide determination of the optimal A-V interval. We assessed relative feasibility of determining the optimal A-V interval by three methods in 17 patients who underwent optimization of CRT: (1) resting echocardiographic optimization (the Ritter method), (2) resting electrical optimization (intrinsic A-V interval and QRS duration), and (3) during low-intensity, steady-state CPX. Five sequential, incremental A-V intervals were programmed in each method. Assessment of cardiopulmonary stability and potential influence on the CPX-based method were assessed. CPX and determination of a physiological optimal A-V interval was successfully completed in 94.1% of patients, slightly higher than the resting echo-based approach (88.2%). There was a wide variation in the optimal A-V delay determined by each method. There was no observed cardiopulmonary instability or impact of the implant procedure that affected determination of the CPX-based optimized A-V interval. Determining optimized A-V intervals by CPX is feasible. Proposed mechanisms explaining this finding and long-term impact require further study. ©2014 Wiley Periodicals, Inc.
Abe, Toshikazu; Tokuda, Yasuharu; Cook, E Francis
2011-01-01
Optimal acceptable time intervals from collapse to bystander cardiopulmonary resuscitation (CPR) for neurologically favorable outcome among adults with witnessed out-of-hospital cardiopulmonary arrest (CPA) have been unclear. Our aim was to assess the optimal acceptable thresholds of the time intervals of CPR for neurologically favorable outcome and survival using a recursive partitioning model. From January 1, 2005 through December 31, 2009, we conducted a prospective population-based observational study across Japan involving consecutive out-of-hospital CPA patients (N = 69,648) who received a witnessed bystander CPR. Of 69,648 patients, 34,605 were assigned to the derivation data set and 35,043 to the validation data set. Time factors associated with better outcomes: the better outcomes were survival and neurologically favorable outcome at one month, defined as category one (good cerebral performance) or two (moderate cerebral disability) of the cerebral performance categories. Based on the recursive partitioning model from the derivation dataset (n = 34,605) to predict the neurologically favorable outcome at one month, 5 min threshold was the acceptable time interval from collapse to CPR initiation; 11 min from collapse to ambulance arrival; 18 min from collapse to return of spontaneous circulation (ROSC); and 19 min from collapse to hospital arrival. Among the validation dataset (n = 35,043), 209/2,292 (9.1%) in all patients with the acceptable time intervals and 1,388/2,706 (52.1%) in the subgroup with the acceptable time intervals and pre-hospital ROSC showed neurologically favorable outcome. Initiation of CPR should be within 5 min for obtaining neurologically favorable outcome among adults with witnessed out-of-hospital CPA. Patients with the acceptable time intervals of bystander CPR and pre-hospital ROSC within 18 min could have 50% chance of neurologically favorable outcome.
Optimal estimation of suspended-sediment concentrations in streams
Holtschlag, D.J.
2001-01-01
Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azunre, P.
Here in this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring solving auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical examplemore » of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.« less
Practical synchronization on complex dynamical networks via optimal pinning control
NASA Astrophysics Data System (ADS)
Li, Kezan; Sun, Weigang; Small, Michael; Fu, Xinchu
2015-07-01
We consider practical synchronization on complex dynamical networks under linear feedback control designed by optimal control theory. The control goal is to minimize global synchronization error and control strength over a given finite time interval, and synchronization error at terminal time. By utilizing the Pontryagin's minimum principle, and based on a general complex dynamical network, we obtain an optimal system to achieve the control goal. The result is verified by performing some numerical simulations on Star networks, Watts-Strogatz networks, and Barabási-Albert networks. Moreover, by combining optimal control and traditional pinning control, we propose an optimal pinning control strategy which depends on the network's topological structure. Obtained results show that optimal pinning control is very effective for synchronization control in real applications.
Optimal number of stimulation contacts for coordinated reset neuromodulation
Lysyansky, Borys; Popovych, Oleksandr V.; Tass, Peter A.
2013-01-01
In this computational study we investigate coordinated reset (CR) neuromodulation designed for an effective control of synchronization by multi-site stimulation of neuronal target populations. This method was suggested to effectively counteract pathological neuronal synchrony characteristic for several neurological disorders. We study how many stimulation sites are required for optimal CR-induced desynchronization. We found that a moderate increase of the number of stimulation sites may significantly prolong the post-stimulation desynchronized transient after the stimulation is completely switched off. This can, in turn, reduce the amount of the administered stimulation current for the intermittent ON–OFF CR stimulation protocol, where time intervals with stimulation ON are recurrently followed by time intervals with stimulation OFF. In addition, we found that the optimal number of stimulation sites essentially depends on how strongly the administered current decays within the neuronal tissue with increasing distance from the stimulation site. In particular, for a broad spatial stimulation profile, i.e., for a weak spatial decay rate of the stimulation current, CR stimulation can optimally be delivered via a small number of stimulation sites. Our findings may contribute to an optimization of therapeutic applications of CR neuromodulation. PMID:23885239
Bae, Jong-Myon; Shin, Sang Yop; Kim, Eun Hee
2015-01-01
Purpose This retrospective cohort study was conducted to estimate the optimal interval for gastric cancer screening in Korean adults with initial negative screening results. Materials and Methods This study consisted of voluntary Korean screenees aged 40 to 69 years who underwent subsequent screening gastroscopies after testing negative in the baseline screening performed between January 2007 and December 2011. A new case was defined as the presence of gastric cancer cells in biopsy specimens obtained upon gastroscopy. The follow-up periods were calculated during the months between the date of baseline screening gastroscopy and positive findings upon subsequent screenings, stratified by sex and age group. The mean sojourn time (MST) for determining the screening interval was estimated using the prevalence/incidence ratio. Results Of the 293,520 voluntary screenees for the gastric cancer screening program, 91,850 (31.29%) underwent subsequent screening gastroscopies between January 2007 and December 2011. The MSTs in men and women were 21.67 months (95% confidence intervals [CI], 17.64 to 26.88 months) and 15.14 months (95% CI, 9.44 to 25.85 months), respectively. Conclusion These findings suggest that the optimal interval for subsequent gastric screening in both men and women is 24 months, supporting the 2-year interval recommended by the nationwide gastric cancer screening program. PMID:25687874
Optimization of the Reconstruction Interval in Neurovascular 4D-CTA Imaging
Hoogenboom, T.C.H.; van Beurden, R.M.J.; van Teylingen, B.; Schenk, B.; Willems, P.W.A.
2012-01-01
Summary Time resolved whole brain CT angiography (4D-CTA) is a novel imaging technology providing information regarding blood flow. One of the factors that influence the diagnostic value of this examination is the temporal resolution, which is affected by the gantry rotation speed during acquisition and the reconstruction interval during post-processing. Post-processing determines the time spacing between two reconstructed volumes and, unlike rotation speed, does not affect radiation burden. The data sets of six patients who underwent a cranial 4D-CTA were used for this study. Raw data was acquired using a 320-slice scanner with a rotation speed of 2 Hz. The arterial to venous passage of an intravenous contrast bolus was captured during a 15 s continuous scan. The raw data was reconstructed using four different reconstruction-intervals: 0.2, 0.3, 0.5 and 1.0 s. The results were rated by two observers using a standardized score sheet. The appearance of each lesion was rated correctly in all readings. Scoring for quality of temporal resolution revealed a stepwise improvement from the 1.0 s interval to the 0.3 s interval, while no discernable improvement was noted between the 0.3 s and 0.2 s interval. An increase in temporal resolution may improve the diagnostic quality of cranial 4D-CTA. Using a rotation speed of 0.5 s, the optimal reconstruction interval appears to be 0.3 s, beyond which, changes can no longer be discerned. PMID:23217631
Liang, Xinshu; Gao, Yinan; Zhang, Xiaoying; Tian, Yongqiang; Zhang, Zhenxian; Gao, Lihong
2014-01-01
Inappropriate and excessive irrigation and fertilization have led to the predominant decline of crop yields, and water and fertilizer use efficiency in intensive vegetable production systems in China. For many vegetables, fertigation can be applied daily according to the actual water and nutrient requirement of crops. A greenhouse study was therefore conducted to investigate the effect of daily fertigation on migration of water and salt in soil, and root growth and fruit yield of cucumber. The treatments included conventional interval fertigation, optimal interval fertigation and optimal daily fertigation. Generally, although soil under the treatment optimal interval fertigation received much lower fertilizers than soil under conventional interval fertigation, the treatment optimal interval fertigation did not statistically decrease the economic yield and fruit nutrition quality of cucumber when compare to conventional interval fertigation. In addition, the treatment optimal interval fertigation effectively avoided inorganic nitrogen accumulation in soil and significantly (P<0.05) increased the partial factor productivity of applied nitrogen by 88% and 209% in the early-spring and autumn-winter seasons, respectively, when compared to conventional interval fertigation. Although soils under the treatments optimal interval fertigation and optimal daily fertigation received the same amount of fertilizers, the treatment optimal daily fertigation maintained the relatively stable water, electrical conductivity and mineral nitrogen levels in surface soils, promoted fine root (<1.5 mm diameter) growth of cucumber, and eventually increased cucumber economic yield by 6.2% and 8.3% and partial factor productivity of applied nitrogen by 55% and 75% in the early-spring and autumn-winter seasons, respectively, when compared to the treatment optimal interval fertigation. These results suggested that optimal daily fertigation is a beneficial practice for improving crop yield and the water and fertilizers use efficiency in solar greenhouse.
Liang, Xinshu; Gao, Yinan; Zhang, Xiaoying; Tian, Yongqiang; Zhang, Zhenxian; Gao, Lihong
2014-01-01
Inappropriate and excessive irrigation and fertilization have led to the predominant decline of crop yields, and water and fertilizer use efficiency in intensive vegetable production systems in China. For many vegetables, fertigation can be applied daily according to the actual water and nutrient requirement of crops. A greenhouse study was therefore conducted to investigate the effect of daily fertigation on migration of water and salt in soil, and root growth and fruit yield of cucumber. The treatments included conventional interval fertigation, optimal interval fertigation and optimal daily fertigation. Generally, although soil under the treatment optimal interval fertigation received much lower fertilizers than soil under conventional interval fertigation, the treatment optimal interval fertigation did not statistically decrease the economic yield and fruit nutrition quality of cucumber when compare to conventional interval fertigation. In addition, the treatment optimal interval fertigation effectively avoided inorganic nitrogen accumulation in soil and significantly (P<0.05) increased the partial factor productivity of applied nitrogen by 88% and 209% in the early-spring and autumn-winter seasons, respectively, when compared to conventional interval fertigation. Although soils under the treatments optimal interval fertigation and optimal daily fertigation received the same amount of fertilizers, the treatment optimal daily fertigation maintained the relatively stable water, electrical conductivity and mineral nitrogen levels in surface soils, promoted fine root (<1.5 mm diameter) growth of cucumber, and eventually increased cucumber economic yield by 6.2% and 8.3% and partial factor productivity of applied nitrogen by 55% and 75% in the early-spring and autumn-winter seasons, respectively, when compared to the treatment optimal interval fertigation. These results suggested that optimal daily fertigation is a beneficial practice for improving crop yield and the water and fertilizers use efficiency in solar greenhouse. PMID:24475204
Piecewise linear approximation for hereditary control problems
NASA Technical Reports Server (NTRS)
Propst, Georg
1987-01-01
Finite dimensional approximations are presented for linear retarded functional differential equations by use of discontinuous piecewise linear functions. The approximation scheme is applied to optimal control problems when a quadratic cost integral has to be minimized subject to the controlled retarded system. It is shown that the approximate optimal feedback operators converge to the true ones both in case the cost integral ranges over a finite time interval as well as in the case it ranges over an infinite time interval. The arguments in the latter case rely on the fact that the piecewise linear approximations to stable systems are stable in a uniform sense. This feature is established using a vector-component stability criterion in the state space R(n) x L(2) and the favorable eigenvalue behavior of the piecewise linear approximations.
NASA Astrophysics Data System (ADS)
Park, Ju H.; Kwon, O. M.
In the letter, the global asymptotic stability of bidirectional associative memory (BAM) neural networks with delays is investigated. The delay is assumed to be time-varying and belongs to a given interval. A novel stability criterion for the stability is presented based on the Lyapunov method. The criterion is represented in terms of linear matrix inequality (LMI), which can be solved easily by various optimization algorithms. Two numerical examples are illustrated to show the effectiveness of our new result.
Kim, H J; Kwon, S B; Whang, K U; Lee, J S; Park, Y L; Lee, S Y
2018-02-01
Hyaluronidase injection is a commonly performed treatment for overcorrection or misplacement of hyaluronic acid (HA) filler. Many patients often wants the HA filler reinjection after the use of hyaluronidase, though the optimal timing of reinjection of HA filler still remains unknown. To provide the optimal time interval between hyaluronidase injections and HA filler reinjections. 6 Sprague-Dawley rats were injected with single monophasic HA filler. 1 week after injection, the injected sites were treated with hyaluronidase. Then, HA fillers were reinjected sequentially with differing time intervals from 30 minutes to 14 days. 1 hour after the reinjection of the last HA filler, all injection sites were excised for histologic evaluation. 3 hours after reinjection of HA filler, the appearance of filler material became evident again, retaining its shape and volume. 6 hours after reinjection, the filler materials restored almost its original volume and there were no significant differences from the positive control. Our data suggest that the hyaluronidase loses its effect in dermis and subcutaneous tissue within 3-6 hours after the injection and successful engraftment of reinjected HA filler can be accomplished 6 hours after the injection.
Li, Zukui; Ding, Ran; Floudas, Christodoulos A.
2011-01-01
Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263
NASA Astrophysics Data System (ADS)
Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza
2018-03-01
This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and utilizing results of a calibrated numerical water quality simulation model. With reference to the value of information theory, water quality of every checkpoint with a specific prior probability differs in time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using the Baye's theorem, and VOI of the samples is calculated. In the next step, the stations with maximum VOI is selected as optimal stations. This process is repeated for each sampling interval to obtain optimal monitoring network locations for each interval. The results of the proposed VOI-based methodology is compared with those obtained using an entropy theoretic approach. As the results of the two methodologies would be partially different, in the next step, the results are combined using a weighting method. Finally, the optimal sampling interval and location of WQM stations are chosen using the Evidential Reasoning (ER) decision making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data of the Karkheh Reservoir in the southwestern part of Iran.
The economics and timing of preoperative antibiotics for orthopaedic procedures.
Norman, B A; Bartsch, S M; Duggan, A P; Rodrigues, M B; Stuckey, D R; Chen, A F; Lee, B Y
2013-12-01
The efficacy of antibiotics in preventing surgical site infections (SSIs) depends on the timing of administration relative to the start of surgery. However, currently, both the timing of and recommendations for administration vary substantially. To determine how the economic value from the hospital perspective of preoperative antibiotics varies with the timing of administration for orthopaedic procedures. Computational decision and operational models were developed from the hospital perspective. Baseline analyses looked at current timing of administration, while additional analyses varied the timing of administration, compliance with recommended guidelines, and the goal time-interval. Beginning antibiotic administration within 0-30 min prior to surgery resulted in the lowest costs and SSIs. Operationally, linking to a pre-surgical activity, administering antibiotics prior to incision but after anaesthesia-ready time was optimal, as 92.1% of the time, antibiotics were administered in the optimal time-interval (0-30 min prior to incision). Improving administration compliance from 80% to 90% for this pre-surgical activity results in cost savings of $447 per year for a hospital performing 100 orthopaedic operations a year. This study quantifies the potential cost-savings when antibiotic administration timing is improved, which in turn can guide the amount hospitals should invest to address this issue.
Analysis of an inventory model for both linearly decreasing demand and holding cost
NASA Astrophysics Data System (ADS)
Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.
2016-03-01
This study proposes the analysis of an inventory model for linearly decreasing demand and holding cost for non-instantaneous deteriorating items. The inventory model focuses on commodities having linearly decreasing demand without shortages. The holding cost doesn't remain uniform with time due to any form of variation in the time value of money. Here we consider that the holding cost decreases with respect to time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is pointed up through a numerical example. It also includes the sensitivity analysis.
On the Parameterized Complexity of Some Optimization Problems Related to Multiple-Interval Graphs
NASA Astrophysics Data System (ADS)
Jiang, Minghui
We show that for any constant t ≥ 2, K -Independent Set and K-Dominating Set in t-track interval graphs are W[1]-hard. This settles an open question recently raised by Fellows, Hermelin, Rosamond, and Vialette. We also give an FPT algorithm for K-Clique in t-interval graphs, parameterized by both k and t, with running time max { t O(k), 2 O(klogk) } ·poly(n), where n is the number of vertices in the graph. This slightly improves the previous FPT algorithm by Fellows, Hermelin, Rosamond, and Vialette. Finally, we use the W[1]-hardness of K-Independent Set in t-track interval graphs to obtain the first parameterized intractability result for a recent bioinformatics problem called Maximal Strip Recovery (MSR). We show that MSR-d is W[1]-hard for any constant d ≥ 4 when the parameter is either the total length of the strips, or the total number of adjacencies in the strips, or the number of strips in the optimal solution.
Azunre, P.
2016-09-21
Here in this paper, two novel techniques for bounding the solutions of parametric weakly coupled second-order semilinear parabolic partial differential equations are developed. The first provides a theorem to construct interval bounds, while the second provides a theorem to construct lower bounds convex and upper bounds concave in the parameter. The convex/concave bounds can be significantly tighter than the interval bounds because of the wrapping effect suffered by interval analysis in dynamical systems. Both types of bounds are computationally cheap to construct, requiring solving auxiliary systems twice and four times larger than the original system, respectively. An illustrative numerical examplemore » of bound construction and use for deterministic global optimization within a simple serial branch-and-bound algorithm, implemented numerically using interval arithmetic and a generalization of McCormick's relaxation technique, is presented. Finally, problems within the important class of reaction-diffusion systems may be optimized with these tools.« less
A parallel optimization method for product configuration and supplier selection based on interval
NASA Astrophysics Data System (ADS)
Zheng, Jian; Zhang, Meng; Li, Guoxi
2017-06-01
In the process of design and manufacturing, product configuration is an important way of product development, and supplier selection is an essential component of supply chain management. To reduce the risk of procurement and maximize the profits of enterprises, this study proposes to combine the product configuration and supplier selection, and express the multiple uncertainties as interval numbers. An integrated optimization model of interval product configuration and supplier selection was established, and NSGA-II was put forward to locate the Pareto-optimal solutions to the interval multiobjective optimization model.
Luo, Yuan; Szolovits, Peter
2016-01-01
In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing on narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem into the interval query problem, for which optimal query/update time is in general logarithm. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when being applied to a collection of 13 query types from Allen's interval algebra. We then study two closely related state-of-the-art interval query algorithms, proposed query reformulations, and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic time stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen's relations in logarithmic time, attaining the theoretic lower bound. Updating time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions.
Luo, Yuan; Szolovits, Peter
2016-01-01
In natural language processing, stand-off annotation uses the starting and ending positions of an annotation to anchor it to the text and stores the annotation content separately from the text. We address the fundamental problem of efficiently storing stand-off annotations when applying natural language processing on narrative clinical notes in electronic medical records (EMRs) and efficiently retrieving such annotations that satisfy position constraints. Efficient storage and retrieval of stand-off annotations can facilitate tasks such as mapping unstructured text to electronic medical record ontologies. We first formulate this problem into the interval query problem, for which optimal query/update time is in general logarithm. We next perform a tight time complexity analysis on the basic interval tree query algorithm and show its nonoptimality when being applied to a collection of 13 query types from Allen’s interval algebra. We then study two closely related state-of-the-art interval query algorithms, proposed query reformulations, and augmentations to the second algorithm. Our proposed algorithm achieves logarithmic time stabbing-max query time complexity and solves the stabbing-interval query tasks on all of Allen’s relations in logarithmic time, attaining the theoretic lower bound. Updating time is kept logarithmic and the space requirement is kept linear at the same time. We also discuss interval management in external memory models and higher dimensions. PMID:27478379
2017-01-05
AFRL-AFOSR-JP-TR-2017-0002 Advanced Computational Methods for Optimization of Non-Periodic Inspection Intervals for Aging Infrastructure Manabu...Computational Methods for Optimization of Non-Periodic Inspection Intervals for Aging Infrastructure 5a. CONTRACT NUMBER 5b. GRANT NUMBER FA2386...UNLIMITED: PB Public Release 13. SUPPLEMENTARY NOTES 14. ABSTRACT This report for the project titled ’Advanced Computational Methods for Optimization of
Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah
Tian, Le; Latré, Steven
2017-01-01
IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT), and it first estimates the packet transmission interval of each station only based on packet transmission information obtained by access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can highly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. This paper contributes with a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major leap towards applying RAW mechanism in real-life IoT networks. PMID:28677617
Real-Time Station Grouping under Dynamic Traffic for IEEE 802.11ah.
Tian, Le; Khorov, Evgeny; Latré, Steven; Famaey, Jeroen
2017-07-04
IEEE 802.11ah, marketed as Wi-Fi HaLow, extends Wi-Fi to the sub-1 GHz spectrum. Through a number of physical layer (PHY) and media access control (MAC) optimizations, it aims to bring greatly increased range, energy-efficiency, and scalability. This makes 802.11ah the perfect candidate for providing connectivity to Internet of Things (IoT) devices. One of these new features, referred to as the Restricted Access Window (RAW), focuses on improving scalability in highly dense deployments. RAW divides stations into groups and reduces contention and collisions by only allowing channel access to one group at a time. However, the standard does not dictate how to determine the optimal RAW grouping parameters. The optimal parameters depend on the current network conditions, and it has been shown that incorrect configuration severely impacts throughput, latency and energy efficiency. In this paper, we propose a traffic-adaptive RAW optimization algorithm (TAROA) to adapt the RAW parameters in real time based on the current traffic conditions, optimized for sensor networks in which each sensor transmits packets with a certain (predictable) frequency and may change the transmission frequency over time. The TAROA algorithm is executed at each target beacon transmission time (TBTT), and it first estimates the packet transmission interval of each station only based on packet transmission information obtained by access point (AP) during the last beacon interval. Then, TAROA determines the RAW parameters and assigns stations to RAW slots based on this estimated transmission frequency. The simulation results show that, compared to enhanced distributed channel access/distributed coordination function (EDCA/DCF), the TAROA algorithm can highly improve the performance of IEEE 802.11ah dense networks in terms of throughput, especially when hidden nodes exist, although it does not always achieve better latency performance. This paper contributes with a practical approach to optimizing RAW grouping under dynamic traffic in real time, which is a major leap towards applying RAW mechanism in real-life IoT networks.
Babulal, Ganesh M; Addison, Aaron; Ghoshal, Nupur; Stout, Sarah H; Vernon, Elizabeth K; Sellan, Mark; Roe, Catherine M
2016-01-01
Background : The number of older adults in the United States will double by 2056. Additionally, the number of licensed drivers will increase along with extended driving-life expectancy. Motor vehicle crashes are a leading cause of injury and death in older adults. Alzheimer's disease (AD) also negatively impacts driving ability and increases crash risk. Conventional methods to evaluate driving ability are limited in predicting decline among older adults. Innovations in GPS hardware and software can monitor driving behavior in the actual environments people drive in. Commercial off-the-shelf (COTS) devices are affordable, easy to install and capture large volumes of data in real-time. However, adapting these methodologies for research can be challenging. This study sought to adapt a COTS device and determine an interval that produced accurate data on the actual route driven for use in future studies involving older adults with and without AD. Methods : Three subjects drove a single course in different vehicles at different intervals (30, 60 and 120 seconds), at different times of day, morning (9:00-11:59AM), afternoon (2:00-5:00PM) and night (7:00-10pm). The nine datasets were examined to determine the optimal collection interval. Results : Compared to the 120-second and 60-second intervals, the 30-second interval was optimal in capturing the actual route driven along with the lowest number of incorrect paths and affordability weighing considerations for data storage and curation. Discussion : Use of COTS devices offers minimal installation efforts, unobtrusive monitoring and discreet data extraction. However, these devices require strict protocols and controlled testing for adoption into research paradigms. After reliability and validity testing, these devices may provide valuable insight into daily driving behaviors and intraindividual change over time for populations of older adults with and without AD. Data can be aggregated over time to look at changes or adverse events and ascertain if decline in performance is occurring.
NASA Astrophysics Data System (ADS)
Siswanto, A.; Kurniati, N.
2018-04-01
An oil and gas company has 2,268 oil and gas wells. Well Barrier Element (WBE) is installed in a well to protect human, prevent asset damage and minimize harm to the environment. The primary WBE component is Surface Controlled Subsurface Safety Valve (SCSSV). The secondary WBE component is Christmas Tree Valves that consist of four valves i.e. Lower Master Valve (LMV), Upper Master Valve (UMV), Swab Valve (SV) and Wing Valve (WV). Current practice on WBE Preventive Maintenance (PM) program is conducted by considering the suggested schedule as stated on manual. Corrective Maintenance (CM) program is conducted when the component fails unexpectedly. Both PM and CM need cost and may cause production loss. This paper attempts to analyze the failure data and reliability based on historical data. Optimal PM interval is determined in order to minimize the total cost of maintenance per unit time. The optimal PM interval for SCSSV is 730 days, LMV is 985 days, UMV is 910 days, SV is 900 days and WV is 780 days. In average of all components, the cost reduction by implementing the suggested interval is 52%, while the reliability is improved by 4% and the availability is increased by 5%.
Shalev, Varda; Rogowski, Ori; Shimron, Orit; Sheinberg, Bracha; Shapira, Itzhak; Seligsohn, Uri; Berliner, Shlomo; Misgav, Mudi
2007-01-01
The incidence of stroke in patients with atrial fibrillation (AF) can be significantly reduced with warfarin therapy especially if optimally controlled. To evaluate the effect of the interval between consecutive prothrombin time measurements on the time in therapeutic range (INR 2-3) in a cohort of patients with AF on chronic warfarin treatment in the community. All INR measurements available from a relatively large cohort of patients with chronic AF were reviewed and the mean interval between consecutive INR tests of each patient was correlated with the time in therapeutic range (TTR). Altogether 251,916 INR measurements performed in 4408 patients over a period of seven years were reviewed. Sixty percent of patients had their INR measured on average every 2 to 3 weeks and most others were followed at intervals of 4 weeks or longer. A small proportion (3.6%) had their INR measured on average every week. A significant decline in the time in therapeutic range was observed as the intervals between tests increased. At one to three weeks interval the TTR was 48%, at 4 weeks interval 45% and at 5 weeks 41% (P<0.0005). A five percent increment in TTR was observed if more tests were performed at multiplications of exactly 7 days (43% vs 48% P<0.0001). A better control with an increase in the TTR was observed in patients with atrial fibrillation if prothrombin time tests are performed at regular intervals of no longer than 3 weeks.
Wang, Rong; Cheng, Nan; Xiao, Cang-Song; Wu, Yang; Sai, Xiao-Yong; Gong, Zhi-Yun; Wang, Yao; Gao, Chang-Qing
2017-01-01
Background: The optimal timing of surgical revascularization for patients presenting with ST-segment elevation myocardial infarction (STEMI) and impaired left ventricular function is not well established. This study aimed to examine the timing of surgical revascularization after STEMI in patients with ischemic heart disease and left ventricular dysfunction (LVD) by comparing early and late results. Methods: From January 2003 to December 2013, there were 2276 patients undergoing isolated coronary artery bypass grafting (CABG) in our institution. Two hundred and sixty-four (223 male, 41 females) patients with a history of STEMI and LVD were divided into early revascularization (ER, <3 weeks), mid-term revascularization (MR, 3 weeks to 3 months), and late revascularization (LR, >3 months) groups according to the time interval from STEMI to CABG. Mortality and complication rates were compared among the groups by Fisher's exact test. Cox regression analyses were performed to examine the effect of the time interval of surgery on long-term survival. Results: No significant differences in 30-day mortality, long-term survival, freedom from all-cause death, and rehospitalization for heart failure existed among the groups (P > 0.05). More patients in the ER group (12.90%) had low cardiac output syndrome than those in the MR (2.89%) and LR (3.05%) groups (P = 0.035). The mean follow-up times were 46.72 ± 30.65, 48.70 ± 32.74, and 43.75 ± 32.43 months, respectively (P = 0.716). Cox regression analyses showed a severe preoperative condition (odds ratio = 7.13, 95% confidence interval 2.05–24.74, P = 0.002) rather than the time interval of CABG (P > 0.05) after myocardial infarction was a risk factor of long-term survival. Conclusions: Surgical revascularization for patients with STEMI and LVD can be performed at different times after STEMI with comparable operative mortality and long-term survival. However, ER (<3 weeks) has a higher incidence of postoperative low cardiac output syndrome. A severe preoperative condition rather than the time interval of CABG after STEMI is a risk factor of long-term survival. PMID:28218210
NASA Technical Reports Server (NTRS)
Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.
2009-01-01
.We study local-in-time adjoint-based methods for minimization of ow matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making the time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the ow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.
Quantum interference of position and momentum: A particle propagation paradox
NASA Astrophysics Data System (ADS)
Hofmann, Holger F.
2017-08-01
Optimal simultaneous control of position and momentum can be achieved by maximizing the probabilities of finding their experimentally observed values within two well-defined intervals. The assumption that particles move along straight lines in free space can then be tested by deriving a lower limit for the probability of finding the particle in a corresponding spatial interval at any intermediate time t . Here, it is shown that this lower limit can be violated by quantum superpositions of states confined within the respective position and momentum intervals. These violations of the particle propagation inequality show that quantum mechanics changes the laws of motion at a fundamental level, providing a different perspective on causality relations and time evolution in quantum mechanics.
Modeling and quantification of repolarization feature dependency on heart rate.
Minchole, A; Zacur, E; Pueyo, E; Laguna, P
2014-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a non linear model including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques such as descent optimization methods can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where rate adaptation of QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, Tpe interval adapts faster to changes in heart rate than the QT interval. In this work an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed. The Tpe interval has been shown to be rate related and with a shorter memory lag than the QT interval.
Meza, James M; Hickey, Edward J; Blackstone, Eugene H; Jaquiss, Robert D B; Anderson, Brett R; Williams, William G; Cai, Sally; Van Arsdell, Glen S; Karamlou, Tara; McCrindle, Brian W
2017-10-31
In infants requiring 3-stage single-ventricle palliation for hypoplastic left heart syndrome, attrition after the Norwood procedure remains significant. The effect of the timing of stage 2 palliation (S2P), a physician-modifiable factor, on long-term survival is not well understood. We hypothesized that an optimal interval between the Norwood and S2P that both minimizes pre-S2P attrition and maximizes post-S2P survival exists and is associated with individual patient characteristics. The National Institutes of Health/National Heart, Lung, and Blood Institute Pediatric Heart Network Single Ventricle Reconstruction Trial public data set was used. Transplant-free survival (TFS) was modeled from (1) Norwood to S2P and (2) S2P to 3 years by using parametric hazard analysis. Factors associated with death or heart transplantation were determined for each interval. To account for staged procedures, risk-adjusted, 3-year, post-Norwood TFS (the probability of TFS at 3 years given survival to S2P) was calculated using parametric conditional survival analysis. TFS from the Norwood to S2P was first predicted. TFS after S2P to 3 years was then predicted and adjusted for attrition before S2P by multiplying by the estimate of TFS to S2P. The optimal timing of S2P was determined by generating nomograms of risk-adjusted, 3-year, post-Norwood, TFS versus the interval from the Norwood to S2P. Of 547 included patients, 399 survived to S2P (73%). Of the survivors to S2P, 349 (87%) survived to 3-year follow-up. The median interval from the Norwood to S2P was 5.1 (interquartile range, 4.1-6.0) months. The risk-adjusted, 3-year, TFS was 68±7%. A Norwood-S2P interval of 3 to 6 months was associated with greatest 3-year TFS overall and in patients with few risk factors. In patients with multiple risk factors, TFS was severely compromised, regardless of the timing of S2P and most severely when S2P was performed early. No difference in the optimal timing of S2P existed when stratified by shunt type. In infants with few risk factors, progressing to S2P at 3 to 6 months after the Norwood procedure was associated with maximal TFS. Early S2P did not rescue patients with greater risk factor burdens. Instead, referral for heart transplantation may offer their best chance at long-term survival. URL: https://www.clinicaltrials.gov. Unique identifier: NCT00115934. © 2017 American Heart Association, Inc.
Time-variant random interval natural frequency analysis of structures
NASA Astrophysics Data System (ADS)
Wu, Binhua; Wu, Di; Gao, Wei; Song, Chongmin
2018-02-01
This paper presents a new robust method namely, unified interval Chebyshev-based random perturbation method, to tackle hybrid random interval structural natural frequency problem. In the proposed approach, random perturbation method is implemented to furnish the statistical features (i.e., mean and standard deviation) and Chebyshev surrogate model strategy is incorporated to formulate the statistical information of natural frequency with regards to the interval inputs. The comprehensive analysis framework combines the superiority of both methods in a way that computational cost is dramatically reduced. This presented method is thus capable of investigating the day-to-day based time-variant natural frequency of structures accurately and efficiently under concrete intrinsic creep effect with probabilistic and interval uncertain variables. The extreme bounds of the mean and standard deviation of natural frequency are captured through the embedded optimization strategy within the analysis procedure. Three particularly motivated numerical examples with progressive relationship in perspective of both structure type and uncertainty variables are demonstrated to justify the computational applicability, accuracy and efficiency of the proposed method.
Finding Intervals of Abrupt Change in Earth Science Data
NASA Astrophysics Data System (ADS)
Zhou, X.; Shekhar, S.; Liess, S.
2011-12-01
In earth science data (e.g., climate data), it is often observed that a persistently abrupt change in value occurs in a certain time-period or spatial interval. For example, abrupt climate change is defined as an unusually large shift of precipitation, temperature, etc, that occurs during a relatively short time period. A similar pattern can also be found in geographical space, representing a sharp transition of the environment (e.g., vegetation between different ecological zones). Identifying such intervals of change from earth science datasets is a crucial step for understanding and attributing the underlying phenomenon. However, inconsistencies in these noisy datasets can obstruct the major change trend, and more importantly can complicate the search of the beginning and end points of the interval of change. Also, the large volume of data makes it challenging to process the dataset reasonably fast. In earth science data (e.g., climate data), it is often observed that a persistently abrupt change in value occurs in a certain time-period or spatial interval. For example, abrupt climate change is defined as an unusually large shift of precipitation, temperature, etc, that occurs during a relatively short time period. A similar change pattern can also be found in geographical space, representing a sharp transition of the environment (e.g., vegetation between different ecological zones). Identifying such intervals of change from earth science datasets is a crucial step for understanding and attributing the underlying phenomenon. However, inconsistencies in these noisy datasets can obstruct the major change trend, and more importantly can complicate the search of the beginning and end points of the interval of change. Also, the large volume of data makes it challenging to process the dataset fast. In this work, we analyze earth science data using a novel, automated data mining approach to identify spatial/temporal intervals of persistent, abrupt change. We first propose a statistical model to quantitatively evaluate the change abruptness and persistence in an interval. Then we design an algorithm to exhaustively examine all the intervals using this model. Intervals passing a threshold test will be kept as final results. We evaluate the proposed method with the Climate Research Unit (CRU) precipitation data, whereby we focus on the Sahel rainfall index. Results show that this method can find periods of persistent and abrupt value changes with different temporal scales. We also further optimize the algorithm using a smart strategy, which always examines longer intervals before its subsets. By doing this, we reduce the computational cost to only one third of that of the original algorithm for the above test case. More significantly, the optimized algorithm is also proven to scale up well with data volume and number of changes. Particularly, it achieves better performance when dealing with longer change intervals.
Patient-specific Distraction Regimen to Avoid Growth-rod Failure.
Agarwal, Aakash; Jayaswal, Arvind; Goel, Vijay K; Agarwal, Anand K
2018-02-15
A finite element study to establish the relationship between patient's curve flexibility (determined using curve correction under gravity) in juvenile idiopathic scoliosis and the required distraction frequency to avoid growth rod fracture, as a function of time. To perform a parametric analysis using a juvenile scoliotic spine model (single mid-thoracic curve with the apex at the eighth thoracic vertebra) and establish the relationship between curve flexibility (determined using curve correction under gravity) and the distraction interval that allows a higher factor of safety for the growth rods. Previous studies have shown that frequent distraction with smaller magnitude of distractions are less likely to result in rod failure. However there has not been any methodology or a chart provided to apply this knowledge on to the individual patients that undergo the treatment. This study aims to fill in that gap. The parametric study was performed by varying the material properties of the disc, hence altering the axial stiffness of the scoliotic spine model. The stresses on the rod were found to increase with increased axial stiffness of the spine, and this resulted in the increase of required optimal frequency to achieve a factor of safety of two for growth rods. A relationship between the percentage correction in Cobb's angle due to gravity alone, and the required distraction interval for limiting the maximum von Mises stress to 255 MPa on the growth rods was established. The distraction interval required to limit the stresses to the selected nominal value reduces with increase in stiffness of the spine. Furthermore, the appropriate distraction interval reduces for each model as the spine becomes stiffer with time (autofusion). This points to the fact the optimal distraction frequency is a time-dependent variable that must be achieved to keep the maximum von Mises stress under the specified factor of safety. The current study demonstrates the possibility of translating fundamental information from finite element modeling to the clinical arena, for mitigating the occurrence of growth rod fracture, that is, establishing a relationship between optimal distraction interval and curve flexibility (determined using curve correction under gravity). N/A.
Single step optimization of manipulator maneuvers with variable structure control
NASA Technical Reports Server (NTRS)
Chen, N.; Dwyer, T. A. W., III
1987-01-01
One step ahead optimization has been recently proposed for spacecraft attitude maneuvers as well as for robot manipulator maneuvers. Such a technique yields a discrete time control algorithm implementable as a sequence of state-dependent, quadratic programming problems for acceleration optimization. Its sensitivity to model accuracy, for the required inversion of the system dynamics, is shown in this paper to be alleviated by a fast variable structure control correction, acting between the sampling intervals of the slow one step ahead discrete time acceleration command generation algorithm. The slow and fast looping concept chosen follows that recently proposed for optimal aiming strategies with variable structure control. Accelerations required by the VSC correction are reserved during the slow one step ahead command generation so that the ability to overshoot the sliding surface is guaranteed.
NASA Astrophysics Data System (ADS)
Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin
2017-10-01
Constraints of the optimization objective are often unable to be met when predictive control is applied to industrial production process. Then, online predictive controller will not find a feasible solution or a global optimal solution. To solve this problem, based on Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model, nonlinear programming method is used to discuss the feasibility of constrained predictive control, feasibility decision theorem of the optimization objective is proposed, and the solution method of soft constraint slack variables is given when the optimization objective is not feasible. Based on this, for the interval control requirements of the controlled variables, the slack variables that have been solved are introduced, the adaptive weighted interval predictive control algorithm is proposed, achieving adaptive regulation of the optimization objective and automatically adjust of the infeasible interval range, expanding the scope of the feasible region, and ensuring the feasibility of the interval optimization objective. Finally, feasibility and effectiveness of the algorithm is validated through the simulation comparative experiments.
Optimal design of clinical trials with biologics using dose-time-response models.
Lange, Markus R; Schmidli, Heinz
2014-12-30
Biologics, in particular monoclonal antibodies, are important therapies in serious diseases such as cancer, psoriasis, multiple sclerosis, or rheumatoid arthritis. While most conventional drugs are given daily, the effect of monoclonal antibodies often lasts for months, and hence, these biologics require less frequent dosing. A good understanding of the time-changing effect of the biologic for different doses is needed to determine both an adequate dose and an appropriate time-interval between doses. Clinical trials provide data to estimate the dose-time-response relationship with semi-mechanistic nonlinear regression models. We investigate how to best choose the doses and corresponding sample size allocations in such clinical trials, so that the nonlinear dose-time-response model can be precisely estimated. We consider both local and conservative Bayesian D-optimality criteria for the design of clinical trials with biologics. For determining the optimal designs, computer-intensive numerical methods are needed, and we focus here on the particle swarm optimization algorithm. This metaheuristic optimizer has been successfully used in various areas but has only recently been applied in the optimal design context. The equivalence theorem is used to verify the optimality of the designs. The methodology is illustrated based on results from a clinical study in patients with gout, treated by a monoclonal antibody. Copyright © 2014 John Wiley & Sons, Ltd.
Sell, Rebecca E; Sarno, Renee; Lawrence, Brenna; Castillo, Edward M; Fisher, Roger; Brainard, Criss; Dunford, James V; Davis, Daniel P
2010-07-01
The three-phase model of ventricular fibrillation (VF) arrest suggests a period of compressions to "prime" the heart prior to defibrillation attempts. In addition, post-shock compressions may increase the likelihood of return of spontaneous circulation (ROSC). The optimal intervals for shock delivery following cessation of compressions (pre-shock interval) and resumption of compressions following a shock (post-shock interval) remain unclear. To define optimal pre- and post-defibrillation compression pauses for out-of-hospital cardiac arrest (OOHCA). All patients suffering OOHCA from VF were identified over a 1-month period. Defibrillator data were abstracted and analyzed using the combination of ECG, impedance, and audio recording. Receiver-operator curve (ROC) analysis was used to define the optimal pre- and post-shock compression intervals. Multiple logistic regression analysis was used to quantify the relationship between these intervals and ROSC. Covariates included cumulative number of defibrillation attempts, intubation status, and administration of epinephrine in the immediate pre-shock compression cycle. Cluster adjustment was performed due to the possibility of multiple defibrillation attempts for each patient. A total of 36 patients with 96 defibrillation attempts were included. The ROC analysis identified an optimal pre-shock interval of <3s and an optimal post-shock interval of <6s. Increased likelihood of ROSC was observed with a pre-shock interval <3s (adjusted OR 6.7, 95% CI 2.0-22.3, p=0.002) and a post-shock interval of <6s (adjusted OR 10.7, 95% CI 2.8-41.4, p=0.001). Likelihood of ROSC was substantially increased with the optimization of both pre- and post-shock intervals (adjusted OR 13.1, 95% CI 3.4-49.9, p<0.001). Decreasing pre- and post-shock compression intervals increases the likelihood of ROSC in OOHCA from VF.
Fouda, Usama M; Gad Allah, Sherine H; Elshaer, Hesham S
2016-07-01
To determine the optimal timing of vaginal misoprostol administration in nulliparous women undergoing office hysteroscopy. Randomized double-blind placebo-controlled study. University teaching hospital. One hundred twenty nulliparous patients were randomly allocated in a 1:1 ratio to the long-interval misoprostol group or the short-interval misoprostol group. In the long-interval misoprostol group, two misoprostol tablets (400 μg) and two placebo tablets were administered vaginally at 12 and 3 hours, respectively, before office hysteroscopy. In the short-interval misoprostol group, two placebo tablets and two misoprostol tablets (400 μg) were administered vaginally 12 and 3 hours, respectively, before office hysteroscopy. The severity of pain was assessed by the patients with the use of a 100-mm visual analog scale (VAS). The operators assessed the ease of the passage of the hysteroscope through the cervical canal with the use of a 100-mm VAS as well. Pain scores during the procedure were significantly lower in the long-interval misoprostol group (37.98 ± 13.13 vs. 51.98 ± 20.68). In contrast, the pain scores 30 minutes after the procedure were similar between the two groups (11.92 ± 7.22 vs. 13.3 ± 6.73). Moreover, the passage of the hysteroscope through the cervical canal was easier in the long-interval misoprostol group (48.9 ± 17.79 vs. 58.28 ± 21.85). Vaginal misoprostol administration 12 hours before office hysteroscopy was more effective than vaginal misoprostol administration 3 hours before office hysteroscopy in relieving pain experienced by nulliparous patients undergoing office hysteroscopy. NCT02316301. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Effect of different rest intervals after whole-body vibration on vertical jump performance.
Dabbs, Nicole C; Muñoz, Colleen X; Tran, Tai T; Brown, Lee E; Bottaro, Martim
2011-03-01
Whole-body vibration (WBV) may potentiate vertical jump (VJ) performance via augmented muscular strength and motor function. The purpose of this study was to evaluate the effect of different rest intervals after WBV on VJ performance. Thirty recreationally trained subjects (15 men and 15 women) volunteered to participate in 4 testing visits separated by 24 hours. Visit 1 acted as a familiarization visit where subjects were introduced to the VJ and WBV protocols. Visits 2-4 contained 2 randomized conditions per visit with a 10-minute rest period between conditions. The WBV was administered on a pivotal platform with a frequency of 30 Hz and an amplitude of 6.5 mm in 4 bouts of 30 seconds for a total of 2 minutes with 30 seconds of rest between bouts. During WBV, subjects performed a quarter squat every 5 seconds, simulating a countermovement jump (CMJ). Whole-body vibration was followed by 3 CMJs with 5 different rest intervals: immediate, 30 seconds, 1 minute, 2 minutes, or 4 minutes. For a control condition, subjects performed squats with no WBV. There were no significant (p > 0.05) differences in peak velocity or relative ground reaction force after WBV rest intervals. However, results of VJ height revealed that maximum values, regardless of rest interval (56.93 ± 13.98 cm), were significantly (p < 0.05) greater than the control condition (54.44 ± 13.74 cm). Therefore, subjects' VJ height potentiated at different times after WBV suggesting strong individual differences in optimal rest interval. Coaches may use WBV to enhance acute VJ performance but should first identify each individual's optimal rest time to maximize the potentiating effects.
Kumar, Anupam; Kumar, Vijay
2017-05-01
In this paper, a novel concept of an interval type-2 fractional order fuzzy PID (IT2FO-FPID) controller, which requires fractional order integrator and fractional order differentiator, is proposed. The incorporation of Takagi-Sugeno-Kang (TSK) type interval type-2 fuzzy logic controller (IT2FLC) with fractional controller of PID-type is investigated for time response measure due to both unit step response and unit load disturbance. The resulting IT2FO-FPID controller is examined on different delayed linear and nonlinear benchmark plants followed by robustness analysis. In order to design this controller, fractional order integrator-differentiator operators are considered as design variables including input-output scaling factors. A new hybridized algorithm named as artificial bee colony-genetic algorithm (ABC-GA) is used to optimize the parameters of the controller while minimizing weighted sum of integral of time absolute error (ITAE) and integral of square of control output (ISCO). To assess the comparative performance of the IT2FO-FPID, authors compared it against existing controllers, i.e., interval type-2 fuzzy PID (IT2-FPID), type-1 fractional order fuzzy PID (T1FO-FPID), type-1 fuzzy PID (T1-FPID), and conventional PID controllers. Furthermore, to show the effectiveness of the proposed controller, the perturbed processes along with the larger dead time are tested. Moreover, the proposed controllers are also implemented on multi input multi output (MIMO), coupled, and highly complex nonlinear two-link robot manipulator system in presence of un-modeled dynamics. Finally, the simulation results explicitly indicate that the performance of the proposed IT2FO-FPID controller is superior to its conventional counterparts in most of the cases. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
An improved 2D MoF method by using high order derivatives
NASA Astrophysics Data System (ADS)
Chen, Xiang; Zhang, Xiong
2017-11-01
The MoF (Moment of Fluid) method is one of the most accurate approaches among various interface reconstruction algorithms. Alike other second order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate interface, so an iteration process is inevitable under most circumstances. In order to solve the optimization efficiently, the properties of the objective function are worthy of studying. In 2D problems, the first order derivative has been deduced and applied in the previous researches. In this paper, the high order derivatives of the objective function are deduced on the convex polygon. We show that the nth (n ≥ 2) order derivatives are discontinuous, and the number of the discontinuous points is two times the number of the polygon edge. A rotation algorithm is proposed to successively calculate these discontinuous points, thus the target interval where the optimal solution is located can be determined. Since the high order derivatives of the objective function are continuous in the target interval, the iteration schemes based on high order derivatives can be used to improve the convergence rate. Moreover, when iterating in the target interval, the value of objective function and its derivatives can be directly updated without explicitly solving the volume conservation equation. The direct update makes a further improvement of the efficiency especially when the number of edges of the polygon is increasing. The Halley's method, which is based on the first three order derivatives, is applied as the iteration scheme in this paper and the numerical results indicate that the CPU time is about half of the previous method on the quadrilateral cell and is about one sixth on the decagon cell.
Bioinspired Concepts: Unified Theory for Complex Biological and Engineering Systems
2006-01-01
i.e., data flows of finite size arrive at the system randomly. For such a system , we propose a modified dual scheduling algorithm that stabilizes ...demon. We compute the efficiency of the controller over finite and infinite time intervals, and since the controller is optimal, this yields hard limits...and highly optimized tolerance. PNAS, 102, 2005. 51. G. N. Nair and R. J. Evans. Stabilizability of stochastic linear systems with finite feedback
Monitoring wastewater for assessing community health: Sewage Chemical-Information Mining (SCIM)
Timely assessment of the aggregate health of small-area human populations is essential for guiding the optimal investment of resources needed for preventing, avoiding, controlling, or mitigating exposure risks, as well as for maintaining or promoting health. Seeking those interve...
De Lara, Michel
2006-05-01
In their 1990 paper Optimal reproductive efforts and the timing of reproduction of annual plants in randomly varying environments, Amir and Cohen considered stochastic environments consisting of i.i.d. sequences in an optimal allocation discrete-time model. We suppose here that the sequence of environmental factors is more generally described by a Markov chain. Moreover, we discuss the connection between the time interval of the discrete-time dynamic model and the ability of the plant to rebuild completely its vegetative body (from reserves). We formulate a stochastic optimization problem covering the so-called linear and logarithmic fitness (corresponding to variation within and between years), which yields optimal strategies. For "linear maximizers'', we analyse how optimal strategies depend upon the environmental variability type: constant, random stationary, random i.i.d., random monotonous. We provide general patterns in terms of targets and thresholds, including both determinate and indeterminate growth. We also provide a partial result on the comparison between ;"linear maximizers'' and "log maximizers''. Numerical simulations are provided, allowing to give a hint at the effect of different mathematical assumptions.
An Efficient Interval Type-2 Fuzzy CMAC for Chaos Time-Series Prediction and Synchronization.
Lee, Ching-Hung; Chang, Feng-Yu; Lin, Chih-Min
2014-03-01
This paper aims to propose a more efficient control algorithm for chaos time-series prediction and synchronization. A novel type-2 fuzzy cerebellar model articulation controller (T2FCMAC) is proposed. In some special cases, this T2FCMAC can be reduced to an interval type-2 fuzzy neural network, a fuzzy neural network, and a fuzzy cerebellar model articulation controller (CMAC). So, this T2FCMAC is a more generalized network with better learning ability, thus, it is used for the chaos time-series prediction and synchronization. Moreover, this T2FCMAC realizes the un-normalized interval type-2 fuzzy logic system based on the structure of the CMAC. It can provide better capabilities for handling uncertainty and more design degree of freedom than traditional type-1 fuzzy CMAC. Unlike most of the interval type-2 fuzzy system, the type-reduction of T2FCMAC is bypassed due to the property of un-normalized interval type-2 fuzzy logic system. This causes T2FCMAC to have lower computational complexity and is more practical. For chaos time-series prediction and synchronization applications, the training architectures with corresponding convergence analyses and optimal learning rates based on Lyapunov stability approach are introduced. Finally, two illustrated examples are presented to demonstrate the performance of the proposed T2FCMAC.
Garg, Harish
2013-03-01
The main objective of the present paper is to propose a methodology for analyzing the behavior of the complex repairable industrial systems. In real-life situations, it is difficult to find the most optimal design policies for MTBF (mean time between failures), MTTR (mean time to repair) and related costs by utilizing available resources and uncertain data. For this, the availability-cost optimization model has been constructed for determining the optimal design parameters for improving the system design efficiency. The uncertainties in the data related to each component of the system are estimated with the help of fuzzy and statistical methodology in the form of the triangular fuzzy numbers. Using these data, the various reliability parameters, which affects the system performance, are obtained in the form of the fuzzy membership function by the proposed confidence interval based fuzzy Lambda-Tau (CIBFLT) methodology. The computed results by CIBFLT are compared with the existing fuzzy Lambda-Tau methodology. Sensitivity analysis on the system MTBF has also been addressed. The methodology has been illustrated through a case study of washing unit, the main part of the paper industry. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
Di Molfetta, A; Santini, L; Forleo, G B; Minni, V; Mafhouz, K; Della Rocca, D G; Fresiello, L; Romeo, F; Ferrari, G
2012-01-01
In spite of cardiac resynchronization therapy (CRT) benefits, 25-30% of patients are still non responders. One of the possible reasons could be the non optimal atrioventricular (AV) and interventricular (VV) intervals settings. Our aim was to exploit a numerical model of cardiovascular system for AV and VV intervals optimization in CRT. A numerical model of the cardiovascular system CRT-dedicated was previously developed. Echocardiographic parameters, Systemic aortic pressure and ECG were collected in 20 consecutive patients before and after CRT. Patient data were simulated by the model that was used to optimize and set into the device the intervals at the baseline and at the follow up. The optimal AV and VV intervals were chosen to optimize the simulated selected variable/s on the base of both echocardiographic and electrocardiographic parameters. Intervals were different for each patient and in most cases, they changed at follow up. The model can well reproduce clinical data as verified with Bland Altman analysis and T-test (p > 0.05). Left ventricular remodeling was 38.7% and left ventricular ejection fraction increasing was 11% against the 15% and 6% reported in literature, respectively. The developed numerical model could reproduce patients conditions at the baseline and at the follow up including the CRT effects. The model could be used to optimize AV and VV intervals at the baseline and at the follow up realizing a personalized and dynamic CRT. A patient tailored CRT could improve patients outcome in comparison to literature data.
The phase shift hypothesis for the circadian component of winter depression
Lewy, Alfred J.; Rough, Jennifer N.; Songer, Jeannine B.; Mishra, Neelam; Yuhas, Krista; Emens, Jonathan S.
2007-01-01
The finding that bright light can suppress melatonin production led to the study of two situations, indeed, models, of light deprivation: totally blind people and winterdepressives. The leading hypothesis for winter depression (seasonal affective disorder, or SAD) is the phase shift hypothesis (PSH). The PSH was recently established in a study in which SAD patients were given low-dose melatonin in the afternoon/evening to cause phase advances, or in the morning to cause phase delays, or placebo. The prototypical phase-delayed patient as well as the smaller subgroup of phase-advanced patients, optimally responded to melatonin given at the correct time. Symptom severity improved as circadian misalignment was corrected. Orcadian misalignment is best measured as the time interval between the dim light melatonin onset (DLMO) and mid-sleep. Using the operational definition of the plasma DLMO as the interpolated time when melatonin levels continuously rise above the threshold of 10 pglmL, the average interval between DLMO and mid-sleep in healthy controls is 6 hours, which is associated with optimal mood in SAD patients. PMID:17969866
NASA Astrophysics Data System (ADS)
Postnov, Sergey
2017-11-01
Two kinds of optimal control problem are investigated for linear time-invariant fractional-order systems with lumped parameters which dynamics described by equations with Hadamard-type derivative: the problem of control with minimal norm and the problem of control with minimal time at given restriction on control norm. The problem setting with nonlocal initial conditions studied. Admissible controls allowed to be the p-integrable functions (p > 1) at half-interval. The optimal control problem studied by moment method. The correctness and solvability conditions for the corresponding moment problem are derived. For several special cases the optimal control problems stated are solved analytically. Some analogies pointed for results obtained with the results which are known for integer-order systems and fractional-order systems describing by equations with Caputo- and Riemann-Liouville-type derivatives.
Poller, Wolfram C; Dreger, Henryk; Schwerg, Marius; Melzer, Christoph
2015-01-01
Optimization of the AV-interval (AVI) in DDD pacemakers improves cardiac hemodynamics and reduces pacemaker syndromes. Manual optimization is typically not performed in clinical routine. In the present study we analyze the prevalence of E/A wave fusion and A wave truncation under resting conditions in 160 patients with complete AV block (AVB) under the pre-programmed AVI. We manually optimized sub-optimal AVI. We analyzed 160 pacemaker patients with complete AVB, both in sinus rhythm (AV-sense; n = 129) and under atrial pacing (AV-pace; n = 31). Using Doppler analyses of the transmitral inflow we classified the nominal AVI as: a) normal, b) too long (E/A wave fusion) or c) too short (A wave truncation). In patients with a sub-optimal AVI, we performed manual optimization according to the recommendations of the American Society of Echocardiography. All AVB patients with atrial pacing exhibited a normal transmitral inflow under the nominal AV-pace intervals (100%). In contrast, 25 AVB patients in sinus rhythm showed E/A wave fusion under the pre-programmed AV-sense intervals (19.4%; 95% confidence interval (CI): 12.6-26.2%). A wave truncations were not observed in any patient. All patients with a complete E/A wave fusion achieved a normal transmitral inflow after AV-sense interval reduction (mean optimized AVI: 79.4 ± 13.6 ms). Given the rate of 19.4% (CI 12.6-26.2%) of patients with a too long nominal AV-sense interval, automatic algorithms may prove useful in improving cardiac hemodynamics, especially in the subgroup of atrially triggered pacemaker patients with AV node diseases.
SUMIYOSHI, Toshiaki; ENDO, Natsumi; TANAKA, Tomomi; KAMOMAE, Hideo
2017-01-01
Relaxation of the intravaginal part of the uterus is obvious around 6 to 18 h before ovulation, and this is considered the optimal time for artificial insemination (AI), as demonstrated in recent studies. Estrous signs have been suggested as useful criteria for determining the optimal time for AI. Therefore, this study evaluated the usefulness of estrous signs, particularly the relaxation of the intravaginal part of the uterus, as criteria for determining the optimal time for AI. A Total of 100 lactating Holstein-Friesian cows kept in tie-stall barns were investigated. AI was carried out based on the criterion for the optimal time for AI (optimal group), and earlier (early group) and later (late group) than the optimal time for AI, determined on the basis of estrous signs. After AI, ovulation was assessed by rectal palpation and ultrasonographic observation at 6-h intervals. For 87.5% (35/40) of cows in the optimal group, AI was carried out 24-6 h before ovulation, which was previously accepted as the optimal time for AI. AI was carried out earlier (early group) and later (late group) than optimal time for AI in 62.1% (18/29) and 71.0% (22/31) of cows, respectively. The conception rate for the optimal group was 60.0%, and this conception rate was higher than that for the early group (44.8%) and late group (32.2%), without significance. Further, the conception rate of the optimal group was significantly higher than the sum of the conception rates of the early and late groups (38.3%; 23/60) (P < 0.05). These results indicate that the criteria postulated, relaxation of the intravaginal part of the uterus and other estrous signs are useful in determining the optimal time for AI. Furthermore, these estrous signs enable the estimations of stages in the periovulatory period. PMID:29081451
Intertrial interval duration and learning in autistic children.
Koegel, R L; Dunlap, G; Dyer, K
1980-01-01
This study investigated the influence of intertrial interval duration on the performance of autistic children during teaching situations. The children were taught under the same conditions existing in their regular programs, except that the length of time between trials was systematically manipulated. With both multiple baseline and repeated reversal designs, two lengths of intertrial interval were employed; short intervals with the SD for any given trial presented approximately one second following the reinforcer for the previous trial versus long intervals with the SD presented four or more seconds following the reinforcer for the previous trial. The results showed that: (1) the short intertrial intervals always produced higher levels of correct responding than the long intervals; and (2) there were improving trends in performance and rapid acquisition with the short intertrial intervals, in contrast to minimal or no change with the long intervals. The results are discussed in terms of utilizing information about child and task characteristics in terms of selecting optimal intervals. The data suggest that manipulations made between trials have a large influence on autistic children's learning. PMID:7364701
A Decision-making Model for a Two-stage Production-delivery System in SCM Environment
NASA Astrophysics Data System (ADS)
Feng, Ding-Zhong; Yamashiro, Mitsuo
A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both supplier (of raw materials) and buyer (of finished products) sides. Then an optimal solution to the problem is derived on basis of the cost model. Using the proposed model and its optimal solution, one can determine optimal production lot size for each stage, optimal number of transportation for semi-finished goods, and optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. Also, we examine the sensitivity of raw materials ordering and production lot size to changes in ordering cost, transportation cost and manufacturing setup cost. A pragmatic computation approach for operational situations is proposed to solve integer approximation solution. Finally, we give some numerical examples.
Optimal time points sampling in pathway modelling.
Hu, Shiyan
2004-01-01
Modelling cellular dynamics based on experimental data is at the heart of system biology. Considerable progress has been made to dynamic pathway modelling as well as the related parameter estimation. However, few of them gives consideration for the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets and the experiments involved are usually time consuming and expensive. Therefore, to approximate parameters for models with only few available sampling data is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In the paper, we investigate an approach to guide the process of selecting time points in an optimal way to minimize the variance of parameter estimates. In the method, we first formulate the problem to a nonlinear constrained optimization problem by maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values and being stuck into local optimum as usually accompanied with the conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
A Pulsar Time Scale Based on Parkes Observations in 1995-2010
NASA Astrophysics Data System (ADS)
Rodin, A. E.; Fedorova, V. A.
2018-06-01
Timing of highly stable millisecond pulsars provides the possibility of independently verifying terrestrial time scales on intervals longer than a year. An ensemble pulsar time scale is constructed based on pulsar timing data obtained on the 64-m Parkes telescope (Australia) in 1995-2010. Optimal Wiener filters were applied to enhance the accuracy of the ensemble time scale. The run of the time-scale difference PTens-TT(BIPM2011) does not exceed 0.8 ± 0.4 μs over the entire studied time interval. The fractional instability of the difference PTens-TT(BIPM2011) over 15 years is σ z = (0.6 ± 1.6) × 10-15, which corresponds to an upper limit for the energy density of the gravitational-wave background Ω g h 2 10-10 and variations in the gravitational potential 10-15 Hz at the frequency 2 × 10-9 Hz.
Determining Optimal Machine Replacement Events with Periodic Inspection Intervals
2013-03-01
10 2.3 Remaining Useful Life Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2.4...has some idea of the characteristic reliability inherent to that system. From assembly lines, to computers, to aircraft, quantities such as mean time...to failure, mean time to critical failure, and others have been quantified to a great extent. Further, any entity concerned with cost will also have an
Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.
Ma, Yunbei; Zhou, Xiao-Hua
2017-02-01
For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and the difference between covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and the difference between each pair of covariate-specific treatment effect curve over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed method in a real-world data set.
Approximate dynamic programming for optimal stationary control with control-dependent noise.
Jiang, Yu; Jiang, Zhong-Ping
2011-12-01
This brief studies the stochastic optimal control problem via reinforcement learning and approximate/adaptive dynamic programming (ADP). A policy iteration algorithm is derived in the presence of both additive and multiplicative noise using Itô calculus. The expectation of the approximated cost matrix is guaranteed to converge to the solution of some algebraic Riccati equation that gives rise to the optimal cost value. Moreover, the covariance of the approximated cost matrix can be reduced by increasing the length of time interval between two consecutive iterations. Finally, a numerical example is given to illustrate the efficiency of the proposed ADP methodology.
Synchronic interval Gaussian mixed-integer programming for air quality management.
Cheng, Guanhui; Huang, Guohe Gordon; Dong, Cong
2015-12-15
To reveal the synchronism of interval uncertainties, the tradeoff between system optimality and security, the discreteness of facility-expansion options, the uncertainty of pollutant dispersion processes, and the seasonality of wind features in air quality management (AQM) systems, a synchronic interval Gaussian mixed-integer programming (SIGMIP) approach is proposed in this study. A robust interval Gaussian dispersion model is developed for approaching the pollutant dispersion process under interval uncertainties and seasonal variations. The reflection of synchronic effects of interval uncertainties in the programming objective is enabled through introducing interval functions. The proposition of constraint violation degrees helps quantify the tradeoff between system optimality and constraint violation under interval uncertainties. The overall optimality of system profits of an SIGMIP model is achieved based on the definition of an integrally optimal solution. Integer variables in the SIGMIP model are resolved by the existing cutting-plane method. Combining these efforts leads to an effective algorithm for the SIGMIP model. An application to an AQM problem in a region in Shandong Province, China, reveals that the proposed SIGMIP model can facilitate identifying the desired scheme for AQM. The enhancement of the robustness of optimization exercises may be helpful for increasing the reliability of suggested schemes for AQM under these complexities. The interrelated tradeoffs among control measures, emission sources, flow processes, receptors, influencing factors, and economic and environmental goals are effectively balanced. Interests of many stakeholders are reasonably coordinated. The harmony between economic development and air quality control is enabled. Results also indicate that the constraint violation degree is effective at reflecting the compromise relationship between constraint-violation risks and system optimality under interval uncertainties. This can help decision makers mitigate potential risks, e.g. insufficiency of pollutant treatment capabilities, exceedance of air quality standards, deficiency of pollution control fund, or imbalance of economic or environmental stress, in the process of guiding AQM. Copyright © 2015 Elsevier B.V. All rights reserved.
Early declaration of death by neurologic criteria results in greater organ donor potential.
Resnick, Shelby; Seamon, Mark J; Holena, Daniel; Pascual, Jose; Reilly, Patrick M; Martin, Niels D
2017-10-01
Aggressive management of patients prior to and after determination of death by neurologic criteria (DNC) is necessary to optimize organ recovery, transplantation, and increase the number of organs transplanted per donor (OTPD). The effects of time management are understudied but potentially pivotal component. The objective of this study was to analyze specific time points (time to DNC, time to procurement) and the time intervals between them to better characterize the optimal timeline of organ donation. Using data over a 5-year time period (2011-2015) from the largest US OPO, all patients with catastrophic brain injury and donated transplantable organs were retrospectively reviewed. Active smokers were excluded. Maximum donor potential was seven organs (heart, lungs [2], kidneys [2], liver, and pancreas). Time from admission to declaration of DNC and donation was calculated. Mean time points stratified by specific organ procurement rates and overall OTPD were compared using unpaired t-test. Of 1719 Declaration of Death by Neurologic Criteria organ donors, 381 were secondary to head trauma. Smokers and organs recovered but not transplanted were excluded leaving 297 patients. Males comprised 78.8%, the mean age was 36.0 (±16.8) years, and 87.6% were treated at a trauma center. Higher donor potential (>4 OTPD) was associated with shorter average times from admission to brain death; 66.6 versus 82.2 hours, P = 0.04. Lung donors were also associated with shorter average times from admission to brain death; 61.6 versus 83.6 hours, P = 0.004. The time interval from DNC to donation varied minimally among groups and did not affect donation rates. A shorter time interval between admission and declaration of DNC was associated with increased OTPD, especially lungs. Further research to identify what role timing plays in the management of the potential organ donor and how that relates to donor management goals is needed. Copyright © 2017 Elsevier Inc. All rights reserved.
Gherardini, Stefano
2018-01-01
The improvement of clotting factor concentrates (CFCs) has undergone an impressive boost during the last six years. Since 2010, several new recombinant factor (rF)VIII/IX concentrates entered phase I/II/III clinical trials. The improvements are related to the culture of human embryonic kidney (HEK) cells, post-translational glycosylation, PEGylation, and co-expression of the fragment crystallizable (Fc) region of immunoglobulin (Ig)G1 or albumin genes in the manufacturing procedures. The extended half-life (EHL) CFCs allow an increase of the interval between bolus administrations during prophylaxis, a very important advantage for patients with difficulties in venous access. Although the inhibitor risk has not been fully established, phase III studies have provided standard prophylaxis protocols, which, compared with on-demand treatment, have achieved very low annualized bleeding rates (ABRs). The key pharmacokinetics (PK) parameter to tailor patient therapy is clearance, which is more reliable than the half-life of CFCs; the clearance considers the decay rate of the drug concentration–time profile, while the half-life considers only the half concentration of the drug at a given time. To tailor the prophylaxis of hemophilia patients in real-life, we propose two formulae (expressed in terms of the clearance, trough and dose interval between prophylaxis), respectively based on the one- and two-compartmental models (CMs), for the prediction of the optimal single dose of EHL CFCs. Once the data from the time decay of the CFCs are fitted by the one- or two-CMs after an individual PK analysis, such formulae provide to the treater the optimal trade-off among trough and time-intervals between boluses. In this way, a sufficiently long time-interval between bolus administration could be guaranteed for a wider class of patients, with a preassigned level of the trough. Finally, a PK approach using repeated dosing is discussed, and some examples with new EHL CFCs are shown. PMID:29899890
Model Based Optimal Control, Estimation, and Validation of Lithium-Ion Batteries
NASA Astrophysics Data System (ADS)
Perez, Hector Eduardo
This dissertation focuses on developing and experimentally validating model based control techniques to enhance the operation of lithium ion batteries, safely. An overview of the contributions to address the challenges that arise are provided below. Chapter 1: This chapter provides an introduction to battery fundamentals, models, and control and estimation techniques. Additionally, it provides motivation for the contributions of this dissertation. Chapter 2: This chapter examines reference governor (RG) methods for satisfying state constraints in Li-ion batteries. Mathematically, these constraints are formulated from a first principles electrochemical model. Consequently, the constraints explicitly model specific degradation mechanisms, such as lithium plating, lithium depletion, and overheating. This contrasts with the present paradigm of limiting measured voltage, current, and/or temperature. The critical challenges, however, are that (i) the electrochemical states evolve according to a system of nonlinear partial differential equations, and (ii) the states are not physically measurable. Assuming available state and parameter estimates, this chapter develops RGs for electrochemical battery models. The results demonstrate how electrochemical model state information can be utilized to ensure safe operation, while simultaneously enhancing energy capacity, power, and charge speeds in Li-ion batteries. Chapter 3: Complex multi-partial differential equation (PDE) electrochemical battery models are characterized by parameters that are often difficult to measure or identify. This parametric uncertainty influences the state estimates of electrochemical model-based observers for applications such as state-of-charge (SOC) estimation. This chapter develops two sensitivity-based interval observers that map bounded parameter uncertainty to state estimation intervals, within the context of electrochemical PDE models and SOC estimation. Theoretically, this chapter extends the notion of interval observers to PDE models using a sensitivity-based approach. Practically, this chapter quantifies the sensitivity of battery state estimates to parameter variations, enabling robust battery management schemes. The effectiveness of the proposed sensitivity-based interval observers is verified via a numerical study for the range of uncertain parameters. Chapter 4: This chapter seeks to derive insight on battery charging control using electrochemistry models. Directly using full order complex multi-partial differential equation (PDE) electrochemical battery models is difficult and sometimes impossible to implement. This chapter develops an approach for obtaining optimal charge control schemes, while ensuring safety through constraint satisfaction. An optimal charge control problem is mathematically formulated via a coupled reduced order electrochemical-thermal model which conserves key electrochemical and thermal state information. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting nonlinear multi-state optimal control problem. Minimum time charge protocols are analyzed in detail subject to solid and electrolyte phase concentration constraints, as well as temperature constraints. The optimization scheme is examined using different input current bounds, and an insight on battery design for fast charging is provided. Experimental results are provided to compare the tradeoffs between an electrochemical-thermal model based optimal charge protocol and a traditional charge protocol. Chapter 5: Fast and safe charging protocols are crucial for enhancing the practicality of batteries, especially for mobile applications such as smartphones and electric vehicles. This chapter proposes an innovative approach to devising optimally health-conscious fast-safe charge protocols. A multi-objective optimal control problem is mathematically formulated via a coupled electro-thermal-aging battery model, where electrical and aging sub-models depend upon the core temperature captured by a two-state thermal sub-model. The Legendre-Gauss-Radau (LGR) pseudo-spectral method with adaptive multi-mesh-interval collocation is employed to solve the resulting highly nonlinear six-state optimal control problem. Charge time and health degradation are therefore optimally traded off, subject to both electrical and thermal constraints. Minimum-time, minimum-aging, and balanced charge scenarios are examined in detail. Sensitivities to the upper voltage bound, ambient temperature, and cooling convection resistance are investigated as well. Experimental results are provided to compare the tradeoffs between a balanced and traditional charge protocol. Chapter 6: This chapter provides concluding remarks on the findings of this dissertation and a discussion of future work.
Nie, Xianghui; Huang, Guo H; Li, Yongping
2009-11-01
This study integrates the concepts of interval numbers and fuzzy sets into optimization analysis by dynamic programming as a means of accounting for system uncertainty. The developed interval fuzzy robust dynamic programming (IFRDP) model improves upon previous interval dynamic programming methods. It allows highly uncertain information to be effectively communicated into the optimization process through introducing the concept of fuzzy boundary interval and providing an interval-parameter fuzzy robust programming method for an embedded linear programming problem. Consequently, robustness of the optimization process and solution can be enhanced. The modeling approach is applied to a hypothetical problem for the planning of waste-flow allocation and treatment/disposal facility expansion within a municipal solid waste (MSW) management system. Interval solutions for capacity expansion of waste management facilities and relevant waste-flow allocation are generated and interpreted to provide useful decision alternatives. The results indicate that robust and useful solutions can be obtained, and the proposed IFRDP approach is applicable to practical problems that are associated with highly complex and uncertain information.
Determination of the optimal atrioventricular interval in sick sinus syndrome during DDD pacing.
Kato, Masaya; Dote, Keigo; Sasaki, Shota; Goto, Kenji; Takemoto, Hiroaki; Habara, Seiji; Hasegawa, Daiji; Matsuda, Osamu
2005-09-01
Although the AAI pacing mode has been shown to be electromechanically superior to the DDD pacing mode in sick sinus syndrome (SSS), there is evidence suggesting that during AAI pacing the presence of natural ventricular activation pattern is not enough for hemodynamic benefit to occur. Myocardial performance index (MPI) is a simply measurable Doppler-derived index of combined systolic and diastolic myocardial performance. The aim of this study was to investigate whether AAI pacing mode is electromechanically superior to the DDD mode in patients with SSS by using Doppler-derived MPI. Thirty-nine SSS patients with dual-chamber pacing devices were evaluated by using Doppler echocardiography in AAI mode and DDD mode. The optimal atrioventricular (AV) interval in DDD mode was determined and atrial stimulus-R interval was measured in AAI mode. The ratio of the atrial stimulus-R interval to the optimal AV interval was defined as relative AV interval (rAVI) and the ratio of MPI in AAI mode to that in DDD mode was defined as relative MPI (rMPI). The rMPI was significantly correlated with atrial stimulus-R interval and rAVI (r = 0.57, P = 0.0002, and r = 0.67, P < 0.0001, respectively). A cutoff point of 1.73 for rAVI provided optimum sensitivity and specificity for rMPI >1 based on the receiver operator curves. Even though the intrinsic AV conduction is moderately prolonged, some SSS patients with dual-chamber pacing devices benefit from the ventricular pacing with optimal AV interval. MPI is useful to determine the optimal pacing mode in acute experiment.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
Optimization of antitumor treatment conditions for transcutaneous CO2 application: An in vivo study.
Ueha, Takeshi; Kawamoto, Teruya; Onishi, Yasuo; Harada, Risa; Minoda, Masaya; Toda, Mitsunori; Hara, Hitomi; Fukase, Naomasa; Kurosaka, Masahiro; Kuroda, Ryosuke; Akisue, Toshihiro; Sakai, Yoshitada
2017-06-01
Carbon dioxide (CO2) therapy can be applied to treat a variety of disorders. We previously found that transcutaneous application of CO2 with a hydrogel decreased the tumor volume of several types of tumors and induced apoptosis via the mitochondrial pathway. However, only one condition of treatment intensity has been tested. For widespread application in clinical antitumor therapy, the conditions must be optimized. In the present study, we investigated the relationship between the duration, frequency, and treatment interval of transcutaneous CO2 application and antitumor effects in murine xenograft models. Murine xenograft models of three types of human tumors (breast cancer, osteosarcoma, and malignant fibrous histiocytoma/undifferentiated pleomorphic sarcoma) were used to assess the antitumor effects of transcutaneous CO2 application of varying durations, frequencies, and treatment intervals. In all human tumor xenografts, apoptosis was significantly induced by CO2 treatment for ≥10 min, and a significant decrease in tumor volume was observed with CO2 treatments of >5 min. The effect on tumor volume was not dependent on the frequency of CO2 application, i.e., twice or five times per week. However, treatment using 3- and 4-day intervals was more effective at decreasing tumor volume than treatment using 2- and 5-day intervals. The optimal conditions of transcutaneous CO2 application to obtain the best antitumor effect in various tumors were as follows: greater than 10 min per application, twice per week, with 3- and 4-day intervals, and application to the site of the tumor. The results suggest that this novel transcutaneous CO2 application might be useful to treat primary tumors, while mitigating some side effects, and therefore could be safe for clinical trials.
QT-RR relationships and suitable QT correction formulas for halothane-anesthetized dogs.
Tabo, Mitsuyasu; Nakamura, Mikiko; Kimura, Kazuya; Ito, Shigeo
2006-10-01
Several QT correction (QTc) formulas have been used for assessing the QT liability of drugs. However, they are known to under- and over-correct the QT interval and tend to be specific to species and experimental conditions. The purpose of this study was to determine a suitable formula for halothane-anesthetized dogs highly sensitive to drug-induced QT interval prolongation. Twenty dogs were anesthetized with 1.5% halothane and the relationship between the QT and RR intervals were obtained by changing the heart rate under atrial pacing conditions. The QT interval was corrected for the RR interval by applying 4 published formulas (Bazett, Fridericia, Van de Water, and Matsunaga); Fridericia's formula (QTcF = QT/RR(0.33)) showed the least slope and lowest R(2) value for the linear regression of QTc intervals against RR intervals, indicating that it dissociated changes in heart rate most effectively. An optimized formula (QTcX = QT/RR(0.3879)) is defined by analysis of covariance and represents a correction algorithm superior to Fridericia's formula. For both Fridericia's and the optimized formula, QT-prolonging drugs (d,l-sotalol, astemizole) showed QTc interval prolongation. A non-QT-prolonging drug (d,l-propranolol) failed to prolong the QTc interval. In addition, drug-induced changes in QTcF and QTcX intervals were highly correlated with those of the QT interval paced at a cycle length of 500 msec. These findings suggest that Fridericia's and the optimized formula, although the optimized is a little bit better, are suitable for correcting the QT interval in halothane-anesthetized dogs and help to evaluate the potential QT prolongation of drugs with high accuracy.
Pant, Jeevan K; Krishnan, Sridhar
2016-07-01
A new signal reconstruction algorithm for compressive sensing based on the minimization of a pseudonorm which promotes block-sparse structure on the first-order difference of the signal is proposed. Involved optimization is carried out by using a sequential version of Fletcher-Reeves' conjugate-gradient algorithm, and the line search is based on Banach's fixed-point theorem. The algorithm is suitable for the reconstruction of foot gait signals which admit block-sparse structure on the first-order difference. An additional algorithm for the estimation of stride-interval, swing-interval, and stance-interval time series from the reconstructed foot gait signals is also proposed. This algorithm is based on finding zero crossing indices of the foot gait signal and using the resulting indices for the computation of time series. Extensive simulation results demonstrate that the proposed signal reconstruction algorithm yields improved signal-to-noise ratio and requires significantly reduced computational effort relative to several competing algorithms over a wide range of compression ratio. For a compression ratio in the range from 88% to 94%, the proposed algorithm is found to offer improved accuracy for the estimation of clinically relevant time-series parameters, namely, the mean value, variance, and spectral index of stride-interval, stance-interval, and swing-interval time series, relative to its nearest competitor algorithm. The improvement in performance for compression ratio as high as 94% indicates that the proposed algorithms would be useful for designing compressive sensing-based systems for long-term telemonitoring of human gait signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McJunkin, Timothy; Epiney, Aaron; Rabiti, Cristian
2017-06-01
This report provides a summary of the effort in the Nuclear-Renewable Hybrid Energy System (N-R HES) project on the level 4 milestone to consider integration of existing grid models into the factors for optimization on shorter time intervals than the existing electric grid models with the Risk Analysis Virtual Environment (RAVEN) and Modelica [1] optimizations and economic analysis that are the focus of the project to date.
Outcome of total knee replacement following explantation and cemented spacer therapy.
Ghanem, Mohamed; Zajonz, Dirk; Bollmann, Juliane; Geissler, Vanessa; Prietzel, Torsten; Moche, Michael; Roth, Andreas; Heyde, Christoph-E; Josten, Christoph
2016-01-01
Infection after total knee replacement (TKR) is one of the serious complications which must be pursued with a very effective therapeutic concept. In most cases this means revision arthroplasty, in which one-setting and two-setting procedures are distinguished. Healing of infection is the conditio sine qua non for re-implantation. This retrospective work presents an assessment of the success rate after a two-setting revision arthroplasty of the knee following periprosthetic infection. It further considers drawing conclusions concerning the optimal timing of re-implantation. A total of 34 patients have been enclosed in this study from September 2005 to December 2013. 35 re-implantations were carried out following explantation of total knee and implantation of cemented spacer. The patient's group comprised of 53% (18) males and 47% (16) females. The average age at re-implantation time was 72.2 years (ranging from 54 to 85 years). We particularly evaluated the microbial spectrum, the interval between explantation and re-implantation, the number of surgeries that were necessary prior to re-implantation as well as the postoperative course. We reported 31.4% (11) reinfections following re-implantation surgeries. The number of the reinfections declined with increasing time interval between explantation and re-implantation. Patients who developed reinfections were operated on (re-implantation) after an average of 4.47 months. Those patients with uncomplicated course were operated on (re-implantation) after an average of 6.79 months. Nevertheless, we noticed no essential differences in outcome with regard to the number of surgeries carried out prior to re-implantation. Mobile spacers proved better outcome than temporary arthrodesis with intramedullary fixation. No uniform strategy of treatment exists after peri-prosthetic infections. In particular, no optimal timing can be stated concerning re-implantation. Our data point out to the fact that a longer time interval between explantation and re-implantation reduces the rate of reinfection. From our point of view, the optimal timing for re-implantation depends on various specific factors and therefore it should be defined individually.
Outcome of total knee replacement following explantation and cemented spacer therapy
Ghanem, Mohamed; Zajonz, Dirk; Bollmann, Juliane; Geissler, Vanessa; Prietzel, Torsten; Moche, Michael; Roth, Andreas; Heyde, Christoph-E.; Josten, Christoph
2016-01-01
Background: Infection after total knee replacement (TKR) is one of the serious complications which must be pursued with a very effective therapeutic concept. In most cases this means revision arthroplasty, in which one-setting and two-setting procedures are distinguished. Healing of infection is the conditio sine qua non for re-implantation. This retrospective work presents an assessment of the success rate after a two-setting revision arthroplasty of the knee following periprosthetic infection. It further considers drawing conclusions concerning the optimal timing of re-implantation. Patients and methods: A total of 34 patients have been enclosed in this study from September 2005 to December 2013. 35 re-implantations were carried out following explantation of total knee and implantation of cemented spacer. The patient’s group comprised of 53% (18) males and 47% (16) females. The average age at re-implantation time was 72.2 years (ranging from 54 to 85 years). We particularly evaluated the microbial spectrum, the interval between explantation and re-implantation, the number of surgeries that were necessary prior to re-implantation as well as the postoperative course. Results: We reported 31.4% (11) reinfections following re-implantation surgeries. The number of the reinfections declined with increasing time interval between explantation and re-implantation. Patients who developed reinfections were operated on (re-implantation) after an average of 4.47 months. Those patients with uncomplicated course were operated on (re-implantation) after an average of 6.79 months. Nevertheless, we noticed no essential differences in outcome with regard to the number of surgeries carried out prior to re-implantation. Mobile spacers proved better outcome than temporary arthrodesis with intramedullary fixation. Conclusion: No uniform strategy of treatment exists after peri-prosthetic infections. In particular, no optimal timing can be stated concerning re-implantation. Our data point out to the fact that a longer time interval between explantation and re-implantation reduces the rate of reinfection. From our point of view, the optimal timing for re-implantation depends on various specific factors and therefore it should be defined individually. PMID:27066391
Efficiency in Second Language Vocabulary Learning
ERIC Educational Resources Information Center
Schuetze, Ulf
2017-01-01
An ongoing question in second language vocabulary learning is how to optimize the acquisition of words. One approach is the so-called "spaced repetition technique" that uses intervals to repeat words in a given time frame (Balota et al., 2007; Leitner, 1972; Oxford, 1990; Pimsleur, 1967; Roediger & Karpicke, 2010; Schuetze &…
Zhou, Wenliang; Yang, Xia; Deng, Lianbo
2014-01-01
Not only is the operating plan the basis of organizing marshalling station's operation, but it is also used to analyze in detail the capacity utilization of each facility in marshalling station. In this paper, a long-term operating plan is optimized mainly for capacity utilization analysis. Firstly, a model is developed to minimize railcars' average staying time with the constraints of minimum time intervals, marshalling track capacity, and so forth. Secondly, an algorithm is designed to solve this model based on genetic algorithm (GA) and simulation method. It divides the plan of whole planning horizon into many subplans, and optimizes them with GA one by one in order to obtain a satisfactory plan with less computing time. Finally, some numeric examples are constructed to analyze (1) the convergence of the algorithm, (2) the effect of some algorithm parameters, and (3) the influence of arrival train flow on the algorithm. PMID:25525614
New Multi-objective Uncertainty-based Algorithm for Water Resource Models' Calibration
NASA Astrophysics Data System (ADS)
Keshavarz, Kasra; Alizadeh, Hossein
2017-04-01
Water resource models are powerful tools to support water management decision making process and are developed to deal with a broad range of issues including land use and climate change impacts analysis, water allocation, systems design and operation, waste load control and allocation, etc. These models are divided into two categories of simulation and optimization models whose calibration has been addressed in the literature where great relevant efforts in recent decades have led to two main categories of auto-calibration methods of uncertainty-based algorithms such as GLUE, MCMC and PEST and optimization-based algorithms including single-objective optimization such as SCE-UA and multi-objective optimization such as MOCOM-UA and MOSCEM-UA. Although algorithms which benefit from capabilities of both types, such as SUFI-2, were rather developed, this paper proposes a new auto-calibration algorithm which is capable of both finding optimal parameters values regarding multiple objectives like optimization-based algorithms and providing interval estimations of parameters like uncertainty-based algorithms. The algorithm is actually developed to improve quality of SUFI-2 results. Based on a single-objective, e.g. NSE and RMSE, SUFI-2 proposes a routine to find the best point and interval estimation of parameters and corresponding prediction intervals (95 PPU) of time series of interest. To assess the goodness of calibration, final results are presented using two uncertainty measures of p-factor quantifying percentage of observations covered by 95PPU and r-factor quantifying degree of uncertainty, and the analyst has to select the point and interval estimation of parameters which are actually non-dominated regarding both of the uncertainty measures. Based on the described properties of SUFI-2, two important questions are raised, answering of which are our research motivation: Given that in SUFI-2, final selection is based on the two measures or objectives and on the other hand, knowing that there is no multi-objective optimization mechanism in SUFI-2, are the final estimations Pareto-optimal? Can systematic methods be applied to select the final estimations? Dealing with these questions, a new auto-calibration algorithm was proposed where the uncertainty measures were considered as two objectives to find non-dominated interval estimations of parameters by means of coupling Monte Carlo simulation and Multi-Objective Particle Swarm Optimization. Both the proposed algorithm and SUFI-2 were applied to calibrate parameters of water resources planning model of Helleh river basin, Iran. The model is a comprehensive water quantity-quality model developed in the previous researches using WEAP software in order to analyze the impacts of different water resources management strategies including dam construction, increasing cultivation area, utilization of more efficient irrigation technologies, changing crop pattern, etc. Comparing the Pareto frontier resulted from the proposed auto-calibration algorithm with SUFI-2 results, it was revealed that the new algorithm leads to a better and also continuous Pareto frontier, even though it is more computationally expensive. Finally, Nash and Kalai-Smorodinsky bargaining methods were used to choose compromised interval estimation regarding Pareto frontier.
Zhang, Qing; Fung, Jeffrey Wing-Hong; Chan, Yat-Sun; Chan, Hamish Chi-Kin; Lin, Hong; Chan, Skiva; Yu, Cheuk-Man
2008-02-29
Cardiac resynchronization therapy (CRT) is an effective therapy for heart failure patients with electromechanical delay. Optimization of atrioventricular interval (AVI) is a cardinal component for the benefits. However, it is unknown if the AVI needs to be re-optimized during long-term follow-up. Thirty-one patients (66+/-11 years, 20 males) with sinus rhythm who received CRT underwent serial optimization of AVI at day 1, 3-month and during long-term follow-up by pulse Doppler echocardiography (PDE). At long-term follow-up, the optimal AVI and cardiac output (CO) estimated by non-invasive impedance cardiography (ICG) were compared with those by PDE. The follow-up was 16+/-11 months. There was no significant difference in the mean optimal AVI when compared between any 2 time points among day 1 (99+/-30 ms), 3-month (97+/-28 ms) and long-term follow-up (94+/-28 ms). However, in individual patient, the optimal AVI remained unchanged only in 14 patients (44%), and was shortened in 12 (38%) and lengthened in 6 patients (18%). During long-term follow-up, although the mean optimal AVIs obtained by PDE or ICG (94+/-28 vs. 92+/-29 ms) were not different, a discrepancy was found in 14 patients (45%). For the same AVI, the CO measured by ICG was systematically higher than that by PDE (3.5+/-0.8 Vs. 2.7+/-0.6 L/min, p<0.001). Optimization of AVI after CRT appears necessary during follow-up as it was readjusted in 55% of patients. Although AVI optimization by ICG was feasible, further studies are needed to confirm its role in optimizing AVI after CRT.
Estimation of the cloud transmittance from radiometric measurements at the ground level
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, Dario; Mares, Oana, E-mail: mareshoana@yahoo.com
2014-11-24
The extinction of solar radiation due to the clouds is more significant than due to any other atmospheric constituent, but it is always difficult to be modeled because of the random distribution of clouds on the sky. Moreover, the transmittance of a layer of clouds is in a very complex relation with their type and depth. A method for estimating cloud transmittance was proposed in Paulescu et al. (Energ. Convers. Manage, 75 690–697, 2014). The approach is based on the hypothesis that the structure of the cloud covering the sun at a time moment does not change significantly in amore » short time interval (several minutes). Thus, the cloud transmittance can be calculated as the estimated coefficient of a simple linear regression for the computed versus measured solar irradiance in a time interval Δt. The aim of this paper is to optimize the length of the time interval Δt. Radiometric data measured on the Solar Platform of the West University of Timisoara during 2010 at a frequency of 1/15 seconds are used in this study.« less
Estimation of the cloud transmittance from radiometric measurements at the ground level
NASA Astrophysics Data System (ADS)
Costa, Dario; Mares, Oana
2014-11-01
The extinction of solar radiation due to the clouds is more significant than due to any other atmospheric constituent, but it is always difficult to be modeled because of the random distribution of clouds on the sky. Moreover, the transmittance of a layer of clouds is in a very complex relation with their type and depth. A method for estimating cloud transmittance was proposed in Paulescu et al. (Energ. Convers. Manage, 75 690-697, 2014). The approach is based on the hypothesis that the structure of the cloud covering the sun at a time moment does not change significantly in a short time interval (several minutes). Thus, the cloud transmittance can be calculated as the estimated coefficient of a simple linear regression for the computed versus measured solar irradiance in a time interval Δt. The aim of this paper is to optimize the length of the time interval Δt. Radiometric data measured on the Solar Platform of the West University of Timisoara during 2010 at a frequency of 1/15 seconds are used in this study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kleijnen, J; Asselen, B van; Burbach, M
2015-06-15
Purpose: Purpose of this study is to find the optimal trade-off between adaptation interval and margin reduction and to define the implications of motion for rectal cancer boost radiotherapy on a MR-linac. Methods: Daily MRI scans were acquired of 16 patients, diagnosed with rectal cancer, prior to each radiotherapy fraction in one week (N=76). Each scan session consisted of T2-weighted and three 2D sagittal cine-MRI, at begin (t=0 min), middle (t=9:30 min) and end (t=18:00 min) of scan session, for 1 minute at 2 Hz temporal resolution. Tumor and clinical target volume (CTV) were delineated on each T2-weighted scan andmore » transferred to each cine-MRI. The start frame of the begin scan was used as reference and registered to frames at time-points 15, 30 and 60 seconds, 9:30 and 18:00 minutes and 1, 2, 3 and 4 days later. Per time-point, motion of delineated voxels was evaluated using the deformation vector fields of the registrations and the 95th percentile distance (dist95%) was calculated as measure of motion. Per time-point, the distance that includes 90% of all cases was taken as estimate of required planning target volume (PTV)-margin. Results: Highest motion reduction is observed going from 9:30 minutes to 60 seconds. We observe a reduction in margin estimates from 10.6 to 2.7 mm and 16.1 to 4.6 mm for tumor and CTV, respectively, when adapting every 60 seconds compared to not adapting treatment. A 75% and 71% reduction, respectively. Further reduction in adaptation time-interval yields only marginal motion reduction. For adaptation intervals longer than 18:00 minutes only small motion reductions are observed. Conclusion: The optimal adaptation interval for adaptive rectal cancer (boost) treatments on a MR-linac is 60 seconds. This results in substantial smaller PTV-margin estimates. Adaptation intervals of 18:00 minutes and higher, show little improvement in motion reduction.« less
Automated Interval velocity picking for Atlantic Multi-Channel Seismic Data
NASA Astrophysics Data System (ADS)
Singh, Vishwajit
2016-04-01
This paper described the challenge in developing and testing a fully automated routine for measuring interval velocities from multi-channel seismic data. Various approaches are employed for generating an interactive algorithm picking interval velocity for continuous 1000-5000 normal moveout (NMO) corrected gather and replacing the interpreter's effort for manual picking the coherent reflections. The detailed steps and pitfalls for picking the interval velocities from seismic reflection time measurements are describe in these approaches. Key ingredients these approaches utilized for velocity analysis stage are semblance grid and starting model of interval velocity. Basin-Hopping optimization is employed for convergence of the misfit function toward local minima. SLiding-Overlapping Window (SLOW) algorithm are designed to mitigate the non-linearity and ill- possessedness of root-mean-square velocity. Synthetic data case studies addresses the performance of the velocity picker generating models perfectly fitting the semblance peaks. A similar linear relationship between average depth and reflection time for synthetic model and estimated models proposed picked interval velocities as the starting model for the full waveform inversion to project more accurate velocity structure of the subsurface. The challenges can be categorized as (1) building accurate starting model for projecting more accurate velocity structure of the subsurface, (2) improving the computational cost of algorithm by pre-calculating semblance grid to make auto picking more feasible.
A single-loop optimization method for reliability analysis with second order uncertainty
NASA Astrophysics Data System (ADS)
Xie, Shaojun; Pan, Baisong; Du, Xiaoping
2015-08-01
Reliability analysis may involve random variables and interval variables. In addition, some of the random variables may have interval distribution parameters owing to limited information. This kind of uncertainty is called second order uncertainty. This article develops an efficient reliability method for problems involving the three aforementioned types of uncertain input variables. The analysis produces the maximum and minimum reliability and is computationally demanding because two loops are needed: a reliability analysis loop with respect to random variables and an interval analysis loop for extreme responses with respect to interval variables. The first order reliability method and nonlinear optimization are used for the two loops, respectively. For computational efficiency, the two loops are combined into a single loop by treating the Karush-Kuhn-Tucker (KKT) optimal conditions of the interval analysis as constraints. Three examples are presented to demonstrate the proposed method.
Relationship between heart rate and quiescent interval of the cardiac cycle in children using MRI.
Zhang, Wei; Bogale, Saivivek; Golriz, Farahnaz; Krishnamurthy, Rajesh
2017-11-01
Imaging the heart in children comes with the challenge of constant cardiac motion. A prospective electrocardiography-triggered CT scan allows for scanning during a predetermined phase of the cardiac cycle with least motion. This technique requires knowing the optimal quiescent intervals of cardiac cycles in a pediatric population. To evaluate high-temporal-resolution cine MRI of the heart in children to determine the relationship of heart rate to the optimal quiescent interval within the cardiac cycle. We included a total of 225 consecutive patients ages 0-18 years who had high-temporal-resolution cine steady-state free-precession sequence performed as part of a magnetic resonance imaging (MRI) or magnetic resonance angiography study of the heart. We determined the location and duration of the quiescent interval in systole and diastole for heart rates ranging 40-178 beats per minute (bpm). We performed the Wilcoxon signed rank test to compare the duration of quiescent interval in systole and diastole for each heart rate group. The duration of the quiescent interval at heart rates <80 bpm and >90 bpm was significantly longer in diastole and systole, respectively (P<.0001 for all ranges, except for 90-99 bpm [P=.02]). For heart rates 80-89 bpm, diastolic interval was longer than systolic interval, but the difference was not statistically significant (P=.06). We created a chart depicting optimal quiescent intervals across a range of heart rates that could be applied for prospective electrocardiography-triggered CT imaging of the heart. The optimal quiescent interval at heart rates <80 bpm is in diastole and at heart rates ≥90 bpm is in systole. The period of quiescence at heart rates 80-89 bpm is uniformly short in systole and diastole.
Techniques for Increasing the Efficiency of Automation Systems in School Library Media Centers.
ERIC Educational Resources Information Center
Caffarella, Edward P.
1996-01-01
Discusses methods of managing queues (waiting lines) to optimize the use of student computer stations in school library media centers and to make searches more efficient and effective. The three major factors in queue management are arrival interval of the patrons, service time, and number of stations. (Author/LRW)
The Amygdalo-Nigrostriatal Network Is Critical for an Optimal Temporal Performance
ERIC Educational Resources Information Center
Es-seddiqi, Mouna; El Massioui, Nicole; Samson, Nathalie; Brown, Bruce L.; Doyère, Valérie
2016-01-01
The amygdalo-nigrostriatal (ANS) network plays an essential role in enhanced attention to significant events. Interval timing requires attention to temporal cues. We assessed rats having a disconnected ANS network, due to contralateral lesions of the medial central nucleus of the amygdala (CEm) and dopaminergic afferents to the lateral striatum,…
Minimum cost to control bovine tuberculosis in cow-calf herds
Smith, Rebecca L.; Tauer, Loren W.; Sanderson, Michael W.; Grohn, Yrjo T.
2014-01-01
Bovine tuberculosis (bTB) outbreaks in US cattle herds, while rare, are expensive to control. A stochastic model for bTB control in US cattle herds was adapted to more accurately represent cow-calf herd dynamics and was validated by comparison to 2 reported outbreaks. Control cost calculations were added to the model, which was then optimized to minimize costs for either the farm or the government. The results of the optimization showed that test-and-removal costs were minimized for both farms and the government if only 2 negative whole-herd tests were required to declare a herd free of infection, with a 2–3 month testing interval. However, the optimal testing interval for governments was increased to 2–4 months if the model was constrained to reject control programs leading to an infected herd being declared free of infection. Although farms always preferred test-and-removal to depopulation from a cost standpoint, government costs were lower with depopulation more than half the time in 2 of 8 regions. Global sensitivity analysis showed that indemnity costs were significantly associated with a rise in the cost to the government, and that low replacement rates were responsible for the long time to detection predicted by the model, but that improving the sensitivity of slaughterhouse screening and the probability that a slaughtered animal’s herd of origin can be identified would result in faster detection times. PMID:24703601
Minimum cost to control bovine tuberculosis in cow-calf herds.
Smith, Rebecca L; Tauer, Loren W; Sanderson, Michael W; Gröhn, Yrjo T
2014-07-01
Bovine tuberculosis (bTB) outbreaks in US cattle herds, while rare, are expensive to control. A stochastic model for bTB control in US cattle herds was adapted to more accurately represent cow-calf herd dynamics and was validated by comparison to 2 reported outbreaks. Control cost calculations were added to the model, which was then optimized to minimize costs for either the farm or the government. The results of the optimization showed that test-and-removal costs were minimized for both farms and the government if only 2 negative whole-herd tests were required to declare a herd free of infection, with a 2-3 month testing interval. However, the optimal testing interval for governments was increased to 2-4 months if the model was constrained to reject control programs leading to an infected herd being declared free of infection. Although farms always preferred test-and-removal to depopulation from a cost standpoint, government costs were lower with depopulation more than half the time in 2 of 8 regions. Global sensitivity analysis showed that indemnity costs were significantly associated with a rise in the cost to the government, and that low replacement rates were responsible for the long time to detection predicted by the model, but that improving the sensitivity of slaughterhouse screening and the probability that a slaughtered animal's herd of origin can be identified would result in faster detection times. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Palanivel, M.; Uthayakumar, R.
2015-07-01
This paper deals with an economic order quantity (EOQ) model for non-instantaneous deteriorating items with price and advertisement dependent demand pattern under the effect of inflation and time value of money over a finite planning horizon. In this model, shortages are allowed and partially backlogged. The backlogging rate is dependent on the waiting time for the next replenishment. This paper aids the retailer in minimising the total inventory cost by finding the optimal interval and the optimal order quantity. An algorithm is designed to find the optimum solution of the proposed model. Numerical examples are given to demonstrate the results. Also, the effect of changes in the different parameters on the optimal total cost is graphically presented and the implications are discussed in detail.
He, Ju-Xiu; Ohno, Kenji; Tang, Jun; Hattori, Masao; Tani, Tadato; Akao, Teruaki
2014-11-01
To investigate the influence of co-administrated Da-Chaihu-Tang (DCT; a traditional Chinese formulation) on the pharmacokinetics of nifedipine, as well as the safe optimal dosing interval to avoid the adverse interactions. A single dose of DCT was administered with nifedipine simultaneously, 2 h before, 30 min before or 30 min after nifedipine administration. Pharmacokinetics of nifedipine with or without DCT were compared. The influences of DCT on nifedipine intestinal mucosal and hepatic metabolism were studied by using rat in-vitro everted jejunal sac model and hepatic microsomes. A simultaneous co-administration of DCT significantly increased the area under concentration-time curve from time zero to infinity (AUC0-inf ) of nifedipine. In-vitro mechanism investigations revealed that DCT inhibited both the intestinal and the hepatic metabolism of nifedipine. Further study on the optimal dosing interval for nifedipine and DCT revealed that administration of DCT 30 min before or after nifedipine did not significantly change the AUC of nifedipine. The bioavailability of nifedipine is significantly increased by a simultaneous oral co-administration of DCT. This increase is caused by the inhibitory effect of DCT on both the intestinal mucosal and the hepatic metabolism of nifedipine. The dose interval between DCT and nifedipine needs to be set for over 30 min to avoid such drug-drug interactions. © 2014 Royal Pharmaceutical Society.
Robust portfolio selection based on asymmetric measures of variability of stock returns
NASA Astrophysics Data System (ADS)
Chen, Wei; Tan, Shaohua
2009-10-01
This paper addresses a new uncertainty set--interval random uncertainty set for robust optimization. The form of interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
Minimax confidence intervals in geomagnetism
NASA Technical Reports Server (NTRS)
Stark, Philip B.
1992-01-01
The present paper uses theory of Donoho (1989) to find lower bounds on the lengths of optimally short fixed-length confidence intervals (minimax confidence intervals) for Gauss coefficients of the field of degree 1-12 using the heat flow constraint. The bounds on optimal minimax intervals are about 40 percent shorter than Backus' intervals: no procedure for producing fixed-length confidence intervals, linear or nonlinear, can give intervals shorter than about 60 percent the length of Backus' in this problem. While both methods rigorously account for the fact that core field models are infinite-dimensional, the application of the techniques to the geomagnetic problem involves approximations and counterfactual assumptions about the data errors, and so these results are likely to be extremely optimistic estimates of the actual uncertainty in Gauss coefficients.
Holmes, Emma; Kitterick, Padraig T; Summerfield, A Quentin
2018-04-25
Endogenous attention is typically studied by presenting instructive cues in advance of a target stimulus array. For endogenous visual attention, task performance improves as the duration of the cue-target interval increases up to 800 ms. Less is known about how endogenous auditory attention unfolds over time or the mechanisms by which an instructive cue presented in advance of an auditory array improves performance. The current experiment used five cue-target intervals (0, 250, 500, 1,000, and 2,000 ms) to compare four hypotheses for how preparatory attention develops over time in a multi-talker listening task. Young adults were cued to attend to a target talker who spoke in a mixture of three talkers. Visual cues indicated the target talker's spatial location or their gender. Participants directed attention to location and gender simultaneously ("objects") at all cue-target intervals. Participants were consistently faster and more accurate at reporting words spoken by the target talker when the cue-target interval was 2,000 ms than 0 ms. In addition, the latency of correct responses progressively shortened as the duration of the cue-target interval increased from 0 to 2,000 ms. These findings suggest that the mechanisms involved in preparatory auditory attention develop gradually over time, taking at least 2,000 ms to reach optimal configuration, yet providing cumulative improvements in speech intelligibility as the duration of the cue-target interval increases from 0 to 2,000 ms. These results demonstrate an improvement in performance for cue-target intervals longer than those that have been reported previously in the visual or auditory modalities.
Fernández-Martín, José L; Dusso, Adriana; Martínez-Camblor, Pablo; Dionisi, Maria P; Floege, Jürgen; Ketteler, Markus; London, Gérard; Locatelli, Francesco; Górriz, José L; Rutkowski, Boleslaw; Bos, Willem-Jan; Tielemans, Christian; Martin, Pierre-Yves; Wüthrich, Rudolf P; Pavlovic, Drasko; Benedik, Miha; Rodríguez-Puyol, Diego; Carrero, Juan J; Zoccali, Carmine; Cannata-Andía, Jorge B
2018-05-07
Serum phosphate is a key parameter in the management of chronic kidney disease-mineral and bone disorder (CKD-MBD). The timing of phosphate measurement is not standardized in the current guidelines. Since the optimal range of these biomarkers may vary depending on the duration of the interdialytic interval, in this analysis of the Current management of secondary hyperparathyroidism: a multicentre observational study (COSMOS), we assessed the influence of a 2- (midweek) or 3-day (post-weekend) dialysis interval for blood withdrawal on serum levels of CKD-MBD biomarkers and their association with mortality risk. The COSMOS cohort (6797 patients, CKD Stage 5D) was divided into two groups depending upon midweek or post-weekend blood collection. Univariate and multivariate Cox's models adjusted hazard ratios (HRs) by demographics and comorbidities, treatments and biochemical parameters from a patient/centre database collected at baseline and every 6 months for 3 years. There were no differences in serum calcium or parathyroid hormone levels between midweek and post-weekend patients. However, in post-weekend patients, the mean serum phosphate levels were higher compared with midweek patients (5.5 ± 1.4 versus 5.2 ± 1.4 mg/dL, P < 0.001). Also, the range of serum phosphate with the lowest mortality risk [HR ≤ 1.1; midweek: 3.5-4.9 mg/dL (95% confidence interval, CI: 2.9-5.2 mg/dL); post-weekend: 3.8-5.7 mg/dL (95% CI: 3.0-6.4 mg/dL)] showed significant differences in the upper limit (P = 0.021). Midweek and post-weekend serum phosphate levels and their target ranges associated with the lowest mortality risk differ. Thus, clinical guidelines should consider the timing of blood withdrawal when recommending optimal target ranges for serum phosphate and therapeutic strategies for phosphate control.
Expert systems tools for Hubble Space Telescope observation scheduling
NASA Technical Reports Server (NTRS)
Miller, Glenn; Rosenthal, Don; Cohen, William; Johnston, Mark
1987-01-01
The utility of expert systems techniques for the Hubble Space Telescope (HST) planning and scheduling is discussed and a plan for development of expert system tools which will augment the existing ground system is described. Additional capabilities provided by these tools will include graphics-oriented plan evaluation, long-range analysis of the observation pool, analysis of optimal scheduling time intervals, constructing sequences of spacecraft activities which minimize operational overhead, and optimization of linkages between observations. Initial prototyping of a scheduler used the Automated Reasoning Tool running on a LISP workstation.
A Flexible Toolkit Supporting Knowledge-based Tactical Planning for Ground Forces
2011-06-01
assigned to each of the Special Areas to model its temporal behaviour . In Figure 5 an optimal path going over two defined intermediate points is...which area can be reached by an armoured infantry platoon within a given time interval, which path should be taken by a support unit to minimize...al. 2008]. Although trained commanders and staff personnel may achieve very accurate planning results, time consuming procedures are excluded when
Timing of repetition suppression of event-related potentials to unattended objects.
Stefanics, Gabor; Heinzle, Jakob; Czigler, István; Valentini, Elia; Stephan, Klaas Enno
2018-05-26
Current theories of object perception emphasize the automatic nature of perceptual inference. Repetition suppression (RS), the successive decrease of brain responses to repeated stimuli, is thought to reflect the optimization of perceptual inference through neural plasticity. While functional imaging studies revealed brain regions that show suppressed responses to the repeated presentation of an object, little is known about the intra-trial time course of repetition effects to everyday objects. Here we used event-related potentials (ERP) to task-irrelevant line-drawn objects, while participants engaged in a distractor task. We quantified changes in ERPs over repetitions using three general linear models (GLM) that modelled RS by an exponential, linear, or categorical "change detection" function in each subject. Our aim was to select the model with highest evidence and determine the within-trial time-course and scalp distribution of repetition effects using that model. Model comparison revealed the superiority of the exponential model indicating that repetition effects are observable for trials beyond the first repetition. Model parameter estimates revealed a sequence of RS effects in three time windows (86-140ms, 322-360ms, and 400-446ms) and with occipital, temporo-parietal, and fronto-temporal distribution, respectively. An interval of repetition enhancement (RE) was also observed (320-340ms) over occipito-temporal sensors. Our results show that automatic processing of task-irrelevant objects involves multiple intervals of RS with distinct scalp topographies. These sequential intervals of RS and RE might reflect the short-term plasticity required for optimization of perceptual inference and the associated changes in prediction errors (PE) and predictions, respectively, over stimulus repetitions during automatic object processing. This article is protected by copyright. All rights reserved. © 2018 The Authors European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Zimmer, A. L.; Minsker, B. S.; Schmidt, A. R.; Ostfeld, A.
2011-12-01
Real-time mitigation of combined sewer overflows (CSOs) requires evaluation of multiple operational strategies during rapidly changing rainfall events. Simulation models for hydraulically complex systems can effectively provide decision support for short time intervals when coupled with efficient optimization. This work seeks to reduce CSOs for a test case roughly based on the North Branch of the Chicago Tunnel and Reservoir Plan (TARP), which is operated by the Metropolitan Water Reclamation District of Greater Chicago (MWRDGC). The North Branch tunnel flows to a junction with the main TARP system. The Chicago combined sewer system alleviates potential CSOs by directing high interceptor flows through sluice gates and dropshafts to a deep tunnel. Decision variables to control CSOs consist of sluice gate positions that control water flow to the tunnel as well as a treatment plant pumping rate that lowers interceptor water levels. A physics-based numerical model is used to simulate the hydraulic effects of changes in the decision variables. The numerical model is step-wise steady and conserves water mass and momentum at each time step by iterating through a series of look-up tables. The look-up tables are constructed offline to avoid extensive real-time calculations, and describe conduit storage and water elevations as a function of flow. A genetic algorithm (GA) is used to minimize CSOs at each time interval within a moving horizon framework. Decision variables are coded at 15-minute increments and GA solutions are two hours in duration. At each 15-minute interval, the algorithm identifies a good solution for a two-hour rainfall forecast. Three GA modifications help reduce optimization time. The first adjustment reduces the search alphabet by eliminating sluice gate positions that do not influence overflow volume. The second GA retains knowledge of the best decision at the previous interval by shifting the genes in the best previous sequence to initialize search at the new interval. The third approach is a micro-GA with a small population size and high diversity. Current tunnel operations attempt to avoid dropshaft geysers by simultaneously closing all sluice gates when the downstream end of the deep tunnel pressurizes. In an effort to further reduce CSOs, this research introduces a constraint that specifies a maximum allowable tunnel flow to prevent pressurization. The downstream junction depth is bounded by two flow conditions: a low tunnel water level represents inflow from the main system only, while a higher level includes main system flow as well as all possible North Branch inflow. If the lower of the two tunnel levels is pressurized, no North Branch flow is allowed to enter the junction. If only the higher level pressurizes, a linear rating is used to restrict the total North Branch flow below the volume that pressurizes the boundary. The numerical model is successfully calibrated to EPA SWMM and efficiently portrays system hydraulics in real-time. Results on the three GA approaches as well as impacts of various policies for the downstream constraint will be presented at the conference.
The role of musical training in emergent and event-based timing.
Baer, L H; Thibodeau, J L N; Gralnick, T M; Li, K Z H; Penhune, V B
2013-01-01
Musical performance is thought to rely predominantly on event-based timing involving a clock-like neural process and an explicit internal representation of the time interval. Some aspects of musical performance may rely on emergent timing, which is established through the optimization of movement kinematics, and can be maintained without reference to any explicit representation of the time interval. We predicted that musical training would have its largest effect on event-based timing, supporting the dissociability of these timing processes and the dominance of event-based timing in musical performance. We compared 22 musicians and 17 non-musicians on the prototypical event-based timing task of finger tapping and on the typically emergently timed task of circle drawing. For each task, participants first responded in synchrony with a metronome (Paced) and then responded at the same rate without the metronome (Unpaced). Analyses of the Unpaced phase revealed that non-musicians were more variable in their inter-response intervals for finger tapping compared to circle drawing. Musicians did not differ between the two tasks. Between groups, non-musicians were more variable than musicians for tapping but not for drawing. We were able to show that the differences were due to less timer variability in musicians on the tapping task. Correlational analyses of movement jerk and inter-response interval variability revealed a negative association for tapping and a positive association for drawing in non-musicians only. These results suggest that musical training affects temporal variability in tapping but not drawing. Additionally, musicians and non-musicians may be employing different movement strategies to maintain accurate timing in the two tasks. These findings add to our understanding of how musical training affects timing and support the dissociability of event-based and emergent timing modes.
Timing and Causality in the Generation of Learned Eyelid Responses
Sánchez-Campusano, Raudel; Gruart, Agnès; Delgado-García, José M.
2011-01-01
The cerebellum-red nucleus-facial motoneuron (Mn) pathway has been reported as being involved in the proper timing of classically conditioned eyelid responses. This special type of associative learning serves as a model of event timing for studying the role of the cerebellum in dynamic motor control. Here, we have re-analyzed the firing activities of cerebellar posterior interpositus (IP) neurons and orbicularis oculi (OO) Mns in alert behaving cats during classical eyeblink conditioning, using a delay paradigm. The aim was to revisit the hypothesis that the IP neurons (IPns) can be considered a neuronal phase-modulating device supporting OO Mns firing with an emergent timing mechanism and an explicit correlation code during learned eyelid movements. Optimized experimental and computational tools allowed us to determine the different causal relationships (temporal order and correlation code) during and between trials. These intra- and inter-trial timing strategies expanding from sub-second range (millisecond timing) to longer-lasting ranges (interval timing) expanded the functional domain of cerebellar timing beyond motor control. Interestingly, the results supported the above-mentioned hypothesis. The causal inferences were influenced by the precise motor and pre-motor spike timing in the cause-effect interval, and, in addition, the timing of the learned responses depended on cerebellar–Mn network causality. Furthermore, the timing of CRs depended upon the probability of simulated causal conditions in the cause-effect interval and not the mere duration of the inter-stimulus interval. In this work, the close relation between timing and causality was verified. It could thus be concluded that the firing activities of IPns may be related more to the proper performance of ongoing CRs (i.e., the proper timing as a consequence of the pertinent causality) than to their generation and/or initiation. PMID:21941469
Neonatal stomach volume and physiology suggest feeding at 1-h intervals.
Bergman, Nils J
2013-08-01
There is insufficient evidence on optimal neonatal feeding intervals, with a wide range of practices. The stomach capacity could determine feeding frequency. A literature search was conducted for studies reporting volumes or dimensions of stomach capacity before or after birth. Six articles were found, suggesting a stomach capacity of 20 mL at birth. A stomach capacity of 20 mL translates to a feeding interval of approximately 1 h for a term neonate. This corresponds to the gastric emptying time for human milk, as well as the normal neonatal sleep cycle. Larger feeding volumes at longer intervals may therefore be stressful and the cause of spitting up, reflux and hypoglycaemia. Outcomes for low birthweight infants could possibly be improved if stress from overfeeding was avoided while supporting the development of normal gastrointestinal physiology. Cycles between feeding and sleeping at 1-h intervals likely meet the evolutionary expectations of human neonates. ©2013 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks
Lam, William H. K.; Li, Qingquan
2017-01-01
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks. PMID:29210978
Heterogeneous Data Fusion Method to Estimate Travel Time Distributions in Congested Road Networks.
Shi, Chaoyang; Chen, Bi Yu; Lam, William H K; Li, Qingquan
2017-12-06
Travel times in congested urban road networks are highly stochastic. Provision of travel time distribution information, including both mean and variance, can be very useful for travelers to make reliable path choice decisions to ensure higher probability of on-time arrival. To this end, a heterogeneous data fusion method is proposed to estimate travel time distributions by fusing heterogeneous data from point and interval detectors. In the proposed method, link travel time distributions are first estimated from point detector observations. The travel time distributions of links without point detectors are imputed based on their spatial correlations with links that have point detectors. The estimated link travel time distributions are then fused with path travel time distributions obtained from the interval detectors using Dempster-Shafer evidence theory. Based on fused path travel time distribution, an optimization technique is further introduced to update link travel time distributions and their spatial correlations. A case study was performed using real-world data from Hong Kong and showed that the proposed method obtained accurate and robust estimations of link and path travel time distributions in congested road networks.
Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats
Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.
2012-01-01
This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
An approach to optimal semi-active control of vibration energy harvesting based on MEMS
NASA Astrophysics Data System (ADS)
Rojas, Rafael A.; Carcaterra, Antonio
2018-07-01
In this paper the energy harvesting problem involving typical MEMS technology is reduced to an optimal control problem, where the objective function is the absorption of the maximum amount of energy in a given time interval from a vibrating environment. The interest here is to identify a physical upper bound for this energy storage. The mathematical tool is a new optimal control called Krotov's method, that has not yet been applied to engineering problems, except in quantum dynamics. This approach leads to identify new maximum bounds to the energy harvesting performance. Novel MEMS-based device control configurations for vibration energy harvesting are proposed with particular emphasis to piezoelectric, electromagnetic and capacitive circuits.
Using operations research to plan improvement of the transport of critically ill patients.
Chen, Jing; Awasthi, Anjali; Shechter, Steven; Atkins, Derek; Lemke, Linda; Fisher, Les; Dodek, Peter
2013-01-01
Operations research is the application of mathematical modeling, statistical analysis, and mathematical optimization to understand and improve processes in organizations. The objective of this study was to illustrate how the methods of operations research can be used to identify opportunities to reduce the absolute value and variability of interfacility transport intervals for critically ill patients. After linking data from two patient transport organizations in British Columbia, Canada, for all critical care transports during the calendar year 2006, the steps for transfer of critically ill patients were tabulated into a series of time intervals. Statistical modeling, root-cause analysis, Monte Carlo simulation, and sensitivity analysis were used to test the effect of changes in component intervals on overall duration and variation of transport times. Based on quality improvement principles, we focused on reducing the 75th percentile and standard deviation of these intervals. We analyzed a total of 3808 ground and air transports. Constraining time spent by transport personnel at sending and receiving hospitals was projected to reduce the total time taken by 33 minutes with as much as a 20% reduction in standard deviation of these transport intervals in 75% of ground transfers. Enforcing a policy of requiring acceptance of patients who have life- or limb-threatening conditions or organ failure was projected to reduce the standard deviation of air transport time by 63 minutes and the standard deviation of ground transport time by 68 minutes. Based on findings from our analyses, we developed recommendations for technology renovation, personnel training, system improvement, and policy enforcement. Use of the tools of operations research identifies opportunities for improvement in a complex system of critical care transport.
NASA Technical Reports Server (NTRS)
Mu, Qiaozhen; Wu, Aisheng; Xiong, Xiaoxiong; Doelling, David R.; Angal, Amit; Chang, Tiejun; Bhatt, Rajendra
2017-01-01
MODIS reflective solar bands are calibrated on-orbit using a solar diffuser and near-monthly lunar observations. To monitor the performance and effectiveness of the on-orbit calibrations, pseudo-invariant targets such as deep convective clouds (DCCs), Libya-4, and Dome-C are used to track the long-term stability of MODIS Level 1B product. However, the current MODIS operational DCC technique (DCCT) simply uses the criteria set for the 0.65- m band. We optimize several critical DCCT parameters including the 11- micrometer IR-band Brightness Temperature (BT11) threshold for DCC identification, DCC core size and uniformity to help locate DCCs at convection centers, data collection time interval, and probability distribution function (PDF) bin increment for each channel. The mode reflectances corresponding to the PDF peaks are utilized as the DCC reflectances. Results show that the BT11 threshold and time interval are most critical for the Short Wave Infrared (SWIR) bands. The Bidirectional Reflectance Distribution Function model is most effective in reducing the DCC anisotropy for the visible channels. The uniformity filters and PDF bin size have minimal impacts on the visible channels and a larger impact on the SWIR bands. The newly optimized DCCT will be used for future evaluation of MODIS on-orbit calibration by MODIS Characterization Support Team.
Evaluation of optimal configuration of hybrid Life Support System for Space.
Bartsev, S I; Mezhevikin, V V; Okhonin, V A
2000-01-01
Any comprehensive evaluation of Life Support Systems (LSS) for space applications has to be conducted taking into account not only mass of LSS components but also all relevant equipment and storage: spare parts, additional mass of space ship walls, power supply and heat rejection systems. In this paper different combinations of hybrid LSS (HLSS) components were evaluated. Three variants of power supply were under consideration--solar arrays, direct solar light transmission to plants, and nuclear power. The software based on simplex approach was used for optimizing LSS configuration with respect to its mass. It was shown that there are several LSS configuration, which are optimal for different time intervals. Optimal configurations of physical-chemical (P/C), biological and hybrid LSS for three types of power supply are presented.
Optimization of turning process through the analytic flank wear modelling
NASA Astrophysics Data System (ADS)
Del Prete, A.; Franchi, R.; De Lorenzis, D.
2018-05-01
In the present work, the approach used for the optimization of the process capabilities for Oil&Gas components machining will be described. These components are machined by turning of stainless steel castings workpieces. For this purpose, a proper Design Of Experiments (DOE) plan has been designed and executed: as output of the experimentation, data about tool wear have been collected. The DOE has been designed starting from the cutting speed and feed values recommended by the tools manufacturer; the depth of cut parameter has been maintained as a constant. Wear data has been obtained by means the observation of the tool flank wear under an optical microscope: the data acquisition has been carried out at regular intervals of working times. Through a statistical data and regression analysis, analytical models of the flank wear and the tool life have been obtained. The optimization approach used is a multi-objective optimization, which minimizes the production time and the number of cutting tools used, under the constraint on a defined flank wear level. The technique used to solve the optimization problem is a Multi Objective Particle Swarm Optimization (MOPS). The optimization results, validated by the execution of a further experimental campaign, highlighted the reliability of the work and confirmed the usability of the optimized process parameters and the potential benefit for the company.
Symbol interval optimization for molecular communication with drift.
Kim, Na-Rae; Eckford, Andrew W; Chae, Chan-Byoung
2014-09-01
In this paper, we propose a symbol interval optimization algorithm in molecular communication with drift. Proper symbol intervals are important in practical communication systems since information needs to be sent as fast as possible with low error rates. There is a trade-off, however, between symbol intervals and inter-symbol interference (ISI) from Brownian motion. Thus, we find proper symbol interval values considering the ISI inside two kinds of blood vessels, and also suggest no ISI system for strong drift models. Finally, an isomer-based molecule shift keying (IMoSK) is applied to calculate achievable data transmission rates (achievable rates, hereafter). Normalized achievable rates are also obtained and compared in one-symbol ISI and no ISI systems.
Gray, R H; Simpson, J L; Kambic, R T; Queenan, J T; Mena, P; Perez, A; Barbato, M
1995-05-01
Our purpose was to ascertain the effects of timing of conception on the risk of spontaneous abortion. To assess these effects, women who conceived while using natural family planning were identified in five centers worldwide between 1987 and 1993. Timing of conception was determined from 868 natural family planning charts that recorded day of intercourse and indices of ovulation (cervical mucus peak obtained according to the ovulation method and/or basal body temperature). Conceptions on days - 1 or 0 with respect to the natural family planning estimated day of ovulation were considered to be "optimally timed," and all other conceptions were considered as "non-optimally timed." The rate of spontaneous abortions per 100 pregnancies was examined in relation to timing of conception, ages, reproductive history, and other covariates with bivariate and multivariate statistical methods. There were 88 spontaneous abortions among 868 pregnancies (10.1%). The spontaneous abortion rate was similar for 361 optimally timed conceptions (9.1%) and 507 non-optimally timed conceptions (10.9%). However, among 171 women who had experienced a spontaneous abortion in a prior pregnancy, the rate of spontaneous abortion in the index pregnancy was significantly higher with non-optimally timed conceptions (22.6%) as compared with optimally timed conceptions (7.3%). This association was not observed among 697 women with no history of pregnancy loss. The adjusted relative risk of spontaneous abortion among women with non-optimally timed conceptions and a history of pregnancy loss was 2.35 (95% confidence intervals 1.42 to 3.89). The excess risk of spontaneous abortion was observed with both preovulatory and postovulatory conceptions. Overall, there is no excess risk of spontaneous abortion among the pregnancies conceived during natural family planning use. However, among women with a history of pregnancy loss, there is an increased risk of spontaneous abortion associated with preovulatory or postovulatory delayed conceptions.
Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-31
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing.
Yu, Tao; Kang, Chao; Zhao, Pan
2018-01-01
The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing. PMID:29385048
NASA Technical Reports Server (NTRS)
Baxley, Brian T.; Murdoch, Jennifer L.; Swieringa, Kurt A.; Barmore, Bryan E.; Capron, William R.; Hubbs, Clay E.; Shay, Richard F.; Abbott, Terence S.
2013-01-01
The predicted increase in the number of commercial aircraft operations creates a need for improved operational efficiency. Two areas believed to offer increases in aircraft efficiency are optimized profile descents and dependent parallel runway operations. Using Flight deck Interval Management (FIM) software and procedures during these operations, flight crews can achieve by the runway threshold an interval assigned by air traffic control (ATC) behind the preceding aircraft that maximizes runway throughput while minimizing additional fuel consumption and pilot workload. This document describes an experiment where 24 pilots flew arrivals into the Dallas Fort-Worth terminal environment using one of three simulators at NASA?s Langley Research Center. Results indicate that pilots delivered their aircraft to the runway threshold within +/- 3.5 seconds of their assigned time interval, and reported low workload levels. In general, pilots found the FIM concept, procedures, speeds, and interface acceptable. Analysis of the time error and FIM speed changes as a function of arrival stream position suggest the spacing algorithm generates stable behavior while in the presence of continuous (wind) or impulse (offset) error. Concerns reported included multiple speed changes within a short time period, and an airspeed increase followed shortly by an airspeed decrease.
Gremeaux, Vincent; Drigny, Joffrey; Nigam, Anil; Juneau, Martin; Guilbeault, Valérie; Latour, Elise; Gayda, Mathieu
2012-11-01
The aim of this study was to study the impact of a combined long-term lifestyle and high-intensity interval training intervention on body composition, cardiometabolic risk, and exercise tolerance in overweight and obese subjects. Sixty-two overweight and obese subjects (53.3 ± 9.7 yrs; mean body mass index, 35.8 ± 5 kg/m(2)) were retrospectively identified at their entry into a 9-mo program consisting of individualized nutritional counselling, optimized high-intensity interval exercise, and resistance training two to three times a week. Anthropometric measurements, cardiometabolic risk factors, and exercise tolerance were measured at baseline and program completion. Adherence rate was 97%, and no adverse events occurred with high-intensity interval exercise training. Exercise training was associated with a weekly energy expenditure of 1582 ± 284 kcal. Clinically and statistically significant improvements were observed for body mass (-5.3 ± 5.2 kg), body mass index (-1.9 ± 1.9 kg/m(2)), waist circumference (-5.8 ± 5.4 cm), and maximal exercise capacity (+1.26 ± 0.84 metabolic equivalents) (P < 0.0001 for all parameters). Total fat mass and trunk fat mass, lipid profile, and triglyceride/high-density lipoprotein ratio were also significantly improved (P < 0.0001). At program completion, the prevalence of metabolic syndrome was reduced by 32.5% (P < 0.05). Independent predictors of being a responder to body mass and waist circumference loss were baseline body mass index and resting metabolic rate; those for body mass index decrease were baseline waist circumference and triglyceride/high-density lipoprotein cholesterol ratio. A long-term lifestyle intervention with optimized high-intensity interval exercise improves body composition, cardiometabolic risk, and exercise tolerance in obese subjects. This intervention seems safe, efficient, and well tolerated and could improve adherence to exercise training in this population.
Balasubramonian, Rajeev [Sandy, UT; Dwarkadas, Sandhya [Rochester, NY; Albonesi, David [Ithaca, NY
2012-01-24
In a processor having multiple clusters which operate in parallel, the number of clusters in use can be varied dynamically. At the start of each program phase, the configuration option for an interval is run to determine the optimal configuration, which is used until the next phase change is detected. The optimum instruction interval is determined by starting with a minimum interval and doubling it until a low stability factor is reached.
Swords, Douglas S; Zhang, Chong; Presson, Angela P; Firpo, Matthew A; Mulvihill, Sean J; Scaife, Courtney L
2018-04-01
Time-to-surgery from cancer diagnosis has increased in the United States. We aimed to determine the association between time-to-surgery and oncologic outcomes in patients with resectable pancreatic ductal adenocarcinoma undergoing upfront surgery. The 2004-2012 National Cancer Database was reviewed for patients undergoing curative-intent surgery without neoadjuvant therapy for clinical stage I-II pancreatic ductal adenocarcinoma. A multivariable Cox model with restricted cubic splines was used to define time-to-surgery as short (1-14 days), medium (15-42), and long (43-120). Overall survival was examined using Cox shared frailty models. Secondary outcomes were examined using mixed-effects logistic regression models. Of 16,763 patients, time-to-surgery was short in 34.4%, medium in 51.6%, and long in 14.0%. More short time-to-surgery patients were young, privately insured, healthy, and treated at low-volume hospitals. Adjusted hazards of mortality were lower for medium (hazard ratio 0.94, 95% confidence interval, .90, 0.97) and long time-to-surgery (hazard ratio 0.91, 95% confidence interval, 0.86, 0.96) than short. There were no differences in adjusted odds of node positivity, clinical to pathologic upstaging, being unresectable or stage IV at exploration, and positive margins. Medium time-to-surgery patients had higher adjusted odds (odds ratio 1.11, 95% confidence interval, 1.03, 1.20) of receiving an adequate lymphadenectomy than short. Ninety-day mortality was lower in medium (odds ratio 0.75, 95% confidence interval, 0.65, 0.85) and long time-to-surgery (odds ratio 0.72, 95% confidence interval, 0.60, 0.88) than short. In this observational analysis, short time-to-surgery was associated with slightly shorter OS and higher perioperative mortality. These results may suggest that delays for medical optimization and referral to high volume surgeons are safe. Published by Elsevier Inc.
Robotic fish tracking method based on suboptimal interval Kalman filter
NASA Astrophysics Data System (ADS)
Tong, Xiaohong; Tang, Chao
2017-11-01
Autonomous Underwater Vehicle (AUV) research focused on tracking and positioning, precise guidance and return to dock and other fields. The robotic fish of AUV has become a hot application in intelligent education, civil and military etc. In nonlinear tracking analysis of robotic fish, which was found that the interval Kalman filter algorithm contains all possible filter results, but the range is wide, relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes a ptimization algorithm of suboptimal interval Kalman filter. Suboptimal interval Kalman filter scheme used the interval inverse matrix with its worst inverse instead, is more approximate nonlinear state equation and measurement equation than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, improves the speed and precision of tracking system. Monte-Carlo simulation results show that the optimal trajectory of sub optimal interval Kalman filter algorithm is better than that of the interval Kalman filter method and the standard method of the filter.
Simulation of lithium ion battery replacement in a battery pack for application in electric vehicles
NASA Astrophysics Data System (ADS)
Mathew, M.; Kong, Q. H.; McGrory, J.; Fowler, M.
2017-05-01
The design and optimization of the battery pack in an electric vehicle (EV) is essential for continued integration of EVs into the global market. Reconfigurable battery packs are of significant interest lately as they allow for damaged cells to be removed from the circuit, limiting their impact on the entire pack. This paper provides a simulation framework that models a battery pack and examines the effect of replacing damaged cells with new ones. The cells within the battery pack vary stochastically and the performance of the entire pack is evaluated under different conditions. The results show that by changing out cells in the battery pack, the state of health of the pack can be consistently maintained above a certain threshold value selected by the user. In situations where the cells are checked for replacement at discrete intervals, referred to as maintenance event intervals, it is found that the length of the interval is dependent on the mean time to failure of the individual cells. The simulation framework as well as the results from this paper can be utilized to better optimize lithium ion battery pack design in EVs and make long term deployment of EVs more economically feasible.
Aziz, Abdul Rashid; Chia, Michael Yong Hwa; Low, Chee Yong; Slater, Gary John; Png, Weileen; Teh, Kong Chuan
2012-10-01
This study examines the effects of Ramadan fasting on performance during an intense exercise session performed at three different times of the day, i.e., 08:00, 18:00, and 21:00 h. The purpose was to determine the optimal time of the day to perform an acute high-intensity interval exercise during the Ramadan fasting month. After familiarization, nine trained athletes performed six 30-s Wingate anaerobic test (WAnT) cycle bouts followed by a time-to-exhaustion (T(exh)) cycle on six separate randomized and counterbalanced occasions. The three time-of-day nonfasting (control, CON) exercise sessions were performed before the Ramadan month, and the three corresponding time-of-day Ramadan fasting (RAM) exercise sessions were performed during the Ramadan month. Note that the 21:00 h session during Ramadan month was conducted in the nonfasted state after the breaking of the day's fast. Total work (TW) completed during the six WAnT bouts was significantly lower during RAM compared to CON for the 08:00 and 18:00 h (p < .017; effect size [d] = .55 [small] and .39 [small], respectively) sessions, but not for the 21:00 h (p = .03, d = .18 [trivial]) session. The T(exh) cycle duration was significantly shorter during RAM than CON in the 18:00 (p < .017, d = .93 [moderate]) session, but not in the 08:00 (p = .03, d = .57 [small]) and 21:00 h (p = .96, d = .02 [trivial]) sessions. In conclusion, Ramadan fasting had a small to moderate, negative impact on quality of performance during an acute high-intensity exercise session, particularly during the period of the daytime fast. The optimal time to conduct an acute high-intensity exercise session during the Ramadan fasting month is in the evening, after the breaking of the day's fast.
Radar - ESRL Wind Profiler with RASS, Wasco Airport - Derived Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCaffrey, Katherine
Profiles of turbulence dissipation rate for 15-minute intervals, time-stamped at the beginning of the 15-minute period, during the final 30 minutes of each hour. During that time, the 915-MHz wind profiling radar was in an optimized configuration with a vertically pointing beam only for measuring accurate spectral widths of vertical velocity. A bias-corrected dissipation rate also was profiled (described in McCaffrey et al. 2017). Hourly files contain two 15-minute profiles.
Validation of accelerometer wear and nonwear time classification algorithm.
Choi, Leena; Liu, Zhouwen; Matthews, Charles E; Buchowski, Maciej S
2011-02-01
the use of movement monitors (accelerometers) for measuring physical activity (PA) in intervention and population-based studies is becoming a standard methodology for the objective measurement of sedentary and active behaviors and for the validation of subjective PA self-reports. A vital step in PA measurement is the classification of daily time into accelerometer wear and nonwear intervals using its recordings (counts) and an accelerometer-specific algorithm. the purpose of this study was to validate and improve a commonly used algorithm for classifying accelerometer wear and nonwear time intervals using objective movement data obtained in the whole-room indirect calorimeter. we conducted a validation study of a wear or nonwear automatic algorithm using data obtained from 49 adults and 76 youth wearing accelerometers during a strictly monitored 24-h stay in a room calorimeter. The accelerometer wear and nonwear time classified by the algorithm was compared with actual wearing time. Potential improvements to the algorithm were examined using the minimum classification error as an optimization target. the recommended elements in the new algorithm are as follows: 1) zero-count threshold during a nonwear time interval, 2) 90-min time window for consecutive zero or nonzero counts, and 3) allowance of 2-min interval of nonzero counts with the upstream or downstream 30-min consecutive zero-count window for detection of artifactual movements. Compared with the true wearing status, improvements to the algorithm decreased nonwear time misclassification during the waking and the 24-h periods (all P values < 0.001). the accelerometer wear or nonwear time algorithm improvements may lead to more accurate estimation of time spent in sedentary and active behaviors.
Human's choices in situations of time-based diminishing returns.
Hackenberg, T D; Axtell, S A
1993-01-01
Three experiments examined adult humans' choices in situations with contrasting short-term and long-term consequences. Subjects were given repeated choices between two time-based schedules of points exchangeable for money: a fixed schedule and a progressive schedule that began at 0 s and increased by 5 s with each point delivered by that schedule. Under "reset" conditions, choosing the fixed schedule not only produced a point but it also reset the requirements of the progressive schedule to 0 s. In the first two experiments, reset conditions alternated with "no-reset" conditions, in which progressive-schedule requirements were independent of fixed-schedule choices. Experiment 1 entailed choices between a progressive-interval schedule and a fixed-interval schedule, the duration of which varied across conditions. Switching from the progressive- to the fixed-interval schedule was systematically related to fixed-interval size in 4 of 8 subjects, and in all subjects occurred consistently sooner in the progressive-schedule sequence under reset than under no-reset procedures. The latter result was replicated in a second experiment, in which choices between progressive- and fixed-interval schedules were compared with choices between progressive- and fixed-time schedules. In Experiment 3, switching patterns under reset conditions were unrelated to variations in intertrial interval. In none of the experiments did orderly choice patterns depend on verbal descriptions of the contingencies or on schedule-controlled response patterns in the presence of the chosen schedules. The overall pattern of results indicates control of choices by temporarily remote consequences, and is consistent with versions of optimality theory that address performance in situations of diminishing returns. PMID:8315364
4D seismic monitoring of the miscible CO2 flood of Hall-Gurney Field, Kansas, U.S
Raef, A.E.; Miller, R.D.; Byrnes, A.P.; Harrison, W.E.
2004-01-01
A cost-effective, highly repeatable, 4D-optimized, single-pattern/patch seismic data-acquisition approach with several 3D data sets was used to evaluate the feasibility of imaging changes associated with the " water alternated with gas" (WAG) stage. By incorporating noninversion-based seismic-attribute analysis, the time and cost of processing and interpreting the data were reduced. A 24-ms-thick EOR-CO 2 injection interval-using an average instantaneous frequency attribute (AIF) was targeted. Changes in amplitude response related to decrease in velocity from pore-fluid replacement within this time interval were found to be lower relative to background values than in AIF analysis. Carefully color-balanced AIF-attribute maps established the overall area affected by the injected EOR-CO2.
Super-optimal CO2 reduces seed yield but not vegetative growth in wheat
NASA Technical Reports Server (NTRS)
Grotenhuis, T. P.; Bugbee, B.
1997-01-01
Although terrestrial atmospheric CO2 levels will not reach 1000 micromoles mol-1 (0.1%) for decades, CO2 levels in growth chambers and greenhouses routinely exceed that concentration. CO2 levels in life support systems in space can exceed 10000 micromoles mol-1(1%). Numerous studies have examined CO2 effects up to 1000 micromoles mol-1, but biochemical measurements indicate that the beneficial effects of CO2 can continue beyond this concentration. We studied the effects of near-optimal (approximately 1200 micromoles mol-1) and super-optimal CO2 levels (2400 micromoles mol-1) on yield of two cultivars of hydroponically grown wheat (Triticum aestivum L.) in 12 trials in growth chambers. Increasing CO2 from sub-optimal to near-optimal (350-1200 micromoles mol-1) increased vegetative growth by 25% and seed yield by 15% in both cultivars. Yield increases were primarily the result of an increased number of heads per square meter. Further elevation of CO2 to 2500 micromoles mol-1 reduced seed yield by 22% (P < 0.001) in cv. Veery-10 and by 15% (P < 0.001) in cv. USU-Apogee. Super-optimal CO2 did not decrease the number of heads per square meter, but reduced seeds per head by 10% and mass per seed by 11%. The toxic effect of CO2 was similar over a range of light levels from half to full sunlight. Subsequent trials revealed that super-optimal CO2 during the interval between 2 wk before and after anthesis mimicked the effect of constant super-optimal CO2. Furthermore, near-optimal CO2 during the same interval mimicked the effect of constant near-optimal CO2. Nutrient concentration of leaves and heads was not affected by CO2. These results suggest that super-optimal CO2 inhibits some process that occurs near the time of seed set resulting in decreased seed set, seed mass, and yield.
Bashir, Muhammad Mustehsan; Qayyum, Rehan; Saleem, Muhammad Hammad; Siddique, Kashif; Khan, Farid Ahmad
2015-08-01
To determine the optimal time interval between tumescent local anesthesia infiltration and the start of hand surgery without a tourniquet for improved operative field visibility. Patients aged 16 to 60 years who needed contracture release and tendon repair in the hand were enrolled from the outpatient clinic. Patients were randomized to 10-, 15-, or 25-minute intervals between tumescent anesthetic solution infiltration (0.18% lidocaine and 1:221,000 epinephrine) and the start of surgery. The end point of tumescence anesthetic infiltration was pale and firm skin. The surgical team was blinded to the time of anesthetic infiltration. At the completion of the procedure, the surgeon and the first assistant rated the operative field visibility as excellent, fair, or poor. We used logistic regression models without and with adjustment for confounding variables. Of the 75 patients enrolled in the study, 59 (79%) were males, 7 were randomized to 10-minute time intervals (further randomization was stopped after interim analysis found consistently poor operative field visibility), and 34 were randomized to the each of the 15- and 25-minute groups. Patients who were randomized to the 25-minute delay group had 29 times higher odds of having an excellent operative visual field than those randomized to the 15-minute delay group. After adjusting for age, sex, amount of tumescent solution infiltration, and duration of operation, the odds ratio remained highly significant. We found that an interval of 25 minutes provides vastly superior operative field visibility; 10-minute delay had the poorest results. Therapeutic I. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Jodice, Patrick G.R.; Collopy, Michael W.
1999-01-01
The diving behavior of Marbled Murrelets (Brachyramphus marmoratus) was studied using telemetry along the Oregon coast during the 1995 and 1996 breeding seasons and examined in relation to predictions from optimal-breathing models. Duration of dives, pauses, dive bouts, time spent under water during dive bouts, and nondiving intervals between successive dive bouts were recorded. Most diving metrics differed between years but not with oceanographic conditions or shore type. There was no effect of water depth on mean dive time or percent time spent under water even though dive bouts occurred in depths from 3 to 36 m. There was a significant, positive relationship between mean dive time and mean pause time at the dive-bout scale each year. At the dive-cycle scale, there was a significant positive relationship between dive time and preceding pause time in each year and a significant positive relationship between dive time and ensuing pause time in 1996. Although it appears that aerobic diving was the norm, there appeared to be an increase in anaerobic diving in 1996. The diving performance of Marbled Murrelets in this study appeared to be affected by annual changes in environmental conditions and prey resources but did not consistently fit predictions from optimal-breathing models.
Stochastic simulation and analysis of biomolecular reaction networks
Frazier, John M; Chushak, Yaroslav; Foy, Brent
2009-01-01
Background In recent years, several stochastic simulation algorithms have been developed to generate Monte Carlo trajectories that describe the time evolution of the behavior of biomolecular reaction networks. However, the effects of various stochastic simulation and data analysis conditions on the observed dynamics of complex biomolecular reaction networks have not recieved much attention. In order to investigate these issues, we employed a a software package developed in out group, called Biomolecular Network Simulator (BNS), to simulate and analyze the behavior of such systems. The behavior of a hypothetical two gene in vitro transcription-translation reaction network is investigated using the Gillespie exact stochastic algorithm to illustrate some of the factors that influence the analysis and interpretation of these data. Results Specific issues affecting the analysis and interpretation of simulation data are investigated, including: (1) the effect of time interval on data presentation and time-weighted averaging of molecule numbers, (2) effect of time averaging interval on reaction rate analysis, (3) effect of number of simulations on precision of model predictions, and (4) implications of stochastic simulations on optimization procedures. Conclusion The two main factors affecting the analysis of stochastic simulations are: (1) the selection of time intervals to compute or average state variables and (2) the number of simulations generated to evaluate the system behavior. PMID:19534796
Optimal Quantum Spatial Search on Random Temporal Networks
NASA Astrophysics Data System (ADS)
Chakraborty, Shantanav; Novo, Leonardo; Di Giorgio, Serena; Omar, Yasser
2017-12-01
To investigate the performance of quantum information tasks on networks whose topology changes in time, we study the spatial search algorithm by continuous time quantum walk to find a marked node on a random temporal network. We consider a network of n nodes constituted by a time-ordered sequence of Erdös-Rényi random graphs G (n ,p ), where p is the probability that any two given nodes are connected: After every time interval τ , a new graph G (n ,p ) replaces the previous one. We prove analytically that, for any given p , there is always a range of values of τ for which the running time of the algorithm is optimal, i.e., O (√{n }), even when search on the individual static graphs constituting the temporal network is suboptimal. On the other hand, there are regimes of τ where the algorithm is suboptimal even when each of the underlying static graphs are sufficiently connected to perform optimal search on them. From this first study of quantum spatial search on a time-dependent network, it emerges that the nontrivial interplay between temporality and connectivity is key to the algorithmic performance. Moreover, our work can be extended to establish high-fidelity qubit transfer between any two nodes of the network. Overall, our findings show that one can exploit temporality to achieve optimal quantum information tasks on dynamical random networks.
Physiological Responses to On-Court vs Running Interval Training in Competitive Tennis Players
Fernandez-Fernandez, Jaime; Sanz-Rivas, David; Sanchez-Muñoz, Cristobal; de la Aleja Tellez, Jose Gonzalez; Buchheit, Martin; Mendez-Villanueva, Alberto
2011-01-01
The aim of this study was to compare heart rate (HR), blood lactate (LA) and rate of perceived exertion (RPE) responses to a tennis-specific interval training (i.e., on-court) session with that of a matched-on-time running interval training (i.e., off-court). Eight well-trained, male (n = 4) and female (n = 4) tennis players (mean ± SD; age: 16.4 ± 1.8 years) underwent an incremental test where peak treadmill speed, maximum HR (HRmax) and maximum oxygen uptake (VO2max) were determined. The two interval training protocols (i.e., off- court and on-court) consisted of 4 sets of 120 s of work, interspersed with 90 s rest. Percentage of HRmax (95.9 ± 2.4 vs. 96.1 ± 2.2%; p = 0.79), LA (6.9 ± 2.5 vs. 6.2 ± 2.4 mmol·L-1; p = 0.14) and RPE (16.7 ± 2.1 vs. 16.3 ± 1.8; p = 0.50) responses were similar for off-court and on-court, respectively. The two interval training protocols used in the present study have equivalent physiological responses. Longitudinal studies are still warranted but tennis-specific interval training sessions could represent a time-efficient alternative to off-court (running) interval training for the optimization of the specific cardiorespiratory fitness in tennis players. Key points On-court interval training protocol can be used as an alternative to running interval training Technical/tactical training should be performed under conditions that replicate the physical and technical demands of a competitive match During the competitive season tennis on-court training might be preferred to off-court training PMID:24150630
Goodman, Susan M
2015-05-01
Patients with rheumatoid arthritis continue to undergo arthroplasty despite widespread use of potent disease-modifying drugs (DMARDs), including the biologic tumor necrosis-α inhibitors. In fact, over 80 % of RA patients are taking DMARDs or biologics at the time of arthroplasty. While many RA-specific factors including disease activity and disability may contribute to the increase in infection in RA patients undergoing arthroplasty, immunosuppressant medications may also play a role. As the age of patients with RA undergoing arthroplasty is rising, and the incidence of arthroplasty among the older population is increasing, optimal perioperative management of DMARDs and biologics in older patients with RA is an increasing challenge. Although evidence is sparse, most evidence supports withholding tumor necrosis-α inhibitors and other biologics prior to surgery based on the dosing interval, and continuing methotrexate and hydroxychloroquine through the perioperative period. There is no consensus regarding leflunomide, and rituximab risk does not appear related to the interval between infusion and surgery. This paper reviews arthroplasty outcomes including complications in patients with RA, and discusses the rationale for strategies for the optimal medication management of DMARDs and biologics in the perioperative period to minimize complications and improve outcomes.
Fuel optimal maneuvers of spacecraft about a circular orbit
NASA Technical Reports Server (NTRS)
Carter, T. E.
1982-01-01
Fuel optimal maneuvers of spacecraft relative to a body in circular orbit are investigated using a point mass model in which the magnitude of the thrust vector is bounded. All nonsingular optimal maneuvers consist of intervals of full thrust and coast and are found to contain at most seven such intervals in one period. Only four boundary conditions where singular solutions occur are possible. Computer simulation of optimal flight path shapes and switching functions are found for various boundary conditions. Emphasis is placed on the problem of soft rendezvous with a body in circular orbit.
Altman, Alon D; Nelson, Gregg; Chu, Pamela; Nation, Jill; Ghatage, Prafull
2012-06-01
The objective of this study was to examine both overall and disease-free survival of patients with advanced stage ovarian cancer after immediate or interval debulking surgery based on residual disease. We performed a retrospective chart review at the Tom Baker Cancer Centre in Calgary, Alberta of patients with pathologically confirmed stage III or IV ovarian cancer, fallopian tube cancer, or primary peritoneal cancer between 2003 and 2007. We collected data on the dates of diagnosis, recurrence, and death; cancer stage and grade, patients' age, surgery performed, and residual disease. One hundred ninety-two patients were included in the final analysis. The optimal debulking rate with immediate surgery was 64.8%, and with interval surgery it was 85.9%. There were improved overall and disease-free survival rates for optimally debulked disease (< 1 cm) with both immediate and interval surgery (P < 0.001) compared to suboptimally debulked disease. Overall survival rates for optimally debulked disease were not significantly different in patients having immediate and interval surgery (P = 0.25). In the immediate surgery group, patients with microscopic residual disease had better disease-free survival (P = 0.015) and overall survival (P = 0.005) than patients with < 1 cm residual disease. In patients who had interval surgery, those who had microscopic residual disease had more improved disease-free survival than those with < 1 cm disease (P = 0.05), but they did not have more improved overall survival (P = 0.42). Patients with microscopic residual disease who had immediate surgery had a significantly better overall survival rate than those who had interval surgery (P = 0.034). In women with advanced stage ovarian cancer, the goal of surgery should be resection of disease to microscopic residual at the initial procedure. This results in improved overall survival than lesser degrees of resection. Further studies are required to determine optimal surgical management.
A Class of Prediction-Correction Methods for Time-Varying Convex Optimization
NASA Astrophysics Data System (ADS)
Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro
2016-09-01
This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists either of one or multiple gradient steps or Newton steps, which respectively correspond to the gradient trajectory tracking (GTT) or Newton trajectory tracking (NTT) algorithms. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art error bound of $O(h)$ for correction-only methods in the gradient-correction step. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotical error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
NASA Astrophysics Data System (ADS)
Farano, Mirko; Cherubini, Stefania; Robinet, Jean-Christophe; De Palma, Pietro
2016-12-01
Subcritical transition in plane Poiseuille flow is investigated by means of a Lagrange-multiplier direct-adjoint optimization procedure with the aim of finding localized three-dimensional perturbations optimally growing in a given time interval (target time). Space localization of these optimal perturbations (OPs) is achieved by choosing as objective function either a p-norm (with p\\gg 1) of the perturbation energy density in a linear framework; or the classical (1-norm) perturbation energy, including nonlinear effects. This work aims at analyzing the structure of linear and nonlinear localized OPs for Poiseuille flow, and comparing their transition thresholds and scenarios. The nonlinear optimization approach provides three types of solutions: a weakly nonlinear, a hairpin-like and a highly nonlinear optimal perturbation, depending on the value of the initial energy and the target time. The former shows localization only in the wall-normal direction, whereas the latter appears much more localized and breaks the spanwise symmetry found at lower target times. Both solutions show spanwise inclined vortices and large values of the streamwise component of velocity already at the initial time. On the other hand, p-norm optimal perturbations, although being strongly localized in space, keep a shape similar to linear 1-norm optimal perturbations, showing streamwise-aligned vortices characterized by low values of the streamwise velocity component. When used for initializing direct numerical simulations, in most of the cases nonlinear OPs provide the most efficient route to transition in terms of time to transition and initial energy, even when they are less localized in space than the p-norm OP. The p-norm OP follows a transition path similar to the oblique transition scenario, with slightly oscillating streaks which saturate and eventually experience secondary instability. On the other hand, the nonlinear OP rapidly forms large-amplitude bent streaks and skips the phases of streak saturation, providing a contemporary growth of all of the velocity components due to strong nonlinear coupling.
Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S
2014-06-01
Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.
Optimal Measurement Interval for Emergency Department Crowding Estimation Tools.
Wang, Hao; Ojha, Rohit P; Robinson, Richard D; Jackson, Bradford E; Shaikh, Sajid A; Cowden, Chad D; Shyamanand, Rath; Leuck, JoAnna; Schrader, Chet D; Zenarosa, Nestor R
2017-11-01
Emergency department (ED) crowding is a barrier to timely care. Several crowding estimation tools have been developed to facilitate early identification of and intervention for crowding. Nevertheless, the ideal frequency is unclear for measuring ED crowding by using these tools. Short intervals may be resource intensive, whereas long ones may not be suitable for early identification. Therefore, we aim to assess whether outcomes vary by measurement interval for 4 crowding estimation tools. Our eligible population included all patients between July 1, 2015, and June 30, 2016, who were admitted to the JPS Health Network ED, which serves an urban population. We generated 1-, 2-, 3-, and 4-hour ED crowding scores for each patient, using 4 crowding estimation tools (National Emergency Department Overcrowding Scale [NEDOCS], Severely Overcrowded, Overcrowded, and Not Overcrowded Estimation Tool [SONET], Emergency Department Work Index [EDWIN], and ED Occupancy Rate). Our outcomes of interest included ED length of stay (minutes) and left without being seen or eloped within 4 hours. We used accelerated failure time models to estimate interval-specific time ratios and corresponding 95% confidence limits for length of stay, in which the 1-hour interval was the reference. In addition, we used binomial regression with a log link to estimate risk ratios (RRs) and corresponding confidence limit for left without being seen. Our study population comprised 117,442 patients. The time ratios for length of stay were similar across intervals for each crowding estimation tool (time ratio=1.37 to 1.30 for NEDOCS, 1.44 to 1.37 for SONET, 1.32 to 1.27 for EDWIN, and 1.28 to 1.23 for ED Occupancy Rate). The RRs of left without being seen differences were also similar across intervals for each tool (RR=2.92 to 2.56 for NEDOCS, 3.61 to 3.36 for SONET, 2.65 to 2.40 for EDWIN, and 2.44 to 2.14 for ED Occupancy Rate). Our findings suggest limited variation in length of stay or left without being seen between intervals (1 to 4 hours) regardless of which of the 4 crowding estimation tools were used. Consequently, 4 hours may be a reasonable interval for assessing crowding with these tools, which could substantially reduce the burden on ED personnel by requiring less frequent assessment of crowding. Copyright © 2017 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.
Eisenhofer, Graeme; Lattke, Peter; Herberg, Maria; Siegert, Gabriele; Qin, Nan; Därr, Roland; Hoyer, Jana; Villringer, Arno; Prejbisz, Aleksander; Januszewicz, Andrzej; Remaley, Alan; Martucci, Victoria; Pacak, Karel; Ross, H Alec; Sweep, Fred C G J; Lenders, Jacques W M
2013-01-01
Measurements of plasma normetanephrine and metanephrine provide a useful diagnostic test for phaeochromocytoma, but this depends on appropriate reference intervals. Upper cut-offs set too high compromise diagnostic sensitivity, whereas set too low, false-positives are a problem. This study aimed to establish optimal reference intervals for plasma normetanephrine and metanephrine. Blood samples were collected in the supine position from 1226 subjects, aged 5-84 y, including 116 children, 575 normotensive and hypertensive adults and 535 patients in whom phaeochromocytoma was ruled out. Reference intervals were examined according to age and gender. Various models were examined to optimize upper cut-offs according to estimates of diagnostic sensitivity and specificity in a separate validation group of 3888 patients tested for phaeochromocytoma, including 558 with confirmed disease. Plasma metanephrine, but not normetanephrine, was higher (P < 0.001) in men than in women, but reference intervals did not differ. Age showed a positive relationship (P < 0.0001) with plasma normetanephrine and a weaker relationship (P = 0.021) with metanephrine. Upper cut-offs of reference intervals for normetanephrine increased from 0.47 nmol/L in children to 1.05 nmol/L in subjects over 60 y. A curvilinear model for age-adjusted compared with fixed upper cut-offs for normetanephrine, together with a higher cut-off for metanephrine (0.45 versus 0.32 nmol/L), resulted in a substantial gain in diagnostic specificity from 88.3% to 96.0% with minimal loss in diagnostic sensitivity from 93.9% to 93.6%. These data establish age-adjusted cut-offs of reference intervals for plasma normetanephrine and optimized cut-offs for metanephrine useful for minimizing false-positive results.
Eisenhofer, Graeme; Lattke, Peter; Herberg, Maria; Siegert, Gabriele; Qin, Nan; Därr, Roland; Hoyer, Jana; Villringer, Arno; Prejbisz, Aleksander; Januszewicz, Andrzej; Remaley, Alan; Martucci, Victoria; Pacak, Karel; Ross, H Alec; Sweep, Fred C G J; Lenders, Jacques W M
2016-01-01
Background Measurements of plasma normetanephrine and metanephrine provide a useful diagnostic test for phaeochromocytoma, but this depends on appropriate reference intervals. Upper cut-offs set too high compromise diagnostic sensitivity, whereas set too low, false-positives are a problem. This study aimed to establish optimal reference intervals for plasma normetanephrine and metanephrine. Methods Blood samples were collected in the supine position from 1226 subjects, aged 5–84 y, including 116 children, 575 normotensive and hypertensive adults and 535 patients in whom phaeochromocytoma was ruled out. Reference intervals were examined according to age and gender. Various models were examined to optimize upper cut-offs according to estimates of diagnostic sensitivity and specificity in a separate validation group of 3888 patients tested for phaeochromocytoma, including 558 with confirmed disease. Results Plasma metanephrine, but not normetanephrine, was higher (P < 0.001) in men than in women, but reference intervals did not differ. Age showed a positive relationship (P < 0.0001) with plasma normetanephrine and a weaker relationship (P = 0.021) with metanephrine. Upper cut-offs of reference intervals for normetanephrine increased from 0.47 nmol/L in children to 1.05 nmol/L in subjects over 60 y. A curvilinear model for age-adjusted compared with fixed upper cut-offs for normetanephrine, together with a higher cut-off for metanephrine (0.45 versus 0.32 nmol/L), resulted in a substantial gain in diagnostic specificity from 88.3% to 96.0% with minimal loss in diagnostic sensitivity from 93.9% to 93.6%. Conclusions These data establish age-adjusted cut-offs of reference intervals for plasma normetanephrine and optimized cut-offs for metanephrine useful for minimizing false-positive results. PMID:23065528
NASA Astrophysics Data System (ADS)
Wang, Shengling; Cui, Yong; Koodli, Rajeev; Hou, Yibin; Huang, Zhangqin
Due to the dynamics of topology and resources, Call Admission Control (CAC) plays a significant role for increasing resource utilization ratio and guaranteeing users' QoS requirements in wireless/mobile networks. In this paper, a dynamic multi-threshold CAC scheme is proposed to serve multi-class service in a wireless/mobile network. The thresholds are renewed at the beginning of each time interval to react to the changing mobility rate and network load. To find suitable thresholds, a reward-penalty model is designed, which provides different priorities between different service classes and call types through different reward/penalty policies according to network load and average call arrival rate. To speed up the running time of CAC, an Optimized Genetic Algorithm (OGA) is presented, whose components such as encoding, population initialization, fitness function and mutation etc., are all optimized in terms of the traits of the CAC problem. The simulation demonstrates that the proposed CAC scheme outperforms the similar schemes, which means the optimization is realized. Finally, the simulation shows the efficiency of OGA.
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
NASA Astrophysics Data System (ADS)
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry ({\\phi}-OTDR). Unlike the uniform laser pulse interval in conventional {\\phi}-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For an {\\phi}-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wide-band spars signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of {\\phi}-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
NASA Astrophysics Data System (ADS)
Suo, M. Q.; Li, Y. P.; Huang, G. H.
2011-09-01
In this study, an inventory-theory-based interval-parameter two-stage stochastic programming (IB-ITSP) model is proposed through integrating inventory theory into an interval-parameter two-stage stochastic optimization framework. This method can not only address system uncertainties with complex presentation but also reflect transferring batch (the transferring quantity at once) and period (the corresponding cycle time) in decision making problems. A case of water allocation problems in water resources management planning is studied to demonstrate the applicability of this method. Under different flow levels, different transferring measures are generated by this method when the promised water cannot be met. Moreover, interval solutions associated with different transferring costs also have been provided. They can be used for generating decision alternatives and thus help water resources managers to identify desired policies. Compared with the ITSP method, the IB-ITSP model can provide a positive measure for solving water shortage problems and afford useful information for decision makers under uncertainty.
Santos, Thays Brenner; Kramer-Soares, Juliana Carlota; Favaro, Vanessa Manchim; Oliveira, Maria Gabriela Menezes
2017-10-01
Time plays an important role in conditioning, it is not only possible to associate stimuli with events that overlap, as in delay fear conditioning, but it is also possible to associate stimuli that are discontinuous in time, as shown in trace conditioning for a discrete stimuli. The environment itself can be a powerful conditioned stimulus (CS) and be associated to unconditioned stimulus (US). Thus, the aim of the present study was to determine the parameters in which contextual fear conditioning occurs by the maintenance of a contextual representation over short and long time intervals. The results showed that a contextual representation can be maintained and associated after 5s, even in the absence of a 15s re-exposure to the training context before US delivery. The same effect was not observed with a 24h interval of discontinuity. Furthermore, optimal conditioned response with a 5s interval is produced only when the contexts (of pre-exposure and shock) match. As the pre-limbic cortex (PL) is necessary for the maintenance of a continuous representation of a stimulus, the involvement of the PL in this temporal and contextual processing was investigated. The reversible inactivation of the PL by muscimol infusion impaired the acquisition of contextual fear conditioning with a 5s interval, but not with a 24h interval, and did not impair delay fear conditioning. The data provided evidence that short and long intervals of discontinuity have different mechanisms, thus contributing to a better understanding of PL involvement in contextual fear conditioning and providing a model that considers both temporal and contextual factors in fear conditioning. Copyright © 2017 Elsevier Inc. All rights reserved.
Relative-Error-Covariance Algorithms
NASA Technical Reports Server (NTRS)
Bierman, Gerald J.; Wolff, Peter J.
1991-01-01
Two algorithms compute error covariance of difference between optimal estimates, based on data acquired during overlapping or disjoint intervals, of state of discrete linear system. Provides quantitative measure of mutual consistency or inconsistency of estimates of states. Relative-error-covariance concept applied, to determine degree of correlation between trajectories calculated from two overlapping sets of measurements and construct real-time test of consistency of state estimates based upon recently acquired data.
Continuous-time adaptive critics.
Hanselmann, Thomas; Noakes, Lyle; Zaknich, Anthony
2007-05-01
A continuous-time formulation of an adaptive critic design (ACD) is investigated. Connections to the discrete case are made, where backpropagation through time (BPTT) and real-time recurrent learning (RTRL) are prevalent. Practical benefits are that this framework fits in well with plant descriptions given by differential equations and that any standard integration routine with adaptive step-size does an adaptive sampling for free. A second-order actor adaptation using Newton's method is established for fast actor convergence for a general plant and critic. Also, a fast critic update for concurrent actor-critic training is introduced to immediately apply necessary adjustments of critic parameters induced by actor updates to keep the Bellman optimality correct to first-order approximation after actor changes. Thus, critic and actor updates may be performed at the same time until some substantial error build up in the Bellman optimality or temporal difference equation, when a traditional critic training needs to be performed and then another interval of concurrent actor-critic training may resume.
Vatankhah, Hamed; Zamindar, Nafiseh; Shahedi Baghekhandan, Mohammad
2015-10-01
A mixed computational strategy was used to simulate and optimize the thermal processing of Haleem, an ancient eastern food, in semi-rigid aluminum containers. Average temperature values of the experiments showed no significant difference (α = 0.05) in contrast to the predicted temperatures at the same positions. According to the model, the slowest heating zone was located in geometrical center of the container. The container geometrical center F0 was estimated to be 23.8 min. A 19 min processing time interval decrease in holding time of the treatment was estimated to optimize the heating operation since the preferred F0 of some starch or meat based fluid foods is about 4.8-7.5 min.
NASA Technical Reports Server (NTRS)
Ito, Kazufumi
1987-01-01
The linear quadratic optimal control problem on infinite time interval for linear time-invariant systems defined on Hilbert spaces is considered. The optimal control is given by a feedback form in terms of solution pi to the associated algebraic Riccati equation (ARE). A Ritz type approximation is used to obtain a sequence pi sup N of finite dimensional approximations of the solution to ARE. A sufficient condition that shows pi sup N converges strongly to pi is obtained. Under this condition, a formula is derived which can be used to obtain a rate of convergence of pi sup N to pi. The results of the Galerkin approximation is demonstrated and applied for parabolic systems and the averaging approximation for hereditary differential systems.
Optimizing structure of complex technical system by heterogeneous vector criterion in interval form
NASA Astrophysics Data System (ADS)
Lysenko, A. V.; Kochegarov, I. I.; Yurkov, N. K.; Grishko, A. K.
2018-05-01
The article examines the methods of development and multi-criteria choice of the preferred structural variant of the complex technical system at the early stages of its life cycle in the absence of sufficient knowledge of parameters and variables for optimizing this structure. The suggested methods takes into consideration the various fuzzy input data connected with the heterogeneous quality criteria of the designed system and the parameters set by their variation range. The suggested approach is based on the complex use of methods of interval analysis, fuzzy sets theory, and the decision-making theory. As a result, the method for normalizing heterogeneous quality criteria has been developed on the basis of establishing preference relations in the interval form. The method of building preferential relations in the interval form on the basis of the vector of heterogeneous quality criteria suggest the use of membership functions instead of the coefficients considering the criteria value. The former show the degree of proximity of the realization of the designed system to the efficient or Pareto optimal variants. The study analyzes the example of choosing the optimal variant for the complex system using heterogeneous quality criteria.
A Hybrid Interval-Robust Optimization Model for Water Quality Management.
Xu, Jieyu; Li, Yongping; Huang, Guohe
2013-05-01
In water quality management problems, uncertainties may exist in many system components and pollution-related processes ( i.e. , random nature of hydrodynamic conditions, variability in physicochemical processes, dynamic interactions between pollutant loading and receiving water bodies, and indeterminacy of available water and treated wastewater). These complexities lead to difficulties in formulating and solving the resulting nonlinear optimization problems. In this study, a hybrid interval-robust optimization (HIRO) method was developed through coupling stochastic robust optimization and interval linear programming. HIRO can effectively reflect the complex system features under uncertainty, where implications of water quality/quantity restrictions for achieving regional economic development objectives are studied. By delimiting the uncertain decision space through dimensional enlargement of the original chemical oxygen demand (COD) discharge constraints, HIRO enhances the robustness of the optimization processes and resulting solutions. This method was applied to planning of industry development in association with river-water pollution concern in New Binhai District of Tianjin, China. Results demonstrated that the proposed optimization model can effectively communicate uncertainties into the optimization process and generate a spectrum of potential inexact solutions supporting local decision makers in managing benefit-effective water quality management schemes. HIRO is helpful for analysis of policy scenarios related to different levels of economic penalties, while also providing insight into the tradeoff between system benefits and environmental requirements.
Perez, Claudio A; Cohn, Theodore E; Medina, Leonel E; Donoso, José R
2007-08-31
Stochastic resonance (SR) is the counterintuitive phenomenon in which noise enhances detection of sub-threshold stimuli. The SR psychophysical threshold theory establishes that the required amplitude to exceed the sensory threshold barrier can be reached by adding noise to a sub-threshold stimulus. The aim of this study was to test the SR theory by comparing detection results from two different randomly-presented stimulus conditions. In the first condition, optimal noise was present during the whole attention interval; in the second, the optimal noise was restricted to the same time interval as the stimulus. SR threshold theory predicts no difference between the two conditions because noise helps the sub-threshold stimulus to reach threshold in both cases. The psychophysical experimental method used a 300 ms rectangular force pulse as a stimulus within an attention interval of 1.5 s, applied to the index finger of six human subjects in the two distinct conditions. For all subjects we show that in the condition in which the noise was present only when synchronized with the stimulus, detection was better (p<0.05) than in the condition in which the noise was delivered throughout the attention interval. These results provide the first direct evidence that SR threshold theory is incomplete and that a new phenomenon has been identified, which we call Coincidence-Enhanced Stochastic Resonance (CESR). We propose that CESR might occur because subject uncertainty is reduced when noise points at the same temporal window as the stimulus.
Robust motion tracking based on adaptive speckle decorrelation analysis of OCT signal.
Wang, Yuewen; Wang, Yahui; Akansu, Ali; Belfield, Kevin D; Hubbi, Basil; Liu, Xuan
2015-11-01
Speckle decorrelation analysis of optical coherence tomography (OCT) signal has been used in motion tracking. In our previous study, we demonstrated that cross-correlation coefficient (XCC) between Ascans had an explicit functional dependency on the magnitude of lateral displacement (δx). In this study, we evaluated the sensitivity of speckle motion tracking using the derivative of function XCC(δx) on variable δx. We demonstrated the magnitude of the derivative can be maximized. In other words, the sensitivity of OCT speckle tracking can be optimized by using signals with appropriate amount of decorrelation for XCC calculation. Based on this finding, we developed an adaptive speckle decorrelation analysis strategy to achieve motion tracking with optimized sensitivity. Briefly, we used subsequently acquired Ascans and Ascans obtained with larger time intervals to obtain multiple values of XCC and chose the XCC value that maximized motion tracking sensitivity for displacement calculation. Instantaneous motion speed can be calculated by dividing the obtained displacement with time interval between Ascans involved in XCC calculation. We implemented the above-described algorithm in real-time using graphic processing unit (GPU) and demonstrated its effectiveness in reconstructing distortion-free OCT images using data obtained from a manually scanned OCT probe. The adaptive speckle tracking method was validated in manually scanned OCT imaging, on phantom as well as in vivo skin tissue.
Robust motion tracking based on adaptive speckle decorrelation analysis of OCT signal
Wang, Yuewen; Wang, Yahui; Akansu, Ali; Belfield, Kevin D.; Hubbi, Basil; Liu, Xuan
2015-01-01
Speckle decorrelation analysis of optical coherence tomography (OCT) signal has been used in motion tracking. In our previous study, we demonstrated that cross-correlation coefficient (XCC) between Ascans had an explicit functional dependency on the magnitude of lateral displacement (δx). In this study, we evaluated the sensitivity of speckle motion tracking using the derivative of function XCC(δx) on variable δx. We demonstrated the magnitude of the derivative can be maximized. In other words, the sensitivity of OCT speckle tracking can be optimized by using signals with appropriate amount of decorrelation for XCC calculation. Based on this finding, we developed an adaptive speckle decorrelation analysis strategy to achieve motion tracking with optimized sensitivity. Briefly, we used subsequently acquired Ascans and Ascans obtained with larger time intervals to obtain multiple values of XCC and chose the XCC value that maximized motion tracking sensitivity for displacement calculation. Instantaneous motion speed can be calculated by dividing the obtained displacement with time interval between Ascans involved in XCC calculation. We implemented the above-described algorithm in real-time using graphic processing unit (GPU) and demonstrated its effectiveness in reconstructing distortion-free OCT images using data obtained from a manually scanned OCT probe. The adaptive speckle tracking method was validated in manually scanned OCT imaging, on phantom as well as in vivo skin tissue. PMID:26600996
Human memory retrieval as Lévy foraging
NASA Astrophysics Data System (ADS)
Rhodes, Theo; Turvey, Michael T.
2007-11-01
When people attempt to recall as many words as possible from a specific category (e.g., animal names) their retrievals occur sporadically over an extended temporal period. Retrievals decline as recall progresses, but short retrieval bursts can occur even after tens of minutes of performing the task. To date, efforts to gain insight into the nature of retrieval from this fundamental phenomenon of semantic memory have focused primarily upon the exponential growth rate of cumulative recall. Here we focus upon the time intervals between retrievals. We expected and found that, for each participant in our experiment, these intervals conformed to a Lévy distribution suggesting that the Lévy flight dynamics that characterize foraging behavior may also characterize retrieval from semantic memory. The closer the exponent on the inverse square power-law distribution of retrieval intervals approximated the optimal foraging value of 2, the more efficient was the retrieval. At an abstract dynamical level, foraging for particular foods in one's niche and searching for particular words in one's memory must be similar processes if particular foods and particular words are randomly and sparsely located in their respective spaces at sites that are not known a priori. We discuss whether Lévy dynamics imply that memory processes, like foraging, are optimized in an ecological way.
Sasaki, Satoshi; Comber, Alexis J; Suzuki, Hiroshi; Brunsdon, Chris
2010-01-28
Ambulance response time is a crucial factor in patient survival. The number of emergency cases (EMS cases) requiring an ambulance is increasing due to changes in population demographics. This is decreasing ambulance response times to the emergency scene. This paper predicts EMS cases for 5-year intervals from 2020, to 2050 by correlating current EMS cases with demographic factors at the level of the census area and predicted population changes. It then applies a modified grouping genetic algorithm to compare current and future optimal locations and numbers of ambulances. Sets of potential locations were evaluated in terms of the (current and predicted) EMS case distances to those locations. Future EMS demands were predicted to increase by 2030 using the model (R2 = 0.71). The optimal locations of ambulances based on future EMS cases were compared with current locations and with optimal locations modelled on current EMS case data. Optimising the location of ambulance stations locations reduced the average response times by 57 seconds. Current and predicted future EMS demand at modelled locations were calculated and compared. The reallocation of ambulances to optimal locations improved response times and could contribute to higher survival rates from life-threatening medical events. Modelling EMS case 'demand' over census areas allows the data to be correlated to population characteristics and optimal 'supply' locations to be identified. Comparing current and future optimal scenarios allows more nuanced planning decisions to be made. This is a generic methodology that could be used to provide evidence in support of public health planning and decision making.
NASA Astrophysics Data System (ADS)
Hou, Liqiang; Cai, Yuanli; Liu, Jin; Hou, Chongyuan
2016-04-01
A variable fidelity robust optimization method for pulsed laser orbital debris removal (LODR) under uncertainty is proposed. Dempster-shafer theory of evidence (DST), which merges interval-based and probabilistic uncertainty modeling, is used in the robust optimization. The robust optimization method optimizes the performance while at the same time maximizing its belief value. A population based multi-objective optimization (MOO) algorithm based on a steepest descent like strategy with proper orthogonal decomposition (POD) is used to search robust Pareto solutions. Analytical and numerical lifetime predictors are used to evaluate the debris lifetime after the laser pulses. Trust region based fidelity management is designed to reduce the computational cost caused by the expensive model. When the solutions fall into the trust region, the analytical model is used to reduce the computational cost. The proposed robust optimization method is first tested on a set of standard problems and then applied to the removal of Iridium 33 with pulsed lasers. It will be shown that the proposed approach can identify the most robust solutions with minimum lifetime under uncertainty.
Determining optimal parameters in magnetic spacecraft stabilization via attitude feedback
NASA Astrophysics Data System (ADS)
Bruni, Renato; Celani, Fabio
2016-10-01
The attitude control of a spacecraft using magnetorquers can be achieved by a feedback control law which has four design parameters. However, the practical determination of appropriate values for these parameters is a critical open issue. We propose here an innovative systematic approach for finding these values: they should be those that minimize the convergence time to the desired attitude. This a particularly diffcult optimization problem, for several reasons: 1) such time cannot be expressed in analytical form as a function of parameters and initial conditions; 2) design parameters may range over very wide intervals; 3) convergence time depends also on the initial conditions of the spacecraft, which are not known in advance. To overcome these diffculties, we present a solution approach based on derivative-free optimization. These algorithms do not need to write analytically the objective function: they only need to compute it in a number of points. We also propose a fast probing technique to identify which regions of the search space have to be explored densely. Finally, we formulate a min-max model to find robust parameters, namely design parameters that minimize convergence time under the worst initial conditions. Results are very promising.
Hackenberg, T D; Hineline, P N
1992-01-01
Pigeons chose between two schedules of food presentation, a fixed-interval schedule and a progressive-interval schedule that began at 0 s and increased by 20 s with each food delivery provided by that schedule. Choosing one schedule disabled the alternate schedule and stimuli until the requirements of the chosen schedule were satisfied, at which point both schedules were again made available. Fixed-interval duration remained constant within individual sessions but varied across conditions. Under reset conditions, completing the fixed-interval schedule not only produced food but also reset the progressive interval to its minimum. Blocks of sessions under the reset procedure were interspersed with sessions under a no-reset procedure, in which the progressive schedule value increased independent of fixed-interval choices. Median points of switching from the progressive to the fixed schedule varied systematically with fixed-interval value, and were consistently lower during reset than during no-reset conditions. Under the latter, each subject's choices of the progressive-interval schedule persisted beyond the point at which its requirements equaled those of the fixed-interval schedule at all but the highest fixed-interval value. Under the reset procedure, switching occurred at or prior to that equality point. These results qualitatively confirm molar analyses of schedule preference and some versions of optimality theory, but they are more adequately characterized by a model of schedule preference based on the cumulated values of multiple reinforcers, weighted in inverse proportion to the delay between the choice and each successive reinforcer. PMID:1548449
Naito, Takashi; Iribe, Yuka; Ogaki, Tetsuro
2017-01-05
The timing in which ice before exercise should be ingested plays an important role in optimizing its success. However, the effects of differences in the timing of ice ingestion before exercise on cycling capacity, and thermoregulation has not been studied. The aim of the present study was to assess the effect of length of time after ice ingestion on endurance exercise capacity in the heat. Seven males ingested 1.25 g kg body mass -1 of ice (0.5 °C) or cold water (4 °C) every 5 min, six times. Under three separate conditions after ice or water ingestion ([1] taking 20 min rest after ice ingestion, [2] taking 5 min rest after ice ingestion, and [3] taking 5 min rest after cold water ingestion), seven physically active male cyclists exercised at 65% of their maximal oxygen uptake to exhaustion in the heat (35 °C, 30% relative humidity). Participants cycled significantly longer following both ice ingestion with a long rest interval (46.0 ± 7.7 min) and that with a short rest interval (38.7 ± 5.7 min) than cold water ingestion (32.3 ± 3.2 min; both p < 0.05), and the time to exhaustion was 16% (p < 0.05) longer for ice ingestion with a long rest interval than that with a short rest interval. Ice ingestion with a long rest interval (-0.55 ± 0.07 °C; both p < 0.05) allowed for a greater drop in the core temperature than both ice ingestion with a short rest interval (-0.36 ± 0.16 °C) and cold water ingestion (-0.11 ± 0.14 °C). Heat storage under condition of ice ingestion with a long rest interval during the pre-exercise period was significantly lower than that observed with a short rest interval (-4.98 ± 2.50 W m -2 ; p < 0.05) and cold water ingestion (2.86 ± 4.44 W m -2 ). Therefore, internal pre-cooling by ice ingestion with a long rest interval had the greatest benefit on exercise capacity in the heat, which is suggested to be driven by a reduced rectal temperature and heat storage before the start of exercise.
Ren, Jingzheng; Dong, Liang; Sun, Lu; Goodsite, Michael Evan; Tan, Shiyu; Dong, Lichun
2015-01-01
The aim of this work was to develop a model for optimizing the life cycle cost of biofuel supply chain under uncertainties. Multiple agriculture zones, multiple transportation modes for the transport of grain and biofuel, multiple biofuel plants, and multiple market centers were considered in this model, and the price of the resources, the yield of grain and the market demands were regarded as interval numbers instead of constants. An interval linear programming was developed, and a method for solving interval linear programming was presented. An illustrative case was studied by the proposed model, and the results showed that the proposed model is feasible for designing biofuel supply chain under uncertainties. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Chao, W. C.
1982-01-01
With appropriate modifications, a recently proposed explicit-multiple-time-step scheme (EMTSS) is incorporated into the UCLA model. In this scheme, the linearized terms in the governing equations that generate the gravity waves are split into different vertical modes. Each mode is integrated with an optimal time step, and at periodic intervals these modes are recombined. The other terms are integrated with a time step dictated by the CFL condition for low-frequency waves. This large time step requires a special modification of the advective terms in the polar region to maintain stability. Test runs for 72 h show that EMTSS is a stable, efficient and accurate scheme.
Sustainable Confined Disposal Facilities for Long-term Management of Dredged Material
2010-07-01
need to resort to a full-blown risk assessment . Ideally, a set of look-up criteria could be developed for beneficial use applications where no direct...EPA-905-R-99-006. Assessment and Remediation of Contaminated Sediments Program. Chicago, IL: Great Lakes National Program Office. Olin-Estes, T. J...maintenance dredging Hydrodynamic modeling to assess benefits, adjust cuts, and optimize dredging time intervals Dredge more efficiently Silent
Non-Markovianity of Gaussian Channels.
Torre, G; Roga, W; Illuminati, F
2015-08-14
We introduce a necessary and sufficient criterion for the non-Markovianity of Gaussian quantum dynamical maps based on the violation of divisibility. The criterion is derived by defining a general vectorial representation of the covariance matrix which is then exploited to determine the condition for the complete positivity of partial maps associated with arbitrary time intervals. Such construction does not rely on the Choi-Jamiolkowski representation and does not require optimization over states.
Hailu, Desta; Gulte, Teklemariam
2016-01-01
Background. One of the key strategies to reduce fertility and promote the health status of mothers and their children is adhering to optimal birth spacing. However, women still have shorter birth intervals and studies addressing their determinants were scarce. The objective of this study, therefore, was to assess determinants of birth interval among women who had at least two consecutive live births. Methods. Case control study was conducted from February to April 2014. Cases were women with short birth intervals (<3 years), whereas controls were women having history of optimal birth intervals (3 to 5 years). Bivariate and multivariable analyses were performed. Result. Having no formal education (AOR = 2.36, 95% CL: [1.23–4.52]), duration of breast feeding for less than 24 months (AOR: 66.03, 95% CI; [34.60–126]), preceding child being female (AOR: 5.73, 95% CI; [3.18–10.310]), modern contraceptive use (AOR: 2.79, 95% CI: [1.58–4.940]), and poor wealth index (AOR: 4.89, 95% CI; [1.81–13.25]) of respondents were independent predictors of short birth interval. Conclusion. In equalities in education, duration of breast feeding, sex of the preceding child, contraceptive method use, and wealth index were markers of unequal distribution of inter birth intervals. Thus, to optimize birth spacing, strategies of providing information, education and communication targeting predictor variables should be improved. PMID:27239553
High-intensity interval exercise and cerebrovascular health: curiosity, cause, and consequence
Lucas, Samuel J E; Cotter, James D; Brassard, Patrice; Bailey, Damian M
2015-01-01
Exercise is a uniquely effective and pluripotent medicine against several noncommunicable diseases of westernised lifestyles, including protection against neurodegenerative disorders. High-intensity interval exercise training (HIT) is emerging as an effective alternative to current health-related exercise guidelines. Compared with traditional moderate-intensity continuous exercise training, HIT confers equivalent if not indeed superior metabolic, cardiac, and systemic vascular adaptation. Consequently, HIT is being promoted as a more time-efficient and practical approach to optimize health thereby reducing the burden of disease associated with physical inactivity. However, no studies to date have examined the impact of HIT on the cerebrovasculature and corresponding implications for cognitive function. This review critiques the implications of HIT for cerebrovascular function, with a focus on the mechanisms and translational impact for patient health and well-being. It also introduces similarly novel interventions currently under investigation as alternative means of accelerating exercise-induced cerebrovascular adaptation. We highlight a need for studies of the mechanisms and thereby also the optimal dose-response strategies to guide exercise prescription, and for studies to explore alternative approaches to optimize exercise outcomes in brain-related health and disease prevention. From a clinical perspective, interventions that selectively target the aging brain have the potential to prevent stroke and associated neurovascular diseases. PMID:25833341
High-intensity interval exercise and cerebrovascular health: curiosity, cause, and consequence.
Lucas, Samuel J E; Cotter, James D; Brassard, Patrice; Bailey, Damian M
2015-06-01
Exercise is a uniquely effective and pluripotent medicine against several noncommunicable diseases of westernised lifestyles, including protection against neurodegenerative disorders. High-intensity interval exercise training (HIT) is emerging as an effective alternative to current health-related exercise guidelines. Compared with traditional moderate-intensity continuous exercise training, HIT confers equivalent if not indeed superior metabolic, cardiac, and systemic vascular adaptation. Consequently, HIT is being promoted as a more time-efficient and practical approach to optimize health thereby reducing the burden of disease associated with physical inactivity. However, no studies to date have examined the impact of HIT on the cerebrovasculature and corresponding implications for cognitive function. This review critiques the implications of HIT for cerebrovascular function, with a focus on the mechanisms and translational impact for patient health and well-being. It also introduces similarly novel interventions currently under investigation as alternative means of accelerating exercise-induced cerebrovascular adaptation. We highlight a need for studies of the mechanisms and thereby also the optimal dose-response strategies to guide exercise prescription, and for studies to explore alternative approaches to optimize exercise outcomes in brain-related health and disease prevention. From a clinical perspective, interventions that selectively target the aging brain have the potential to prevent stroke and associated neurovascular diseases.
[Neuroimaging follow-up of cerebral aneurysms treated with endovascular techniques].
Delgado, F; Saiz, A; Hilario, A; Murias, E; San Román Manzanera, L; Lagares Gomez-Abascal, A; Gabarrós, A; González García, A
2014-01-01
There are no specific recommendations in clinical guidelines about the best time, imaging tests, or intervals for following up patients with intracranial aneurysms treated with endovascular techniques. We reviewed the literature, using the following keywords to search in the main medical databases: cerebral aneurysm, coils, endovascular procedure, and follow-up. Within the Cerebrovascular Disease Group of the Spanish Society of Neuroradiology, we aimed to propose recommendations and an orientative protocol based on the scientific evidence for using neuroimaging to monitor intracranial aneurysms that have been treated with endovascular techniques. We aimed to specify the most appropriate neuroimaging techniques, the interval, the time of follow-up, and the best approach to defining the imaging findings, with the ultimate goal of improving clinical outcomes while optimizing and rationalizing the use of available resources. Copyright © 2013 SERAM. Published by Elsevier Espana. All rights reserved.
Time perception, pacing and exercise intensity: maximal exercise distorts the perception of time.
Edwards, A M; McCormick, A
2017-10-15
Currently there are no data examining the impact of exercise on the perception of time, which is surprising as optimal competitive performance is dependent on accurate pacing using knowledge of time elapsed. With institutional ethics approval, 12 recreationally active adult participants (f=7, m=5) undertook both 30s Wingate cycles and 20min (1200s) rowing ergometer bouts as short and long duration self-paced exercise trials, in each of three conditions on separate occasions: 1) light exertion: RPE 11, 2) heavy exertion: RPE 15, 3) maximal exertion: RPE 20. Participants were unaware of exercise duration and were required to verbally indicate when they perceived (subjective time) 1) 25%, 2) 50%, 3) 75% and 4) 100% of each bout's measured (chronological) time had elapsed. In response to the Wingate task, there was no difference between durations of subjective time at the 25%, nor at the 50% interval. However, at the 75% and 100% intervals, the estimate for the RPE 20 condition was shortest (P<0.01). In response to rowing, there were no differences at the 25% interval, but there was some evidence that the RPE 20 condition was perceived shorter at 50%. At 75% and 100%, the RPE 20 condition was perceived to be shorter than both RPE 15 (P=0.04) and RPE 11 (P=0.008) conditions. This study is the first to empirically demonstrate that exercise intensity distorts time perception, particularly during maximal exercise. Consequently external feedback of chronological time may be an important factor for athletes undertaking maximal effort tasks or competitions. Copyright © 2017 Elsevier Inc. All rights reserved.
A good time to leave?: the sunk time effect in pigeons.
Magalhães, Paula; White, K Geoffrey
2014-06-01
Persistence in a losing course of action due to prior investments of time, known as the sunk time effect, has seldom been studied in nonhuman animals. On every trial in the present study, pigeons were required to choose between two response keys. Responses on one key produced food after a short fixed interval (FI) of time on some trials, or on other trials, no food (Extinction) after a longer time. FI and Extinction trials were not differently signaled, were equiprobable, and alternated randomly. Responses on a second Escape key allowed the pigeon to terminate the current trial and start a new one. The optimal behavior was for pigeons to peck the escape key once the duration equivalent to the short FI had elapsed without reward. Durations of the short FI and the longer Extinction schedules were varied over conditions. In some conditions, the pigeons suboptimally responded through the Extinction interval, thus committing the sunk time effect. The absolute duration of the short FI had no effect on the choice between persisting and escaping. Instead, the ratio of FI and Extinction durations determined the likelihood of persistence during extinction. Copyright © 2014 Elsevier B.V. All rights reserved.
Zhang, Xiaoling; Huang, Kai; Zou, Rui; Liu, Yong; Yu, Yajuan
2013-01-01
The conflict of water environment protection and economic development has brought severe water pollution and restricted the sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of Lake Fuxian watershed, China. Scenario analysis was introduced into model solution process to ensure the practicality and operability of optimization schemes. Decision makers' preferences for risk levels can be expressed through inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. Through balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of "low risk and high return efficiency" in the trade-off curve. The representative schemes at the turning points of two scenarios were interpreted and compared to identify a preferable planning alternative, which has the relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, which was REILP, for decision makers to develop an effectively environmental economic optimization scheme in integrated watershed management.
Zou, Rui; Liu, Yong; Yu, Yajuan
2013-01-01
The conflict of water environment protection and economic development has brought severe water pollution and restricted the sustainable development in the watershed. A risk explicit interval linear programming (REILP) method was used to solve integrated watershed environmental-economic optimization problem. Interval linear programming (ILP) and REILP models for uncertainty-based environmental economic optimization at the watershed scale were developed for the management of Lake Fuxian watershed, China. Scenario analysis was introduced into model solution process to ensure the practicality and operability of optimization schemes. Decision makers' preferences for risk levels can be expressed through inputting different discrete aspiration level values into the REILP model in three periods under two scenarios. Through balancing the optimal system returns and corresponding system risks, decision makers can develop an efficient industrial restructuring scheme based directly on the window of “low risk and high return efficiency” in the trade-off curve. The representative schemes at the turning points of two scenarios were interpreted and compared to identify a preferable planning alternative, which has the relatively low risks and nearly maximum benefits. This study provides new insights and proposes a tool, which was REILP, for decision makers to develop an effectively environmental economic optimization scheme in integrated watershed management. PMID:24191144
Modeling of the static recrystallization for 7055 aluminum alloy by cellular automaton
NASA Astrophysics Data System (ADS)
Zhang, Tao; Lu, Shi-hong; Zhang, Jia-bin; Li, Zheng-fang; Chen, Peng; Gong, Hai; Wu, Yun-xin
2017-09-01
In order to simulate the flow behavior and microstructure evolution during the pass interval period of the multi-pass deformation process, models of static recovery (SR) and static recrystallization (SRX) by the cellular automaton (CA) method for the 7055 aluminum alloy were established. Double-pass hot compression tests were conducted to acquire flow stress and microstructure variation during the pass interval period. With the basis of the material constants obtained from the compression tests, models of the SR, incubation period, nucleation rate and grain growth were fitted by least square method. A model of the grain topology and a statistical computation of the CA results were also introduced. The effects of the pass interval time, temperature, strain, strain rate and initial grain size on the microstructure variation for the SRX of the 7055 aluminum alloy were studied. The results show that a long pass interval time, large strain, high temperature and large strain rate are beneficial for finer grains during the pass interval period. The stable size of the static recrystallized grain is not concerned with the initial grain size, but mainly depends on the strain rate and temperature. The SRX plays a vital role in grain refinement, while the SR has no effect on the variation of microstructure morphology. Using flow stress and microstructure comparisons of the simulated and experimental CA results, the established CA models can accurately predict the flow stress and microstructure evolution during the pass interval period, and provide guidance for the selection of optimized parameters for the multi-pass deformation process.
Very Similar Spacing-Effect Patterns in Very Different Learning/Practice Domains
Kornmeier, Jürgen; Spitzer, Manfred; Sosic-Vasic, Zrinka
2014-01-01
Temporally distributed (“spaced”) learning can be twice as efficient as massed learning. This “spacing effect” occurs with a broad spectrum of learning materials, with humans of different ages, with non-human vertebrates and also invertebrates. This indicates, that very basic learning mechanisms are at work (“generality”). Although most studies so far focused on very narrow spacing interval ranges, there is some evidence for a non-monotonic behavior of this “spacing effect” (“nonlinearity”) with optimal spacing intervals at different time scales. In the current study we focused both the nonlinearity aspect by using a broad range of spacing intervals and the generality aspect by using very different learning/practice domains: Participants learned German-Japanese word pairs and performed visual acuity tests. For each of six groups we used a different spacing interval between learning/practice units from 7 min to 24 h in logarithmic steps. Memory retention was studied in three consecutive final tests, one, seven and 28 days after the final learning unit. For both the vocabulary learning and visual acuity performance we found a highly significant effect of the factor spacing interval on the final test performance. In the 12 h-spacing-group about 85% of the learned words stayed in memory and nearly all of the visual acuity gain was preserved. In the 24 h-spacing-group, in contrast, only about 33% of the learned words were retained and the visual acuity gain dropped to zero. The very similar patterns of results from the two very different learning/practice domains point to similar underlying mechanisms. Further, our results indicate spacing in the range of 12 hours as optimal. A second peak may be around a spacing interval of 20 min but here the data are less clear. We discuss relations between our results and basic learning at the neuronal level. PMID:24609081
Fractal analyses reveal independent complexity and predictability of gait
Dierick, Frédéric; Nivard, Anne-Laure
2017-01-01
Locomotion is a natural task that has been assessed for decades and used as a proxy to highlight impairments of various origins. So far, most studies adopted classical linear analyses of spatio-temporal gait parameters. Here, we use more advanced, yet not less practical, non-linear techniques to analyse gait time series of healthy subjects. We aimed at finding more sensitive indexes related to spatio-temporal gait parameters than those previously used, with the hope to better identify abnormal locomotion. We analysed large-scale stride interval time series and mean step width in 34 participants while altering walking direction (forward vs. backward walking) and with or without galvanic vestibular stimulation. The Hurst exponent α and the Minkowski fractal dimension D were computed and interpreted as indexes expressing predictability and complexity of stride interval time series, respectively. These holistic indexes can easily be interpreted in the framework of optimal movement complexity. We show that α and D accurately capture stride interval changes in function of the experimental condition. Walking forward exhibited maximal complexity (D) and hence, adaptability. In contrast, walking backward and/or stimulation of the vestibular system decreased D. Furthermore, walking backward increased predictability (α) through a more stereotyped pattern of the stride interval and galvanic vestibular stimulation reduced predictability. The present study demonstrates the complementary power of the Hurst exponent and the fractal dimension to improve walking classification. Our developments may have immediate applications in rehabilitation, diagnosis, and classification procedures. PMID:29182659
NASA Astrophysics Data System (ADS)
Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn
2015-03-01
Integrated continuous simulation-optimization models can be effective predictors of a process-based responses for cost-benefit optimization of best management practices (BMPs) selection and placement. However, practical application of simulation-optimization model is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CP)—each with multiple Total Maximum Daily Loads (TMDL) targets—were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all the CP were met with the lowest possible BMP implementation cost. Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that the NIMS took 11 iterations (about 11 min) to complete with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near optimal solutions. The best solution obtained among all the GA executions compared had a minimized cost of 67.7 million—marginally higher, but approximately equal to that of the NIMS solution. The results highlight the utility for decision making in large-scale watershed simulation-optimization formulations.
Population-wide folic acid fortification and preterm birth: testing the folate depletion hypothesis.
Naimi, Ashley I; Auger, Nathalie
2015-04-01
We assess whether population-wide folic acid fortification policies were followed by a reduction of preterm and early-term birth rates in Québec among women with short and optimal interpregnancy intervals. We extracted birth certificate data for 1.3 million births between 1981 and 2010 to compute age-adjusted preterm and early-term birth rates stratified by short and optimal interpregnancy intervals. We used Joinpoint regression to detect changes in the preterm and early term birth rates and assess whether these changes coincide with the implementation of population-wide folic acid fortification. A change in the preterm birth rate occurred in 2000 among women with short (95% confidence interval [CI] = 1994, 2005) and optimal (95% CI = 1995, 2008) interpregnancy intervals. Changes in early term birth rates did not coincide with the implementation of folic acid fortification. Our results do not indicate a link between folic acid fortification and early term birth but suggest an improvement in preterm birth rates after implementation of a nationwide folic acid fortification program.
NASA Astrophysics Data System (ADS)
Yilmaz, Ergin; Baysal, Veli; Ozer, Mahmut; Perc, Matjaž
2016-02-01
We study the effects of an autapse, which is mathematically described as a self-feedback loop, on the propagation of weak, localized pacemaker activity across a Newman-Watts small-world network consisting of stochastic Hodgkin-Huxley neurons. We consider that only the pacemaker neuron, which is stimulated by a subthreshold periodic signal, has an electrical autapse that is characterized by a coupling strength and a delay time. We focus on the impact of the coupling strength, the network structure, the properties of the weak periodic stimulus, and the properties of the autapse on the transmission of localized pacemaker activity. Obtained results indicate the existence of optimal channel noise intensity for the propagation of the localized rhythm. Under optimal conditions, the autapse can significantly improve the propagation of pacemaker activity, but only for a specific range of the autaptic coupling strength. Moreover, the autaptic delay time has to be equal to the intrinsic oscillation period of the Hodgkin-Huxley neuron or its integer multiples. We analyze the inter-spike interval histogram and show that the autapse enhances or suppresses the propagation of the localized rhythm by increasing or decreasing the phase locking between the spiking of the pacemaker neuron and the weak periodic signal. In particular, when the autaptic delay time is equal to the intrinsic period of oscillations an optimal phase locking takes place, resulting in a dominant time scale of the spiking activity. We also investigate the effects of the network structure and the coupling strength on the propagation of pacemaker activity. We find that there exist an optimal coupling strength and an optimal network structure that together warrant an optimal propagation of the localized rhythm.
Reliability-based management of buried pipelines considering external corrosion defects
NASA Astrophysics Data System (ADS)
Miran, Seyedeh Azadeh
Corrosion is one of the main deteriorating mechanisms that degrade the energy pipeline integrity, due to transferring corrosive fluid or gas and interacting with corrosive environment. Corrosion defects are usually detected by periodical inspections using in-line inspection (ILI) methods. In order to ensure pipeline safety, this study develops a cost-effective maintenance strategy that consists of three aspects: corrosion growth model development using ILI data, time-dependent performance evaluation, and optimal inspection interval determination. In particular, the proposed study is applied to a cathodic protected buried steel pipeline located in Mexico. First, time-dependent power-law formulation is adopted to probabilistically characterize growth of the maximum depth and length of the external corrosion defects. Dependency between defect depth and length are considered in the model development and generation of the corrosion defects over time is characterized by the homogenous Poisson process. The growth models unknown parameters are evaluated based on the ILI data through the Bayesian updating method with Markov Chain Monte Carlo (MCMC) simulation technique. The proposed corrosion growth models can be used when either matched or non-matched defects are available, and have ability to consider newly generated defects since last inspection. Results of this part of study show that both depth and length growth models can predict damage quantities reasonably well and a strong correlation between defect depth and length is found. Next, time-dependent system failure probabilities are evaluated using developed corrosion growth models considering prevailing uncertainties where three failure modes, namely small leak, large leak and rupture are considered. Performance of the pipeline is evaluated through failure probability per km (or called a sub-system) where each subsystem is considered as a series system of detected and newly generated defects within that sub-system. Sensitivity analysis is also performed to determine to which incorporated parameter(s) in the growth models reliability of the studied pipeline is most sensitive. The reliability analysis results suggest that newly generated defects should be considered in calculating failure probability, especially for prediction of long-term performance of the pipeline and also, impact of the statistical uncertainty in the model parameters is significant that should be considered in the reliability analysis. Finally, with the evaluated time-dependent failure probabilities, a life cycle-cost analysis is conducted to determine optimal inspection interval of studied pipeline. The expected total life-cycle costs consists construction cost and expected costs of inspections, repair, and failure. The repair is conducted when failure probability from any described failure mode exceeds pre-defined probability threshold after each inspection. Moreover, this study also investigates impact of repair threshold values and unit costs of inspection and failure on the expected total life-cycle cost and optimal inspection interval through a parametric study. The analysis suggests that a smaller inspection interval leads to higher inspection costs, but can lower failure cost and also repair cost is less significant compared to inspection and failure costs.
NASA Astrophysics Data System (ADS)
Tian, Wenli; Cao, Chengxuan
2017-03-01
A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.
RadVel: General toolkit for modeling Radial Velocities
NASA Astrophysics Data System (ADS)
Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan
2018-01-01
RadVel models Keplerian orbits in radial velocity (RV) time series. The code is written in Python with a fast Kepler's equation solver written in C. It provides a framework for fitting RVs using maximum a posteriori optimization and computing robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel can perform Bayesian model comparison and produces publication quality plots and LaTeX tables.
Discrete Methods and their Applications
1993-02-03
problem of finding all near-optimal solutions to a linear program. In paper [18], we give a brief and elementary proof of a result of Hoffman [1952) about...relies only on linear programming duality; second, we obtain geometric and algebraic representations of the bounds that are determined explicitly in...same. We have studied the problem of finding the minimum n such that a given unit interval graph is an n--graph. A linear time algorithm to compute
NASA Astrophysics Data System (ADS)
Kumar, Girish; Jain, Vipul; Gandhi, O. P.
2018-03-01
Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. Appropriate model or mechanism is, thus, needed to quantify system availability vis-a-vis a given maintenance strategy, which will assist in decision-making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady state availability analysis of mechanical systems that follow condition-based maintenance (CBM) and evaluation of optimal condition monitoring interval. The developed SMP model is solved using two-stage analytical approach for steady-state availability analysis of the system. Also, CBM interval is decided for maximizing system availability using Genetic Algorithm approach. The main contribution of the paper is in the form of a predictive tool for system availability that will help in deciding the optimum CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
Jihong, Qu
2014-01-01
Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule the power generation efficiency. But future data such as wind power output and power load would not be accurately predicted and the nonlinear nature involved in the complex multiobjective scheduling model; therefore, to achieve accurate solution to such complex problem is a very difficult task. This paper presents an interval programming model with 2-step optimization algorithm to solve multiobjective dispatching. Initially, we represented the future data into interval numbers and simplified the object function to a linear programming problem to search the feasible and preliminary solutions to construct the Pareto set. Then the simulated annealing method was used to search the optimal solution of initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663
Ren, Kun; Jihong, Qu
2014-01-01
Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule the power generation efficiency. But future data such as wind power output and power load would not be accurately predicted and the nonlinear nature involved in the complex multiobjective scheduling model; therefore, to achieve accurate solution to such complex problem is a very difficult task. This paper presents an interval programming model with 2-step optimization algorithm to solve multiobjective dispatching. Initially, we represented the future data into interval numbers and simplified the object function to a linear programming problem to search the feasible and preliminary solutions to construct the Pareto set. Then the simulated annealing method was used to search the optimal solution of initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision.
Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong
2013-09-01
Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10(9) $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.
NASA Astrophysics Data System (ADS)
Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong
2013-09-01
Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 109 was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.
NASA Astrophysics Data System (ADS)
Paramonov, G. P.; Mysin, A. V.; Babkin, R. S.
2017-10-01
The paper introduces construction of multicharge composition with separation of parts by the profile inert interval. On the basis of the previous researches, the pulse-forming process at explosion of the borehole multicharge taking into account the offered design is considered. The physical model for definition of reflected wavelet taking into account an increment of radius of cross section of a charging cavity and the expiration of detonation products is offered. A technique is developed for numerical modeling of gas-dynamic processes in a borehole with a change in the axial channel of a profile inert interval caused by a high-temperature flow of gaseous products of an explosion. The authors obtained the dependence of the change in mean pressure on the borehole wall on time for each of the parts of the multicharge. To blast a series of charges of the proposed design, taking into account optimization of the stress fields of neighboring charges, the delay interval is determined for a short-delayed explosion.
An Overview of Heart Rate Variability Metrics and Norms
Shaffer, Fred; Ginsberg, J. P.
2017-01-01
Healthy biological systems exhibit complex patterns of variability that can be described by mathematical chaos. Heart rate variability (HRV) consists of changes in the time intervals between consecutive heartbeats called interbeat intervals (IBIs). A healthy heart is not a metronome. The oscillations of a healthy heart are complex and constantly changing, which allow the cardiovascular system to rapidly adjust to sudden physical and psychological challenges to homeostasis. This article briefly reviews current perspectives on the mechanisms that generate 24 h, short-term (~5 min), and ultra-short-term (<5 min) HRV, the importance of HRV, and its implications for health and performance. The authors provide an overview of widely-used HRV time-domain, frequency-domain, and non-linear metrics. Time-domain indices quantify the amount of HRV observed during monitoring periods that may range from ~2 min to 24 h. Frequency-domain values calculate the absolute or relative amount of signal energy within component bands. Non-linear measurements quantify the unpredictability and complexity of a series of IBIs. The authors survey published normative values for clinical, healthy, and optimal performance populations. They stress the importance of measurement context, including recording period length, subject age, and sex, on baseline HRV values. They caution that 24 h, short-term, and ultra-short-term normative values are not interchangeable. They encourage professionals to supplement published norms with findings from their own specialized populations. Finally, the authors provide an overview of HRV assessment strategies for clinical and optimal performance interventions. PMID:29034226
Scale Invariance in Lateral Head Scans During Spatial Exploration.
Yadav, Chetan K; Doreswamy, Yoganarasimha
2017-04-14
Universality connects various natural phenomena through physical principles governing their dynamics, and has provided broadly accepted answers to many complex questions, including information processing in neuronal systems. However, its significance in behavioral systems is still elusive. Lateral head scanning (LHS) behavior in rodents might contribute to spatial navigation by actively managing (optimizing) the available sensory information. Our findings of scale invariant distributions in LHS lifetimes, interevent intervals and event magnitudes, provide evidence for the first time that the optimization takes place at a critical point in LHS dynamics. We propose that the LHS behavior is responsible for preprocessing of the spatial information content, critical for subsequent foolproof encoding by the respective downstream neural networks.
Scale Invariance in Lateral Head Scans During Spatial Exploration
NASA Astrophysics Data System (ADS)
Yadav, Chetan K.; Doreswamy, Yoganarasimha
2017-04-01
Universality connects various natural phenomena through physical principles governing their dynamics, and has provided broadly accepted answers to many complex questions, including information processing in neuronal systems. However, its significance in behavioral systems is still elusive. Lateral head scanning (LHS) behavior in rodents might contribute to spatial navigation by actively managing (optimizing) the available sensory information. Our findings of scale invariant distributions in LHS lifetimes, interevent intervals and event magnitudes, provide evidence for the first time that the optimization takes place at a critical point in LHS dynamics. We propose that the LHS behavior is responsible for preprocessing of the spatial information content, critical for subsequent foolproof encoding by the respective downstream neural networks.
Optimal experimental designs for the estimation of thermal properties of composite materials
NASA Technical Reports Server (NTRS)
Scott, Elaine P.; Moncman, Deborah A.
1994-01-01
Reliable estimation of thermal properties is extremely important in the utilization of new advanced materials, such as composite materials. The accuracy of these estimates can be increased if the experiments are designed carefully. The objectives of this study are to design optimal experiments to be used in the prediction of these thermal properties and to then utilize these designs in the development of an estimation procedure to determine the effective thermal properties (thermal conductivity and volumetric heat capacity). The experiments were optimized by choosing experimental parameters that maximize the temperature derivatives with respect to all of the unknown thermal properties. This procedure has the effect of minimizing the confidence intervals of the resulting thermal property estimates. Both one-dimensional and two-dimensional experimental designs were optimized. A heat flux boundary condition is required in both analyses for the simultaneous estimation of the thermal properties. For the one-dimensional experiment, the parameters optimized were the heating time of the applied heat flux, the temperature sensor location, and the experimental time. In addition to these parameters, the optimal location of the heat flux was also determined for the two-dimensional experiments. Utilizing the optimal one-dimensional experiment, the effective thermal conductivity perpendicular to the fibers and the effective volumetric heat capacity were then estimated for an IM7-Bismaleimide composite material. The estimation procedure used is based on the minimization of a least squares function which incorporates both calculated and measured temperatures and allows for the parameters to be estimated simultaneously.
Gezer, Cenk; Ekin, Atalay; Golbasi, Ceren; Kocahakimoglu, Ceysu; Bozkurt, Umit; Dogan, Askin; Solmaz, Ulaş; Golbasi, Hakan; Taner, Cuneyt Eftal
2017-04-01
To determine whether urea and creatinine measurements in vaginal fluid could be used to diagnose preterm premature rupture of membranes (PPROM) and predict delivery interval after PPROM. A prospective study conducted with 100 pregnant women with PPROM and 100 healthy pregnant women between 24 + 0 and 36 + 6 gestational weeks. All patients underwent sampling for urea and creatinine concentrations in vaginal fluid at the time of admission. Receiver operator curve analysis was used to determine the cutoff values for the presence of PPROM and delivery within 48 h after PPROM. In multivariate logistic regression analysis, vaginal fluid urea and creatinine levels were found to be significant predictors of PPROM (p < 0.001 and p < 0.001, respectively) and delivery within 48 h after PPROM (p = 0.012 and p = 0.017, respectively). The optimal cutoff values for the diagnosis of PPROM were >6.7 mg/dl for urea and >0.12 mg/dl for creatinine. The optimal cutoff values for the detection of delivery within 48 h were >19.4 mg/dl for urea and >0.23 mg/dl for creatinine. Measurement of urea and creatinine levels in vaginal fluid is a rapid and reliable test for diagnosing and also for predicting delivery interval after PPROM.
Elsayed, Ibrahim; Sayed, Sinar
2017-01-01
Ocular drug delivery systems suffer from rapid drainage, intractable corneal permeation and short dosing intervals. Transcorneal drug permeation could increase the drug availability and efficiency in the aqueous humor. The aim of this study was to develop and optimize nanostructured formulations to provide accurate doses, long contact time and enhanced drug permeation. Nanovesicles were designed based on Box–Behnken model and prepared using the thin film hydration technique. The formed nanodispersions were evaluated by measuring the particle size, polydispersity index, zeta potential, entrapment efficiency and gelation temperature. The obtained desirability values were utilized to develop an optimized nanostructured in situ gel and insert. The optimized formulations were imaged by transmission and scanning electron microscopes. In addition, rheological characters, in vitro drug diffusion, ex vivo and in vivo permeation and safety of the optimized formulation were investigated. The optimized insert formulation was found to have a relatively lower viscosity, higher diffusion, ex vivo and in vivo permeation, when compared to the optimized in situ gel. So, the lyophilized nanostructured insert could be considered as a promising carrier and transporter for drugs across the cornea with high biocompatibility and effectiveness. PMID:29133980
Robust allocation of a defensive budget considering an attacker's private information.
Nikoofal, Mohammad E; Zhuang, Jun
2012-05-01
Attackers' private information is one of the main issues in defensive resource allocation games in homeland security. The outcome of a defense resource allocation decision critically depends on the accuracy of estimations about the attacker's attributes. However, terrorists' goals may be unknown to the defender, necessitating robust decisions by the defender. This article develops a robust-optimization game-theoretical model for identifying optimal defense resource allocation strategies for a rational defender facing a strategic attacker while the attacker's valuation of targets, being the most critical attribute of the attacker, is unknown but belongs to bounded distribution-free intervals. To our best knowledge, no previous research has applied robust optimization in homeland security resource allocation when uncertainty is defined in bounded distribution-free intervals. The key features of our model include (1) modeling uncertainty in attackers' attributes, where uncertainty is characterized by bounded intervals; (2) finding the robust-optimization equilibrium for the defender using concepts dealing with budget of uncertainty and price of robustness; and (3) applying the proposed model to real data. © 2011 Society for Risk Analysis.
Photoinduced diffusion molecular transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rozenbaum, Viktor M., E-mail: vik-roz@mail.ru, E-mail: litrakh@gmail.com; Dekhtyar, Marina L.; Lin, Sheng Hsien
2016-08-14
We consider a Brownian photomotor, namely, the directed motion of a nanoparticle in an asymmetric periodic potential under the action of periodic rectangular resonant laser pulses which cause charge redistribution in the particle. Based on the kinetics for the photoinduced electron redistribution between two or three energy levels of the particle, the time dependence of its potential energy is derived and the average directed velocity is calculated in the high-temperature approximation (when the spatial amplitude of potential energy fluctuations is small relative to the thermal energy). The thus developed theory of photoinduced molecular transport appears applicable not only to conventionalmore » dichotomous Brownian motors (with only two possible potential profiles) but also to a much wider variety of molecular nanomachines. The distinction between the realistic time dependence of the potential energy and that for a dichotomous process (a step function) is represented in terms of relaxation times (they can differ on the time intervals of the dichotomous process). As shown, a Brownian photomotor has the maximum average directed velocity at (i) large laser pulse intensities (resulting in short relaxation times on laser-on intervals) and (ii) excited state lifetimes long enough to permit efficient photoexcitation but still much shorter than laser-off intervals. A Brownian photomotor with optimized parameters is exemplified by a cylindrically shaped semiconductor nanocluster which moves directly along a polar substrate due to periodically photoinduced dipole moment (caused by the repetitive excited electron transitions to a non-resonant level of the nanocylinder surface impurity).« less
Two-step chlorination: A new approach to disinfection of a primary sewage effluent.
Li, Yu; Yang, Mengting; Zhang, Xiangru; Jiang, Jingyi; Liu, Jiaqi; Yau, Cie Fu; Graham, Nigel J D; Li, Xiaoyan
2017-01-01
Sewage disinfection aims at inactivating pathogenic microorganisms and preventing the transmission of waterborne diseases. Chlorination is extensively applied for disinfecting sewage effluents. The objective of achieving a disinfection goal and reducing disinfectant consumption and operational costs remains a challenge in sewage treatment. In this study, we have demonstrated that, for the same chlorine dosage, a two-step addition of chlorine (two-step chlorination) was significantly more efficient in disinfecting a primary sewage effluent than a one-step addition of chlorine (one-step chlorination), and shown how the two-step chlorination was optimized with respect to time interval and dosage ratio. Two-step chlorination of the sewage effluent attained its highest disinfection efficiency at a time interval of 19 s and a dosage ratio of 5:1. Compared to one-step chlorination, two-step chlorination enhanced the disinfection efficiency by up to 0.81- or even 1.02-log for two different chlorine doses and contact times. An empirical relationship involving disinfection efficiency, time interval and dosage ratio was obtained by best fitting. Mechanisms (including a higher overall Ct value, an intensive synergistic effect, and a shorter recovery time) were proposed for the higher disinfection efficiency of two-step chlorination in the sewage effluent disinfection. Annual chlorine consumption costs in one-step and two-step chlorination of the primary sewage effluent were estimated. Compared to one-step chlorination, two-step chlorination reduced the cost by up to 16.7%. Copyright © 2016 Elsevier Ltd. All rights reserved.
Vaas, Lea A I; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter
2012-01-01
The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed '-omics' techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM data.
Vaas, Lea A. I.; Sikorski, Johannes; Michael, Victoria; Göker, Markus; Klenk, Hans-Peter
2012-01-01
Background The Phenotype MicroArray (OmniLog® PM) system is able to simultaneously capture a large number of phenotypes by recording an organism's respiration over time on distinct substrates. This technique targets the object of natural selection itself, the phenotype, whereas previously addressed ‘-omics’ techniques merely study components that finally contribute to it. The recording of respiration over time, however, adds a longitudinal dimension to the data. To optimally exploit this information, it must be extracted from the shapes of the recorded curves and displayed in analogy to conventional growth curves. Methodology The free software environment R was explored for both visualizing and fitting of PM respiration curves. Approaches using either a model fit (and commonly applied growth models) or a smoothing spline were evaluated. Their reliability in inferring curve parameters and confidence intervals was compared to the native OmniLog® PM analysis software. We consider the post-processing of the estimated parameters, the optimal classification of curve shapes and the detection of significant differences between them, as well as practically relevant questions such as detecting the impact of cultivation times and the minimum required number of experimental repeats. Conclusions We provide a comprehensive framework for data visualization and parameter estimation according to user choices. A flexible graphical representation strategy for displaying the results is proposed, including 95% confidence intervals for the estimated parameters. The spline approach is less prone to irregular curve shapes than fitting any of the considered models or using the native PM software for calculating both point estimates and confidence intervals. These can serve as a starting point for the automated post-processing of PM data, providing much more information than the strict dichotomization into positive and negative reactions. Our results form the basis for a freely available R package for the analysis of PM data. PMID:22536335
NASA Astrophysics Data System (ADS)
Kasiviswanathan, K.; Sudheer, K.
2013-05-01
Artificial neural network (ANN) based hydrologic models have gained lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows as compared to conceptual or physics based hydrologic models. The ANN approximates the non-linear functional relationship between the complex hydrologic variables in arriving at the river flow forecast values. Despite a large number of applications, there is still some criticism that ANN's point prediction lacks in reliability since the uncertainty of predictions are not quantified, and it limits its use in practical applications. A major concern in application of traditional uncertainty analysis techniques on neural network framework is its parallel computing architecture with large degrees of freedom, which makes the uncertainty assessment a challenging task. Very limited studies have considered assessment of predictive uncertainty of ANN based hydrologic models. In this study, a novel method is proposed that help construct the prediction interval of ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with genetic algorithm (GA) to obtain optimal set of weights and biases vector, and during stage 2, the optimal variability of ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the 2nd stage, the optimization is performed with multiple objectives, (i) minimum residual variance for the ensemble mean, (ii) maximum measured data points to fall within the estimated prediction interval and (iii) minimum width of prediction interval. The method is illustrated using a real world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m3/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived prediction interval for a selected hydrograph in the validation data set is presented in Fig 1. It is noted that most of the observed flows lie within the constructed prediction interval, and therefore provides information about the uncertainty of the prediction. One specific advantage of the method is that when ensemble mean value is considered as a forecast, the peak flows are predicted with improved accuracy by this method compared to traditional single point forecasted ANNs. Fig. 1 Prediction Interval for selected hydrograph
DOE Office of Scientific and Technical Information (OSTI.GOV)
Water, Steven van de, E-mail: s.vandewater@erasmusmc.nl; Valli, Lorella; Alma Mater Studiorum, Department of Physics and Astronomy, Bologna University, Bologna
Purpose: To investigate the dosimetric impact of intrafraction prostate motion and the effect of robot correction strategies for hypofractionated CyberKnife treatments with a simultaneously integrated boost. Methods and Materials: A total of 548 real-time prostate motion tracks from 17 patients were available for dosimetric simulations of CyberKnife treatments, in which various correction strategies were included. Fixed time intervals between imaging/correction (15, 60, 180, and 360 seconds) were simulated, as well as adaptive timing (ie, the time interval reduced from 60 to 15 seconds in case prostate motion exceeded 3 mm or 2° in consecutive images). The simulated extent of robot corrections was alsomore » varied: no corrections, translational corrections only, and translational corrections combined with rotational corrections up to 5°, 10°, and perfect rotational correction. The correction strategies were evaluated for treatment plans with a 0-mm or 3-mm margin around the clinical target volume (CTV). We recorded CTV coverage (V{sub 100%}) and dose-volume parameters of the peripheral zone (boost), rectum, bladder, and urethra. Results: Planned dose parameters were increasingly preserved with larger extents of robot corrections. A time interval between corrections of 60 to 180 seconds provided optimal preservation of CTV coverage. To achieve 98% CTV coverage in 98% of the treatments, translational and rotational corrections up to 10° were required for the 0-mm margin plans, whereas translational and rotational corrections up to 5° were required for the 3-mm margin plans. Rectum and bladder were spared considerably better in the 0-mm margin plans. Adaptive timing did not improve delivered dose. Conclusions: Intrafraction prostate motion substantially affected the delivered dose but was compensated for effectively by robot corrections using a time interval of 60 to 180 seconds. A 0-mm margin required larger extents of additional rotational corrections than a 3-mm margin but resulted in lower doses to rectum and bladder.« less
Sun, Mingmei; Xu, Xiao; Zhang, Qiuqin; Rui, Xin; Wu, Junjun; Dong, Mingsheng
2018-02-01
Ultrasound-assisted aqueous extraction (UAAE) was used to extract oil from Clanis bilineata (CB), a traditional edible insect that can be reared on a large scale in China, and the physicochemical property and antioxidant capacity of the UAAE-derived oil (UAAEO) were investigated for the first time. UAAE conditions of CB oil was optimized using response surface methodology (RSM) and the highest oil yield (19.47%) was obtained under optimal conditions for ultrasonic power, extraction temperature, extraction time, and ultrasonic interval time at 400 W, 40°C, 50 min, and 2 s, respectively. Compared with Soxhlet extraction-derived oil (SEO), UAAEO had lower acid (AV), peroxide (PV) and p-anisidine values (PAV) as well as higher polyunsaturated fatty acids contents and thermal stability. Furthermore, UAAEO showed stronger antioxidant activities than those of SEO, according to DPPH radical scavenging and β-carotene bleaching tests. Therefore, UAAE is a promising process for the large-scale production of CB oil and CB has a developing potential as functional oil resource.
Pitkänen, Minna; Kallioniemi, Elisa; Julkunen, Petro
2017-01-01
Repetition suppression (RS) is evident as a weakened response to repeated stimuli after the initial response. RS has been demonstrated in motor-evoked potentials (MEPs) induced with transcranial magnetic stimulation (TMS). Here, we investigated the effect of inter-train interval (ITI) on the induction of RS of MEPs with the attempt to optimize the investigative protocols. Trains of TMS pulses, targeted to the primary motor cortex by neuronavigation, were applied at a stimulation intensity of 120% of the resting motor threshold. The stimulus trains included either four or twenty pulses with an inter-stimulus interval (ISI) of 1 s. The ITI was here defined as the interval between the last pulse in a train and the first pulse in the next train; the ITIs used here were 1, 3, 4, 6, 7, 12, and 17 s. RS was observed with all ITIs except with the ITI of 1 s, in which the ITI was equal to ISI. RS was more pronounced with longer ITIs. Shorter ITIs may not allow sufficient time for a return to baseline. RS may reflect a startle-like response to the first pulse of a train followed by habituation. Longer ITIs may allow more recovery time and in turn demonstrate greater RS. Our results indicate that RS can be studied with confidence at relatively short ITIs of 6 s and above.
Simen, Patrick; Contreras, David; Buck, Cara; Hu, Peter; Holmes, Philip; Cohen, Jonathan D
2009-12-01
The drift-diffusion model (DDM) implements an optimal decision procedure for stationary, 2-alternative forced-choice tasks. The height of a decision threshold applied to accumulating information on each trial determines a speed-accuracy tradeoff (SAT) for the DDM, thereby accounting for a ubiquitous feature of human performance in speeded response tasks. However, little is known about how participants settle on particular tradeoffs. One possibility is that they select SATs that maximize a subjective rate of reward earned for performance. For the DDM, there exist unique, reward-rate-maximizing values for its threshold and starting point parameters in free-response tasks that reward correct responses (R. Bogacz, E. Brown, J. Moehlis, P. Holmes, & J. D. Cohen, 2006). These optimal values vary as a function of response-stimulus interval, prior stimulus probability, and relative reward magnitude for correct responses. We tested the resulting quantitative predictions regarding response time, accuracy, and response bias under these task manipulations and found that grouped data conformed well to the predictions of an optimally parameterized DDM.
Characterization and optimization of spiral eddy current coils for in-situ crack detection
NASA Astrophysics Data System (ADS)
Mandache, Catalin
2018-03-01
In-situ condition-based maintenance is making strides in the aerospace industry and it is seen as an alternative to scheduled, time-based maintenance. With fatigue cracks originating from fastener holes as the main reason for structural failures, embedded eddy current coils are a viable non-invasive solution for their timely detection. The development and potential broad use of these coils are motivated by a few consistent arguments: (i) inspection of structures of complicated geometries and hard to access areas, that often require disassembly, (ii) alternative to regular inspection actions that could introduce inadvertent damage, (iii) for structures that have short inspection intervals, and (iv) for repaired structures where fastener holes contain bushings and prevent further bolt-hole inspections. Since the spiral coils are aiming at detecting radial cracks emanating from the fastener holes, their design parameters should allow for high inductance, low ohmic losses and power requirements, as well as optimal size and high sensitivity to discontinuities. In this study, flexible, surface conformable, spiral eddy current coils are empirically investigated on mock-up specimens, while numerical analysis is performed for their optimization and design improvement.
A general theory of intertemporal decision-making and the perception of time.
Namboodiri, Vijay M K; Mihalas, Stefan; Marton, Tanya M; Hussain Shuler, Marshall G
2014-01-01
Animals and humans make decisions based on their expected outcomes. Since relevant outcomes are often delayed, perceiving delays and choosing between earlier vs. later rewards (intertemporal decision-making) is an essential component of animal behavior. The myriad observations made in experiments studying intertemporal decision-making and time perception have not yet been rationalized within a single theory. Here we present a theory-Training-Integrated Maximized Estimation of Reinforcement Rate (TIMERR)-that explains a wide variety of behavioral observations made in intertemporal decision-making and the perception of time. Our theory postulates that animals make intertemporal choices to optimize expected reward rates over a limited temporal window which includes a past integration interval-over which experienced reward rate is estimated-as well as the expected delay to future reward. Using this theory, we derive mathematical expressions for both the subjective value of a delayed reward and the subjective representation of the delay. A unique contribution of our work is in finding that the past integration interval directly determines the steepness of temporal discounting and the non-linearity of time perception. In so doing, our theory provides a single framework to understand both intertemporal decision-making and time perception.
Optimal Exploitation of the Temporal and Spatial Resolution of SEVIRI for the Nowcasting of Clouds
NASA Astrophysics Data System (ADS)
Sirch, Tobias; Bugliaro, Luca
2015-04-01
Optimal Exploitation of the Temporal and Spatial Resolution of SEVIRI for the Nowcasting of Clouds An algorithm was developed to forecast the development of water and ice clouds for the successive 5-120 minutes separately using satellite data from SEVIRI (Spinning Enhanced Visible and Infrared Imager) aboard Meteosat Second Generation (MSG). In order to derive cloud cover, optical thickness and cloud top height of high ice clouds "The Cirrus Optical properties derived from CALIOP and SEVIRI during day and night" (COCS, Kox et al. [2014]) algorithm is applied. For the determination of the liquid water clouds the APICS ("Algorithm for the Physical Investigation of Clouds with SEVIRI", Bugliaro e al. [2011]) cloud algorithm is used, which provides cloud cover, optical thickness and effective radius. The forecast rests upon an optical flow method determining a motion vector field from two satellite images [Zinner et al., 2008.] With the aim of determining the ideal time separation of the satellite images that are used for the determination of the cloud motion vector field for every forecast horizon time the potential of the better temporal resolution of the Meteosat Rapid Scan Service (5 instead of 15 minutes repetition rate) has been investigated. Therefore for the period from March to June 2013 forecasts up to 4 hours in time steps of 5 min based on images separated by a time interval of 5 min, 10 min, 15 min, 30 min have been created. The results show that Rapid Scan data produces a small reduction of errors for a forecast horizon up to 30 minutes. For the following time steps forecasts generated with a time interval of 15 min should be used and for forecasts up to several hours computations with a time interval of 30 min provide the best results. For a better spatial resolution the HRV channel (High Resolution Visible, 1km instead of 3km maximum spatial resolution at the subsatellite point) has been integrated into the forecast. To detect clouds the difference of the measured albedo from SEVIRI and the clear-sky albedo provided by MODIS has been used and additionally the temporal development of this quantity. A pre-requisite for this work was an adjustment of the geolocation accuracy for MSG and MODIS by shifting the MODIS data and quantifying the correlation between both data sets.
da Costa, D W; Dijksman, L M; Bouwense, S A; Schepers, N J; Besselink, M G; van Santvoort, H C; Boerma, D; Gooszen, H G; Dijkgraaf, M G W
2016-11-01
Same-admission cholecystectomy is indicated after gallstone pancreatitis to reduce the risk of recurrent disease or other gallstone-related complications, but its impact on overall costs is unclear. This study analysed the cost-effectiveness of same-admission versus interval cholecystectomy after mild gallstone pancreatitis. In a multicentre RCT (Pancreatitis of biliary Origin: optimal timiNg of CHOlecystectomy; PONCHO) patients with mild gallstone pancreatitis were randomized before discharge to either cholecystectomy within 72 h (same-admission cholecystectomy) or cholecystectomy after 25-30 days (interval cholecystectomy). Healthcare use of all patients was recorded prospectively using clinical report forms. Unit costs of resources used were determined, and patients completed multiple Health and Labour Questionnaires to record pancreatitis-related absence from work. Cost-effectiveness analyses were performed from societal and healthcare perspectives, with the costs per readmission prevented as primary outcome with a time horizon of 6 months. All 264 trial participants were included in the present analysis, 128 randomized to same-admission cholecystectomy and 136 to interval cholecystectomy. Same-admission cholecystectomy reduced the risk of acute readmission for recurrent gallstone-related complications from 16·9 to 4·7 per cent (P = 0·002). Mean total costs from a societal perspective were €234 (95 per cent c.i. -1249 to 738) less per patient in the same-admission cholecystectomy group. Same-admission cholecystectomy was superior to interval cholecystectomy, with a societal incremental cost-effectiveness ratio of -€1918 to prevent one readmission for gallstone-related complications. In mild biliary pancreatitis, same-admission cholecystectomy was more effective and less costly than interval cholecystectomy. © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.
Real coded genetic algorithm for fuzzy time series prediction
NASA Astrophysics Data System (ADS)
Jain, Shilpa; Bisht, Dinesh C. S.; Singh, Phool; Mathpal, Prakash C.
2017-10-01
Genetic Algorithm (GA) forms a subset of evolutionary computing, rapidly growing area of Artificial Intelligence (A.I.). Some variants of GA are binary GA, real GA, messy GA, micro GA, saw tooth GA, differential evolution GA. This research article presents a real coded GA for predicting enrollments of University of Alabama. Data of Alabama University is a fuzzy time series. Here, fuzzy logic is used to predict enrollments of Alabama University and genetic algorithm optimizes fuzzy intervals. Results are compared to other eminent author works and found satisfactory, and states that real coded GA are fast and accurate.
NASA Astrophysics Data System (ADS)
Min, Huang; Na, Cai
2017-06-01
These years, ant colony algorithm has been widely used in solving the domain of discrete space optimization, while the research on solving the continuous space optimization was relatively little. Based on the original optimization for continuous space, the article proposes the improved ant colony algorithm which is used to Solve the optimization for continuous space, so as to overcome the ant colony algorithm’s disadvantages of searching for a long time in continuous space. The article improves the solving way for the total amount of information of each interval and the due number of ants. The article also introduces a function of changes with the increase of the number of iterations in order to enhance the convergence rate of the improved ant colony algorithm. The simulation results show that compared with the result in literature[5], the suggested improved ant colony algorithm that based on the information distribution function has a better convergence performance. Thus, the article provides a new feasible and effective method for ant colony algorithm to solve this kind of problem.
Li, Zukui; Floudas, Christodoulos A.
2012-01-01
Probabilistic guarantees on constraint satisfaction for robust counterpart optimization are studied in this paper. The robust counterpart optimization formulations studied are derived from box, ellipsoidal, polyhedral, “interval+ellipsoidal” and “interval+polyhedral” uncertainty sets (Li, Z., Ding, R., and Floudas, C.A., A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear and Robust Mixed Integer Linear Optimization, Ind. Eng. Chem. Res, 2011, 50, 10567). For those robust counterpart optimization formulations, their corresponding probability bounds on constraint satisfaction are derived for different types of uncertainty characteristic (i.e., bounded or unbounded uncertainty, with or without detailed probability distribution information). The findings of this work extend the results in the literature and provide greater flexibility for robust optimization practitioners in choosing tighter probability bounds so as to find less conservative robust solutions. Extensive numerical studies are performed to compare the tightness of the different probability bounds and the conservatism of different robust counterpart optimization formulations. Guiding rules for the selection of robust counterpart optimization models and for the determination of the size of the uncertainty set are discussed. Applications in production planning and process scheduling problems are presented. PMID:23329868
Harte, Philip T.
2017-01-01
A common assumption with groundwater sampling is that low (<0.5 L/min) pumping rates during well purging and sampling captures primarily lateral flow from the formation through the well-screened interval at a depth coincident with the pump intake. However, if the intake is adjacent to a low hydraulic conductivity part of the screened formation, this scenario will induce vertical groundwater flow to the pump intake from parts of the screened interval with high hydraulic conductivity. Because less formation water will initially be captured during pumping, a substantial volume of water already in the well (preexisting screen water or screen storage) will be captured during this initial time until inflow from the high hydraulic conductivity part of the screened formation can travel vertically in the well to the pump intake. Therefore, the length of the time needed for adequate purging prior to sample collection (called optimal purge duration) is controlled by the in-well, vertical travel times. A preliminary, simple analytical model was used to provide information on the relation between purge duration and capture of formation water for different gross levels of heterogeneity (contrast between low and high hydraulic conductivity layers). The model was then used to compare these time–volume relations to purge data (pumping rates and drawdown) collected at several representative monitoring wells from multiple sites. Results showed that computation of time-dependent capture of formation water (as opposed to capture of preexisting screen water), which were based on vertical travel times in the well, compares favorably with the time required to achieve field parameter stabilization. If field parameter stabilization is an indicator of arrival time of formation water, which has been postulated, then in-well, vertical flow may be an important factor at wells where low-flow sampling is the sample method of choice.
Tandonnet, Christophe; Davranche, Karen; Meynier, Chloé; Burle, Borís; Vidal, Franck; Hasbroucq, Thierry
2012-02-01
We investigated the influence of temporal preparation on information processing. Single-pulse transcranial magnetic stimulation (TMS) of the primary motor cortex was delivered during a between-hand choice task. The time interval between the warning and the imperative stimulus varied across blocks of trials was either optimal (500 ms) or nonoptimal (2500 ms) for participants' performance. Silent period duration was shorter prior to the first evidence of response selection for the optimal condition. Amplitude of the motor evoked potential specific to the responding hand increased earlier for the optimal condition. These results revealed an early release of cortical inhibition and a faster integration of the response selection-related inputs to the corticospinal pathway when temporal preparation is better. Temporal preparation may induce cortical activation prior to response selection that speeds up the implementation of the selected response. Copyright © 2011 Society for Psychophysiological Research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, J; Deasy, J O
Purpose: Concurrent chemo-radiation therapy (CCRT) has become a more common cancer treatment option with a better tumor control rate for several tumor sites, including head and neck and lung cancer. In this work, possible optimal chemotherapy schedules were investigated by implementing chemotherapy cell-kill into a tumor response model of RT. Methods: The chemotherapy effect has been added into a published model (Jeong et al., PMB (2013) 58:4897), in which the tumor response to RT can be simulated with the effects of hypoxia and proliferation. Based on the two-compartment pharmacokinetic model, the temporal concentration of chemotherapy agent was estimated. Log cell-killmore » was assumed and the cell-kill constant was estimated from the observed increase in local control due to concurrent chemotherapy. For a simplified two cycle CCRT regime, several different starting times and intervals were simulated with conventional RT regime (2Gy/fx, 5fx/wk). The effectiveness of CCRT was evaluated in terms of reduction in radiation dose required for 50% of control to find the optimal chemotherapy schedule. Results: Assuming the typical slope of dose response curve (γ50=2), the observed 10% increase in local control rate was evaluated to be equivalent to an extra RT dose of about 4 Gy, from which the cell-kill rate of chemotherapy was derived to be about 0.35. Best response was obtained when chemotherapy was started at about 3 weeks after RT began. As the interval between two cycles decreases, the efficacy of chemotherapy increases with broader range of optimal starting times. Conclusion: The effect of chemotherapy has been implemented into the resource-conservation tumor response model to investigate CCRT. The results suggest that the concurrent chemotherapy might be more effective when delayed for about 3 weeks, due to lower tumor burden and a larger fraction of proliferating cells after reoxygenation.« less
Longhi, Daniel Angelo; Martins, Wiaslan Figueiredo; da Silva, Nathália Buss; Carciofi, Bruno Augusto Mattar; de Aragão, Gláucia Maria Falcão; Laurindo, João Borges
2017-01-02
In predictive microbiology, the model parameters have been estimated using the sequential two-step modeling (TSM) approach, in which primary models are fitted to the microbial growth data, and then secondary models are fitted to the primary model parameters to represent their dependence with the environmental variables (e.g., temperature). The Optimal Experimental Design (OED) approach allows reducing the experimental workload and costs, and the improvement of model identifiability because primary and secondary models are fitted simultaneously from non-isothermal data. Lactobacillus viridescens was selected to this study because it is a lactic acid bacterium of great interest to meat products preservation. The objectives of this study were to estimate the growth parameters of L. viridescens in culture medium from TSM and OED approaches and to evaluate both the number of experimental data and the time needed in each approach and the confidence intervals of the model parameters. Experimental data for estimating the model parameters with TSM approach were obtained at six temperatures (total experimental time of 3540h and 196 experimental data of microbial growth). Data for OED approach were obtained from four optimal non-isothermal profiles (total experimental time of 588h and 60 experimental data of microbial growth), two profiles with increasing temperatures (IT) and two with decreasing temperatures (DT). The Baranyi and Roberts primary model and the square root secondary model were used to describe the microbial growth, in which the parameters b and T min (±95% confidence interval) were estimated from the experimental data. The parameters obtained from TSM approach were b=0.0290 (±0.0020) [1/(h 0.5 °C)] and T min =-1.33 (±1.26) [°C], with R 2 =0.986 and RMSE=0.581, and the parameters obtained with the OED approach were b=0.0316 (±0.0013) [1/(h 0.5 °C)] and T min =-0.24 (±0.55) [°C], with R 2 =0.990 and RMSE=0.436. The parameters obtained from OED approach presented smaller confidence intervals and best statistical indexes than those from TSM approach. Besides, less experimental data and time were needed to estimate the model parameters with OED than TSM. Furthermore, the OED model parameters were validated with non-isothermal experimental data with great accuracy. In this way, OED approach is feasible and is a very useful tool to improve the prediction of microbial growth under non-isothermal condition. Copyright © 2016 Elsevier B.V. All rights reserved.
Crozier, Jennifer; Roig, Marc; Eng, Janice J; MacKay-Lyons, Marilyn; Fung, Joyce; Ploughman, Michelle; Bailey, Damian M; Sweet, Shane N; Giacomantonio, Nicholas; Thiel, Alexander; Trivino, Michael; Tang, Ada
2018-04-01
Stroke is the leading cause of adult disability. Individuals poststroke possess less than half of the cardiorespiratory fitness (CRF) as their nonstroke counterparts, leading to inactivity, deconditioning, and an increased risk of cardiovascular events. Preserving cardiovascular health is critical to lower stroke risk; however, stroke rehabilitation typically provides limited opportunity for cardiovascular exercise. Optimal cardiovascular training parameters to maximize recovery in stroke survivors also remains unknown. While stroke rehabilitation recommendations suggest the use of moderate-intensity continuous exercise (MICE) to improve CRF, neither is it routinely implemented in clinical practice, nor is the intensity always sufficient to elicit a training effect. High-intensity interval training (HIIT) has emerged as a potentially effective alternative that encompasses brief high-intensity bursts of exercise interspersed with bouts of recovery, aiming to maximize cardiovascular exercise intensity in a time-efficient manner. HIIT may provide an alternative exercise intervention and invoke more pronounced benefits poststroke. To provide an updated review of HIIT poststroke through ( a) synthesizing current evidence; ( b) proposing preliminary considerations of HIIT parameters to optimize benefit; ( c) discussing potential mechanisms underlying changes in function, cardiovascular health, and neuroplasticity following HIIT; and ( d) discussing clinical implications and directions for future research. Preliminary evidence from 10 studies report HIIT-associated improvements in functional, cardiovascular, and neuroplastic outcomes poststroke; however, optimal HIIT parameters remain unknown. Larger randomized controlled trials are necessary to establish ( a) effectiveness, safety, and optimal training parameters within more heterogeneous poststroke populations; (b) potential mechanisms of HIIT-associated improvements; and ( c) adherence and psychosocial outcomes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Qifang; Wang, Fei; Hodge, Bri-Mathias
A real-time price (RTP)-based automatic demand response (ADR) strategy for PV-assisted electric vehicle (EV) Charging Station (PVCS) without vehicle to grid is proposed. The charging process is modeled as a dynamic linear program instead of the normal day-ahead and real-time regulation strategy, to capture the advantages of both global and real-time optimization. Different from conventional price forecasting algorithms, a dynamic price vector formation model is proposed based on a clustering algorithm to form an RTP vector for a particular day. A dynamic feasible energy demand region (DFEDR) model considering grid voltage profiles is designed to calculate the lower and uppermore » bounds. A deduction method is proposed to deal with the unknown information of future intervals, such as the actual stochastic arrival and departure times of EVs, which make the DFEDR model suitable for global optimization. Finally, both the comparative cases articulate the advantages of the developed methods and the validity in reducing electricity costs, mitigating peak charging demand, and improving PV self-consumption of the proposed strategy are verified through simulation scenarios.« less
NASA Astrophysics Data System (ADS)
Hayana Hasibuan, Eka; Mawengkang, Herman; Efendi, Syahril
2017-12-01
The use of Partical Swarm Optimization Algorithm in this research is to optimize the feature weights on the Voting Feature Interval 5 algorithm so that we can find the model of using PSO algorithm with VFI 5. Optimization of feature weight on Diabetes or Dyspesia data is considered important because it is very closely related to the livelihood of many people, so if there is any inaccuracy in determining the most dominant feature weight in the data will cause death. Increased accuracy by using PSO Algorithm ie fold 1 from 92.31% to 96.15% increase accuracy of 3.8%, accuracy of fold 2 on Algorithm VFI5 of 92.52% as well as generated on PSO Algorithm means accuracy fixed, then in fold 3 increase accuracy of 85.19% Increased to 96.29% Accuracy increased by 11%. The total accuracy of all three trials increased by 14%. In general the Partical Swarm Optimization algorithm has succeeded in increasing the accuracy to several fold, therefore it can be concluded the PSO algorithm is well used in optimizing the VFI5 Classification Algorithm.
Multifactor analysis of multiscaling in volatility return intervals.
Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H Eugene
2009-01-01
We study the volatility time series of 1137 most traded stocks in the U.S. stock markets for the two-year period 2001-2002 and analyze their return intervals tau , which are time intervals between volatilities above a given threshold q . We explore the probability density function of tau , P_(q)(tau) , assuming a stretched exponential function, P_(q)(tau) approximately e;(-tau;(gamma)) . We find that the exponent gamma depends on the threshold in the range between q=1 and 6 standard deviations of the volatility. This finding supports the multiscaling nature of the return interval distribution. To better understand the multiscaling origin, we study how gamma depends on four essential factors, capitalization, risk, number of trades, and return. We show that gamma depends on the capitalization, risk, and return but almost does not depend on the number of trades. This suggests that gamma relates to the portfolio selection but not on the market activity. To further characterize the multiscaling of individual stocks, we fit the moments of tau , mu_(m) identical with(tautau);(m);(1m) , in the range of 10
Multifactor analysis of multiscaling in volatility return intervals
NASA Astrophysics Data System (ADS)
Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene
2009-01-01
We study the volatility time series of 1137 most traded stocks in the U.S. stock markets for the two-year period 2001-2002 and analyze their return intervals τ , which are time intervals between volatilities above a given threshold q . We explore the probability density function of τ , Pq(τ) , assuming a stretched exponential function, Pq(τ)˜e-τγ . We find that the exponent γ depends on the threshold in the range between q=1 and 6 standard deviations of the volatility. This finding supports the multiscaling nature of the return interval distribution. To better understand the multiscaling origin, we study how γ depends on four essential factors, capitalization, risk, number of trades, and return. We show that γ depends on the capitalization, risk, and return but almost does not depend on the number of trades. This suggests that γ relates to the portfolio selection but not on the market activity. To further characterize the multiscaling of individual stocks, we fit the moments of τ , μm≡⟨(τ/⟨τ⟩)m⟩1/m , in the range of 10<⟨τ⟩⩽100 by a power law, μm˜⟨τ⟩δ . The exponent δ is found also to depend on the capitalization, risk, and return but not on the number of trades, and its tendency is opposite to that of γ . Moreover, we show that δ decreases with increasing γ approximately by a linear relation. The return intervals demonstrate the temporal structure of volatilities and our findings suggest that their multiscaling features may be helpful for portfolio optimization.
Maze, M J; Paynter, J; Chiu, W; Hu, R; Nisbet, M; Lewis, C
2016-07-01
There is uncertainty as to the optimal therapeutic concentrations of anti-tuberculosis drugs to achieve cure. To characterise the use of therapeutic drug monitoring (TDM), and identify risk factors and outcomes for those with concentrations below the drug interval. Patients treated for tuberculosis (TB) who had rifampicin (RMP) or isoniazid (INH) concentrations measured between 1 January 2005 and 31 December 2012 were studied retrospectively. Matched concentrations and drug dosing time were assessed according to contemporary regional drug intervals (RMP > 6 μmol/l, INH > 7.5 μmol/l) and current international recommendations (RMP > 10 μmol/l, INH > 22 μmol/l). Outcomes were assessed using World Health Organization criteria. Of 865 patients, 121 had concentrations of either or both medications. RMP concentrations were within the regional drug intervals in 106/114 (93%) and INH in 91/100 (91%). Concentrations were within international drug intervals for RMP in 76/114 (67%) and INH in 53/100 (53%). Low weight-based dose was the only statistically significant risk factor for concentrations below the drug interval. Of the 35 patients with low concentrations, 21 were cured, 9 completed treatment and 5 transferred out. There were no relapses during follow-up (mean 66.5 months). There were no clinically useful characteristics to guide use of TDM. Many patients had concentrations below international therapeutic intervals, but were successfully treated.
Evaluation of a patient navigation program.
Koh, Catherine; Nelson, Joan M; Cook, Paul F
2011-02-01
This study examined the value and effectiveness of a patient navigation program in terms of timeliness of access to cancer care, resolution of barriers, and satisfaction in 55 patients over a six-month period. Although not statistically significant, the time interval between diagnostic biopsy to first consultation with a cancer specialist after program implementation was reduced from an average of 14.6 days to 12.8 days. The time interval between diagnostic biopsy to initiation of cancer treatment also was reduced from 30 days to 26.2 days (not statistically significant). In addition, 71% of patient barriers were resolved by the time treatment was initiated. Overall, patients were highly satisfied with their navigated care experience. Consistent evaluation and monitoring of quality-of-care indicators are critical to further develop the program and to direct resource allocation. Oncology nurses participating in patient navigation programs should be encouraged to evaluate their importance and impact in this developing concept. Nurses should seek roles that allow them to optimize the effective use of their specialized knowledge and skills to the benefit of patients along the cancer care continuum.
Optimal control of laser-induced spin-orbit mediated ultrafast demagnetization
NASA Astrophysics Data System (ADS)
Elliott, P.; Krieger, K.; Dewhurst, J. K.; Sharma, S.; Gross, E. K. U.
2016-01-01
Laser induced ultrafast demagnetization is the process whereby the magnetic moment of a ferromagnetic material is seen to drop significantly on a timescale of 10-100 s of femtoseconds due to the application of a strong laser pulse. If this phenomenon can be harnessed for future technology, it offers the possibility for devices operating at speeds several orders of magnitude faster than at present. A key component to successful transfer of such a process to technology is the controllability of the process, i.e. that it can be tuned in order to overcome the practical and physical limitations imposed on the system. In this paper, we demonstrate that the spin-orbit mediated form of ultrafast demagnetization recently investigated (Krieger et al 2015 J. Chem. Theory Comput. 11 4870) by ab initio time-dependent density functional theory (TDDFT) can be controlled. To do so we use quantum optimal control theory (OCT) to couple our TDDFT simulations to the optimization machinery of OCT. We show that a laser pulse can be found which maximizes the loss of moment within a given time interval while subject to several practical and physical constraints. Furthermore we also include a constraint on the fluence of the laser pulses and find the optimal pulse that combines significant demagnetization with a desire for less powerful pulses. These calculations demonstrate optimal control is possible for spin-orbit mediated ultrafast demagnetization and lays the foundation for future optimizations/simulations which can incorporate even more constraints.
A Dynamic Scheduling Method of Earth-Observing Satellites by Employing Rolling Horizon Strategy
Dishan, Qiu; Chuan, He; Jin, Liu; Manhao, Ma
2013-01-01
Focused on the dynamic scheduling problem for earth-observing satellites (EOS), an integer programming model is constructed after analyzing the main constraints. The rolling horizon (RH) strategy is proposed according to the independent arriving time and deadline of the imaging tasks. This strategy is designed with a mixed triggering mode composed of periodical triggering and event triggering, and the scheduling horizon is decomposed into a series of static scheduling intervals. By optimizing the scheduling schemes in each interval, the dynamic scheduling of EOS is realized. We also propose three dynamic scheduling algorithms by the combination of the RH strategy and various heuristic algorithms. Finally, the scheduling results of different algorithms are compared and the presented methods in this paper are demonstrated to be efficient by extensive experiments. PMID:23690742
A dynamic scheduling method of Earth-observing satellites by employing rolling horizon strategy.
Dishan, Qiu; Chuan, He; Jin, Liu; Manhao, Ma
2013-01-01
Focused on the dynamic scheduling problem for earth-observing satellites (EOS), an integer programming model is constructed after analyzing the main constraints. The rolling horizon (RH) strategy is proposed according to the independent arriving time and deadline of the imaging tasks. This strategy is designed with a mixed triggering mode composed of periodical triggering and event triggering, and the scheduling horizon is decomposed into a series of static scheduling intervals. By optimizing the scheduling schemes in each interval, the dynamic scheduling of EOS is realized. We also propose three dynamic scheduling algorithms by the combination of the RH strategy and various heuristic algorithms. Finally, the scheduling results of different algorithms are compared and the presented methods in this paper are demonstrated to be efficient by extensive experiments.
Discretized energy minimization in a wave guide with point sources
NASA Technical Reports Server (NTRS)
Propst, G.
1994-01-01
An anti-noise problem on a finite time interval is solved by minimization of a quadratic functional on the Hilbert space of square integrable controls. To this end, the one-dimensional wave equation with point sources and pointwise reflecting boundary conditions is decomposed into a system for the two propagating components of waves. Wellposedness of this system is proved for a class of data that includes piecewise linear initial conditions and piecewise constant forcing functions. It is shown that for such data the optimal piecewise constant control is the solution of a sparse linear system. Methods for its computational treatment are presented as well as examples of their applicability. The convergence of discrete approximations to the general optimization problem is demonstrated by finite element methods.
Evolution of motion uncertainty in rectal cancer: implications for adaptive radiotherapy
NASA Astrophysics Data System (ADS)
Kleijnen, Jean-Paul J. E.; van Asselen, Bram; Burbach, Johannes P. M.; Intven, Martijn; Philippens, Marielle E. P.; Reerink, Onne; Lagendijk, Jan J. W.; Raaymakers, Bas W.
2016-01-01
Reduction of motion uncertainty by applying adaptive radiotherapy strategies depends largely on the temporal behavior of this motion. To fully optimize adaptive strategies, insight into target motion is needed. The purpose of this study was to analyze stability and evolution in time of motion uncertainty of both the gross tumor volume (GTV) and clinical target volume (CTV) for patients with rectal cancer. We scanned 16 patients daily during one week, on a 1.5 T MRI scanner in treatment position, prior to each radiotherapy fraction. Single slice sagittal cine MRIs were made at the beginning, middle, and end of each scan session, for one minute at 2 Hz temporal resolution. GTV and CTV motion were determined by registering a delineated reference frame to time-points later in time. The 95th percentile of observed motion (dist95%) was taken as a measure of motion. The stability of motion in time was evaluated within each cine-MRI separately. The evolution of motion was investigated between the reference frame and the cine-MRIs of a single scan session and between the reference frame and the cine-MRIs of several days later in the course of treatment. This observed motion was then converted into a PTV-margin estimate. Within a one minute cine-MRI scan, motion was found to be stable and small. Independent of the time-point within the scan session, the average dist95% remains below 3.6 mm and 2.3 mm for CTV and GTV, respectively 90% of the time. We found similar motion over time intervals from 18 min to 4 days. When reducing the time interval from 18 min to 1 min, a large reduction in motion uncertainty is observed. A reduction in motion uncertainty, and thus the PTV-margin estimate, of 71% and 75% for CTV and tumor was observed, respectively. Time intervals of 15 and 30 s yield no further reduction in motion uncertainty compared to a 1 min time interval.
deBruyn, Jennifer C; Jacobson, Kevan; El-Matary, Wael; Carroll, Matthew; Wine, Eytan; Wrobel, Iwona; Van Woudenberg, Mariel; Huynh, Hien Q
2018-02-01
Data on long-term real-world outcomes of infliximab in pediatric Crohn disease are limited. The aim of the study was to evaluate infliximab optimization and durability in children with Crohn disease. We performed a retrospective review of children with Crohn disease who started infliximab from January 2008 to December 2012 in 4 Canadian tertiary care centers. A priori factors associated with optimization and discontinuation from loss of response were evaluated using logistic regression and Cox proportional hazards model, respectively. One hundred eighty children (54.4% boys) started infliximab; all completed induction. Median age at infliximab start was 14.3 years (Q1, Q3: 12.8, 15.9 years) and median time from diagnosis to infliximab start was 1.5 years (Q1, Q3: 0.6, 3.5 years). At last follow-up, 87.1% were maintained on infliximab (median duration follow-up 85.9 weeks [Q1, Q3: 43.8, 138.8 weeks]). Infliximab optimization occurred in 57.3% (dose escalation 15.2%, interval shortening 3.9%, both 38.2%), primarily due to loss of response. Younger age at diagnosis (<10 years old) and nonstricturing, nonpenetrating behavior were associated with optimization (odds ratio 6.5, 95% confidence interval [CI] 2.0-21.1 and odds ratio 2.1, 95% CI 1.0-4.2, respectively). The 1- and 2-year durability of infliximab (percentage in follow-up who were continuing on infliximab) were 95.5% (95% CI 90.4-98.3) and 91.0% (95% CI 82.4-96.3), respectively. Annual discontinuation due to loss of response occurred at 3.2% per year (95% CI 1.1-5.2). Children with Crohn disease maintain a durable response to infliximab. Optimization occurs frequently and allows for continued use. Younger age at diagnosis and nonstricturing, nonpenetrating behavior are associated with increased need for infliximab optimization.
Hu, X H; Li, Y P; Huang, G H; Zhuang, X W; Ding, X W
2016-05-01
In this study, a Bayesian-based two-stage inexact optimization (BTIO) method is developed for supporting water quality management through coupling Bayesian analysis with interval two-stage stochastic programming (ITSP). The BTIO method is capable of addressing uncertainties caused by insufficient inputs in water quality model as well as uncertainties expressed as probabilistic distributions and interval numbers. The BTIO method is applied to a real case of water quality management for the Xiangxi River basin in the Three Gorges Reservoir region to seek optimal water quality management schemes under various uncertainties. Interval solutions for production patterns under a range of probabilistic water quality constraints have been generated. Results obtained demonstrate compromises between the system benefit and the system failure risk due to inherent uncertainties that exist in various system components. Moreover, information about pollutant emission is accomplished, which would help managers to adjust production patterns of regional industry and local policies considering interactions of water quality requirement, economic benefit, and industry structure.
Color image enhancement based on particle swarm optimization with Gaussian mixture
NASA Astrophysics Data System (ADS)
Kattakkalil Subhashdas, Shibudas; Choi, Bong-Seok; Yoo, Ji-Hoon; Ha, Yeong-Ho
2015-01-01
This paper proposes a Gaussian mixture based image enhancement method which uses particle swarm optimization (PSO) to have an edge over other contemporary methods. The proposed method uses the guassian mixture model to model the lightness histogram of the input image in CIEL*a*b* space. The intersection points of the guassian components in the model are used to partition the lightness histogram. . The enhanced lightness image is generated by transforming the lightness value in each interval to appropriate output interval according to the transformation function that depends on PSO optimized parameters, weight and standard deviation of Gaussian component and cumulative distribution of the input histogram interval. In addition, chroma compensation is applied to the resulting image to reduce washout appearance. Experimental results show that the proposed method produces a better enhanced image compared to the traditional methods. Moreover, the enhanced image is free from several side effects such as washout appearance, information loss and gradation artifacts.
Personalized glucose-insulin model based on signal analysis.
Goede, Simon L; de Galan, Bastiaan E; Leow, Melvin Khee Shing
2017-04-21
Glucose plasma measurements for diabetes patients are generally presented as a glucose concentration-time profile with 15-60min time scale intervals. This limited resolution obscures detailed dynamic events of glucose appearance and metabolism. Measurement intervals of 15min or more could contribute to imperfections in present diabetes treatment. High resolution data from mixed meal tolerance tests (MMTT) for 24 type 1 and type 2 diabetes patients were used in our present modeling. We introduce a model based on the physiological properties of transport, storage and utilization. This logistic approach follows the principles of electrical network analysis and signal processing theory. The method mimics the physiological equivalent of the glucose homeostasis comprising the meal ingestion, absorption via the gastrointestinal tract (GIT) to the endocrine nexus between the liver, pancreatic alpha and beta cells. This model demystifies the metabolic 'black box' by enabling in silico simulations and fitting of individual responses to clinical data. Five-minute intervals MMTT data measured from diabetic subjects result in two independent model parameters that characterize the complete glucose system response at a personalized level. From the individual data measurements, we obtain a model which can be analyzed with a standard electrical network simulator for diagnostics and treatment optimization. The insulin dosing time scale can be accurately adjusted to match the individual requirements of characterized diabetic patients without the physical burden of treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.
Automated Dynamic Demand Response Implementation on a Micro-grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuppannagari, Sanmukh R.; Kannan, Rajgopal; Chelmis, Charalampos
In this paper, we describe a system for real-time automated Dynamic and Sustainable Demand Response with sparse data consumption prediction implemented on the University of Southern California campus microgrid. Supply side approaches to resolving energy supply-load imbalance do not work at high levels of renewable energy penetration. Dynamic Demand Response (D 2R) is a widely used demand-side technique to dynamically adjust electricity consumption during peak load periods. Our D 2R system consists of accurate machine learning based energy consumption forecasting models that work with sparse data coupled with fast and sustainable load curtailment optimization algorithms that provide the ability tomore » dynamically adapt to changing supply-load imbalances in near real-time. Our Sustainable DR (SDR) algorithms attempt to distribute customer curtailment evenly across sub-intervals during a DR event and avoid expensive demand peaks during a few sub-intervals. It also ensures that each customer is penalized fairly in order to achieve the targeted curtailment. We develop near linear-time constant-factor approximation algorithms along with Polynomial Time Approximation Schemes (PTAS) for SDR curtailment that minimizes the curtailment error defined as the difference between the target and achieved curtailment values. Our SDR curtailment problem is formulated as an Integer Linear Program that optimally matches customers to curtailment strategies during a DR event while also explicitly accounting for customer strategy switching overhead as a constraint. We demonstrate the results of our D 2R system using real data from experiments performed on the USC smartgrid and show that 1) our prediction algorithms can very accurately predict energy consumption even with noisy or missing data and 2) our curtailment algorithms deliver DR with extremely low curtailment errors in the 0.01-0.05 kWh range.« less
Manual control models of industrial management
NASA Technical Reports Server (NTRS)
Crossman, E. R. F. W.
1972-01-01
The industrial engineer is often required to design and implement control systems and organization for manufacturing and service facilities, to optimize quality, delivery, and yield, and minimize cost. Despite progress in computer science most such systems still employ human operators and managers as real-time control elements. Manual control theory should therefore be applicable to at least some aspects of industrial system design and operations. Formulation of adequate model structures is an essential prerequisite to progress in this area; since real-world production systems invariably include multilevel and multiloop control, and are implemented by timeshared human effort. A modular structure incorporating certain new types of functional element, has been developed. This forms the basis for analysis of an industrial process operation. In this case it appears that managerial controllers operate in a discrete predictive mode based on fast time modelling, with sampling interval related to plant dynamics. Successive aggregation causes reduced response bandwidth and hence increased sampling interval as a function of level.
Lifetime Estimation of the Upper Stage of GSAT-14 in Geostationary Transfer Orbit.
Jeyakodi David, Jim Fletcher; Sharma, Ram Krishan
2014-01-01
The combination of atmospheric drag and lunar and solar perturbations in addition to Earth's oblateness influences the orbital lifetime of an upper stage in geostationary transfer orbit (GTO). These high eccentric orbits undergo fluctuations in both perturbations and velocity and are very sensitive to the initial conditions. The main objective of this paper is to predict the reentry time of the upper stage of the Indian geosynchronous satellite launch vehicle, GSLV-D5, which inserted the satellite GSAT-14 into a GTO on January 05, 2014, with mean perigee and apogee altitudes of 170 km and 35975 km. Four intervals of near linear variation of the mean apogee altitude observed were used in predicting the orbital lifetime. For these four intervals, optimal values of the initial osculating eccentricity and ballistic coefficient for matching the mean apogee altitudes were estimated with the response surface methodology using a genetic algorithm. It was found that the orbital lifetime from these four time spans was between 144 and 148 days.
Lifetime Estimation of the Upper Stage of GSAT-14 in Geostationary Transfer Orbit
Jeyakodi David, Jim Fletcher; Sharma, Ram Krishan
2014-01-01
The combination of atmospheric drag and lunar and solar perturbations in addition to Earth's oblateness influences the orbital lifetime of an upper stage in geostationary transfer orbit (GTO). These high eccentric orbits undergo fluctuations in both perturbations and velocity and are very sensitive to the initial conditions. The main objective of this paper is to predict the reentry time of the upper stage of the Indian geosynchronous satellite launch vehicle, GSLV-D5, which inserted the satellite GSAT-14 into a GTO on January 05, 2014, with mean perigee and apogee altitudes of 170 km and 35975 km. Four intervals of near linear variation of the mean apogee altitude observed were used in predicting the orbital lifetime. For these four intervals, optimal values of the initial osculating eccentricity and ballistic coefficient for matching the mean apogee altitudes were estimated with the response surface methodology using a genetic algorithm. It was found that the orbital lifetime from these four time spans was between 144 and 148 days. PMID:27437491
Shot Peening Numerical Simulation of Aircraft Aluminum Alloy Structure
NASA Astrophysics Data System (ADS)
Liu, Yong; Lv, Sheng-Li; Zhang, Wei
2018-03-01
After shot peening, the 7050 aluminum alloy has good anti-fatigue and anti-stress corrosion properties. In the shot peening process, the pellet collides with target material randomly, and generated residual stress distribution on the target material surface, which has great significance to improve material property. In this paper, a simplified numerical simulation model of shot peening was established. The influence of pellet collision velocity, pellet collision position and pellet collision time interval on the residual stress of shot peening was studied, which is simulated by the ANSYS/LS-DYNA software. The analysis results show that different velocity, different positions and different time intervals have great influence on the residual stress after shot peening. Comparing with the numerical simulation results based on Kriging model, the accuracy of the simulation results in this paper was verified. This study provides a reference for the optimization of the shot peening process, and makes an effective exploration for the precise shot peening numerical simulation.
Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo
2017-01-01
Predicting the output power of photovoltaic system with nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on the EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization.
2017-01-01
Predicting the output power of photovoltaic system with nonstationarity and randomness, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on the EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization. PMID:28912803
Optimization of Angular-Momentum Biases of Reaction Wheels
NASA Technical Reports Server (NTRS)
Lee, Clifford; Lee, Allan
2008-01-01
RBOT [RWA Bias Optimization Tool (wherein RWA signifies Reaction Wheel Assembly )] is a computer program designed for computing angular momentum biases for reaction wheels used for providing spacecraft pointing in various directions as required for scientific observations. RBOT is currently deployed to support the Cassini mission to prevent operation of reaction wheels at unsafely high speeds while minimizing time in undesirable low-speed range, where elasto-hydrodynamic lubrication films in bearings become ineffective, leading to premature bearing failure. The problem is formulated as a constrained optimization problem in which maximum wheel speed limit is a hard constraint and a cost functional that increases as speed decreases below a low-speed threshold. The optimization problem is solved using a parametric search routine known as the Nelder-Mead simplex algorithm. To increase computational efficiency for extended operation involving large quantity of data, the algorithm is designed to (1) use large time increments during intervals when spacecraft attitudes or rates of rotation are nearly stationary, (2) use sinusoidal-approximation sampling to model repeated long periods of Earth-point rolling maneuvers to reduce computational loads, and (3) utilize an efficient equation to obtain wheel-rate profiles as functions of initial wheel biases based on conservation of angular momentum (in an inertial frame) using pre-computed terms.
Sedentary Behaviour Profiling of Office Workers: A Sensitivity Analysis of Sedentary Cut-Points
Boerema, Simone T.; Essink, Gerard B.; Tönis, Thijs M.; van Velsen, Lex; Hermens, Hermie J.
2015-01-01
Measuring sedentary behaviour and physical activity with wearable sensors provides detailed information on activity patterns and can serve health interventions. At the basis of activity analysis stands the ability to distinguish sedentary from active time. As there is no consensus regarding the optimal cut-point for classifying sedentary behaviour, we studied the consequences of using different cut-points for this type of analysis. We conducted a battery of sitting and walking activities with 14 office workers, wearing the Promove 3D activity sensor to determine the optimal cut-point (in counts per minute (m·s−2)) for classifying sedentary behaviour. Then, 27 office workers wore the sensor for five days. We evaluated the sensitivity of five sedentary pattern measures for various sedentary cut-points and found an optimal cut-point for sedentary behaviour of 1660 × 10−3 m·s−2. Total sedentary time was not sensitive to cut-point changes within ±10% of this optimal cut-point; other sedentary pattern measures were not sensitive to changes within the ±20% interval. The results from studies analyzing sedentary patterns, using different cut-points, can be compared within these boundaries. Furthermore, commercial, hip-worn activity trackers can implement feedback and interventions on sedentary behaviour patterns, using these cut-points. PMID:26712758
Optimal iodine staining of cardiac tissue for X-ray computed tomography.
Butters, Timothy D; Castro, Simon J; Lowe, Tristan; Zhang, Yanmin; Lei, Ming; Withers, Philip J; Zhang, Henggui
2014-01-01
X-ray computed tomography (XCT) has been shown to be an effective imaging technique for a variety of materials. Due to the relatively low differential attenuation of X-rays in biological tissue, a high density contrast agent is often required to obtain optimal contrast. The contrast agent, iodine potassium iodide ([Formula: see text]), has been used in several biological studies to augment the use of XCT scanning. Recently I2KI was used in XCT scans of animal hearts to study cardiac structure and to generate 3D anatomical computer models. However, to date there has been no thorough study into the optimal use of I2KI as a contrast agent in cardiac muscle with respect to the staining times required, which has been shown to impact significantly upon the quality of results. In this study we address this issue by systematically scanning samples at various stages of the staining process. To achieve this, mouse hearts were stained for up to 58 hours and scanned at regular intervals of 6-7 hours throughout this process. Optimal staining was found to depend upon the thickness of the tissue; a simple empirical exponential relationship was derived to allow calculation of the required staining time for cardiac samples of an arbitrary size.
Xu, Daolin; Lu, Fangfang
2006-12-01
We address the problem of reconstructing a set of nonlinear differential equations from chaotic time series. A method that combines the implicit Adams integration and the structure-selection technique of an error reduction ratio is proposed for system identification and corresponding parameter estimation of the model. The structure-selection technique identifies the significant terms from a pool of candidates of functional basis and determines the optimal model through orthogonal characteristics on data. The technique with the Adams integration algorithm makes the reconstruction available to data sampled with large time intervals. Numerical experiment on Lorenz and Rossler systems shows that the proposed strategy is effective in global vector field reconstruction from noisy time series.
Artificial Immune System Approach for Airborne Vehicle Maneuvering
NASA Technical Reports Server (NTRS)
Kaneshige, John T. (Inventor); Krishnakumar, Kalmanje S. (Inventor)
2014-01-01
A method and system for control of a first aircraft relative to a second aircraft. A desired location and desired orientation are estimated for the first aircraft, relative to the second aircraft, at a subsequent time, t=t2, subsequent to the present time, t=t1, where the second aircraft continues its present velocity during a subsequent time interval, t1.ltoreq.t.ltoreq.t2, or takes evasive action. Action command sequences are examined, and an optimal sequence is chosen to bring the first aircraft to the desired location and desired orientation relative to the second aircraft at time t=t2. The method applies to control of combat aircraft and/or of aircraft in a congested airspace.
Grey fuzzy optimization model for water quality management of a river system
NASA Astrophysics Data System (ADS)
Karmakar, Subhankar; Mujumdar, P. P.
2006-07-01
A grey fuzzy optimization model is developed for water quality management of river system to address uncertainty involved in fixing the membership functions for different goals of Pollution Control Agency (PCA) and dischargers. The present model, Grey Fuzzy Waste Load Allocation Model (GFWLAM), has the capability to incorporate the conflicting goals of PCA and dischargers in a deterministic framework. The imprecision associated with specifying the water quality criteria and fractional removal levels are modeled in a fuzzy mathematical framework. To address the imprecision in fixing the lower and upper bounds of membership functions, the membership functions themselves are treated as fuzzy in the model and the membership parameters are expressed as interval grey numbers, a closed and bounded interval with known lower and upper bounds but unknown distribution information. The model provides flexibility for PCA and dischargers to specify their aspirations independently, as the membership parameters for different membership functions, specified for different imprecise goals are interval grey numbers in place of a deterministic real number. In the final solution optimal fractional removal levels of the pollutants are obtained in the form of interval grey numbers. This enhances the flexibility and applicability in decision-making, as the decision-maker gets a range of optimal solutions for fixing the final decision scheme considering technical and economic feasibility of the pollutant treatment levels. Application of the GFWLAM is illustrated with case study of the Tunga-Bhadra river system in India.
NASA Astrophysics Data System (ADS)
Bhardwaj, Manish; McCaughan, Leon; Olkhovets, Anatoli; Korotky, Steven K.
2006-12-01
We formulate an analytic framework for the restoration performance of path-based restoration schemes in planar mesh networks. We analyze various switch architectures and signaling schemes and model their total restoration interval. We also evaluate the network global expectation value of the time to restore a demand as a function of network parameters. We analyze a wide range of nominally capacity-optimal planar mesh networks and find our analytic model to be in good agreement with numerical simulation data.
Marital status and optimism score among breast cancer survivors.
Croft, Lindsay; Sorkin, John; Gallicchio, Lisa
2014-11-01
There are an increasing number of breast cancer survivors, but their psychosocial and supportive care needs are not well-understood. Recent work has found marital status, social support, and optimism to be associated with quality of life, but little research has been conducted to understand how these factors relate to one another. Survey data from 722 breast cancer survivors were analyzed to estimate the association between marital status and optimism score, as measured using the Life Orientation Test-Revised. Linear regression was used to estimate the relationship of marital status and optimism, controlling for potential confounding variables and assessing effect modification. The results showed that the association between marital status and optimism was modified by time since breast cancer diagnosis. Specifically, in those most recently diagnosed (within 5 years), married breast cancer survivors had a 1.50 higher mean optimism score than unmarried survivors (95 % confidence interval (CI) 0.37, 2.62; p = 0.009). The difference in optimism score by marital status was not present more than 5 years from breast cancer diagnosis. Findings suggest that among breast cancer survivors within 5 years since diagnosis, those who are married have higher optimism scores than their unmarried counterparts; this association was not observed among longer-term breast cancer survivors. Future research should examine whether the difference in optimism score among this subgroup of breast cancer survivors is clinically relevant.
Brett, Benjamin L; Smyk, Nathan; Solomon, Gary; Baughman, Brandon C; Schatz, Philip
2016-08-18
The ImPACT (Immediate Post-Concussion Assessment and Cognitive Testing) neurocognitive testing battery is a widely used tool used for the assessment and management of sports-related concussion. Research on the stability of ImPACT in high school athletes at a 1- and 2-year intervals have been inconsistent, requiring further investigation. We documented 1-, 2-, and 3-year test-retest reliability of repeated ImPACT baseline assessments in a sample of high school athletes, using multiple statistical methods for examining stability. A total of 1,510 high school athletes completed baseline cognitive testing using online ImPACT test battery at three time periods of approximately 1- (N = 250), 2- (N = 1146), and 3-year (N = 114) intervals. No participant sustained a concussion between assessments. Intraclass correlation coefficients (ICCs) ranged in composite scores from 0.36 to 0.90 and showed little change as intervals between assessments increased. Reliable change indices and regression-based measures (RBMs) examining the test-retest stability demonstrated a lack of significant change in composite scores across the various time intervals, with very few cases (0%-6%) falling outside of 95% confidence intervals. The results suggest ImPACT composites scores remain considerably stability across 1-, 2-, and 3-year test-retest intervals in high school athletes, when considering both ICCs and RBM. Annually ascertaining baseline scores continues to be optimal for ensuring accurate and individualized management of injury for concussed athletes. For instances in which more recent baselines are not available (1-2 years), clinicians should seek to utilize more conservative range estimates in determining the presence of clinically meaningful change in cognitive performance. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Demany, Laurent; Montandon, Gaspard; Semal, Catherine
2003-04-01
A listener's ability to compare two sounds separated by a silent time interval T is limited by a sum of ``sensory noise'' and ``memory noise.'' The present work was intended to test a model according to which these two components of internal noise are independent and, for a given sensory continuum, the memory noise depends only on T. In three experiments using brief sounds (<80 ms), pitch discrimination performances were measured in terms of d' as a function of T (0.1-4 s) and a physical parameter affecting the amount of sensory noise (pitch salience). As T increased, d' first increased rapidly and then declined more slowly. According to the tested model, the relative decline of d' beyond the optimal value of T should have been slower when pitch salience was low (large amount of sensory noise) than when pitch salience was high (small amount of sensory noise). However, this prediction was disproved in each of the three experiments. It was also found, when a ``roving'' procedure was used, that the optimal value of T was markedly shorter for very brief tone bursts (6 sine cycles) than for longer tone bursts (30 sine cycles).
NASA Astrophysics Data System (ADS)
Wang, Fengwen
2018-05-01
This paper presents a systematic approach for designing 3D auxetic lattice materials, which exhibit constant negative Poisson's ratios over large strain intervals. A unit cell model mimicking tensile tests is established and based on the proposed model, the secant Poisson's ratio is defined as the negative ratio between the lateral and the longitudinal engineering strains. The optimization problem for designing a material unit cell with a target Poisson's ratio is formulated to minimize the average lateral engineering stresses under the prescribed deformations. Numerical results demonstrate that 3D auxetic lattice materials with constant Poisson's ratios can be achieved by the proposed optimization formulation and that two sets of material architectures are obtained by imposing different symmetry on the unit cell. Moreover, inspired by the topology-optimized material architecture, a subsequent shape optimization is proposed by parametrizing material architectures using super-ellipsoids. By designing two geometrical parameters, simple optimized material microstructures with different target Poisson's ratios are obtained. By interpolating these two parameters as polynomial functions of Poisson's ratios, material architectures for any Poisson's ratio in the interval of ν ∈ [ - 0.78 , 0.00 ] are explicitly presented. Numerical evaluations show that interpolated auxetic lattice materials exhibit constant Poisson's ratios in the target strain interval of [0.00, 0.20] and that 3D auxetic lattice material architectures with programmable Poisson's ratio are achievable.
Computer assisted thermal-vacuum testing
NASA Technical Reports Server (NTRS)
Petrie, W.; Mikk, G.
1977-01-01
In testing complex systems and components under dynamic thermal-vacuum environments, it is desirable to optimize the environment control sequence in order to reduce test duration and cost. This paper describes an approach where a computer is utilized as part of the test control operation. Real time test data is made available to the computer through time-sharing terminals at appropriate time intervals. A mathematical model of the test article and environmental control equipment is then operated on using the real time data to yield current thermal status, temperature analysis, trend prediction and recommended thermal control setting changes to arrive at the required thermal condition. The data acquisition interface and the time-sharing hook-up to an IBM-370 computer is described along with a typical control program and data demonstrating its use.
Self-organization leads to supraoptimal performance in public transportation systems.
Gershenson, Carlos
2011-01-01
The performance of public transportation systems affects a large part of the population. Current theory assumes that passengers are served optimally when vehicles arrive at stations with regular intervals. In this paper, it is shown that self-organization can improve the performance of public transportation systems beyond the theoretical optimum by responding adaptively to local conditions. This is possible because of a "slower-is-faster" effect, where passengers wait more time at stations but total travel times are reduced. The proposed self-organizing method uses "antipheromones" to regulate headways, which are inspired by the stigmergy (communication via environment) of some ant colonies.
Balasubramonian, Rajeev [Sandy, UT; Dwarkadas, Sandhya [Rochester, NY; Albonesi, David [Ithaca, NY
2009-02-10
In a processor having multiple clusters which operate in parallel, the number of clusters in use can be varied dynamically. At the start of each program phase, the configuration option for an interval is run to determine the optimal configuration, which is used until the next phase change is detected. The optimum instruction interval is determined by starting with a minimum interval and doubling it until a low stability factor is reached.
Early photosensitizer uptake kinetics predict optimum drug-light interval for photodynamic therapy
NASA Astrophysics Data System (ADS)
Sinha, Lagnojita; Elliott, Jonathan T.; Hasan, Tayyaba; Pogue, Brian W.; Samkoe, Kimberley S.; Tichauer, Kenneth M.
2015-03-01
Photodynamic therapy (PDT) has shown promising results in targeted treatment of cancerous cells by developing localized toxicity with the help of light induced generation of reactive molecular species. The efficiency of this therapy depends on the product of the intensity of light dose and the concentration of photosensitizer (PS) in the region of interest (ROI). On account of this, the dynamic and variable nature of PS delivery and retention depends on many physiological factors that are known to be heterogeneous within and amongst tumors (e.g., blood flow, blood volume, vascular permeability, and lymph drainage rate). This presents a major challenge with respect to how the optimal time and interval of light delivery is chosen, which ideally would be when the concentration of PS molecule is at its maximum in the ROI. In this paper, a predictive algorithm is developed that takes into consideration the variability and dynamic nature of PS distribution in the body on a region-by-region basis and provides an estimate of the optimum time when the PS concentration will be maximum in the ROI. The advantage of the algorithm lies in the fact that it predicts the time in advance as it takes only a sample of initial data points (~12 min) as input. The optimum time calculated using the algorithm estimated a maximum dose that was only 0.58 +/- 1.92% under the true maximum dose compared to a mean dose error of 39.85 +/- 6.45% if a 1 h optimal light deliver time was assumed for patients with different efflux rate constants of the PS, assuming they have the same plasma function. Therefore, if the uptake values of PS for the blood and the ROI is known for only first 12 minutes, the entire curve along with the optimum time of light radiation can be predicted with the help of this algorithm.
Akgöz, Ayça; Akata, Deniz; Hazırolan, Tuncay; Karçaaltıncaba, Muşturay
2014-01-01
PURPOSE We aimed to evaluate the visibility of coronary arteries and bypass-grafts in patients who underwent dual source computed tomography (DSCT) angiography without heart rate (HR) control and to determine optimal intervals for image reconstruction. MATERIALS AND METHODS A total of 285 consecutive cases who underwent coronary (n=255) and bypass-graft (n=30) DSCT angiography at our institution were identified retrospectively. Patients with atrial fibrillation were excluded. Ten datasets in 10% increments were reconstructed in all patients. On each dataset, the visibility of coronary arteries was evaluated using the 15-segment American Heart Association classification by two radiologists in consensus. RESULTS Mean HR was 76±16.3 bpm, (range, 46–127 bpm). All coronary segments could be visualized in 277 patients (97.19%). On a segment-basis, 4265 of 4275 (99.77%) coronary artery segments were visible. All segments of 56 bypass-grafts in 30 patients were visible (100%). Total mean segment visibility scores of all coronary arteries were highest at 70%, 40%, and 30% intervals for all HRs. The optimal reconstruction intervals to visualize the segments of all three coronary arteries in descending order were 70%, 60%, 80%, and 30% intervals in patients with a mean HR <70 bpm; 40%, 70%, and 30% intervals in patients with a mean HR 70–100 bpm; and 40%, 50%, and 30% in patients with a mean HR >100 bpm. CONCLUSION Without beta-blocker administration, DSCT coronary angiography offers excellent visibility of vascular segments using both end-systolic and mid-late diastolic reconstructions at HRs up to 100 bpm, and only end-systolic reconstructions at HRs over 100 bpm. PMID:24834490
Bye, Robin T; Neilson, Peter D
2010-10-01
Physiological tremor during movement is characterized by ∼10 Hz oscillation observed both in the electromyogram activity and in the velocity profile. We propose that this particular rhythm occurs as the direct consequence of a movement response planning system that acts as an intermittent predictive controller operating at discrete intervals of ∼100 ms. The BUMP model of response planning describes such a system. It forms the kernel of Adaptive Model Theory which defines, in computational terms, a basic unit of motor production or BUMP. Each BUMP consists of three processes: (1) analyzing sensory information, (2) planning a desired optimal response, and (3) execution of that response. These processes operate in parallel across successive sequential BUMPs. The response planning process requires a discrete-time interval in which to generate a minimum acceleration trajectory to connect the actual response with the predicted future state of the target and compensate for executional error. We have shown previously that a response planning time of 100 ms accounts for the intermittency observed experimentally in visual tracking studies and for the psychological refractory period observed in double stimulation reaction time studies. We have also shown that simulations of aimed movement, using this same planning interval, reproduce experimentally observed speed-accuracy tradeoffs and movement velocity profiles. Here we show, by means of a simulation study of constant velocity tracking movements, that employing a 100 ms planning interval closely reproduces the measurement discontinuities and power spectra of electromyograms, joint-angles, and angular velocities of physiological tremor reported experimentally. We conclude that intermittent predictive control through sequential operation of BUMPs is a fundamental mechanism of 10 Hz physiological tremor in movement. Copyright © 2010 Elsevier B.V. All rights reserved.
Bénard, Florence; Barkun, Alan N; Martel, Myriam; von Renteln, Daniel
2018-01-07
To summarize and compare worldwide colorectal cancer (CRC) screening recommendations in order to identify similarities and disparities. A systematic literature search was performed using MEDLINE, EMBASE, Scopus, CENTRAL and ISI Web of knowledge identifying all average-risk CRC screening guideline publications within the last ten years and/or position statements published in the last 2 years. In addition, a hand-search of the webpages of National Gastroenterology Society websites, the National Guideline Clearinghouse, the BMJ Clinical Evidence website, Google and Google Scholar was performed. Fifteen guidelines were identified. Six guidelines were published in North America, four in Europe, four in Asia and one from the World Gastroenterology Organization. The majority of guidelines recommend screening average-risk individuals between ages 50 and 75 using colonoscopy (every 10 years), or flexible sigmoidoscopy (FS, every 5 years) or fecal occult blood test (FOBT, mainly the Fecal Immunochemical Test, annually or biennially). Disparities throughout the different guidelines are found relating to the use of colonoscopy, rank order between test, screening intervals and optimal age ranges for screening. Average risk individuals between 50 and 75 years should undergo CRC screening. Recommendations for optimal surveillance intervals, preferred tests/test cascade as well as the optimal timing when to start and stop screening differ regionally and should be considered for clinical decision making. Furthermore, local resource availability and patient preferences are important to increase CRC screening uptake, as any screening is better than none.
de Azambuja, Evandro; Bradbury, Ian; Saini, Kamal S.; Bines, José; Simon, Sergio D.; Dooren, Veerle Van; Aktan, Gursel; Pritchard, Kathleen I.; Wolff, Antonio C.; Smith, Ian; Jackisch, Christian; Lang, Istvan; Untch, Michael; Boyle, Frances; Xu, Binghe; Baselga, Jose; Perez, Edith A.; Piccart-Gebhart, Martine
2013-01-01
Purpose. This study measured the time taken for setting up the different facets of Adjuvant Lapatinib and/or Trastuzumab Treatment Optimization (ALTTO), an international phase III study being conducted in 44 participating countries. Methods. Time to regulatory authority (RA) approval, time to ethics committee/institutional review board (EC/IRB) approval, time from study approval by EC/IRB to first randomized patient, and time from first to last randomized patient were prospectively collected in the ALTTO study. Analyses were conducted by grouping countries into either geographic regions or economic classes as per the World Bank's criteria. Results. South America had a significantly longer time to RA approval (median: 236 days, range: 21–257 days) than Europe (median: 52 days, range: 0–151 days), North America (median: 26 days, range: 22–30 days), and Asia-Pacific (median: 62 days, range: 37–75 days). Upper-middle economies had longer times to RA approval (median: 123 days, range: 21–257 days) than high-income (median: 47 days, range: 0–112 days) and lower-middle income economies (median: 57 days, range: 37–62 days). No significant difference was observed for time to EC/IRB approval across the studied regions (median: 59 days, range 0–174 days). Overall, the median time from EC/IRB approval to first recruited patient was 169 days (range: 26–412 days). Conclusion. This study highlights the long time intervals required to activate a global phase III trial. Collaborative research groups, pharmaceutical industry sponsors, and regulatory authorities should analyze the current system and enter into dialogue for optimizing local policies. This would enable faster access of patients to innovative therapies and enhance the efficiency of clinical research. PMID:23359433
Formulation and evaluation of flurbiprofen microemulsion.
Ambade, K W; Jadhav, S L; Gambhire, M N; Kurmi, S D; Kadam, V J; Jadhav, K R
2008-01-01
The purpose of the present study was to investigate the microemulsion formulations for topical delivery of Flurbiprofen (FP) in order to by pass its gastrointestinal adverse effects. The pseudoternary phase diagrams were developed and various microemulsion formulations were prepared using Isopropyl Myristate (IPM), Ethyl Oleate (EO) as oils, Aerosol OT as surfactant and Sorbitan Monooleate as cosurfactant. The transdermal permeability of flurbiprofen from microemulsions containing IPM and EO as two different oil phases was analyzed using Keshary-Chien diffusion cell through excised rat skin. Flurbiprofen showed higher in vitro permeation from IPM as compared to that of from EO microemulsion. Thus microemulsion containing IPM as oil phase were selected for optimization. The optimization was carried out using 2(3) factorial design. The optimized formula was then subjected to in vivo anti-inflammatory study and the performance of flurbiprofen from optimized formulation was compared with that of gel cream. Flurbiprofen from optimized microemulsion formulation was found to be more effective as compared to gel cream in inhibiting the carrageenan induced rat paw edema at all time intervals. Histopathological investigation of rat skin revealed the safety of microemulsion formulation for topical use. Thus the present study indicates that, microemulsion can be a promising vehicle for the topical delivery of flurbiprofen.
Electrocardiographically gated 16-section CT of the thorax: cardiac motion suppression.
Hofmann, Lars K; Zou, Kelly H; Costello, Philip; Schoepf, U Joseph
2004-12-01
Thirty patients underwent 16-section multi-detector row computed tomographic (CT) angiography of the thorax with retrospective electrocardiographic gating. Institutional review board approval was obtained for retrospective analysis of CT scan data and records; patient informed consent was not required. Images reconstructed at six different time points (0%, 20%, 40%, 50%, 60%, 80%) within the R-R interval on the electrocardiogram were analyzed by two radiologists for diagnostic quality, to identify suitable reconstruction intervals for optimal suppression of cardiac motion. Five regions of interest (left coronary artery, aortic root, ascending and descending aorta, pulmonary arteries) were evaluated. Best image quality was achieved by referencing image reconstruction to middiastole (50%-60%) for the left coronary artery, aortic root, and ascending aorta. The pulmonary arteries are best displayed during mid- to late diastole (80%). (c) RSNA, 2004
Said, Mayada; Elsayed, Ibrahim; Aboelwafa, Ahmed A; Elshafeey, Ahmed H
2018-06-18
Agomelatine suffers from extensive inactivation through 1 st pass effect with a limited oral bioavailability (5%). The aim of this study was to formulate and optimize liquid nanocrystals (LNC) containing agomelatine to enhance the transdermal permeation of the drug. The independent factors of the employed Box-Behnken design were the Pluronic F127, deoxycholic acid sodium salt and propylene glycol percentages. On the other hand, particle size, polydispersity index, zeta potential, entrapment efficiency, cumulative amount permeated at certain time intervals and permeation enhancement ratio were considered as dependent responses. The optimized formulation was composed of 1.5% Pluronic F127 and 1.5% deoxycholic acid sodium salt and it was found to have significantly higher AUC 0-24h , AUC 0-∞ and elimination t 1/2 than that of the employed reference indicating the enhancement of the drug permeation. The obtained findings indicated the ability of the optimized LNC formulation to improve the drug bioavailability after its transdermal application. Copyright © 2018 Elsevier B.V. All rights reserved.
Qiao, Wenjun; Tang, Xiaoqi; Zheng, Shiqi; Xie, Yuanlong; Song, Bao
2016-09-01
In this paper, an adaptive two-degree-of-freedom (2Dof) proportional-integral (PI) controller is proposed for the speed control of permanent magnet synchronous motor (PMSM). Firstly, an enhanced just-in-time learning technique consisting of two novel searching engines is presented to identify the model of the speed control system in a real-time manner. Secondly, a general formula is given to predict the future speed reference which is unavailable at the interval of two bus-communication cycles. Thirdly, the fractional order generalized predictive control (FOGPC) is introduced to improve the control performance of the servo drive system. Based on the identified model parameters and predicted speed reference, the optimal control law of FOGPC is derived. Finally, the designed 2Dof PI controller is auto-tuned by matching with the optimal control law. Simulations and real-time experimental results on the servo drive system of PMSM are provided to illustrate the effectiveness of the proposed strategy. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Microhard MHX2420 Orbital Performance Evaluation Using RT Logic T400CS
NASA Technical Reports Server (NTRS)
TintoreGazulla, Oriol; Lombardi, Mark
2012-01-01
RT Logic allows simulation of Ground Station - satellite communications: Static tests have been successful. Dynamic tests have been performed for simple passes. Future dynamic tests are needed to simulate real orbit communications. Satellite attitude changes antenna gain. Atmospheric and rain losses need to be added. STK Plug-in will be the next step to improve the dynamic tests. There is a possibility of running longer simulations. Simulation of different losses available in the STK Plug-in. Microhard optimization: Effect of Microhard settings on the data throughput have been understood. Optimized settings improve data throughput for LEO communications. Longer hop intervals make transfer of larger packets more efficient (more time between hops in frequency). Use of FEC (Reed-Solomon) reduces the number of retransmissions for long-range or noisy communications.
Mist Interval and Hormone Concentration Influence Rooting of Florida and Piedmont Azalea
USDA-ARS?s Scientific Manuscript database
Native azalea (Rhododendron spp.) vegetative propagation information is limited. The objective of this experiment is to determine optimal levels of K-IBA and mist intervals for propagation of Florida azalea (Rhododendron austrinum) and Piedmont azalea (Rhododendron canescens). Florida azalea roote...
Error propagation of partial least squares for parameters optimization in NIR modeling.
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-05
A novel methodology is proposed to determine the error propagation of partial least-square (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. And error propagation of modeling parameters for water quantity in corn and geniposide quantity in Gardenia were presented by both type І and type II error. For example, when variable importance in the projection (VIP), interval partial least square (iPLS) and backward interval partial least square (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how and what extent the different modeling parameters affect error propagation of PLS for parameters optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials finished a powerful process in developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, it could provide a significant guidance for the selection of modeling parameters of other multivariate calibration models. Copyright © 2017. Published by Elsevier B.V.
Error propagation of partial least squares for parameters optimization in NIR modeling
NASA Astrophysics Data System (ADS)
Du, Chenzhao; Dai, Shengyun; Qiao, Yanjiang; Wu, Zhisheng
2018-03-01
A novel methodology is proposed to determine the error propagation of partial least-square (PLS) for parameters optimization in near-infrared (NIR) modeling. The parameters include spectral pretreatment, latent variables and variable selection. In this paper, an open source dataset (corn) and a complicated dataset (Gardenia) were used to establish PLS models under different modeling parameters. And error propagation of modeling parameters for water quantity in corn and geniposide quantity in Gardenia were presented by both type І and type II error. For example, when variable importance in the projection (VIP), interval partial least square (iPLS) and backward interval partial least square (BiPLS) variable selection algorithms were used for geniposide in Gardenia, compared with synergy interval partial least squares (SiPLS), the error weight varied from 5% to 65%, 55% and 15%. The results demonstrated how and what extent the different modeling parameters affect error propagation of PLS for parameters optimization in NIR modeling. The larger the error weight, the worse the model. Finally, our trials finished a powerful process in developing robust PLS models for corn and Gardenia under the optimal modeling parameters. Furthermore, it could provide a significant guidance for the selection of modeling parameters of other multivariate calibration models.
NASA Astrophysics Data System (ADS)
Lin, Yu-Fen; Chen, Yong-Song
2017-02-01
When a proton exchange membrane fuel cell (PEMFC) is operated with a dead-ended anode, impurities gradually accumulate within the anode, resulting in a performance drop. An anode purge is thereby ultimately required to remove impurities within the anode. A purge strategy comprises purge interval (valve closed) and purge duration (valve is open). A short purge interval causes frequent and unnecessary activation of the valve, whereas a long purge interval leads to excessive impurity accumulation. A short purge duration causes an incomplete performance recovery, whereas a long purge duration results in low hydrogen utilization. In this study, a series of experimental trials was conducted to simultaneously measure the hydrogen supply rate and power generation of a PEMFC at a frequency of 50 Hz for various operating current density levels and purge durations. The effect of purge duration on the cell's energy efficiency was subsequently analyzed and discussed. The results showed that the optimal purge duration for the PEMFC was approximately 0.2 s. Based on the results of this study, a methodical process for determining optimal purge durations was ultimately proposed for widespread application. Purging approximately one-fourth of anode gas can obtain optimal energy efficiency for a PEMFC with a dead-ended anode.
Azzi, Alain Joe; Shah, Karan; Seely, Andrew; Villeneuve, James Patrick; Sundaresan, Sudhir R; Shamji, Farid M; Maziak, Donna E; Gilbert, Sebastien
2016-05-01
Health care resources are costly and should be used judiciously and efficiently. Predicting the duration of surgical procedures is key to optimizing operating room resources. Our objective was to identify factors influencing operative time, particularly surgical team turnover. We performed a single-institution, retrospective review of lobectomy operations. Univariate and multivariate analyses were performed to evaluate the impact of different factors on surgical time (skin-to-skin) and total procedure time. Staff turnover within the nursing component of the surgical team was defined as the number of instances any nurse had to leave the operating room over the total number of nurses involved in the operation. A total of 235 lobectomies were performed by 5 surgeons, most commonly for lung cancer (95%). On multivariate analysis, percent forced expiratory volume in 1 second, surgical approach, and lesion size had a significant effect on surgical time. Nursing turnover was associated with a significant increase in surgical time (53.7 minutes; 95% confidence interval, 6.4-101; P = .026) and total procedure time (83.2 minutes; 95% confidence interval, 30.1-136.2; P = .002). Active management of surgical team turnover may be an opportunity to improve operating room efficiency when the surgical team is engaged in a major pulmonary resection. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
Eguchi, Kazuo; Kuruvilla, Sujith; Ogedegbe, Gbenga; Gerin, William; Schwartz, Joseph E; Pickering, Thomas G
2009-06-01
To clarify whether a shorter interval between three successive home blood pressure (HBP) readings (10 s vs. 1 min) taken twice a day gives a better prediction of the average 24-h BP and better patient compliance. We enrolled 56 patients from a hypertension clinic (mean age: 60 +/- 14 years; 54% female patients). The study consisted of three clinic visits, with two 4-week periods of self-monitoring of HBP between them, and a 24-h ambulatory BP monitoring at the second visit. Using a crossover design, with order randomized, the oscillometric HBP device (HEM-5001) could be programmed to take three consecutive readings at either 10-s or 1-min intervals, each of which was done for 4 weeks. Patients were asked to measure three HBP readings in the morning and evening. All the readings were stored in the memory of the monitors. The analyses were performed using the second-third HBP readings. The average systolic BP/diastolic BP for the 10-s and 1-min intervals at home were 136.1 +/- 15.8/77.5 +/- 9.5 and 133.2 +/- 15.5/76.9 +/- 9.3 mmHg (P = 0.001/0.19 for the differences in systolic BP and diastolic BP), respectively. The 1-min BP readings were significantly closer to the average of awake ambulatory BP (131 +/- 14/79 +/- 10 mmHg) than the 10-s interval readings. There was no significant difference in patients' compliance in taking adequate numbers of readings at the different time intervals. The 1-min interval between HBP readings gave a closer agreement with the daytime average BP than the 10-s interval.
Eguchi, Kazuo; Kuruvilla, Sujith; Ogedegbe, Gbenga; Gerin, William; Schwartz, Joseph E.; Pickering, Thomas G.
2010-01-01
Objectives To clarify whether a shorter interval between three successive home blood pressure (HBP) readings (10 s vs. 1 min) taken twice a day gives a better prediction of the average 24-h BP and better patient compliance. Design We enrolled 56 patients from a hypertension clinic (mean age: 60 ±14 years; 54% female patients). The study consisted of three clinic visits, with two 4-week periods of self-monitoring of HBP between them, and a 24-h ambulatory BP monitoring at the second visit. Using a crossover design, with order randomized, the oscillometric HBP device (HEM-5001) could be programmed to take three consecutive readings at either 10-s or 1-min intervals, each of which was done for 4 weeks. Patients were asked to measure three HBP readings in the morning and evening. All the readings were stored in the memory of the monitors. Results The analyses were performed using the second–third HBP readings. The average systolic BP/diastolic BP for the 10-s and 1-min intervals at home were 136.1 ±15.8/77.5 ±9.5 and 133.2 ±15.5/76.9 ±9.3 mmHg (P = 0.001/0.19 for the differences in systolic BP and diastolic BP), respectively. The 1-min BP readings were significantly closer to the average of awake ambulatory BP (131 ±14/79 ±10 mmHg) than the 10-s interval readings. There was no significant difference in patients’ compliance in taking adequate numbers of readings at the different time intervals. Conclusion The 1-min interval between HBP readings gave a closer agreement with the daytime average BP than the 10-s interval. PMID:19462492
Varma, Niraj; O'Donnell, David; Bassiouny, Mohammed; Ritter, Philippe; Pappone, Carlo; Mangual, Jan; Cantillon, Daniel; Badie, Nima; Thibault, Bernard; Wisnoskey, Brian
2018-02-06
QRS narrowing following cardiac resynchronization therapy with biventricular (BiV) or left ventricular (LV) pacing is likely affected by patient-specific conduction characteristics (PR, qLV, LV-paced propagation interval), making a universal programming strategy likely ineffective. We tested these factors using a novel, device-based algorithm (SyncAV) that automatically adjusts paced atrioventricular delay (default or programmable offset) according to intrinsic atrioventricular conduction. Seventy-five patients undergoing cardiac resynchronization therapy (age 66±11 years; 65% male; 32% with ischemic cardiomyopathy; LV ejection fraction 28±8%; QRS duration 162±16 ms) with intact atrioventricular conduction (PR interval 194±34, range 128-300 ms), left bundle branch block, and optimized LV lead position were studied at implant. QRS duration (QRSd) reduction was compared for the following pacing configurations: nominal simultaneous BiV (Mode I: paced/sensed atrioventricular delay=140/110 ms), BiV+SyncAV with 50 ms offset (Mode II), BiV+SyncAV with offset that minimized QRSd (Mode III), or LV-only pacing+SyncAV with 50 ms offset (Mode IV). The intrinsic QRSd (162±16 ms) was reduced to 142±17 ms (-11.8%) by Mode I, 136±14 ms (-15.6%) by Mode IV, and 132±13 ms (-17.8%) by Mode II. Mode III yielded the shortest overall QRSd (123±12 ms, -23.9% [ P <0.001 versus all modes]) and was the only configuration without QRSd prolongation in any patient. QRS narrowing occurred regardless of QRSd, PR, or LV-paced intervals, or underlying ischemic disease. Post-implant electrical optimization in already well-selected patients with left bundle branch block and optimized LV lead position is facilitated by patient-tailored BiV pacing adjusted to intrinsic atrioventricular timing using an automatic device-based algorithm. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
Fabris, Enrico; van 't Hof, Arnoud; Hamm, Christian W; Lapostolle, Frédéric; Lassen, Jens F; Goodman, Shaun G; Ten Berg, Jurriën M; Bolognese, Leonardo; Cequier, Angel; Chettibi, Mohamed; Hammett, Christopher J; Huber, Kurt; Janzon, Magnus; Merkely, Béla; Storey, Robert F; Zeymer, Uwe; Cantor, Warren J; Tsatsaris, Anne; Kerneis, Mathieu; Diallo, Abdourahmane; Vicaut, Eric; Montalescot, Gilles
2017-08-01
In the ATLANTIC (Administration of Ticagrelor in the catheterization laboratory or in the Ambulance for New ST elevation myocardial Infarction to open the Coronary artery) trial the early use of aspirin, anticoagulation, and ticagrelor coupled with very short medical contact-to-balloon times represent good indicators of optimal treatment of ST-elevation myocardial infarction and an ideal setting to explore which factors may influence coronary reperfusion beyond a well-established pre-hospital system. This study sought to evaluate predictors of complete ST-segment resolution after percutaneous coronary intervention in ST-elevation myocardial infarction patients enrolled in the ATLANTIC trial. ST-segment analysis was performed on electrocardiograms recorded at the time of inclusion (pre-hospital electrocardiogram), and one hour after percutaneous coronary intervention (post-percutaneous coronary intervention electrocardiogram) by an independent core laboratory. Complete ST-segment resolution was defined as ≥70% ST-segment resolution. Complete ST-segment resolution occurred post-percutaneous coronary intervention in 54.9% ( n=800/1456) of patients and predicted lower 30-day composite major adverse cardiovascular and cerebrovascular events (odds ratio 0.35, 95% confidence interval 0.19-0.65; p<0.01), definite stent thrombosis (odds ratio 0.18, 95% confidence interval 0.02-0.88; p=0.03), and total mortality (odds ratio 0.43, 95% confidence interval 0.19-0.97; p=0.04). In multivariate analysis, independent negative predictors of complete ST-segment resolution were the time from symptoms to pre-hospital electrocardiogram (odds ratio 0.91, 95% confidence interval 0.85-0.98; p<0.01) and diabetes mellitus (odds ratio 0.6, 95% confidence interval 0.44-0.83; p<0.01); pre-hospital ticagrelor treatment showed a favorable trend for complete ST-segment resolution (odds ratio 1.22, 95% confidence interval 0.99-1.51; p=0.06). This study confirmed that post-percutaneous coronary intervention complete ST-segment resolution is a valid surrogate marker for cardiovascular clinical outcomes. In the current era of ST-elevation myocardial infarction reperfusion, patients' delay and diabetes mellitus are independent predictors of poor reperfusion and need specific attention in the future.
NASA Astrophysics Data System (ADS)
Yu, Jonas C. P.; Wee, H. M.; Yang, P. C.; Wu, Simon
2016-06-01
One of the supply chain risks for hi-tech products is the result of rapid technological innovation; it results in a significant decline in the selling price and demand after the initial launch period. Hi-tech products include computers and communication consumer's products. From a practical standpoint, a more realistic replenishment policy is needed to consider the impact of risks; especially when some portions of shortages are lost. In this paper, suboptimal and optimal order policies with partial backordering are developed for a buyer when the component cost, the selling price, and the demand rate decline at a continuous rate. Two mathematical models are derived and discussed: one model has the suboptimal solution with the fixed replenishment interval and a simpler computational process; the other one has the optimal solution with the varying replenishment interval and a more complicated computational process. The second model results in more profit. Numerical examples are provided to illustrate the two replenishment models. Sensitivity analysis is carried out to investigate the relationship between the parameters and the net profit.
The cerebellum predicts the temporal consequences of observed motor acts.
Avanzino, Laura; Bove, Marco; Pelosin, Elisa; Ogliastro, Carla; Lagravinese, Giovanna; Martino, Davide
2015-01-01
It is increasingly clear that we extract patterns of temporal regularity between events to optimize information processing. The ability to extract temporal patterns and regularity of events is referred as temporal expectation. Temporal expectation activates the same cerebral network usually engaged in action selection, comprising cerebellum. However, it is unclear whether the cerebellum is directly involved in temporal expectation, when timing information is processed to make predictions on the outcome of a motor act. Healthy volunteers received one session of either active (inhibitory, 1 Hz) or sham repetitive transcranial magnetic stimulation covering the right lateral cerebellum prior the execution of a temporal expectation task. Subjects were asked to predict the end of a visually perceived human body motion (right hand handwriting) and of an inanimate object motion (a moving circle reaching a target). Videos representing movements were shown in full; the actual tasks consisted of watching the same videos, but interrupted after a variable interval from its onset by a dark interval of variable duration. During the 'dark' interval, subjects were asked to indicate when the movement represented in the video reached its end by clicking on the spacebar of the keyboard. Performance on the timing task was analyzed measuring the absolute value of timing error, the coefficient of variability and the percentage of anticipation responses. The active group exhibited greater absolute timing error compared with the sham group only in the human body motion task. Our findings suggest that the cerebellum is engaged in cognitive and perceptual domains that are strictly connected to motor control.
Multi-Residential Activity Labelling in Smart Homes with Wearable Tags Using BLE Technology
Mokhtari, Ghassem; Zhang, Qing; Karunanithi, Mohanraj
2018-01-01
Smart home platforms show promising outcomes to provide a better quality of life for residents in their homes. One of the main challenges that exists with these platforms in multi-residential houses is activity labeling. As most of the activity sensors do not provide any information regarding the identity of the person who triggers them, it is difficult to label the sensor events in multi-residential smart homes. To deal with this challenge, individual localization in different areas can be a promising solution. The localization information can be used to automatically label the activity sensor data to individuals. Bluetooth low energy (BLE) is a promising technology for this application due to how easy it is to implement and its low energy footprint. In this approach, individuals wear a tag that broadcasts its unique identity (ID) in certain time intervals, while fixed scanners listen to the broadcasting packet to localize the tag and the individual. However, the localization accuracy of this method depends greatly on different settings of broadcasting signal strength, and the time interval of BLE tags. To achieve the best localization accuracy, this paper studies the impacts of different advertising time intervals and power levels, and proposes an efficient and applicable algorithm to select optimal value settings of BLE sensors. Moreover, it proposes an automatic activity labeling method, through integrating BLE localization information and ambient sensor data. The applicability and effectiveness of the proposed structure is also demonstrated in a real multi-resident smart home scenario. PMID:29562666
Multi-Residential Activity Labelling in Smart Homes with Wearable Tags Using BLE Technology.
Mokhtari, Ghassem; Anvari-Moghaddam, Amjad; Zhang, Qing; Karunanithi, Mohanraj
2018-03-19
Smart home platforms show promising outcomes to provide a better quality of life for residents in their homes. One of the main challenges that exists with these platforms in multi-residential houses is activity labeling. As most of the activity sensors do not provide any information regarding the identity of the person who triggers them, it is difficult to label the sensor events in multi-residential smart homes. To deal with this challenge, individual localization in different areas can be a promising solution. The localization information can be used to automatically label the activity sensor data to individuals. Bluetooth low energy (BLE) is a promising technology for this application due to how easy it is to implement and its low energy footprint. In this approach, individuals wear a tag that broadcasts its unique identity (ID) in certain time intervals, while fixed scanners listen to the broadcasting packet to localize the tag and the individual. However, the localization accuracy of this method depends greatly on different settings of broadcasting signal strength, and the time interval of BLE tags. To achieve the best localization accuracy, this paper studies the impacts of different advertising time intervals and power levels, and proposes an efficient and applicable algorithm to select optimal value settings of BLE sensors. Moreover, it proposes an automatic activity labeling method, through integrating BLE localization information and ambient sensor data. The applicability and effectiveness of the proposed structure is also demonstrated in a real multi-resident smart home scenario.
Cost-Effectiveness of Screening Individuals With Cystic Fibrosis for Colorectal Cancer.
Gini, Andrea; Zauber, Ann G; Cenin, Dayna R; Omidvari, Amir-Houshang; Hempstead, Sarah E; Fink, Aliza K; Lowenfels, Albert B; Lansdorp-Vogelaar, Iris
2017-12-27
Individuals with cystic fibrosis are at increased risk of colorectal cancer (CRC) compared to the general population, and risk is higher among those who received an organ transplant. We performed a cost-effectiveness analysis to determine optimal CRC screening strategies for patients with cystic fibrosis. We adjusted the existing Microsimulation Screening Analysis-Colon microsimulation model to reflect increased CRC risk and lower life expectancy in patients with cystic fibrosis. Modeling was performed separately for individuals who never received an organ transplant and patients who had received an organ transplant. We modeled 76 colonoscopy screening strategies that varied the age range and screening interval. The optimal screening strategy was determined based on a willingness to pay threshold of $100,000 per life-year gained. Sensitivity and supplementary analyses were performed, including fecal immunochemical test (FIT) as an alternative test, earlier ages of transplantation, and increased rates of colonoscopy complications, to assess whether optimal screening strategies would change. Colonoscopy every 5 years, starting at age 40 years, was the optimal colonoscopy strategy for patients with cystic fibrosis who never received an organ transplant; this strategy prevented 79% of deaths from CRC. Among patients with cystic fibrosis who had received an organ transplant, optimal colonoscopy screening should start at an age of 30 or 35 years, depending on the patient's age at time of transplantation. Annual FIT screening was predicted to be cost-effective for patients with cystic fibrosis. However, the level of accuracy of the FIT in population is not clear. Using a Microsimulation Screening Analysis-Colon microsimulation model, we found screening of patients with cystic fibrosis for CRC to be cost-effective. Due to the higher risk in these patients for CRC, screening should start at an earlier age with a shorter screening interval. The findings of this study (especially those on FIT screening) may be limited by restricted evidence available for patients with cystic fibrosis. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.
Cost Effectiveness of Screening Individuals With Cystic Fibrosis for Colorectal Cancer.
Gini, Andrea; Zauber, Ann G; Cenin, Dayna R; Omidvari, Amir-Houshang; Hempstead, Sarah E; Fink, Aliza K; Lowenfels, Albert B; Lansdorp-Vogelaar, Iris
2018-02-01
Individuals with cystic fibrosis are at increased risk of colorectal cancer (CRC) compared with the general population, and risk is higher among those who received an organ transplant. We performed a cost-effectiveness analysis to determine optimal CRC screening strategies for patients with cystic fibrosis. We adjusted the existing Microsimulation Screening Analysis-Colon model to reflect increased CRC risk and lower life expectancy in patients with cystic fibrosis. Modeling was performed separately for individuals who never received an organ transplant and patients who had received an organ transplant. We modeled 76 colonoscopy screening strategies that varied the age range and screening interval. The optimal screening strategy was determined based on a willingness to pay threshold of $100,000 per life-year gained. Sensitivity and supplementary analyses were performed, including fecal immunochemical test (FIT) as an alternative test, earlier ages of transplantation, and increased rates of colonoscopy complications, to assess if optimal screening strategies would change. Colonoscopy every 5 years, starting at an age of 40 years, was the optimal colonoscopy strategy for patients with cystic fibrosis who never received an organ transplant; this strategy prevented 79% of deaths from CRC. Among patients with cystic fibrosis who had received an organ transplant, optimal colonoscopy screening should start at an age of 30 or 35 years, depending on the patient's age at time of transplantation. Annual FIT screening was predicted to be cost-effective for patients with cystic fibrosis. However, the level of accuracy of the FIT in this population is not clear. Using a Microsimulation Screening Analysis-Colon model, we found screening of patients with cystic fibrosis for CRC to be cost effective. Because of the higher risk of CRC in these patients, screening should start at an earlier age with a shorter screening interval. The findings of this study (especially those on FIT screening) may be limited by restricted evidence available for patients with cystic fibrosis. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.
Martin, David O; Lemke, Bernd; Birnie, David; Krum, Henry; Lee, Kathy Lai-Fun; Aonuma, Kazutaka; Gasparini, Maurizio; Starling, Randall C; Milasinovic, Goran; Rogers, Tyson; Sambelashvili, Alex; Gorcsan, John; Houmsse, Mahmoud
2012-11-01
In patients with sinus rhythm and normal atrioventricular conduction, pacing only the left ventricle with appropriate atrioventricular delays can result in superior left ventricular and right ventricular function compared with standard biventricular (BiV) pacing. To evaluate a novel adaptive cardiac resynchronization therapy ((aCRT) algorithm for CRT pacing that provides automatic ambulatory selection between synchronized left ventricular or BiV pacing with dynamic optimization of atrioventricular and interventricular delays. Patients (n = 522) indicated for a CRT-defibrillator were randomized to aCRT vs echo-optimized BiV pacing (Echo) in a 2:1 ratio and followed at 1-, 3-, and 6-month postrandomization. The study met all 3 noninferiority primary objectives: (1) the percentage of aCRT patients who improved in their clinical composite score at 6 months was at least as high in the aCRT arm as in the Echo arm (73.6% vs 72.5%, with a noninferiority margin of 12%; P = .0007); (2) aCRT and echo-optimized settings resulted in similar cardiac performance, as demonstrated by a high concordance correlation coefficient between aortic velocity time integrals at aCRT and Echo settings at randomization (concordance correlation coefficient = 0.93; 95% confidence interval 0.91-0.94) and at 6-month postrandomization (concordance correlation coefficient = 0.90; 95% confidence interval 0.87-0.92); and (3) aCRT did not result in inappropriate device settings. There were no significant differences between the arms with respect to heart failure events or ventricular arrhythmia episodes. Secondary end points showed similar benefit, and right-ventricular pacing was reduced by 44% in the aCRT arm. The aCRT algorithm is safe and at least as effective as BiV pacing with comprehensive echocardiographic optimization. Copyright © 2012 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
Proposed algorithm to improve job shop production scheduling using ant colony optimization method
NASA Astrophysics Data System (ADS)
Pakpahan, Eka KA; Kristina, Sonna; Setiawan, Ari
2017-12-01
This paper deals with the determination of job shop production schedule on an automatic environment. On this particular environment, machines and material handling system are integrated and controlled by a computer center where schedule were created and then used to dictate the movement of parts and the operations at each machine. This setting is usually designed to have an unmanned production process for a specified interval time. We consider here parts with various operations requirement. Each operation requires specific cutting tools. These parts are to be scheduled on machines each having identical capability, meaning that each machine is equipped with a similar set of cutting tools therefore is capable of processing any operation. The availability of a particular machine to process a particular operation is determined by the remaining life time of its cutting tools. We proposed an algorithm based on the ant colony optimization method and embedded them on matlab software to generate production schedule which minimize the total processing time of the parts (makespan). We test the algorithm on data provided by real industry and the process shows a very short computation time. This contributes a lot to the flexibility and timelines targeted on an automatic environment.
Gartner, Daniel; Zhang, Yiye; Padman, Rema
2018-06-01
Order sets are a critical component in hospital information systems that are expected to substantially reduce physicians' physical and cognitive workload and improve patient safety. Order sets represent time interval-clustered order items, such as medications prescribed at hospital admission, that are administered to patients during their hospital stay. In this paper, we develop a mathematical programming model and an exact and a heuristic solution procedure with the objective of minimizing physicians' cognitive workload associated with prescribing order sets. Furthermore, we provide structural insights into the problem which lead us to a valid lower bound on the order set size. In a case study using order data on Asthma patients with moderate complexity from a major pediatric hospital, we compare the hospital's current solution with the exact and heuristic solutions on a variety of performance metrics. Our computational results confirm our lower bound and reveal that using a time interval decomposition approach substantially reduces computation times for the mathematical program, as does a K -means clustering based decomposition approach which, however, does not guarantee optimality because it violates the lower bound. The results of comparing the mathematical program with the current order set configuration in the hospital indicates that cognitive workload can be reduced by about 20.2% by allowing 1 to 5 order sets, respectively. The comparison of the K -means based decomposition with the hospital's current configuration reveals a cognitive workload reduction of about 19.5%, also by allowing 1 to 5 order sets, respectively. We finally provide a decision support system to help practitioners analyze the current order set configuration, the results of the mathematical program and the heuristic approach.
Herscovici, Sarah; Pe'er, Avivit; Papyan, Surik; Lavie, Peretz
2007-02-01
Scoring of REM sleep based on polysomnographic recordings is a laborious and time-consuming process. The growing number of ambulatory devices designed for cost-effective home-based diagnostic sleep recordings necessitates the development of a reliable automatic REM sleep detection algorithm that is not based on the traditional electroencephalographic, electrooccolographic and electromyographic recordings trio. This paper presents an automatic REM detection algorithm based on the peripheral arterial tone (PAT) signal and actigraphy which are recorded with an ambulatory wrist-worn device (Watch-PAT100). The PAT signal is a measure of the pulsatile volume changes at the finger tip reflecting sympathetic tone variations. The algorithm was developed using a training set of 30 patients recorded simultaneously with polysomnography and Watch-PAT100. Sleep records were divided into 5 min intervals and two time series were constructed from the PAT amplitudes and PAT-derived inter-pulse periods in each interval. A prediction function based on 16 features extracted from the above time series that determines the likelihood of detecting a REM epoch was developed. The coefficients of the prediction function were determined using a genetic algorithm (GA) optimizing process tuned to maximize a price function depending on the sensitivity, specificity and agreement of the algorithm in comparison with the gold standard of polysomnographic manual scoring. Based on a separate validation set of 30 patients overall sensitivity, specificity and agreement of the automatic algorithm to identify standard 30 s epochs of REM sleep were 78%, 92%, 89%, respectively. Deploying this REM detection algorithm in a wrist worn device could be very useful for unattended ambulatory sleep monitoring. The innovative method of optimization using a genetic algorithm has been proven to yield robust results in the validation set.
Performance Enhancing Diets and the PRISE Protocol to Optimize Athletic Performance
Arciero, Paul J.; Ward, Emery
2015-01-01
The training regimens of modern-day athletes have evolved from the sole emphasis on a single fitness component (e.g., endurance athlete or resistance/strength athlete) to an integrative, multimode approach encompassing all four of the major fitness components: resistance (R), interval sprints (I), stretching (S), and endurance (E) training. Athletes rarely, if ever, focus their training on only one mode of exercise but instead routinely engage in a multimode training program. In addition, timed-daily protein (P) intake has become a hallmark for all athletes. Recent studies, including from our laboratory, have validated the effectiveness of this multimode paradigm (RISE) and protein-feeding regimen, which we have collectively termed PRISE. Unfortunately, sports nutrition recommendations and guidelines have lagged behind the PRISE integrative nutrition and training model and therefore limit an athletes' ability to succeed. Thus, it is the purpose of this review to provide a clearly defined roadmap linking specific performance enhancing diets (PEDs) with each PRISE component to facilitate optimal nourishment and ultimately optimal athletic performance. PMID:25949823
Campbell, J P; Gratton, M C; Salomone, J A; Lindholm, D J; Watson, W A
1994-01-01
In some emergency medical services (EMS) system designs, response time intervals are mandated with monetary penalties for noncompliance. These times are set with the goal of providing rapid, definitive patient care. The time interval of vehicle at scene-to-patient access (VSPA) has been measured, but its effect on response time interval compliance has not been determined. To determine the effect of the VSPA interval on the mandated code 1 (< 9 min) and code 2 (< 13 min) response time interval compliance in an urban, public-utility model system. A prospective, observational study used independent third-party riders to collect the VSPA interval for emergency life-threatening (code 1) and emergency nonlife-threatening (code 2) calls. The VSPA interval was added to the 9-1-1 call-to-dispatch and vehicle dispatch-to-scene intervals to determine the total time interval from call received until paramedic access to the patient (9-1-1 call-to-patient access). Compliance with the mandated response time intervals was determined using the traditional time intervals (9-1-1 call-to-scene) plus the VSPA time intervals (9-1-1 call-to-patient access). Chi-square was used to determine statistical significance. Of the 216 observed calls, 198 were matched to the traditional time intervals. Sixty-three were code 1, and 135 were code 2. Of the code 1 calls, 90.5% were compliant using 9-1-1 call-to-scene intervals dropping to 63.5% using 9-1-1 call-to-patient access intervals (p < 0.0005). Of the code 2 calls, 94.1% were compliant using 9-1-1 call-to-scene intervals. Compliance decreased to 83.7% using 9-1-1 call-to-patient access intervals (p = 0.012). The addition of the VSPA interval to the traditional time intervals impacts system response time compliance. Using 9-1-1 call-to-scene compliance as a basis for measuring system performance underestimates the time for the delivery of definitive care. This must be considered when response time interval compliances are defined.
Nelson, Winnie W; Wang, Li; Baser, Onur; Damaraju, Chandrasekharrao V; Schein, Jeffrey R
2015-02-01
Although efficacious in stroke prevention in non-valvular atrial fibrillation, many warfarin patients are sub-optimally managed. To evaluate the association of international normalized ratio control and clinical outcomes among new warfarin patients with non-valvular atrial fibrillation. Adult non-valvular atrial fibrillation patients (≥18 years) initiating warfarin treatment were selected from the US Veterans Health Administration dataset between 10/2007 and 9/2012. Valid international normalized ratio values were examined from the warfarin initiation date through the earlier of the first clinical outcome, end of warfarin exposure or death. Each patient contributed multiple in-range and out-of-range time periods. The relative risk ratios of clinical outcomes associated with international normalized ratio control were estimated. 34,346 patients were included for analysis. During the warfarin exposure period, the incidence of events per 100 person-years was highest when patients had international normalized ratio <2:13.66 for acute coronary syndrome; 10.30 for ischemic stroke; 2.93 for transient ischemic attack; 1.81 for systemic embolism; and 4.55 for major bleeding. Poisson regression confirmed that during periods with international normalized ratio <2, patients were at increased risk of developing acute coronary syndrome (relative risk ratio: 7.9; 95 % confidence interval 6.9-9.1), ischemic stroke (relative risk ratio: 7.6; 95 % confidence interval 6.5-8.9), transient ischemic attack (relative risk ratio: 8.2; 95 % confidence interval 6.1-11.2), systemic embolism (relative risk ratio: 6.3; 95 % confidence interval 4.4-8.9) and major bleeding (relative risk ratio: 2.6; 95 % confidence interval 2.2-3.0). During time periods with international normalized ratio >3, patients had significantly increased risk of major bleeding (relative risk ratio: 1.5; 95 % confidence interval 1.2-2.0). In a Veterans Health Administration non-valvular atrial fibrillation population, exposure to out-of-range international normalized ratio values was associated with significantly increased risk of adverse clinical outcomes.
Li, Y P; Huang, G H
2010-09-15
Considerable public concerns have been raised in the past decades since a large amount of pollutant emissions from municipal solid waste (MSW) disposal of processes pose risks on surrounding environment and human health. Moreover, in MSW management, various uncertainties exist in the related costs, impact factors and objectives, which can affect the optimization processes and the decision schemes generated. In this study, an interval-based possibilistic programming (IBPP) method is developed for planning the MSW management with minimized system cost and environmental impact under uncertainty. The developed method can deal with uncertainties expressed as interval values and fuzzy sets in the left- and right-hand sides of constraints and objective function. An interactive algorithm is provided for solving the IBPP problem, which does not lead to more complicated intermediate submodels and has a relatively low computational requirement. The developed model is applied to a case study of planning a MSW management system, where mixed integer linear programming (MILP) technique is introduced into the IBPP framework to facilitate dynamic analysis for decisions of timing, sizing and siting in terms of capacity expansion for waste-management facilities. Three cases based on different waste-management policies are examined. The results obtained indicate that inclusion of environmental impacts in the optimization model can change the traditional waste-allocation pattern merely based on the economic-oriented planning approach. The results obtained can help identify desired alternatives for managing MSW, which has advantages in providing compromised schemes under an integrated consideration of economic efficiency and environmental impact under uncertainty. Copyright 2010 Elsevier B.V. All rights reserved.
Evaluating the efficiency of environmental monitoring programs
Levine, Carrie R.; Yanai, Ruth D.; Lampman, Gregory G.; Burns, Douglas A.; Driscoll, Charles T.; Lawrence, Gregory B.; Lynch, Jason; Schoch, Nina
2014-01-01
Statistical uncertainty analyses can be used to improve the efficiency of environmental monitoring, allowing sampling designs to maximize information gained relative to resources required for data collection and analysis. In this paper, we illustrate four methods of data analysis appropriate to four types of environmental monitoring designs. To analyze a long-term record from a single site, we applied a general linear model to weekly stream chemistry data at Biscuit Brook, NY, to simulate the effects of reducing sampling effort and to evaluate statistical confidence in the detection of change over time. To illustrate a detectable difference analysis, we analyzed a one-time survey of mercury concentrations in loon tissues in lakes in the Adirondack Park, NY, demonstrating the effects of sampling intensity on statistical power and the selection of a resampling interval. To illustrate a bootstrapping method, we analyzed the plot-level sampling intensity of forest inventory at the Hubbard Brook Experimental Forest, NH, to quantify the sampling regime needed to achieve a desired confidence interval. Finally, to analyze time-series data from multiple sites, we assessed the number of lakes and the number of samples per year needed to monitor change over time in Adirondack lake chemistry using a repeated-measures mixed-effects model. Evaluations of time series and synoptic long-term monitoring data can help determine whether sampling should be re-allocated in space or time to optimize the use of financial and human resources.
Femoral metastases from ovarian serous/endometroid adenocarcinoma
Beresford–Cleary, NJA; Mehdi, SA; Magowan, B
2012-01-01
Bony metastases from ovarian cancer are rare, tend to affect the axial skeleton and are associated with abdomino-pelvic disease. The median time interval between diagnosis of ovarian carcinoma and presentation of bony metastases is 44 months (1). We describe a rare case of high grade left ovarian serous / endometrioid adenocarcinoma presenting with a pathological right femoral fracture 4 weeks following diagnosis and optimal debulking of the ovarian tumour. Orthopaedic surgeons must be vigilant when planning treatment of fractures presenting in patients with a history of ovarian cancer. PMID:24960734
Fabrication of highly efficient ZnO nanoscintillators
NASA Astrophysics Data System (ADS)
Procházková, Lenka; Gbur, Tomáš; Čuba, Václav; Jarý, Vítězslav; Nikl, Martin
2015-09-01
Photo-induced synthesis of high-efficiency ultrafast nanoparticle scintillators of ZnO was demonstrated. Controlled doping with Ga(III) and La(III) ions together with the optimized method of ZnO synthesis and subsequent two-step annealing in air and under reducing atmosphere allow to achieve very high intensity of UV exciton luminescence, up to 750% of BGO intensity magnitude. Fabricated nanoparticles feature extremely short sub-nanosecond photoluminescence decay times. Temperature dependence of the photoluminescence spectrum within 8-340 K range was investigated and shows the absence of visible defect-related emission within all temperature intervals.
Juang, Chia-Feng; Hsu, Chia-Hung
2009-12-01
This paper proposes a new reinforcement-learning method using online rule generation and Q-value-aided ant colony optimization (ORGQACO) for fuzzy controller design. The fuzzy controller is based on an interval type-2 fuzzy system (IT2FS). The antecedent part in the designed IT2FS uses interval type-2 fuzzy sets to improve controller robustness to noise. There are initially no fuzzy rules in the IT2FS. The ORGQACO concurrently designs both the structure and parameters of an IT2FS. We propose an online interval type-2 rule generation method for the evolution of system structure and flexible partitioning of the input space. Consequent part parameters in an IT2FS are designed using Q -values and the reinforcement local-global ant colony optimization algorithm. This algorithm selects the consequent part from a set of candidate actions according to ant pheromone trails and Q-values, both of which are updated using reinforcement signals. The ORGQACO design method is applied to the following three control problems: 1) truck-backing control; 2) magnetic-levitation control; and 3) chaotic-system control. The ORGQACO is compared with other reinforcement-learning methods to verify its efficiency and effectiveness. Comparisons with type-1 fuzzy systems verify the noise robustness property of using an IT2FS.
Optimal control, investment and utilization schemes for energy storage under uncertainty
NASA Astrophysics Data System (ADS)
Mirhosseini, Niloufar Sadat
Energy storage has the potential to offer new means for added flexibility on the electricity systems. This flexibility can be used in a number of ways, including adding value towards asset management, power quality and reliability, integration of renewable resources and energy bill savings for the end users. However, uncertainty about system states and volatility in system dynamics can complicate the question of when to invest in energy storage and how best to manage and utilize it. This work proposes models to address different problems associated with energy storage within a microgrid, including optimal control, investment, and utilization. Electric load, renewable resources output, storage technology cost and electricity day-ahead and spot prices are the factors that bring uncertainty to the problem. A number of analytical methodologies have been adopted to develop the aforementioned models. Model Predictive Control and discretized dynamic programming, along with a new decomposition algorithm are used to develop optimal control schemes for energy storage for two different levels of renewable penetration. Real option theory and Monte Carlo simulation, coupled with an optimal control approach, are used to obtain optimal incremental investment decisions, considering multiple sources of uncertainty. Two stage stochastic programming is used to develop a novel and holistic methodology, including utilization of energy storage within a microgrid, in order to optimally interact with energy market. Energy storage can contribute in terms of value generation and risk reduction for the microgrid. The integration of the models developed here are the basis for a framework which extends from long term investments in storage capacity to short term operational control (charge/discharge) of storage within a microgrid. In particular, the following practical goals are achieved: (i) optimal investment on storage capacity over time to maximize savings during normal and emergency operations; (ii) optimal market strategy of buy and sell over 24-hour periods; (iii) optimal storage charge and discharge in much shorter time intervals.
Optimal maintenance of a multi-unit system under dependencies
NASA Astrophysics Data System (ADS)
Sung, Ho-Joon
The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life-cycle. Until recently, the reliance on past empirical data has been the industry-standard practice to develop maintenance policies that provide the minimum level of system reliability. Because such empirically-derived policies are vulnerable to unforeseen or fast-changing external factors, recent advancements in the study of topic on maintenance, which is known as optimal maintenance problem, has gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from those concerned with identifying maintenance policies aimed at providing required system availability at minimum possible cost, to topics on imperfect maintenance of multi-unit system under dependencies. Nonetheless, these existing mathematical approaches to solve for optimal maintenance policies must be treated with caution when considered for broader applications, as they are accompanied by specialized treatments to ease the mathematical derivation of unknown functions in both objective function and constraint for a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and theses measures (e.g., expected number of failures, system renewal cycle, expected system up time, etc.) do not often lend themselves to possess closed-form formulas. It is thus quite common to impose simplifying assumptions on input probability distributions of components' lifetime or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependencies is another commonly practiced technique intended to increase the mathematical tractability of a particular model. This dissertation presents a proposal for an alternative methodology to solve optimal maintenance problems by aiming to achieve the same end-goals as Reliability Centered Maintenance (RCM). RCM was first introduced to the aircraft industry in an attempt to bridge the gap between the empirically-driven and theory-driven approaches to establishing optimal maintenance policies. Under RCM, qualitative processes that enable the prioritizing of functions based on the criticality and influence would be combined with mathematical modeling to obtain the optimal maintenance policies. Where this thesis work deviates from RCM is its proposal to directly apply quantitative processes to model the reliability measures in optimal maintenance problem. First, Monte Carlo (MC) simulation, in conjunction with a pre-determined Design of Experiments (DOE) table, can be used as a numerical means of obtaining the corresponding discrete simulated outcomes of the reliability measures based on the combination of decision variables (e.g., periodic preventive maintenance interval, trigger age for opportunistic maintenance, etc.). These discrete simulation results can then be regressed as Response Surface Equations (RSEs) with respect to the decision variables. Such an approach to represent the reliability measures with continuous surrogate functions (i.e., the RSEs) not only enables the application of the numerical optimization technique to solve for optimal maintenance policies, but also obviates the need to make mathematical assumptions or impose over-simplifications on the structure of a multi-unit system for the sake of mathematical tractability. The applicability of the proposed methodology to a real-world optimal maintenance problem is showcased through its application to a Time Limited Dispatch (TLD) of Full Authority Digital Engine Control (FADEC) system. In broader terms, this proof-of-concept exercise can be described as a constrained optimization problem, whose objective is to identify the optimal system inspection interval that guarantees a certain level of availability for a multi-unit system. A variety of reputable numerical techniques were used to model the problem as accurately as possible, including algorithms for the MC simulation, imperfect maintenance model from quasi renewal processes, repair time simulation, and state transition rules. Variance Reduction Techniques (VRTs) were also used in an effort to enhance MC simulation efficiency. After accurate MC simulation results are obtained, the RSEs are generated based on the goodness-of-fit measure to yield as parsimonious model as possible to construct the optimization problem. Under the assumption of constant failure rate for lifetime distributions, the inspection interval from the proposed methodology was found to be consistent with the one from the common approach used in industry that leverages Continuous Time Markov Chain (CTMC). While the latter does not consider maintenance cost settings, the proposed methodology enables an operator to consider different types of maintenance cost settings, e.g., inspection cost, system corrective maintenance cost, etc., to result in more flexible maintenance policies. When the proposed methodology was applied to the same TLD of FADEC example, but under the more generalized assumption of strictly Increasing Failure Rate (IFR) for lifetime distribution, it was shown to successfully capture component wear-out, as well as the economic dependencies among the system components.
Enhancing the Selection of Backoff Interval Using Fuzzy Logic over Wireless Ad Hoc Networks
Ranganathan, Radha; Kannan, Kathiravan
2015-01-01
IEEE 802.11 is the de facto standard for medium access over wireless ad hoc network. The collision avoidance mechanism (i.e., random binary exponential backoff—BEB) of IEEE 802.11 DCF (distributed coordination function) is inefficient and unfair especially under heavy load. In the literature, many algorithms have been proposed to tune the contention window (CW) size. However, these algorithms make every node select its backoff interval between [0, CW] in a random and uniform manner. This randomness is incorporated to avoid collisions among the nodes. But this random backoff interval can change the optimal order and frequency of channel access among competing nodes which results in unfairness and increased delay. In this paper, we propose an algorithm that schedules the medium access in a fair and effective manner. This algorithm enhances IEEE 802.11 DCF with additional level of contention resolution that prioritizes the contending nodes according to its queue length and waiting time. Each node computes its unique backoff interval using fuzzy logic based on the input parameters collected from contending nodes through overhearing. We evaluate our algorithm against IEEE 802.11, GDCF (gentle distributed coordination function) protocols using ns-2.35 simulator and show that our algorithm achieves good performance. PMID:25879066
Sled Towing Acutely Decreases Acceleration Sprint Time.
Wong, Megan A; Dobbs, Ian J; Watkins, Casey M; Barillas, Saldiam R; Lin, Anne; Archer, David C; Lockie, Robert G; Coburn, Jared W; Brown, Lee E
2017-11-01
Wong, MA, Dobbs, IJ, Watkins, C, Barillas, SR, Lin, A, Archer, DC, Lockie, RG, Coburn, JW, and Brown, LE. Sled towing acutely decreases acceleration sprint time. J Strength Cond Res 31(11): 3046-3051, 2017-Sled towing is a common form of overload training in sports to develop muscular strength for sprinting. This type of training leads to acute and chronic outcomes. Acute training potentially leads to postactivation potentiation (PAP), which is when subsequent muscle performance is enhanced after a preload stimulus. The purpose of this study was to determine differences between rest intervals after sled towing on acute sprint speed. Twenty healthy recreationally trained men (age = 22.3 ± 2.4 years, height = 176.95 ± 5.46 cm, mass = 83.19 ± 11.31 kg) who were currently active in a field sport twice a week for the last 6 months volunteered to participate. A maximal 30-meter (m) baseline (BL) body mass (BM) sprint was performed (with splits at 5, 10, 20, and 30 m) followed by 5 visits where participants sprinted 30 m towing a sled at 30% BM then rested for 2, 4, 6, 8, or 12 minutes. They were instructed to stand still during rest times. After the rest interval, they performed a maximal 30-m post-test BM sprint. Analysis of variance (ANOVA) revealed that post sled tow BM sprint times (4.47 ± 0.21 seconds) were less than BL times (4.55 ± 0.18 seconds) on an individualized rest interval basis. A follow-up 2 × 4 ANOVA showed that this decrease occurred only in the acceleration phase over the first 5 m (BL = 1.13 ± 0.08 seconds vs. Best = 1.08 ± 0.08 seconds), which may be the result of PAP and the complex relationship between fatigue and potentiation relative to the intensity of the sled tow and the rest interval. Therefore, coaches should test their athletes on an individual basis to determine optimal rest time after a 30-m 30% BM sled tow to enhance acute sprint speed.
Balancing out dwelling and moving: optimal sensorimotor synchronization
Girard, Benoît; Guigon, Emmanuel
2015-01-01
Sensorimotor synchronization is a fundamental skill involved in the performance of many artistic activities (e.g., music, dance). After a century of research, the manner in which the nervous system produces synchronized movements remains poorly understood. Typical rhythmic movements involve a motion and a motionless phase (dwell). The dwell phase represents a sizable fraction of the rhythm period, and scales with it. The rationale for this organization remains unexplained and is the object of this study. Twelve participants, four drummers (D) and eight nondrummers (ND), performed tapping movements paced at 0.5–2.5 Hz by a metronome. The participants organized their tapping behavior into dwell and movement phases according to two strategies: 1) Eight participants (1 D, 7 ND) maintained an almost constant ratio of movement time (MT) and dwell time (DT) irrespective of the metronome period. 2) Four participants increased the proportion of DT as the period increased. The temporal variabilities of both the dwell and movement phases were consistent with Weber's law, i.e., their variability increased with their durations, and the longest phase always exhibited the smallest variability. We developed an optimal statistical model that formalized the distribution of time into dwell and movement intervals as a function of their temporal variability. The model accurately predicted the participants' dwell and movement durations irrespective of their strategy and musical skill, strongly suggesting that the distribution of DT and MT results from an optimization process, dependent on each participant's skill to predict time during rest and movement. PMID:25878154
Optimization Based Data Mining Approah for Forecasting Real-Time Energy Demand
DOE Office of Scientific and Technical Information (OSTI.GOV)
Omitaomu, Olufemi A; Li, Xueping; Zhou, Shengchao
The worldwide concern over environmental degradation, increasing pressure on electric utility companies to meet peak energy demand, and the requirement to avoid purchasing power from the real-time energy market are motivating the utility companies to explore new approaches for forecasting energy demand. Until now, most approaches for forecasting energy demand rely on monthly electrical consumption data. The emergence of smart meters data is changing the data space for electric utility companies, and creating opportunities for utility companies to collect and analyze energy consumption data at a much finer temporal resolution of at least 15-minutes interval. While the data granularity providedmore » by smart meters is important, there are still other challenges in forecasting energy demand; these challenges include lack of information about appliances usage and occupants behavior. Consequently, in this paper, we develop an optimization based data mining approach for forecasting real-time energy demand using smart meters data. The objective of our approach is to develop a robust estimation of energy demand without access to these other building and behavior data. Specifically, the forecasting problem is formulated as a quadratic programming problem and solved using the so-called support vector machine (SVM) technique in an online setting. The parameters of the SVM technique are optimized using simulated annealing approach. The proposed approach is applied to hourly smart meters data for several residential customers over several days.« less
Viability of dental implants in head and neck irradiated patients: A systematic review.
Zen Filho, Edson Virgílio; Tolentino, Elen de Souza; Santos, Paulo Sérgio Silva
2016-04-01
The purpose of this systematic review was to evaluate the safety of dental implants placed in irradiated bone and to discuss their viability when placed post-radiotherapy (RT). A systematic review was performed to answer the questions: "Are dental implants in irradiated bone viable?" and "What are the main factors that influence the loss of implants in irradiated patients?" The search strategy resulted in 8 publications. A total of 331 patients received 1237 implants, with an overall failure rate of 9.53%. The osseointegration success rates ranged between 62.5% and 100%. The optimal time interval between irradiation and dental implantation varied from 6 to 15 months. The interval time between RT and implant placement and the radiation doses are not associated with significant implant failure rates. The placement of implants in irradiated bone is viable, and head and neck RT should not be considered as a contraindication for dental rehabilitation with implants. © 2015 Wiley Periodicals, Inc. Head Neck 38: E2229-E2240, 2016. © 2015 Wiley Periodicals, Inc.
Diffusion with stochastic resetting at power-law times.
Nagar, Apoorva; Gupta, Shamik
2016-06-01
What happens when a continuously evolving stochastic process is interrupted with large changes at random intervals τ distributed as a power law ∼τ^{-(1+α)};α>0? Modeling the stochastic process by diffusion and the large changes as abrupt resets to the initial condition, we obtain exact closed-form expressions for both static and dynamic quantities, while accounting for strong correlations implied by a power law. Our results show that the resulting dynamics exhibits a spectrum of rich long-time behavior, from an ever-spreading spatial distribution for α<1, to one that is time independent for α>1. The dynamics has strong consequences on the time to reach a distant target for the first time; we specifically show that there exists an optimal α that minimizes the mean time to reach the target, thereby offering a step towards a viable strategy to locate targets in a crowded environment.
Lower Limb Function in Elderly Korean Adults Is Related to Cognitive Function.
Kim, A-Sol; Ko, Hae-Jin
2018-05-01
Patients with cognitive impairment have decreased lower limb function. Therefore, we aimed to investigate the relationship between lower limb function and cognitive disorders to determine whether lower limb function can be screened to identify cognitive decline. Using Korean National Health Insurance Service-National Sample Cohort database data, we assessed the cognitive and lower limb functioning of 66-year-olds who underwent national health screening between 2010 and 2014. Cognitive function was assessed via a questionnaire. Timed Up-and-Go (TUG) and one-leg-standing (OLS) tests were performed to evaluate lower limb function. Associations between cognitive and lower limb functions were analyzed, and optimal cut-off points for these tests to screen for cognitive decline, were determined. Cognitive function was significantly correlated with TUG interval ( r = 0.414, p < 0.001) and OLS duration ( r = −0.237, p < 0.001). Optimal cut-off points for screening cognitive disorders were >11 s and ≤12 s for TUG interval and OLS duration, respectively. Among 66-year-olds who underwent national health screening, a significant correlation between lower limb and cognitive function was demonstrated. The TUG and OLS tests are useful screening tools for cognitive disorders in elderly patients. A large-scale prospective cohort study should be conducted to investigate the causal relationship between cognitive and lower limb function.
Stock optimizing in choice when a token deposit is the operant.
Widholm, J J; Silberberg, A; Hursh, S R; Imam, A A; Warren-Boulton, F R
2001-11-01
Each of 2 monkeys typically earned their daily food ration by depositing tokens in one of two slots. Tokens deposited in one slot dropped into a bin where they were kept (token kept). Deposits to a second slot dropped into a bin where they could be obtained again (token returned). In Experiment 1, a fixed-ratio (FR) 5 schedule that provided two food pellets was associated with each slot. Both monkeys preferred the token-returned slot. In Experiment 2, both subjects chose between unequal FR schedules with the token-returned slot always associated with the leaner schedule. When the FRs were 2 versus 3 and 2 versus 6, preferences were maintained for the token-returned slot; however, when the ratios were 2 versus 12, preference shifted to the token-kept slot. In Experiment 3, both monkeys chose between equal-valued concurrent variable-interval variable-interval schedules. Both monkeys preferred the slot that returned tokens. In Experiment 4, both monkeys chose between FRs that typically differed in size by a factor of 10. Both monkeys preferred the FR schedule that provided more food per trial. These data show that monkeys will choose so as to increase the number of reinforcers earned (stock optimizing) even when this preference reduces the rate of reinforcement (all reinforcers divided by session time).
Paula, T O M; Marinho, C D; Amaral Júnior, A T; Peternelli, L A; Gonçalves, L S A
2013-06-27
The objective of this study was to determine the optimal number of repetitions to be used in competition trials of popcorn traits related to production and quality, including grain yield and expansion capacity. The experiments were conducted in 3 environments representative of the north and northwest regions of the State of Rio de Janeiro with 10 Brazilian genotypes of popcorn, consisting by 4 commercial hybrids (IAC 112, IAC 125, Zélia, and Jade), 4 improved varieties (BRS Ângela, UFVM-2 Barão de Viçosa, Beija-flor, and Viçosa) and 2 experimental populations (UNB2U-C3 and UNB2U-C4). The experimental design utilized was a randomized complete block design with 7 repetitions. The Bootstrap method was employed to obtain samples of all of the possible combinations within the 7 blocks. Subsequently, the confidence intervals of the parameters of interest were calculated for all simulated data sets. The optimal number of repetition for all of the traits was considered when all of the estimates of the parameters in question were encountered within the confidence interval. The estimates of the number of repetitions varied according to the parameter estimated, variable evaluated, and environment cultivated, ranging from 2 to 7. It is believed that only the expansion capacity traits in the Colégio Agrícola environment (for residual variance and coefficient of variation), and number of ears per plot, in the Itaocara environment (for coefficient of variation) needed 7 repetitions to fall within the confidence interval. Thus, for the 3 studies conducted, we can conclude that 6 repetitions are optimal for obtaining high experimental precision.
NASA Astrophysics Data System (ADS)
Mudelsee, Manfred
2015-04-01
The Big Data era has begun also in the climate sciences, not only in economics or molecular biology. We measure climate at increasing spatial resolution by means of satellites and look farther back in time at increasing temporal resolution by means of natural archives and proxy data. We use powerful supercomputers to run climate models. The model output of the calculations made for the IPCC's Fifth Assessment Report amounts to ~650 TB. The 'scientific evolution' of grid computing has started, and the 'scientific revolution' of quantum computing is being prepared. This will increase computing power, and data amount, by several orders of magnitude in the future. However, more data does not automatically mean more knowledge. We need statisticians, who are at the core of transforming data into knowledge. Statisticians notably also explore the limits of our knowledge (uncertainties, that is, confidence intervals and P-values). Mudelsee (2014 Climate Time Series Analysis: Classical Statistical and Bootstrap Methods. Second edition. Springer, Cham, xxxii + 454 pp.) coined the term 'optimal estimation'. Consider the hyperspace of climate estimation. It has many, but not infinite, dimensions. It consists of the three subspaces Monte Carlo design, method and measure. The Monte Carlo design describes the data generating process. The method subspace describes the estimation and confidence interval construction. The measure subspace describes how to detect the optimal estimation method for the Monte Carlo experiment. The envisaged large increase in computing power may bring the following idea of optimal climate estimation into existence. Given a data sample, some prior information (e.g. measurement standard errors) and a set of questions (parameters to be estimated), the first task is simple: perform an initial estimation on basis of existing knowledge and experience with such types of estimation problems. The second task requires the computing power: explore the hyperspace to find the suitable method, that is, the mode of estimation and uncertainty-measure determination that optimizes a selected measure for prescribed values close to the initial estimates. Also here, intelligent exploration methods (gradient, Brent, etc.) are useful. The third task is to apply the optimal estimation method to the climate dataset. This conference paper illustrates by means of three examples that optimal estimation has the potential to shape future big climate data analysis. First, we consider various hypothesis tests to study whether climate extremes are increasing in their occurrence. Second, we compare Pearson's and Spearman's correlation measures. Third, we introduce a novel estimator of the tail index, which helps to better quantify climate-change related risks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, J; Pompos, A; Jiang, S
Purpose: To put forth an innovative clinical paradigm for weekly chart checking so that treatment status is periodically checked accurately and efficiently. This study also aims to help optimize the chart checking clinical workflow in a busy radiation therapy clinic. Methods: It is mandated by the Texas Administrative code to check patient charts of radiation therapy once a week or every five fractions, however it varies drastically among institutions in terms of when and how it is done. Some do it every day, but a lot of efforts are wasted on opening ineligible charts; some do it on a fixedmore » day but the distribution of intervals between subsequent checks is not optimal. To establish an optimal chart checking procedure, a new paradigm was developed to achieve 1) charts are checked more accurately and more efficiently; 2) charts are checked on optimal days without any miss; 3) workload is evened out throughout a week when multiple physicists are involved. All active charts will be accessed by querying the R&V system. Priority is assigned to each chart based on the number of days before the next due date followed by sorting and workload distribution steps. New charts are also taken into account when distributing the workload so it is reasonably even throughout the week. Results: Our clinical workflow became more streamlined and smooth. In addition, charts get checked in a more timely fashion so that errors would get caught earlier should they occur. Conclusion: We developed a new weekly chart checking diagram. It helps physicists check charts in a timely manner, saves their time in busy clinics, and consequently reduces possible errors.« less
The Whole Heliosphere Interval: Campaign Summaries and Early Results
NASA Technical Reports Server (NTRS)
Thompson, Barbara J.; Gibson, Sarah E.; Kozyra, Janet U.
2008-01-01
The Whole Heliosphere Interval (WHI) is an internationally coordinated observing and modeling effort to characterize the 3-dimensional interconnected solar-heliospheric-planetary system - a.k.a. the "heliophysical" system. The heart of the WHI campaign is the study of the interconnected 3-D heliophysical domain, from the interior of the Sun, to the Earth, outer planets, and into interstellar space. WHI observing campaigns began with the 3-0 solar structure from solar Carrington Rotation 2068, which ran from March 20 - April 16, 2008. Observations and models of the outer heliosphere and planetary impacts extended beyond those dates as necessary; for example, the solar wind transit time to outer planets can take months. WHI occurs during solar minimum, which optimizes our ability to characterize the 3-D heliosphere and trace the structure to the outer limits of the heliosphere. A summary of some of the key results from the WHI first workshop in August 2008 will be given.
The influence of interpregnancy interval on infant mortality.
McKinney, David; House, Melissa; Chen, Aimin; Muglia, Louis; DeFranco, Emily
2017-03-01
In Ohio, the infant mortality rate is above the national average and the black infant mortality rate is more than twice the white infant mortality rate. Having a short interpregnancy interval has been shown to correlate with preterm birth and low birthweight, but the effect of short interpregnancy interval on infant mortality is less well established. We sought to quantify the population impact of interpregnancy interval on the risk of infant mortality. This was a statewide population-based retrospective cohort study of all births (n = 1,131,070) and infant mortalities (n = 8152) using linked Ohio birth and infant death records from January 2007 through September 2014. For this study we analyzed 5 interpregnancy interval categories: 0-<6, 6-<12, 12-<24, 24-<60, and ≥60 months. The primary outcome for this study was infant mortality. During the study period, 3701 infant mortalities were linked to a live birth certificate with an interpregnancy interval available. We calculated the frequency and relative risk of infant mortality for each interval compared to a referent interval of 12-<24 months. Stratified analyses by maternal race were also performed. Adjusted risks were estimated after accounting for statistically significant and biologically plausible confounding variables. Adjusted relative risk was utilized to calculate the attributable risk percent of short interpregnancy intervals on infant mortality. Short interpregnancy intervals were common in Ohio during the study period. Of all multiparous births, 20.5% followed an interval of <12 months. The overall infant mortality rate during this time was 7.2 per 1000 live births (6.0 for white mothers and 13.1 for black mothers). Infant mortalities occurred more frequently for births following short intervals of 0-<6 months (9.2 per 1000) and 6-<12 months (7.1 per 1000) compared to 12-<24 months (5.6 per 1000) (P < .001 and <.001). The highest risk for infant mortality followed interpregnancy intervals of 0-<6 months (adjusted relative risk, 1.32; 95% confidence interval, 1.17-1.49) followed by interpregnancy intervals of 6-<12 months (adjusted relative risk, 1.16; 95% confidence interval, 1.04-1.30). Analysis stratified by maternal race revealed similar findings. Attributable risk calculation showed that 24.2% of infant mortalities following intervals of 0-<6 months and 14.1% with intervals of 6-<12 months are attributable to the short interpregnancy interval. By avoiding short interpregnancy intervals of ≤12 months we estimate that in the state of Ohio 31 infant mortalities (20 white and 8 black) per year could have been prevented and the infant mortality rate could have been reduced from 7.2-7.0 during this time frame. An interpregnancy interval of 12-60 months (1-5 years) between birth and conception of next pregnancy is associated with lowest risk of infant mortality. Public health initiatives and provider counseling to optimize birth spacing has the potential to significantly reduce infant mortality for both white and black mothers. Copyright © 2017 Elsevier Inc. All rights reserved.
An hp symplectic pseudospectral method for nonlinear optimal control
NASA Astrophysics Data System (ADS)
Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong
2017-01-01
An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and is successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first order necessary conditions of continuous optimal control problems, also the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transferred into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. The proposed method, on one hand, exhibits exponent convergence rates when the number of collocation points are increasing with the fixed number of sub-intervals; on the other hand, exhibits linear convergence rates when the number of sub-intervals is increasing with the fixed number of collocation points. Furthermore, combining with the hp method based on the residual error of dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.
Habib, Basant A; AbouGhaly, Mohamed H H
2016-06-01
This study aims to illustrate the applicability of combined mixture-process variable (MPV) design and modeling for optimization of nanovesicular systems. The D-optimal experimental plan studied the influence of three mixture components (MCs) and two process variables (PVs) on lercanidipine transfersomes. The MCs were phosphatidylcholine (A), sodium glycocholate (B) and lercanidipine hydrochloride (C), while the PVs were glycerol amount in the hydration mixture (D) and sonication time (E). The studied responses were Y1: particle size, Y2: zeta potential and Y3: entrapment efficiency percent (EE%). Polynomial equations were used to study the influence of MCs and PVs on each response. Response surface methodology and multiple response optimization were applied to optimize the formulation with the goals of minimizing Y1 and maximizing Y2 and Y3. The obtained polynomial models had prediction R(2) values of 0.645, 0.947 and 0.795 for Y1, Y2 and Y3, respectively. Contour, Piepel's response trace, perturbation, and interaction plots were drawn for responses representation. The optimized formulation, A: 265 mg, B: 10 mg, C: 40 mg, D: zero g and E: 120 s, had desirability of 0.9526. The actual response values for the optimized formulation were within the two-sided 95% prediction intervals and were close to the predicted values with maximum percent deviation of 6.2%. This indicates the validity of combined MPV design and modeling for optimization of transfersomal formulations as an example of nanovesicular systems.
Mirzai, S; Safi, S; Mossavari, N; Afshar, D; Bolourchian, M
2016-08-31
The present study was conducted to establish a Loop-mediated isothermal amplification (LAMP) technique for the rapid detection of B. mallei the etiologic agent of glanders, a highly contagious disease of equines. A set of six specific primers targeting integrase gene cluster were designed for the LAMP test. The reaction was optimized using different temperatures and time intervals. The specificity of the assay was evaluated using DNA from B.pseudomallei and Pseudomonas aeruginosa. The LAMP products were analyzed both visually and under UV light after electrophoresis. The optimized conditions were found to be at 63ºC for 60 min. The assay showed high specificity and sensitivity. It was concluded that the established LAMP assay is a rapid, sensitive and practical tool for detection of B. mallei and early diagnosis of glanders.
Knox, R V; Shen, J; Greiner, L L; Connor, J F
2016-12-01
Variation in gilt fertility is associated with increased replacement and reduced longevity. Stress before breeding is hypothesized to be involved in reduced fertility. This study tested the timing of gilt relocation from pens to individual stalls before breeding on fertility and well-being. The experiment was performed in replicates on a commercial research farm. After detection of first estrus, gilts ( = 563) were assigned to treatment for relocation into stalls 3 wk (REL3wk), 2 wk (REL2wk), or 1 wk (REL1wk) before breeding at second estrus. Subsets of gilts from each treatment ( = 60) were selected for assessment of follicles at second estrus. Data included interestrus interval, number of services, conception, farrowing, total born, and wean to service interval. Piglet birth weight was obtained on subsets of litters ( = 42/treatment). Measures of well-being included BW, backfat, BCS, lesions, and lameness from wk 1 after first estrus until wk 16. Gilt BW at wk 5 (158.4 kg) was not affected ( > 0.10) by treatment. Measures of BCS, lameness, and lesions at breeding and throughout gestation did not differ ( > 0.10). Treatment did not affect ( > 0.10) gilts expressing a normal interestrus interval of 18 to 24 d (83.4%) but did influence ( < 0.05) the proportion expressing shorter ( < 0.001) and longer ( < 0.001) intervals. Gilts in REL3wk had a shorter ( < 0.001) interestrus interval (20.7 d) than those in REL2wk and REL1wk (22.6 d). Gilts with shorter intervals ( = 24) had fewer total born while gilts expressing longer cycles ( = 65) had reduced farrowing rates. The number of services (1.9) and number of follicles (19.7) at breeding were not affected ( > 0.10) by relocation. There was no effect of treatment on farrowing rate (85.2%), born alive (12.6), or any litter birth weight measures ( > 0.10). The percentage of sows bred within 7 d after weaning (94.4%) was also not affected by treatment ( > 0.10). These results suggest that the timing of relocation before breeding had no effect on well-being or on the majority of gilts with normal estrous cycles and their subsequent fertility. However, a smaller proportion of the gilts exhibited shorter and longer interestrus intervals in response to relocation 1 or 3 wk before breeding. In cases where gilt fertility may be less than optimal, producers that relocate gilts from pens to stalls before breeding should evaluate interestrus interval as a response criterion.
Energy optimization for upstream data transfer in 802.15.4 beacon-enabled star formulation
NASA Astrophysics Data System (ADS)
Liu, Hua; Krishnamachari, Bhaskar
2008-08-01
Energy saving is one of the major concerns for low rate personal area networks. This paper models energy consumption for beacon-enabled time-slotted media accessing control cooperated with sleeping scheduling in a star network formulation for IEEE 802.15.4 standard. We investigate two different upstream (data transfer from devices to a network coordinator) strategies: a) tracking strategy: the devices wake up and check status (track the beacon) in each time slot; b) non-tracking strategy: nodes only wake-up upon data arriving and stay awake till data transmitted to the coordinator. We consider the tradeoff between energy cost and average data transmission delay for both strategies. Both scenarios are formulated as optimization problems and the optimal solutions are discussed. Our results show that different data arrival rate and system parameters (such as contention access period interval, upstream speed etc.) result in different strategies in terms of energy optimization with maximum delay constraints. Hence, according to different applications and system settings, different strategies might be chosen by each node to achieve energy optimization for both self-interested view and system view. We give the relation among the tunable parameters by formulas and plots to illustrate which strategy is better under corresponding parameters. There are two main points emphasized in our results with delay constraints: on one hand, when the system setting is fixed by coordinator, nodes in the network can intelligently change their strategies according to corresponding application data arrival rate; on the other hand, when the nodes' applications are known by the coordinator, the coordinator can tune the system parameters to achieve optimal system energy consumption.
On reliable control system designs. Ph.D. Thesis; [actuators
NASA Technical Reports Server (NTRS)
Birdwell, J. D.
1978-01-01
A mathematical model for use in the design of reliable multivariable control systems is discussed with special emphasis on actuator failures and necessary actuator redundancy levels. The model consists of a linear time invariant discrete time dynamical system. Configuration changes in the system dynamics are governed by a Markov chain that includes transition probabilities from one configuration state to another. The performance index is a standard quadratic cost functional, over an infinite time interval. The actual system configuration can be deduced with a one step delay. The calculation of the optimal control law requires the solution of a set of highly coupled Riccati-like matrix difference equations. Results can be used for off-line studies relating the open loop dynamics, required performance, actuator mean time to failure, and functional or identical actuator redundancy, with and without feedback gain reconfiguration strategies.
Lei, Xiaohui; Wang, Chao; Yue, Dong; Xie, Xiangpeng
2017-01-01
Since wind power is integrated into the thermal power operation system, dynamic economic emission dispatch (DEED) has become a new challenge due to its uncertain characteristics. This paper proposes an adaptive grid based multi-objective Cauchy differential evolution (AGB-MOCDE) for solving stochastic DEED with wind power uncertainty. To properly deal with wind power uncertainty, some scenarios are generated to simulate those possible situations by dividing the uncertainty domain into different intervals, the probability of each interval can be calculated using the cumulative distribution function, and a stochastic DEED model can be formulated under different scenarios. For enhancing the optimization efficiency, Cauchy mutation operation is utilized to improve differential evolution by adjusting the population diversity during the population evolution process, and an adaptive grid is constructed for retaining diversity distribution of Pareto front. With consideration of large number of generated scenarios, the reduction mechanism is carried out to decrease the scenarios number with covariance relationships, which can greatly decrease the computational complexity. Moreover, the constraint-handling technique is also utilized to deal with the system load balance while considering transmission loss among thermal units and wind farms, all the constraint limits can be satisfied under the permitted accuracy. After the proposed method is simulated on three test systems, the obtained results reveal that in comparison with other alternatives, the proposed AGB-MOCDE can optimize the DEED problem while handling all constraint limits, and the optimal scheme of stochastic DEED can decrease the conservation of interval optimization, which can provide a more valuable optimal scheme for real-world applications. PMID:28961262
An interval programming model for continuous improvement in micro-manufacturing
NASA Astrophysics Data System (ADS)
Ouyang, Linhan; Ma, Yizhong; Wang, Jianjun; Tu, Yiliu; Byun, Jai-Hyun
2018-03-01
Continuous quality improvement in micro-manufacturing processes relies on optimization strategies that relate an output performance to a set of machining parameters. However, when determining the optimal machining parameters in a micro-manufacturing process, the economics of continuous quality improvement and decision makers' preference information are typically neglected. This article proposes an economic continuous improvement strategy based on an interval programming model. The proposed strategy differs from previous studies in two ways. First, an interval programming model is proposed to measure the quality level, where decision makers' preference information is considered in order to determine the weight of location and dispersion effects. Second, the proposed strategy is a more flexible approach since it considers the trade-off between the quality level and the associated costs, and leaves engineers a larger decision space through adjusting the quality level. The proposed strategy is compared with its conventional counterparts using an Nd:YLF laser beam micro-drilling process.
Optimism and Cause-Specific Mortality: A Prospective Cohort Study
Kim, Eric S.; Hagan, Kaitlin A.; Grodstein, Francine; DeMeo, Dawn L.; De Vivo, Immaculata; Kubzansky, Laura D.
2017-01-01
Growing evidence has linked positive psychological attributes like optimism to a lower risk of poor health outcomes, especially cardiovascular disease. It has been demonstrated in randomized trials that optimism can be learned. If associations between optimism and broader health outcomes are established, it may lead to novel interventions that improve public health and longevity. In the present study, we evaluated the association between optimism and cause-specific mortality in women after considering the role of potential confounding (sociodemographic characteristics, depression) and intermediary (health behaviors, health conditions) variables. We used prospective data from the Nurses’ Health Study (n = 70,021). Dispositional optimism was measured in 2004; all-cause and cause-specific mortality rates were assessed from 2006 to 2012. Using Cox proportional hazard models, we found that a higher degree of optimism was associated with a lower mortality risk. After adjustment for sociodemographic confounders, compared with women in the lowest quartile of optimism, women in the highest quartile had a hazard ratio of 0.71 (95% confidence interval: 0.66, 0.76) for all-cause mortality. Adding health behaviors, health conditions, and depression attenuated but did not eliminate the associations (hazard ratio = 0.91, 95% confidence interval: 0.85, 0.97). Associations were maintained for various causes of death, including cancer, heart disease, stroke, respiratory disease, and infection. Given that optimism was associated with numerous causes of mortality, it may provide a valuable target for new research on strategies to improve health. PMID:27927621
Optimal sixteenth order convergent method based on quasi-Hermite interpolation for computing roots.
Zafar, Fiza; Hussain, Nawab; Fatimah, Zirwah; Kharal, Athar
2014-01-01
We have given a four-step, multipoint iterative method without memory for solving nonlinear equations. The method is constructed by using quasi-Hermite interpolation and has order of convergence sixteen. As this method requires four function evaluations and one derivative evaluation at each step, it is optimal in the sense of the Kung and Traub conjecture. The comparisons are given with some other newly developed sixteenth-order methods. Interval Newton's method is also used for finding the enough accurate initial approximations. Some figures show the enclosure of finitely many zeroes of nonlinear equations in an interval. Basins of attractions show the effectiveness of the method.
NASA Astrophysics Data System (ADS)
Ouyang, Qin; Chen, Quansheng; Zhao, Jiewen
2016-02-01
The approach presented herein reports the application of near infrared (NIR) spectroscopy, in contrast with human sensory panel, as a tool for estimating Chinese rice wine quality; concretely, to achieve the prediction of the overall sensory scores assigned by the trained sensory panel. Back propagation artificial neural network (BPANN) combined with adaptive boosting (AdaBoost) algorithm, namely BP-AdaBoost, as a novel nonlinear algorithm, was proposed in modeling. First, the optimal spectra intervals were selected by synergy interval partial least square (Si-PLS). Then, BP-AdaBoost model based on the optimal spectra intervals was established, called Si-BP-AdaBoost model. These models were optimized by cross validation, and the performance of each final model was evaluated according to correlation coefficient (Rp) and root mean square error of prediction (RMSEP) in prediction set. Si-BP-AdaBoost showed excellent performance in comparison with other models. The best Si-BP-AdaBoost model was achieved with Rp = 0.9180 and RMSEP = 2.23 in the prediction set. It was concluded that NIR spectroscopy combined with Si-BP-AdaBoost was an appropriate method for the prediction of the sensory quality in Chinese rice wine.
Mena-Vázquez, Natalia; Manrique-Arija, Sara; Rojas-Giménez, Marta; Ureña-Garnica, Inmaculada; Jiménez-Núñez, Francisco G; Fernández-Nebro, Antonio
2017-07-01
To evaluate the effectiveness and safety of tocilizumab (TCZ) in patients with rheumatoid arthritis (RA) in clinical practice, establishing the optimized regimen and switching from intravenous (IV) to subcutaneous (SC) therapy. Retrospective observational study. We included 53 RA patients treated with TCZ. The main outcome was TCZ effectiveness at week 24. Secondary outcome variables included effectiveness at week 52, therapeutic maintenance, physical function and safety. The effectiveness of optimization and the switch from IV to SC was evaluated at 3 and 6 months. The efficacy was measured with the Disease Activity Score. Paired t-tests or Wilcoxon were used to evaluate effectiveness and survival time using Kaplan-Meier. The proportion of patients who achieved remission or low disease activity at weeks 24 and 52 was 75.5% and 87.3%, respectively. The mean retention time (95% confidence interval [95% CI] was 81.7 months [76.6-86.7]). Twenty-one of 53 patients (39.6%) optimized the TCZ dose and 35 patients switched from IV TCZ to SC, with no changes in effectiveness. The adverse event rate was 13.6 events/100 patient-years. Tocilizumab appears to be effective and safe in RA in clinical practice. The optimized regimen appears to be effective in most patients in remission, even when they change from IV to SC. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.
Enhanced Fuel-Optimal Trajectory-Generation Algorithm for Planetary Pinpoint Landing
NASA Technical Reports Server (NTRS)
Acikmese, Behcet; Blackmore, James C.; Scharf, Daniel P.
2011-01-01
An enhanced algorithm is developed that builds on a previous innovation of fuel-optimal powered-descent guidance (PDG) for planetary pinpoint landing. The PDG problem is to compute constrained, fuel-optimal trajectories to land a craft at a prescribed target on a planetary surface, starting from a parachute cut-off point and using a throttleable descent engine. The previous innovation showed the minimal-fuel PDG problem can be posed as a convex optimization problem, in particular, as a Second-Order Cone Program, which can be solved to global optimality with deterministic convergence properties, and hence is a candidate for onboard implementation. To increase the speed and robustness of this convex PDG algorithm for possible onboard implementation, the following enhancements are incorporated: 1) Fast detection of infeasibility (i.e., control authority is not sufficient for soft-landing) for subsequent fault response. 2) The use of a piecewise-linear control parameterization, providing smooth solution trajectories and increasing computational efficiency. 3) An enhanced line-search algorithm for optimal time-of-flight, providing quicker convergence and bounding the number of path-planning iterations needed. 4) An additional constraint that analytically guarantees inter-sample satisfaction of glide-slope and non-sub-surface flight constraints, allowing larger discretizations and, hence, faster optimization. 5) Explicit incorporation of Mars rotation rate into the trajectory computation for improved targeting accuracy. These enhancements allow faster convergence to the fuel-optimal solution and, more importantly, remove the need for a "human-in-the-loop," as constraints will be satisfied over the entire path-planning interval independent of step-size (as opposed to just at the discrete time points) and infeasible initial conditions are immediately detected. Finally, while the PDG stage is typically only a few minutes, ignoring the rotation rate of Mars can introduce 10s of meters of error. By incorporating it, the enhanced PDG algorithm becomes capable of pinpoint targeting.
[Triage duration times: a prospective descriptive study in a level 1° emergency department].
Bambi, Stefano; Ruggeri, Marco
2017-01-01
Triage is the most important tool for clinical risk management in emergency departments (ED). The timing measurement of its phases is fundamental to establish indicators and standards for the optimization of the system. To evaluate the duration time of the phases of triage; to evaluate some variables exerting influence on nurses' performance. prospective descriptive study performed in the ED of Careggi Teaching Hospital in Florence. 14 nurses enrolled by stratified randomization proportion (1/3 of the whole staff ), according to classes of length of service. Triage processes on 150 adult patients were recorded. The mean age of nurses was 39.7 years (SD ± 5.2, range 29-50); the average length of service was 10.3 years (SD ± 4.4, range 3-18); average of triage experience was 8.6 years (SD ± 4.3, range 2-13). The median time from patient's arrival to the end of the triage process was 04': 04" (range 00':47"- 18':08"); the median duration of triage was 01':11" (range 00':07" -11':27"). The length of service and triage experience did not influence the medians of recorded intervals of time, but there were some limitations due to the low sample size. Interruptions were observed 111 (74%) of triage cases. the recorded triage time intervals were similar to those reported in international literature. Actions are needed to reduce the impact of interruptions on triage process' times.
Predicting Culex pipiens/restuans population dynamics by interval lagged weather data
2013-01-01
Background Culex pipiens/restuans mosquitoes are important vectors for a variety of arthropod borne viral infections. In this study, the associations between 20 years of mosquito capture data and the time lagged environmental quantities daytime length, temperature, precipitation, relative humidity and wind speed were used to generate a predictive model for the population dynamics of this vector species. Methods Mosquito population in the study area was represented by averaged time series of mosquitos counts captured at 6 sites in Cook County (Illinois, USA). Cross-correlation maps (CCMs) were compiled to investigate the association between mosquito abundances and environmental quantities. The results obtained from the CCMs were incorporated into a Poisson regression to generate a predictive model. To optimize the predictive model the time lags obtained from the CCMs were adjusted using a genetic algorithm. Results CCMs for weekly data showed a highly positive correlation of mosquito abundances with daytime length 4 to 5 weeks prior to capture (quantified by a Spearman rank order correlation of rS = 0.898) and with temperature during 2 weeks prior to capture (rS = 0.870). Maximal negative correlations were found for wind speed averaged over 3 week prior to capture (rS = −0.621). Cx. pipiens/restuans population dynamics was predicted by integrating the CCM results in Poisson regression models. They were used to simulate the average seasonal cycle of the mosquito abundance. Verification with observations resulted in a correlation of rS = 0.899 for daily and rS = 0.917 for weekly data. Applying the optimized models to the entire 20-years time series also resulted in a suitable fit with rS = 0.876 for daily and rS = 0.899 for weekly data. Conclusions The study demonstrates the application of interval lagged weather data to predict mosquito abundances with a feasible accuracy, especially when related to weekly Cx. pipiens/restuans populations. PMID:23634763
NASA Astrophysics Data System (ADS)
Sha, X.; Xu, K.; Bentley, S. J.; Robichaux, P.
2016-02-01
Although many studies of sediment diversions have been conducted on the Mississippi Delta, relatively less attention has been paid to understanding sediment retention and basic cohesive sedimentation processes in receiving basins. Our research evaluates long-term (up to six months) sedimentation processes through various laboratory experiments, especially cohesive sediment settling, consolidation and resuspension and their impacts on sediment retention. Bulk sediment samples were collected from West Bay, near Head of Passes of the Mississippi Delta, and the Big Mar basin that receive water and sediment from the Caernarvon Diversion in the upper Breton Sound region of Louisiana, USA. A-230-cm tall settling column with nine sampling ports at 15 cm intervals was used to measure the consolidation for four initial sediment concentrations (10-120 kg m-3) with two salinities (1 ppt & 5 ppt). Samples of sediment slurry were taken from every port at different time intervals up to 15 days or longer (higher concentration needs longer time to consolidate) to record concentrations gravimetrically. A 200 cm long tube was connected to a 50 cm long core chamber to accumulate at least a 10 cm thick sediment column for erosion tests. A dual-core Gust Erosion Microcosm System was employed to measure time-series (0.5, 1, 2, 3, 4, 5, 6 months) erodibility at seven shear stress regimes (0.01-0.60 Pa). Our preliminary results show a significant decrease of erodibility with time and high concentration (120g/L). Salinity impacted on sediment behavior in consolidation experiments. Our study reveals that more enclosed receiving basins, intermittent openings of diversions, or reduced shear stress due to man-made structure all can potentially reduce cohesive sediment erosion in coastal Louisiana. Further results will be analyzed to determine the model constants. Consolidating rates and corresponding erosional changes will be determined to optimize sediment retention in coastal protection.
Sarridou, Despoina G; Chalmouki, Georgia; Braoudaki, Maria; Koutsoupaki, Anna; Mela, Argiro; Vadalouka, Athina
2015-01-01
Up until now, the optimal strategy for postoperative pain management after total knee arthroplasty (TKA) remains to be elucidated. The current investigation aimed to examine the analgesic efficacy and the opioid sparing effects of intravenous parecoxib in combination with continuous femoral blockade. Randomized, double-blind, prospective trial. University hospital in the United Kingdom. In total, 90 patients underwent TKA under subarachnoid anesthesia and received continuous femoral block initially as a bolus with 20 mL of ropivacaine 0.75%. Infusion of 0.2% on 10 mL/h followed. Patients were randomized into 2 groups. Group D and Group P received parecoxib and placebo, respectively at 12 hour time intervals. Visual analog scale (VAS) pain scores were obtained at different time intervals including 4, 8, 12, 24 and 36 hours. The pain scores were measured with patients in a resting position. Morphine could also be administered with a patient controlled analgesia (PCA) pump if the specified analgesia was deemed inadequate (VAS > 5). None of the patients were withdrawn from the study. Parecoxib provided greater relief than placebo following TKA. The VAS pain scores measured at rest were statistically significantly lower in parecoxib-treated patients compared to the placebo group (P = 0.007) at 4 (P = 0.044), 12 (P = 0.001), and 24 hours (P = 0.012), postoperatively. Patients receiving parecoxib consumed less morphine at all time intervals than patients receiving placebo, with borderline statistical significance (P = 0.054). In each time period, all patients receiving continuous femoral block irrespectively of the treatment group, required low morphine doses. Current protocol did not answer question as to functional recovery. According to our findings intravenous parecoxib in combination with continuous femoral block provided superior analgesic efficacy and opioid sparing effects in patients undergoing TKA.
The Influence of Interpregnancy Interval on Infant Mortality
MCKINNEY, David; HOUSE, Melissa; CHEN, Aimin; MUGLIA, Louis; DEFRANCO, Emily
2017-01-01
Background In Ohio the infant mortality rate is above the national average and the black infant mortality rate is more than twice the white infant mortality rate. Having a short interpregnancy interval has been shown to correlate with preterm birth and low birth weight, but the effect of short interpregnancy interval on infant mortality is less well established. Objective To quantify the population impact of interpregnancy interval on the risk of infant mortality. Study Design This was a statewide population-based retrospective cohort study of all births (n=1,131,070) and infant mortalities (n=8,152) using linked Ohio birth and infant death records from 1/2007 through 9/2014. For this study we analyzed 5 interpregnancy interval categories: 0 to < 6 months, 6 to < 12 months, 12 to < 24 months, 24 to < 60 months, and ≥ 60 months. The primary outcome for this study was infant mortality. During the study period, 3701 infant mortalities were linked to a live birth certificate with an interpregnancy interval available. We calculated the frequency and relative risk (RR) of infant mortality for each interval compared to a referent interval of 12 to < 24 months. Stratified analyses by maternal race were also performed. Adjusted risks were estimated after accounting for statistically significant and biologically plausible confounding variables. Adjusted relative risk was utilized to calculate the attributable risk percent of short interpregnancy intervals on infant mortality. Results Short interpregnancy intervals were common in Ohio during the study period. 20.5% of all multiparous births followed an interval of < 12 months. The overall infant mortality rate during this time was 7.2 per 1000 live births (6.0 for white mothers and 13.1 for black mothers). Infant mortalities occurred more frequently for births that occurred following short intervals of 0 to < 6 months (9.2 per 1000) and 6 to < 12 months (7.1 per 1000) compared to 12 to < 24 months (5.6 per 1000), (p= <0.001 and <0.001). The highest risk for infant mortality followed interpregnancy intervals of 0 to < 6 months, adjRR 1.32 (95% CI 1.17–1.49) followed by interpregnancy intervals of 6 to < 12 months, adjRR 1.16 (95% CI 1.04–1.30). Analysis stratified by maternal race revealed similar findings. Attributable risk calculation showed that 24.2% of infant mortalities following intervals of 0 to < 6 months and 14.1% with intervals of 6 to < 12 months are attributable to the short interpregnancy interval. By avoiding short interpregnancy intervals of 12 months or less we estimate that in the state of Ohio 31 infant mortalities (20 white and 8 black) per year could have been prevented and the infant mortality rate could have been reduced from 7.2 to 7.0 during this time frame. Conclusion An interpregnancy interval of 12–60 months (1–5 years) between birth and conception of next pregnancy is associated with lowest risk of infant mortality. Public health initiatives and provider counseling to optimize birth spacing has the potential to significantly reduce infant mortality for both white and black mothers. PMID:28034653
Costello, Rebecca B; Elin, Ronald J; Rosanoff, Andrea; Wallace, Taylor C; Guerrero-Romero, Fernando; Hruby, Adela; Lutsey, Pamela L; Nielsen, Forrest H; Rodriguez-Moran, Martha; Song, Yiqing; Van Horn, Linda V
2016-11-01
The 2015 Dietary Guidelines Advisory Committee indicated that magnesium was a shortfall nutrient that was underconsumed relative to the Estimated Average Requirement (EAR) for many Americans. Approximately 50% of Americans consume less than the EAR for magnesium, and some age groups consume substantially less. A growing body of literature from animal, epidemiologic, and clinical studies has demonstrated a varied pathologic role for magnesium deficiency that includes electrolyte, neurologic, musculoskeletal, and inflammatory disorders; osteoporosis; hypertension; cardiovascular diseases; metabolic syndrome; and diabetes. Studies have also demonstrated that magnesium deficiency is associated with several chronic diseases and that a reduced risk of these diseases is observed with higher magnesium intake or supplementation. Subclinical magnesium deficiency can exist despite the presentation of a normal status as defined within the current serum magnesium reference interval of 0.75-0.95 mmol/L. This reference interval was derived from data from NHANES I (1974), which was based on the distribution of serum magnesium in a normal population rather than clinical outcomes. What is needed is an evidenced-based serum magnesium reference interval that reflects optimal health and the current food environment and population. We present herein data from an array of scientific studies to support the perspective that subclinical deficiencies in magnesium exist, that they contribute to several chronic diseases, and that adopting a revised serum magnesium reference interval would improve clinical care and public health. © 2016 American Society for Nutrition.
Elin, Ronald J; Rosanoff, Andrea; Lutsey, Pamela L; Nielsen, Forrest H; Rodriguez-Moran, Martha
2016-01-01
The 2015 Dietary Guidelines Advisory Committee indicated that magnesium was a shortfall nutrient that was underconsumed relative to the Estimated Average Requirement (EAR) for many Americans. Approximately 50% of Americans consume less than the EAR for magnesium, and some age groups consume substantially less. A growing body of literature from animal, epidemiologic, and clinical studies has demonstrated a varied pathologic role for magnesium deficiency that includes electrolyte, neurologic, musculoskeletal, and inflammatory disorders; osteoporosis; hypertension; cardiovascular diseases; metabolic syndrome; and diabetes. Studies have also demonstrated that magnesium deficiency is associated with several chronic diseases and that a reduced risk of these diseases is observed with higher magnesium intake or supplementation. Subclinical magnesium deficiency can exist despite the presentation of a normal status as defined within the current serum magnesium reference interval of 0.75–0.95 mmol/L. This reference interval was derived from data from NHANES I (1974), which was based on the distribution of serum magnesium in a normal population rather than clinical outcomes. What is needed is an evidenced-based serum magnesium reference interval that reflects optimal health and the current food environment and population. We present herein data from an array of scientific studies to support the perspective that subclinical deficiencies in magnesium exist, that they contribute to several chronic diseases, and that adopting a revised serum magnesium reference interval would improve clinical care and public health. PMID:28140318
Ma, Guolin; Bai, Rongjie; Jiang, Huijie; Hao, Xuejia; Ling, Zaisheng; Li, Kefeng
2013-01-01
To develop an optimal scanning protocol for multislice spiral CT perfusion (CTP) imaging to evaluate hemodynamic changes in liver cirrhosis with diethylnitrosamine- (DEN-) induced precancerous lesions. Male Wistar rats were randomly divided into the control group (n = 80) and the precancerous liver cirrhosis group (n = 40). The control group received saline injection and the liver cirrhosis group received 50 mg/kg DEN i.p. twice a week for 12 weeks. All animals underwent plain CT scanning, CTP, and contrast-enhanced CT scanning. Scanning parameters were optimized by adjusting the diatrizoate concentration, the flow rate, and the delivery time. The hemodynamics of both groups was further compared using optimized multislice spiral CTP imaging. High-quality CTP images were obtained with following parameters: 150 kV; 150 mAs; 5 mm thickness, 5 mm interval; pitch, 1; matrix, 512 × 512; and FOV, 9.6 cm. Compared to the control group, the liver cirrhosis group had a significantly increased value of the hepatic arterial fraction and the hepatic artery perfusion (P < 0.05) but significantly decreased hepatic portal perfusion and mean transit time (P < 0.05). Multislice spiral CTP imaging can be used to evaluate the hemodynamic changes in the rat model of liver cirrhosis with precancerous lesions.
Wu, Yunqi; Hussain, Munir; Fassihi, Reza
2005-06-15
A simple spectrophotometric method for determination of glucosamine release from sustained release (SR) hydrophilic matrix tablet based on reaction with ninhydrin is developed, optimized and validated. The purple color (Ruhemann purple) resulted from the reaction was stabilized and measured at 570 nm. The method optimization was essential as many procedural parameters influenced the accuracy of determination including the ninhydrin concentration, reaction time, pH, reaction temperature, purple color stability period, and glucosamine/ninhydrin ratio. Glucosamine tablets (600 mg) with different hydrophilic polymers were formulated and manufactured on a rotary press. Dissolution studies were conducted (USP 26) using deionized water at 37+/-0.2 degrees C with paddle rotation of 50 rpm, and samples were removed manually at appropriate time intervals. Under given optimized reaction conditions that appeared to be critical, glucosamine was quantitatively analyzed and the calibration curve in the range of 0.202-2.020 mg (r=0.9999) was constructed. The recovery rate of the developed method was 97.8-101.7% (n=6). Reproducible dissolution profiles were achieved from the dissolution studies performed on different glucosamine tablets. The developed method is easy to use, accurate and highly cost-effective for routine studies relative to HPLC and other techniques.
Groundwater Pollution Source Identification using Linked ANN-Optimization Model
NASA Astrophysics Data System (ADS)
Ayaz, Md; Srivastava, Rajesh; Jain, Ashu
2014-05-01
Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities including industrial and agricultural activities are generally responsible for this contamination. Identification of groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. Groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts- an optimization model and an ANN model. Decision variables of linked ANN-Optimization model contain source location and release period of pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. In the formulation of the objective function, we require the lag time which is not known. An ANN model with one hidden layer is trained using Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and lag time is obtained as the output. Performance of the proposed model is evaluated for two and three dimensional case with error-free and erroneous data. Erroneous data was generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking of ANN model with proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one and hence complexity of optimization model is reduced. The results show that our proposed linked ANN-Optimization model is able to predict the source parameters for the error-free data accurately. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that mean values as predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
We demonstrate how thermal-optical transmission analysis (TOT) for refractory light-absorbing carbon in atmospheric particulate matter was optimized with empirical response surface modeling. TOT employs pyrolysis to distinguish the mass of black carbon (BC) from organic carbon (...
Huang, Sha; Deshpande, Aadya; Yeo, Sing-Chen; Lo, June C; Chee, Michael W L; Gooley, Joshua J
2016-09-01
The ability to recall facts is improved when learning takes place at spaced intervals, or when sleep follows shortly after learning. However, many students cram for exams and trade sleep for other activities. The aim of this study was to examine the interaction of study spacing and time in bed (TIB) for sleep on vocabulary learning in adolescents. In the Need for Sleep Study, which used a parallel-group design, 56 adolescents aged 15-19 years were randomly assigned to a week of either 5 h or 9 h of TIB for sleep each night as part of a 14-day protocol conducted at a boarding school. During the sleep manipulation period, participants studied 40 Graduate Record Examination (GRE)-type English words using digital flashcards. Word pairs were presented over 4 consecutive days (spaced items), or all at once during single study sessions (massed items), with total study time kept constant across conditions. Recall performance was examined 0 h, 24 h, and 120 h after all items were studied. For all retention intervals examined, recall of massed items was impaired by a greater amount in adolescents exposed to sleep restriction. In contrast, cued recall performance on spaced items was similar between sleep groups. Spaced learning conferred strong protection against the effects of sleep restriction on recall performance, whereas students who had insufficient sleep were more likely to forget items studied over short time intervals. These findings in adolescents demonstrate the importance of combining good study habits and good sleep habits to optimize learning outcomes. © 2016 Associated Professional Sleep Societies, LLC.
Fang, Sinan; Pan, Heping; Du, Ting; Konaté, Ahmed Amara; Deng, Chengxiang; Qin, Zhen; Guo, Bo; Peng, Ling; Ma, Huolin; Li, Gang; Zhou, Feng
2016-01-01
This study applied the finite-difference time-domain (FDTD) method to forward modeling of the low-frequency crosswell electromagnetic (EM) method. Specifically, we implemented impulse sources and convolutional perfectly matched layer (CPML). In the process to strengthen CPML, we observed that some dispersion was induced by the real stretch κ, together with an angular variation of the phase velocity of the transverse electric plane wave; the conclusion was that this dispersion was positively related to the real stretch and was little affected by grid interval. To suppress the dispersion in the CPML, we first derived the analytical solution for the radiation field of the magneto-dipole impulse source in the time domain. Then, a numerical simulation of CPML absorption with high-frequency pulses qualitatively amplified the dispersion laws through wave field snapshots. A numerical simulation using low-frequency pulses suggested an optimal parameter strategy for CPML from the established criteria. Based on its physical nature, the CPML method of simply warping space-time was predicted to be a promising approach to achieve ideal absorption, although it was still difficult to entirely remove the dispersion. PMID:27585538
Petersen, Christian C; Mistlberger, Ralph E
2017-08-01
The mechanisms that enable mammals to time events that recur at 24-h intervals (circadian timing) and at arbitrary intervals in the seconds-to-minutes range (interval timing) are thought to be distinct at the computational and neurobiological levels. Recent evidence that disruption of circadian rhythmicity by constant light (LL) abolishes interval timing in mice challenges this assumption and suggests a critical role for circadian clocks in short interval timing. We sought to confirm and extend this finding by examining interval timing in rats in which circadian rhythmicity was disrupted by long-term exposure to LL or by chronic intake of 25% D 2 O. Adult, male Sprague-Dawley rats were housed in a light-dark (LD) cycle or in LL until free-running circadian rhythmicity was markedly disrupted or abolished. The rats were then trained and tested on 15- and 30-sec peak-interval procedures, with water restriction used to motivate task performance. Interval timing was found to be unimpaired in LL rats, but a weak circadian activity rhythm was apparently rescued by the training procedure, possibly due to binge feeding that occurred during the 15-min water access period that followed training each day. A second group of rats in LL were therefore restricted to 6 daily meals scheduled at 4-h intervals. Despite a complete absence of circadian rhythmicity in this group, interval timing was again unaffected. To eliminate all possible temporal cues, we tested a third group of rats in LL by using a pseudo-randomized schedule. Again, interval timing remained accurate. Finally, rats tested in LD received 25% D 2 O in place of drinking water. This markedly lengthened the circadian period and caused a failure of LD entrainment but did not disrupt interval timing. These results indicate that interval timing in rats is resistant to disruption by manipulations of circadian timekeeping previously shown to impair interval timing in mice.
NASA Astrophysics Data System (ADS)
Gromov, V. A.; Sharygin, G. S.; Mironov, M. V.
2012-08-01
An interval method of radar signal detection and selection based on non-energetic polarization parameter - the ellipticity angle - is suggested. The examined method is optimal by the Neumann-Pearson criterion. The probability of correct detection for a preset probability of false alarm is calculated for different signal/noise ratios. Recommendations for optimization of the given method are provided.
Graphical models for optimal power flow
Dvijotham, Krishnamurthy; Chertkov, Michael; Van Hentenryck, Pascal; ...
2016-09-13
Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithmmore » for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary tree-structured distribution networks and handle mixed-integer optimization problems. Further, it can be implemented in a distributed message-passing fashion that is scalable and is suitable for “smart grid” applications like control of distributed energy resources. In conclusion, numerical evaluations on several benchmark networks show that practical OPF problems can be solved effectively using this approach.« less
Ometto, Giovanni; Erlandsen, Mogens; Hunter, Andrew; Bek, Toke
2017-06-01
It has previously been shown that the intervals between screening examinations for diabetic retinopathy can be optimized by including individual risk factors for the development of the disease in the risk assessment. However, in some cases, the risk model calculating the screening interval may recommend a different interval than an experienced clinician. The purpose of this study was to evaluate the influence of factors unrelated to diabetic retinopathy and the distribution of lesions for discrepancies between decisions made by the clinician and the risk model. Therefore, fundus photographs from 90 screening examinations where the recommendations of the clinician and a risk model had been discrepant were evaluated. Forty features were defined to describe the type and location of the lesions, and classification and ranking techniques were used to assess whether the features could predict the discrepancy between the grader and the risk model. Suspicion of tumours, retinal degeneration and vascular diseases other than diabetic retinopathy could explain why the clinician recommended shorter examination intervals than the model. Additionally, the regional distribution of microaneurysms/dot haemorrhages was important for defining a photograph as belonging to the group where both the clinician and the risk model had recommended a short screening interval as opposed to the other decision alternatives. Features unrelated to diabetic retinopathy and the regional distribution of retinal lesions may affect the recommendation of the examination interval during screening for diabetic retinopathy. The development of automated computerized algorithms for extracting information about the type and location of retinal lesions could be expected to further optimize examination intervals during screening for diabetic retinopathy. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Narovlyansky, A N; Sedov, A M; Pronin, A V; Shulzhenko, A E; Sanin, A V; Zuikova, I N; Schubelko, R V; Savchenko, A Yu; Parfenova, T M; Izmestieva, A V; Izmestieva, An V; Grigorieva, E A; Suprun, O V; Zubashev, I K; Kozlov, V S
2015-01-01
Selection of optimal dosage regimen, length of treatment course (frequency of administration), safety, tolerance and clinical effectiveness evaluation of the medical preparation fortepren in patients with chronical recurrent herpes virus infection of genital localization. The medical product of antiviral and immune modulating effect--fortepren (sodium polyprenyl phosphate) as a 4 mg/ml solution for injections combined with the base course of acyclic nucleoside acyclovir, 400 mg tablets, held studies. 40 male and female patients participated in the study. After a 10-day acyclovir course (400 mg x 3 times a day) for removing the acute phase, 4 groups of 10 individuals were formed: 1--5 ml (20 mg) of fortepren i/m once at day 13 ± 2 after the start of the study after the completion of the treatment of the acute phase of the disease; 2--5 ml (20 mg) fortepren i/m 3 times at an interval of 21 days; 3--2 ml (8 mg) fortepren i/m 3 times at an interval of 21 days; 4 (control)--5 ml of placebo i/m at remission stage 3 times at an interval of 21 days. Increase of the duration of inter-recurrence period, decrease of the severity of the recurrences, state of skin and mucous damage elements, improvements of immunologic parameters were considered during effectiveness evaluation. Significant differences in the frequency of recurrences of genital herpes were shown for 3 months of observation in experimental and control groups. A significant reduction of genital herpes recurrence frequency from 3.52 ± 0.09 (before treatment) to 2.89 ± 0.08 (after treatment) was noted in patients of group 3 (p < 0.001). The frequency of recurrences in the control group was 3.84 ± 0.10, that was higher than the parameters in all the experimental groups. A significant reduction of the rash area was noted in group 3, moreover, a redution of frequency of detection of clinical manifestations of genital herpes in the form of vesicle elements after treatment in groups 2 (p = 0.02) and 3 (p = 0.005) was found. Evaluation of local symptoms has established that burning have caused minimal discomfort for patients of groups 3 and 4 and itch and soreness--of groups 1 and 3. The least pronounced exacerbations were noted in patients of group 3. Intramuscular administration of fortepren preparation was established to result in the increase of titers of leukocyte virus-induced interferon for the whole duration of treatment. An intramuscular dose of 2 ml (8 mg) at recurrence stage 3 times at an interval of 21 days after the completion of the 10-day base course of treatment of the acute phase of chronical recurrent herpes virus infection of genital localization using acyclovir was accepted as an optimal dosage regimen. Analysis of the obtained results has shown an acceptable safety profile and a good level of tolerance for fortepren preparation.
Rapid developing of Ektaspeed dental film by increase of temperature.
Fredholm, U; Julin, P
1987-01-01
Three rapid developing solutions and one standard solution were tested for contrast and fog with Ektaspeed film at temperatures ranging from 15 degrees to 30 degrees C. Temperatures below 18 degrees C were found to give extremely long developing times, more than 3 minutes with rapid developers, and were not recommended. In the interval between 21 degrees C and 24 degrees C the standard developer needed 3.5-2.5 minutes to get optimum contrast. Two rapid developers needed 1.5 minutes and the fastest 1 minute to get satisfactory contrast throughout this temperature range. A further increase of the temperature gave only a marginal time saving with the rapid solutions and was not considered worthwhile. The relation between developing time and temperature for the rapid developers had a very steep gradient below 21 degrees C, while it levelled out at room temperature. For the standard developer the time/temperature function had a more even gradient from 7.5 minutes at 15 degrees C to 1.5 minutes at 27 degrees C, i.e. an average reduction of 0.5 minute per degree. Between 27 degrees C and 30 degrees C the gradient levelled out. The fog did not increase significantly until at 30 degrees C or at more than double the optimal developing time at room temperature. Recommendations of optimal developing time of Ektaspeed film at different temperatures are given for the four tested developing solutions.
Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models
ERIC Educational Resources Information Center
Doebler, Anna; Doebler, Philipp; Holling, Heinz
2013-01-01
The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter [theta] is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…
Chambaz, Antoine; Zheng, Wenjing; van der Laan, Mark J
2017-01-01
This article studies the targeted sequential inference of an optimal treatment rule (TR) and its mean reward in the non-exceptional case, i.e. , assuming that there is no stratum of the baseline covariates where treatment is neither beneficial nor harmful, and under a companion margin assumption. Our pivotal estimator, whose definition hinges on the targeted minimum loss estimation (TMLE) principle, actually infers the mean reward under the current estimate of the optimal TR. This data-adaptive statistical parameter is worthy of interest on its own. Our main result is a central limit theorem which enables the construction of confidence intervals on both mean rewards under the current estimate of the optimal TR and under the optimal TR itself. The asymptotic variance of the estimator takes the form of the variance of an efficient influence curve at a limiting distribution, allowing to discuss the efficiency of inference. As a by product, we also derive confidence intervals on two cumulated pseudo-regrets, a key notion in the study of bandits problems. A simulation study illustrates the procedure. One of the corner-stones of the theoretical study is a new maximal inequality for martingales with respect to the uniform entropy integral.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiang, H; Hirsch, A; Willins, J
2014-06-01
Purpose: To measure intrafractional prostate motion by time-based stereotactic x-ray imaging and investigate the impact on the accuracy and efficiency of prostate SBRT delivery. Methods: Prostate tracking log files with 1,892 x-ray image registrations from 18 SBRT fractions for 6 patients were retrospectively analyzed. Patient setup and beam delivery sessions were reviewed to identify extended periods of large prostate motion that caused delays in setup or interruptions in beam delivery. The 6D prostate motions were compared to the clinically used PTV margin of 3–5 mm (3 mm posterior, 5 mm all other directions), a hypothetical PTV margin of 2–3 mmmore » (2 mm posterior, 3 mm all other directions), and the rotation correction limits (roll ±2°, pitch ±5° and yaw ±3°) of CyberKnife to quantify beam delivery accuracy. Results: Significant incidents of treatment start delay and beam delivery interruption were observed, mostly related to large pitch rotations of ≥±5°. Optimal setup time of 5–15 minutes was recorded in 61% of the fractions, and optimal beam delivery time of 30–40 minutes in 67% of the fractions. At a default imaging interval of 15 seconds, the percentage of prostate motion beyond PTV margin of 3–5 mm varied among patients, with a mean at 12.8% (range 0.0%–31.1%); and the percentage beyond PTV margin of 2–3 mm was at a mean of 36.0% (range 3.3%–83.1%). These timely detected offsets were all corrected real-time by the robotic manipulator or by operator intervention at the time of treatment interruptions. Conclusion: The durations of patient setup and beam delivery were directly affected by the occurrence of large prostate motion. Frequent imaging of down to 15 second interval is necessary for certain patients. Techniques for reducing prostate motion, such as using endorectal balloon, can be considered to assure consistently higher accuracy and efficiency of prostate SBRT delivery.« less
Guo, Yingming; Huang, Tinglin; Wen, Gang; Cao, Xin
2015-08-01
To solve the problem of shortened backwashing intervals in groundwater plants, several disinfectants including ozone (O3), hydrogen peroxide (H2O2) and chlorine dioxide (ClO2) were examined to peel off the film from the quartz sand surface in four pilot-scale columns. An optimized oxidant dosage and oxidation time were determined by batch tests. Subsequently, the optimized conditions were tested in the four pilot-scale columns. The results demonstrated that the backwashing intervals increased from 35.17 to 54.33 (H2O2) and to 53.67 hr (ClO2) after the oxidation treatments, and the increase of backwashing interval after treatment by O3 was much less than for the other two treatments. Interestingly, the treatment efficiency of filters was not affected by O3 or H2O2 oxidation; but after oxidation by ClO2, the treatment efficiency was deteriorated, especially the ammonia removal (from 96.96% to 24.95%). The filter sands before and after the oxidation were characterized by scanning electron microscopy and X-ray photoelectron spectroscopy. Compared with the oxidation by O3 and H2O2, the structures on the surface of filter sands were seriously damaged after oxidation by ClO2. The chemical states of manganese on the surfaces of those treated sands were only changed by ClO2. The damage of the structures and the change of the chemical states of manganese might have a negative effect on the ammonia removal. In summary, H2O2 is a suitable agent for film peeling. Copyright © 2015. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Simonsen, I.; Jensen, M. H.; Johansen, A.
2002-06-01
In stochastic finance, one traditionally considers the return as a competitive measure of an asset, i.e., the profit generated by that asset after some fixed time span Δt, say one week or one year. This measures how well (or how bad) the asset performs over that given period of time. It has been established that the distribution of returns exhibits ``fat tails'' indicating that large returns occur more frequently than what is expected from standard Gaussian stochastic processes [1-3]. Instead of estimating this ``fat tail'' distribution of returns, we propose here an alternative approach, which is outlined by addressing the following question: What is the smallest time interval needed for an asset to cross a fixed return level of say 10%? For a particular asset, we refer to this time as the investment horizon and the corresponding distribution as the investment horizon distribution. This latter distribution complements that of returns and provides new and possibly crucial information for portfolio design and risk-management, as well as for pricing of more exotic options. By considering historical financial data, exemplified by the Dow Jones Industrial Average, we obtain a novel set of probability distributions for the investment horizons which can be used to estimate the optimal investment horizon for a stock or a future contract.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohta, Shinichi, E-mail: junryuhei@yahoo.co.jp; Nitta, Norihisa; Sonoda, Akinaga
2010-08-15
This study was designed to evaluate the optimal conditions for binding cisplatin and porous gelatin particles (PGPs) and to establish in vivo drug release pharmacokinetics. PGPs were immersed in cisplatin solutions under different conditions: concentration, immersion time, and temperature. Thereafter, PGPs were washed in distilled water to remove uncombined cisplatin and were then freeze-dried. The platinum concentration (PC) in the PGPs was then measured. For the in vivo release test, 50 mg/kg of the cisplatin-conjugated PGPs was implanted subcutaneously in the abdominal region of two rabbits. PCs in the blood were measured at different time intervals. PCs significantly increased inmore » direct proportion to the concentration and immersion time (p < 0.01). Although PC increased at higher solution temperature, it was not a linear progression. For the in vivo release test, platinum was released from cisplatin-conjugated PGPs after 1 day, and the peak PC was confirmed 2 days after implantation. Platinum in the blood was detected until 7 days after implantation in one rabbit and 15 days after administration in the other rabbit. Platinum binding with PGPs increased with a higher concentration of cisplatin solution at a higher temperature over a longer duration of time. Release of cisplatin from cisplatin-conjugated PGPs was confirmed in vivo.« less
A diagnostic process extended in time as a fuzzy model
NASA Astrophysics Data System (ADS)
Rakus-Andersson, Elisabeth; Gerstenkorn, Tadeusz
1999-03-01
The paper refers to earlier results obtained by the authors and constitutes their essential complement and extension by introducing to a diagnostic model the assumption that the decision concerning the diagnosis is based on observations of symptoms carried out repeatedly, by stages, which may have effect in a change of these symptoms in increasing time. The model concerns the observations of symptoms at an individual patient at a time interval. The changes of the symptoms give some additional information, sometimes very important in the diagnostic process when the clinical picture of a patient in a certain interval of time differs from that one which has been received from the beginning of the disease. It may occur that the change in the intensity of a symptom decides an acceptance of another diagnosis after some time when the patient does not feel better. The aim is to fix an optimal diagnosis on the basis of clinical symptoms typical of several morbid units with respect to the changes of these symptoms in time. In order to solve such a posed problem the authors apply the method of fuzzy relation equations which are modelled by means of logical rules of inference. Moreover, in the final decision concerning the choice of a proper diagnosis, a normed Euclidean distance is introduced as a measure between a real decision and an "ideal" decision. A simple example presents the practical action of the method to show its relevance to a possible user.
Market-based control strategy for long-span structures considering the multi-time delay issue
NASA Astrophysics Data System (ADS)
Li, Hongnan; Song, Jianzhu; Li, Gang
2017-01-01
To solve the different time delays that exist in the control device installed on spatial structures, in this study, discrete analysis using a 2 N precise algorithm was selected to solve the multi-time-delay issue for long-span structures based on the market-based control (MBC) method. The concept of interval mixed energy was introduced from computational structural mechanics and optimal control research areas, and it translates the design of the MBC multi-time-delay controller into a solution for the segment matrix. This approach transforms the serial algorithm in time to parallel computing in space, greatly improving the solving efficiency and numerical stability. The designed controller is able to consider the issue of time delay with a linear controlling force combination and is especially effective for large time-delay conditions. A numerical example of a long-span structure was selected to demonstrate the effectiveness of the presented controller, and the time delay was found to have a significant impact on the results.
Philbin, Morgan M.; Tanner, Amanda E.; DuVal, Anna; Ellen, Jonathan M.; Xu, Jiahong; Kapogiannis, Bill; Bethel, Jim; Fortenberry, J. Dennis
2016-01-01
Objective To examine how the time from HIV testing to care referral and from referral to care linkage influenced time to care engagement for newly diagnosed HIV-infected adolescents. Methods We evaluated the Care Initiative, a care linkage and engagement program for HIV-infected adolescents in 15 U.S. clinics. We analyzed client-level factors, provider type and intervals from HIV testing to care referral and from referral to care linkage as predictors of care engagement. Engagement was defined as a second HIV-related medical visit within 16 weeks of initial HIV-related medical visit (linkage). Results At 32 months, 2,143 youth had been referred. Of these, 866 were linked to care through the Care Initiative within 42 days and thus eligible for study inclusion. Of the linked youth, 90.8% were ultimately engaged in care. Time from HIV testing to referral (e.g., ≤7 days versus >365 days) was associated with engagement (AOR=2.91; 95% CI: 1.43–5.94) and shorter time to engagement (Adjusted HR=1.41; 95% CI: 1.11–1.79). Individuals with shorter care referral to linkage intervals (e.g., ≤7 days versus 22–42 days) engaged in care faster (Adjusted HR=2.90; 95% CI: 2.34–3.60) and more successfully (AOR=2.01; 95% CI: 1.04–3.89). Conclusions These data address a critical piece of the care continuum, and can offer suggestions of where and with whom to intervene in order to best achieve the care engagement goals outlined in the U.S. National HIV/AIDS Strategy. These results may also inform programs and policies that set concrete milestones and strategies for optimal care linkage timing for newly diagnosed adolescents. PMID:26885804
Bester, Rachelle; Jooste, Anna E C; Maree, Hans J; Burger, Johan T
2012-09-27
Grapevine leafroll-associated virus 3 (GLRaV-3) is the main contributing agent of leafroll disease worldwide. Four of the six GLRaV-3 variant groups known have been found in South Africa, but their individual contribution to leafroll disease is unknown. In order to study the pathogenesis of leafroll disease, a sensitive and accurate diagnostic assay is required that can detect different variant groups of GLRaV-3. In this study, a one-step real-time RT-PCR, followed by high-resolution melting (HRM) curve analysis for the simultaneous detection and identification of GLRaV-3 variants of groups I, II, III and VI, was developed. A melting point confidence interval for each variant group was calculated to include at least 90% of all melting points observed. A multiplex RT-PCR protocol was developed to these four variant groups in order to assess the efficacy of the real-time RT-PCR HRM assay. A universal primer set for GLRaV-3 targeting the heat shock protein 70 homologue (Hsp70h) gene of GLRaV-3 was designed that is able to detect GLRaV-3 variant groups I, II, III and VI and differentiate between them with high-resolution melting curve analysis. The real-time RT-PCR HRM and the multiplex RT-PCR were optimized using 121 GLRaV-3 positive samples. Due to a considerable variation in melting profile observed within each GLRaV-3 group, a confidence interval of above 90% was calculated for each variant group, based on the range and distribution of melting points. The intervals of groups I and II could not be distinguished and a 95% joint confidence interval was calculated for simultaneous detection of group I and II variants. An additional primer pair targeting GLRaV-3 ORF1a was developed that can be used in a subsequent real-time RT-PCR HRM to differentiate between variants of groups I and II. Additionally, the multiplex RT-PCR successfully validated 94.64% of the infections detected with the real-time RT-PCR HRM. The real-time RT-PCR HRM provides a sensitive, automated and rapid tool to detect and differentiate different variant groups in order to study the epidemiology of leafroll disease.
Computational problems in autoregressive moving average (ARMA) models
NASA Technical Reports Server (NTRS)
Agarwal, G. C.; Goodarzi, S. M.; Oneill, W. D.; Gottlieb, G. L.
1981-01-01
The choice of the sampling interval and the selection of the order of the model in time series analysis are considered. Band limited (up to 15 Hz) random torque perturbations are applied to the human ankle joint. The applied torque input, the angular rotation output, and the electromyographic activity using surface electrodes from the extensor and flexor muscles of the ankle joint are recorded. Autoregressive moving average models are developed. A parameter constraining technique is applied to develop more reliable models. The asymptotic behavior of the system must be taken into account during parameter optimization to develop predictive models.
NASA Astrophysics Data System (ADS)
Tamura, Yoshinobu; Yamada, Shigeru
OSS (open source software) systems which serve as key components of critical infrastructures in our social life are still ever-expanding now. Especially, embedded OSS systems have been gaining a lot of attention in the embedded system area, i.e., Android, BusyBox, TRON, etc. However, the poor handling of quality problem and customer support prohibit the progress of embedded OSS. Also, it is difficult for developers to assess the reliability and portability of embedded OSS on a single-board computer. In this paper, we propose a method of software reliability assessment based on flexible hazard rates for the embedded OSS. Also, we analyze actual data of software failure-occurrence time-intervals to show numerical examples of software reliability assessment for the embedded OSS. Moreover, we compare the proposed hazard rate model for the embedded OSS with the typical conventional hazard rate models by using the comparison criteria of goodness-of-fit. Furthermore, we discuss the optimal software release problem for the porting-phase based on the total expected software maintenance cost.
Uncertainty Analysis in 3D Equilibrium Reconstruction
Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.
2018-02-21
Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. Here in this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the resultsmore » of whole shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole shot reconstruction results in a time interval that will be used to validate the propagated uncertainty from a single time slice.« less
Uncertainty Analysis in 3D Equilibrium Reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cianciosa, Mark R.; Hanson, James D.; Maurer, David A.
Reconstruction is an inverse process where a parameter space is searched to locate a set of parameters with the highest probability of describing experimental observations. Due to systematic errors and uncertainty in experimental measurements, this optimal set of parameters will contain some associated uncertainty. This uncertainty in the optimal parameters leads to uncertainty in models derived using those parameters. V3FIT is a three-dimensional (3D) equilibrium reconstruction code that propagates uncertainty from the input signals, to the reconstructed parameters, and to the final model. Here in this paper, we describe the methods used to propagate uncertainty in V3FIT. Using the resultsmore » of whole shot 3D equilibrium reconstruction of the Compact Toroidal Hybrid, this propagated uncertainty is validated against the random variation in the resulting parameters. Two different model parameterizations demonstrate how the uncertainty propagation can indicate the quality of a reconstruction. As a proxy for random sampling, the whole shot reconstruction results in a time interval that will be used to validate the propagated uncertainty from a single time slice.« less
Nonlinear feedback control for high alpha flight
NASA Technical Reports Server (NTRS)
Stalford, Harold
1990-01-01
Analytical aerodynamic models are derived from a high alpha 6 DOF wind tunnel model. One detail model requires some interpolation between nonlinear functions of alpha. One analytical model requires no interpolation and as such is a completely continuous model. Flight path optimization is conducted on the basic maneuvers: half-loop, 90 degree pitch-up, and level turn. The optimal control analysis uses the derived analytical model in the equations of motion and is based on both moment and force equations. The maximum principle solution for the half-loop is poststall trajectory performing the half-loop in 13.6 seconds. The agility induced by thrust vectoring capability provided a minimum effect on reducing the maneuver time. By means of thrust vectoring control the 90 degrees pitch-up maneuver can be executed in a small place over a short time interval. The agility capability of thrust vectoring is quite beneficial for pitch-up maneuvers. The level turn results are based currently on only outer layer solutions of singular perturbation. Poststall solutions provide high turn rates but generate higher losses of energy than that of classical sustained solutions.
Optimistic outlook regarding maternity protects against depressive symptoms postpartum.
Robakis, Thalia K; Williams, Katherine E; Crowe, Susan; Kenna, Heather; Gannon, Jamie; Rasgon, Natalie L
2015-04-01
The transition to motherhood is a time of elevated risk for clinical depression. Dispositional optimism may be protective against depressive symptoms; however, the arrival of a newborn presents numerous challenges that may be at odds with initially positive expectations, and which may contribute to depressed mood. We have explored the relative contributions of antenatal and postnatal optimism regarding maternity to depressive symptoms in the postnatal period. Ninety-eight pregnant women underwent clinician interview in the third trimester to record psychiatric history, antenatal depressive symptoms, and administer a novel measure of optimism towards maternity. Measures of depressive symptoms, attitudes to maternity, and mother-to-infant bonding were obtained from 97 study completers at monthly intervals through 3 months postpartum. We found a positive effect of antenatal optimism, and a negative effect of postnatal disconfirmation of expectations, on depressive mood postnatally. Postnatal disconfirmation, but not antenatal optimism, was associated with more negative attitudes toward maternity postnatally. Antenatal optimism, but not postnatal disconfirmation, was associated with reduced scores on a mother-to-infant bonding measure. The relationships between antenatal optimism, postnatal disconfirmation of expectations, and postnatal depression held true among primigravidas and multigravidas, as well as among women with prior histories of mood disorders, although antenatal optimism tended to be lower among women with mental health histories. We conclude that cautious antenatal optimism, rather than immoderate optimism or frank pessimism, is the approach that is most protective against postnatal depressive symptoms, and that this is true irrespective of either mood disorder history or parity. Factors predisposing to negative cognitive assessments and impaired mother-to-infant bonding may be substantially different than those associated with depressive symptoms, a finding that merits further study.
Optimistic Outlook Regarding Maternity Protects Against Depressive Symptoms Postpartum
Robakis, Thalia K.; Williams, Katherine E.; Crowe, Susan; Kenna, Heather; Gannon, Jamie; Rasgon, Natalie L.
2016-01-01
Purpose The transition to motherhood is a time of elevated risk for clinical depression. Dispositional optimism may be protective against depressive symptoms; however the arrival of a newborn presents numerous challenges that may be at odds with initially positive expectations, and which may contribute to depressed mood. We have explored the relative contributions of antenatal and postnatal optimism regarding maternity to depressive symptoms in the postnatal period. Methods 98 pregnant women underwent clinician interview in the third trimester to record psychiatric history, antenatal depressive symptoms, and administer a novel measure of optimism towards maternity. Measures of depressive symptoms, attitudes to maternity, and mother-to-infant bonding were obtained from 97 study completers at monthly intervals through three months postpartum. Results We found a positive effect of antenatal optimism, and a negative effect of postnatal disconfirmation of expectations, on depressive mood postnatally. Postnatal disconfirmation, but not antenatal optimism, was associated with more negative attitudes toward maternity postnatally. Antenatal optimism, but not postnatal disconfirmation, was associated with reduced scores on a mother-to-infant bonding measure. The relationships between antenatal optimism, postnatal disconfirmation of expectations, and postnatal depression held true among primigravidas and multigravidas, as well as among women with prior histories of mood disorders, although antenatal optimism tended to be lower among women with mental health histories. Conclusions We conclude that cautious antenatal optimism, rather than immoderate optimism or frank pessimism, is the approach that is most protective against postnatal depressive symptoms, and that this is true irrespective of either mood disorder history or parity. Factors predisposing to negative cognitive assessments and impaired mother-to-infant bonding may be substantially different than those associated with depressive symptoms, a finding that merits further study. PMID:25088532
Effect of provider volume on resource utilization for surgical procedures of the knee.
Jain, Nitin; Pietrobon, Ricardo; Guller, Ulrich; Shankar, Anoop; Ahluwalia, Ajit S; Higgins, Laurence D
2005-05-01
Operating-room time and patient disposition on discharge are important determinants of healthcare resource utilization and cost. We examined the relation between these determinants and hospital/surgeon volume for anterior cruciate ligament (ACL) reconstruction and meniscectomy procedures. Patients undergoing ACL reconstruction (18,390 cases) and meniscectomy (123,012 cases) were extracted from the State Ambulatory Surgery Databases for the years 1997-2000. Surgeon and hospital volume were divided into low-, intermediate-, and high-volume categories. Multivariate logistic regression models were used to estimate the adjusted association between surgeon and hospital volume and patient discharge status and operating-room time. Patients undergoing ACL reconstruction or meniscectomy performed by low-volume surgeons were significantly more likely to be non-routinely discharged as compared to high-volume surgeons (adjusted odds ratio 3.5, 95% confidence interval 1.7-7.2 for ACL reconstruction; adjusted odds ratio 2.0, 95% confidence interval 1.6-2.3 for meniscectomy). The mean operating-room time for performing ACL reconstruction or meniscectomy was significantly higher in low- and intermediate-volume surgeons and hospitals as compared to high-volume surgeons and hospitals (p < or = 0.001). High-volume providers utilize healthcare resources more efficiently. Our findings may help surgeons and hospitals in optimizing resource utilization and cost for routinely-performed ambulatory surgery procedures.
Chodosh, Sanford
2005-06-01
Rational and appropriate antibiotic use for patients with acute exacerbation of chronic bronchitis (AECB) is a major concern, as approximately half of these patients do not have a bacterial infection. Typically, the result of antimicrobial therapy for patients with acute bacterial exacerbation of chronic bronchitis (ABECB) is not eradication of the pathogen but resolution of the acute symptoms. However, the length of time before the next bacterial exacerbation can be another important variable, as the frequency of exacerbations will affect the overall health of the patient and the rate of lung deterioration over time. Clinical trials comparing antimicrobial therapies commonly measure resolution of symptoms in AECB patients as the primary end point, regardless of whether the exacerbation is documented as bacterial in nature. Ideally, the scientific approach to assessing the efficacy of antibiotic therapy for ABECB should include a measurement of acute bacterial eradication rates in patients with documented bronchial bacterial infection followed by measurement of the infection-free interval (IFI), ie, the time to the next ABECB. The use of these variables can provide a standard for comparing various antimicrobial therapies. As we learn more about how antibiotics can affect the IFI, treatment decisions should be adapted to ensure optimal management of ABECB for the long-term.
Deblurring for spatial and temporal varying motion with optical computing
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Xue, Dongfeng; Hui, Zhao
2016-05-01
A way to estimate and remove spatially and temporally varying motion blur is proposed, which is based on an optical computing system. The translation and rotation motion can be independently estimated from the joint transform correlator (JTC) system without iterative optimization. The inspiration comes from the fact that the JTC system is immune to rotation motion in a Cartesian coordinate system. The work scheme of the JTC system is designed to keep switching between the Cartesian coordinate system and polar coordinate system in different time intervals with the ping-pang handover. In the ping interval, the JTC system works in the Cartesian coordinate system to obtain a translation motion vector with optical computing speed. In the pang interval, the JTC system works in the polar coordinate system. The rotation motion is transformed to the translation motion through coordinate transformation. Then the rotation motion vector can also be obtained from JTC instantaneously. To deal with continuous spatially variant motion blur, submotion vectors based on the projective motion path blur model are proposed. The submotion vectors model is more effective and accurate at modeling spatially variant motion blur than conventional methods. The simulation and real experiment results demonstrate its overall effectiveness.
Nilsson, Birgitta Blakstad; Westheim, Arne; Risberg, May Arna; Arnesen, Harald; Seljeflot, Ingebjørg
2010-08-01
Exercise training might improve cardiac function as well as functional capacity in patients with chronic heart failure (CHF). N-terminal pro-B-type natriuretic peptide (NT pro-BNP), is associated with the severity of the disease, and has been reported to be an independent predictor of outcome in CHF. We evaluated the effect of a four months group-based aerobic interval training program on circulating levels of NT pro-BNP in patients with CHF. We have previously reported improved functional capacity in 80 patients after exercise in this exercise program. Seventy-eight patients with stable CHF (21% women; 70+/-8 years; left ventricular ejection fraction 30+/-8.6%) on optimal medical treatment were randomized either to interval training (n=39), or to a control group (n=39). Circulating levels of NT pro-BNP, a six minute walk test (6MWT) and cycle ergometer test were evaluated at baseline, post exercise, and further after 12 months. There were no significant differences in NT pro-BNP levels from baseline to either post exercise or long-term follow-up between or within the groups. Inverse correlations were observed between NT pro-BNP and 6MWT (r=-0.24, p=0.035) and cycle exercise time (r=-0.48, p<0.001) at baseline. But no significant correlations were observed between change in NT pro-BNP and change in functional capacity (6MWT; r=0.12, p=0.33, cycle exercise time; r=0.04, p=0.72). No significant changes in NT pro-BNP levels were observed after interval training, despite significant improvement of functional capacity.
Provocative issues in heart disease prevention.
Juneau, Martin; Hayami, Douglas; Gayda, Mathieu; Lacroix, Sébastien; Nigam, Anil
2014-12-01
In this article, new areas of cardiovascular (CV) prevention and rehabilitation research are discussed: high-intensity interval training (HIIT) and new concepts in nutrition. HIIT consists of brief periods of high-intensity exercise interspersed by periods of low-intensity exercise or rest. The optimal mode according our work (15-second exercise intervals at peak power with passive recovery intervals of the same duration) is associated with longer total exercise time, similar time spent near peak oxygen uptake (VO2 peak) VO2 peak, and lesser perceived exertion relative to other protocols that use longer intervals and active recovery periods. Evidence also suggests that compared with moderate-intensity continuous exercise training, HIIT has superior effects on cardiorespiratory function and on the attenuation of multiple cardiac and peripheral abnormalities. With respect to nutrition, a growing body of evidence suggests that the gut microbiota is influenced by lifestyle choices and might play a pivotal role in modulating CV disease development. For example, recent evidence linking processed (but not unprocessed) meats to increased CV risk pointed to the gut microbial metabolite trimethylamine N-oxide as a potential culprit. In addition, altered gut microbiota could also mediate the proinflammatory and cardiometabolic abnormalities associated with excess added free sugar consumption, and in particular high-fructose corn syrup. Substantially more research is required, however, to fully understand how and which alterations in gut flora can prevent or lead to CV disease and other chronic illnesses. We conclude with thoughts about the appropriate role for HIIT in CV training and future research in the role of gut flora-directed interventions in CV prevention. Copyright © 2014 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Dikpati, Mausumi; Anderson, Jeffrey L.; Mitra, Dhrubaditya
2016-09-01
We implement an Ensemble Kalman Filter procedure using the Data Assimilation Research Testbed for assimilating “synthetic” meridional flow-speed data in a Babcock-Leighton-type flux-transport solar dynamo model. By performing several “observing system simulation experiments,” we reconstruct time variation in meridional flow speed and analyze sensitivity and robustness of reconstruction. Using 192 ensemble members including 10 observations, each with 4% error, we find that flow speed is reconstructed best if observations of near-surface poloidal fields from low latitudes and tachocline toroidal fields from midlatitudes are assimilated. If observations include a mixture of poloidal and toroidal fields from different latitude locations, reconstruction is reasonably good for ≤slant 40 % error in low-latitude data, even if observational error in polar region data becomes 200%, but deteriorates when observational error increases in low- and midlatitude data. Solar polar region observations are known to contain larger errors than those in low latitudes; our forward operator (a flux-transport dynamo model here) can sustain larger errors in polar region data, but is more sensitive to errors in low-latitude data. An optimal reconstruction is obtained if an assimilation interval of 15 days is used; 10- and 20-day assimilation intervals also give reasonably good results. Assimilation intervals \\lt 5 days do not produce faithful reconstructions of flow speed, because the system requires a minimum time to develop dynamics to respond to flow variations. Reconstruction also deteriorates if an assimilation interval \\gt 45 days is used, because the system’s inherent memory interferes with its short-term dynamics during a substantially long run without updating.
Knowledge-based nonuniform sampling in multidimensional NMR.
Schuyler, Adam D; Maciejewski, Mark W; Arthanari, Haribabu; Hoch, Jeffrey C
2011-07-01
The full resolution afforded by high-field magnets is rarely realized in the indirect dimensions of multidimensional NMR experiments because of the time cost of uniformly sampling to long evolution times. Emerging methods utilizing nonuniform sampling (NUS) enable high resolution along indirect dimensions by sampling long evolution times without sampling at every multiple of the Nyquist sampling interval. While the earliest NUS approaches matched the decay of sampling density to the decay of the signal envelope, recent approaches based on coupled evolution times attempt to optimize sampling by choosing projection angles that increase the likelihood of resolving closely-spaced resonances. These approaches employ knowledge about chemical shifts to predict optimal projection angles, whereas prior applications of tailored sampling employed only knowledge of the decay rate. In this work we adapt the matched filter approach as a general strategy for knowledge-based nonuniform sampling that can exploit prior knowledge about chemical shifts and is not restricted to sampling projections. Based on several measures of performance, we find that exponentially weighted random sampling (envelope matched sampling) performs better than shift-based sampling (beat matched sampling). While shift-based sampling can yield small advantages in sensitivity, the gains are generally outweighed by diminished robustness. Our observation that more robust sampling schemes are only slightly less sensitive than schemes highly optimized using prior knowledge about chemical shifts has broad implications for any multidimensional NMR study employing NUS. The results derived from simulated data are demonstrated with a sample application to PfPMT, the phosphoethanolamine methyltransferase of the human malaria parasite Plasmodium falciparum.
High-intensity interval training: Modulating interval duration in overweight/obese men.
Smith-Ryan, Abbie E; Melvin, Malia N; Wingfield, Hailee L
2015-05-01
High-intensity interval training (HIIT) is a time-efficient strategy shown to induce various cardiovascular and metabolic adaptations. Little is known about the optimal tolerable combination of intensity and volume necessary for adaptations, especially in clinical populations. In a randomized controlled pilot design, we evaluated the effects of two types of interval training protocols, varying in intensity and interval duration, on clinical outcomes in overweight/obese men. Twenty-five men [body mass index (BMI) > 25 kg · m(2)] completed baseline body composition measures: fat mass (FM), lean mass (LM) and percent body fat (%BF) and fasting blood glucose, lipids and insulin (IN). A graded exercise cycling test was completed for peak oxygen consumption (VO2peak) and power output (PO). Participants were randomly assigned to high-intensity short interval (1MIN-HIIT), high-intensity interval (2MIN-HIIT) or control groups. 1MIN-HIIT and 2MIN-HIIT completed 3 weeks of cycling interval training, 3 days/week, consisting of either 10 × 1 min bouts at 90% PO with 1 min rests (1MIN-HIIT) or 5 × 2 min bouts with 1 min rests at undulating intensities (80%-100%) (2MIN-HIIT). There were no significant training effects on FM (Δ1.06 ± 1.25 kg) or %BF (Δ1.13% ± 1.88%), compared to CON. Increases in LM were not significant but increased by 1.7 kg and 2.1 kg for 1MIN and 2MIN-HIIT groups, respectively. Increases in VO2peak were also not significant for 1MIN (3.4 ml·kg(-1) · min(-1)) or 2MIN groups (2.7 ml · kg(-1) · min(-1)). IN sensitivity (HOMA-IR) improved for both training groups (Δ-2.78 ± 3.48 units; p < 0.05) compared to CON. HIIT may be an effective short-term strategy to improve cardiorespiratory fitness and IN sensitivity in overweight males.
High-intensity interval training: Modulating interval duration in overweight/obese men
Smith-Ryan, Abbie E.; Melvin, Malia N.; Wingfield, Hailee L.
2015-01-01
Introduction High-intensity interval training (HIIT) is a time-efficient strategy shown to induce various cardiovascular and metabolic adaptations. Little is known about the optimal tolerable combination of intensity and volume necessary for adaptations, especially in clinical populations. Objectives In a randomized controlled pilot design, we evaluated the effects of two types of interval training protocols, varying in intensity and interval duration, on clinical outcomes in overweight/obese men. Methods Twenty-five men [body mass index (BMI) > 25 kg·m2] completed baseline body composition measures: fat mass (FM), lean mass (LM) and percent body fat (%BF) and fasting blood glucose, lipids and insulin (IN). A graded exercise cycling test was completed for peak oxygen consumption (VO2peak) and power output (PO). Participants were randomly assigned to high-intensity short interval (1MIN-HIIT), high-intensity interval (2MIN-HIIT) or control groups. 1MIN-HIIT and 2MIN-HIIT completed 3 weeks of cycling interval training, 3 days/week, consisting of either 10 × 1 min bouts at 90% PO with 1 min rests (1MIN-HIIT) or 5 × 2 min bouts with 1 min rests at undulating intensities (80%–100%) (2MIN-HIIT). Results There were no significant training effects on FM (Δ1.06 ± 1.25 kg) or %BF (Δ1.13% ± 1.88%), compared to CON. Increases in LM were not significant but increased by 1.7 kg and 2.1 kg for 1MIN and 2MIN-HIIT groups, respectively. Increases in VO2peak were also not significant for 1MIN (3.4 ml·kg−1·min−1) or 2MIN groups (2.7 ml·kg−1·min−1). IN sensitivity (HOMA-IR) improved for both training groups (Δ −2.78 ± 3.48 units; p < 0.05) compared to CON. Conclusion HIIT may be an effective short-term strategy to improve cardiorespiratory fitness and IN sensitivity in overweight males. PMID:25913937
Average variograms to guide soil sampling
NASA Astrophysics Data System (ADS)
Kerry, R.; Oliver, M. A.
2004-10-01
To manage land in a site-specific way for agriculture requires detailed maps of the variation in the soil properties of interest. To predict accurately for mapping, the interval at which the soil is sampled should relate to the scale of spatial variation. A variogram can be used to guide sampling in two ways. A sampling interval of less than half the range of spatial dependence can be used, or the variogram can be used with the kriging equations to determine an optimal sampling interval to achieve a given tolerable error. A variogram might not be available for the site, but if the variograms of several soil properties were available on a similar parent material and or particular topographic positions an average variogram could be calculated from these. Averages of the variogram ranges and standardized average variograms from four different parent materials in southern England were used to suggest suitable sampling intervals for future surveys in similar pedological settings based on half the variogram range. The standardized average variograms were also used to determine optimal sampling intervals using the kriging equations. Similar sampling intervals were suggested by each method and the maps of predictions based on data at different grid spacings were evaluated for the different parent materials. Variograms of loss on ignition (LOI) taken from the literature for other sites in southern England with similar parent materials had ranges close to the average for a given parent material showing the possible wider application of such averages to guide sampling.
Maksiutov, R A; Shchelkunov, S N
2011-01-01
Efficacy of candidate DNA-vaccines based on the variola virus natural gene A30L and artificial gene A30Lopt with modified codon usage, optimized for expression in mammalian cells, was tested. The groups of mice were intracutaneously immunized three times with three-week intervals with candidate DNA-vaccines: pcDNA_A30L or pcDNA_A30Lopt, and in three weeks after the last immunization all mice in the groups were intraperitoneally infected by the ectromelia virus K1 strain in 10 LD50 dose for the estimation of protection. It was shown that the DNA-vaccines based on natural gene A30L and codon-optimized gene A30Lopt elicited virus, thereby neutralizing the antibody response and protected mice from lethal intraperitoneal challenge with the ectromelia virus with lack of statistically significant difference.
Optimal Strategies for Probing Terrestrial Exoplanet Atmospheres with JWST
NASA Astrophysics Data System (ADS)
Batalha, Natasha E.; Lewis, Nikole K.; Line, Michael
2018-01-01
It is imperative that the exoplanet community determines the feasibility and the resources needed to yield high fidelity atmospheric compositions from terrestrial exoplanets. In particular, LHS 1140b and the TRAPPIST-1 system, already slated for observations by JWST’s Guaranteed Time Observers, will be the first two terrestrial planets observed by JWST. I will discuss optimal observing strategies for observing these two systems, focusing on the NIRSpec Prism (1-5μm) and the combination of NIRISS SOSS (1-2.7μm) and NIRSpec G395H (3-5μm). I will also introduce currently unsupported JWST readmodes that have the potential to greatly increase the precision on our atmospheric spectra. Lastly, I will use information content theory to compute the expected confidence interval on the retrieved abundances of key molecular species and temperature profiles as a function of JWST observing cycles.
Physics of cardiac imaging with multiple-row detector CT.
Mahesh, Mahadevappa; Cody, Dianna D
2007-01-01
Cardiac imaging with multiple-row detector computed tomography (CT) has become possible due to rapid advances in CT technologies. Images with high temporal and spatial resolution can be obtained with multiple-row detector CT scanners; however, the radiation dose associated with cardiac imaging is high. Understanding the physics of cardiac imaging with multiple-row detector CT scanners allows optimization of cardiac CT protocols in terms of image quality and radiation dose. Knowledge of the trade-offs between various scan parameters that affect image quality--such as temporal resolution, spatial resolution, and pitch--is the key to optimized cardiac CT protocols, which can minimize the radiation risks associated with these studies. Factors affecting temporal resolution include gantry rotation time, acquisition mode, and reconstruction method; factors affecting spatial resolution include detector size and reconstruction interval. Cardiac CT has the potential to become a reliable tool for noninvasive diagnosis and prevention of cardiac and coronary artery disease. (c) RSNA, 2007.
Dynamic remapping of parallel computations with varying resource demands
NASA Technical Reports Server (NTRS)
Nicol, D. M.; Saltz, J. H.
1986-01-01
A large class of computational problems is characterized by frequent synchronization, and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics lead to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm, and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggests that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
Effects of age and recovery duration on peak power output during repeated cycling sprints.
Ratel, S; Bedu, M; Hennegrave, A; Doré, E; Duché, P
2002-08-01
The aim of the present study was to investigate the effects of age and recovery duration on the time course of cycling peak power and blood lactate concentration ([La]) during repeated bouts of short-term high-intensity exercise. Eleven prepubescent boys (9.6 +/- 0.7 yr), nine pubescent boys (15.0 +/- 0.7 yr) and ten men (20.4 +/- 0.8 yr) performed ten consecutive 10 s cycling sprints separated by either 30 s (R30), 1 min (R1), or 5 min (R5) passive recovery intervals against a friction load corresponding to 50 % of their optimal force (50 % Ffopt). Peak power produced at 50 % Ffopt (PP50) was calculated at each sprint including the flywheel inertia of the bicycle. Arterialized capillary blood samples were collected at rest and during the sprint exercises to measure the time course of [La]. In the prepubescent boys, whatever recovery intervals, PP50 remained unchanged during the ten 10 s sprint exercises. In the pubescent boys, PP50 decreased significantly by 18.5 % (p < 0.001) with R30 and by 15.3 % (p < 0.01) with R1 from the first to the tenth sprint but remained unchanged with R5. In the men, PP50 decreased respectively by 28.5 % (p < 0.001) and 11.3 % (p < 0.01) with R30 and R1 and slightly diminished with R5. For each recovery interval, the increase in blood [La] over the ten sprints was significantly lower in the prepubescent boys compared with the pubescent boys and the men. To conclude, the prepubescent boys sustained their PP50 during the ten 10 s sprint exercises with only 30 s recovery intervals. In contrast, the pubescent boys and the men needed 5 min recovery intervals. It was suggested that the faster recovery of PP50 in the prepubescent boys was due to their lower muscle glycolytic activity and their higher muscle oxidative capacity allowing a faster resynthesis in phosphocreatine.
NASA Technical Reports Server (NTRS)
Gilyard, Glenn B. (Inventor)
1999-01-01
Practical application of real-time (or near real-time) Adaptive Performance Optimization (APO) is provided for a transport aircraft in steady climb, cruise, turn descent or other flight conditions based on measurements and calculations of incremental drag from a forced response maneuver of one or more redundant control effectors defined as those in excess of the minimum set of control effectors required to maintain the steady flight condition in progress. The method comprises the steps of applying excitation in a raised-cosine form over an interval of from 100 to 500 sec. at the rate of 1 to 10 sets/sec of excitation, and data for analysis is gathered in sets of measurements made during the excitation to calculate lift and drag coefficients C.sub.L and C.sub.D from two equations, one for each coefficient. A third equation is an expansion of C.sub.D as a function of parasitic drag, induced drag, Mach and altitude drag effects, and control effector drag, and assumes a quadratic variation of drag with positions .delta..sub.i of redundant control effectors i=1 to n. The third equation is then solved for .delta..sub.iopt the optimal position of redundant control effector i, which is then used to set the control effector i for optimum performance during the remainder of said steady flight or until monitored flight conditions change by some predetermined amount as determined automatically or a predetermined minimum flight time has elapsed.
NASA Astrophysics Data System (ADS)
Gordon, Devin A.; DeNoyer, Lin; Meyer, Corey W.; Sweet, Noah W.; Burns, David M.; Bruckman, Laura S.; French, Roger H.
2017-08-01
Poly(ethylene-terephthalate) (PET) film is widely used in photovoltaic module backsheets for its dielectric break- down strength, and in applications requiring high optical clarity for its high transmission in the visible region. However, PET degrades and loses optical clarity under exposure to ultraviolet (UV) irradiance, heat, and moisture. Stabilizers are often included in PET formulation to increase its longevity; however, even these are subject to degradation and further reduce optical clarity. To study the weathering induced changes in the optical properties in PET films, samples of a UV-stabilized grade of PET were exposed to heat, moisture, and UV irradiance as prescribed by ASTM-G154 Cycle 4 for 168 hour time intervals. UV-Vis reflection and transmission spectra were collected via Multi-Angle, Polarization-Dependent, Reflection, Transmission, and Scattering (MaPd:RTS) spectroscopy after each exposure interval. The resulting spectra were used to calculate the complex index of refraction throughout the UV-Vis spectral region via an iterative optimization process based upon the Fresnel equations. The index of refraction and extinction coefficient were found to vary throughout the UV-Vis region with time under exposure. The spectra were also used to investigate changes in light scattering behavior with increasing exposure time. The intensity of scattered light was found to increase at higher angles with time under exposure.
Perfect Detection of Spikes in the Linear Sub-threshold Dynamics of Point Neurons
Krishnan, Jeyashree; Porta Mana, PierGianLuca; Helias, Moritz; Diesmann, Markus; Di Napoli, Edoardo
2018-01-01
Spiking neuronal networks are usually simulated with one of three main schemes: the classical time-driven and event-driven schemes, and the more recent hybrid scheme. All three schemes evolve the state of a neuron through a series of checkpoints: equally spaced in the first scheme and determined neuron-wise by spike events in the latter two. The time-driven and the hybrid scheme determine whether the membrane potential of a neuron crosses a threshold at the end of the time interval between consecutive checkpoints. Threshold crossing can, however, occur within the interval even if this test is negative. Spikes can therefore be missed. The present work offers an alternative geometric point of view on neuronal dynamics, and derives, implements, and benchmarks a method for perfect retrospective spike detection. This method can be applied to neuron models with affine or linear subthreshold dynamics. The idea behind the method is to propagate the threshold with a time-inverted dynamics, testing whether the threshold crosses the neuron state to be evolved, rather than vice versa. Algebraically this translates into a set of inequalities necessary and sufficient for threshold crossing. This test is slower than the imperfect one, but can be optimized in several ways. Comparison confirms earlier results that the imperfect tests rarely miss spikes (less than a fraction 1/108 of missed spikes) in biologically relevant settings. PMID:29379430
Modelling and multi-parametric control for delivery of anaesthetic agents.
Dua, Pinky; Dua, Vivek; Pistikopoulos, Efstratios N
2010-06-01
This article presents model predictive controllers (MPCs) and multi-parametric model-based controllers for delivery of anaesthetic agents. The MPC can take into account constraints on drug delivery rates and state of the patient but requires solving an optimization problem at regular time intervals. The multi-parametric controller has all the advantages of the MPC and does not require repetitive solution of optimization problem for its implementation. This is achieved by obtaining the optimal drug delivery rates as a set of explicit functions of the state of the patient. The derivation of the controllers relies on using detailed models of the system. A compartmental model for the delivery of three drugs for anaesthesia is developed. The key feature of this model is that mean arterial pressure, cardiac output and unconsciousness of the patient can be simultaneously regulated. This is achieved by using three drugs: dopamine (DP), sodium nitroprusside (SNP) and isoflurane. A number of dynamic simulation experiments are carried out for the validation of the model. The model is then used for the design of model predictive and multi-parametric controllers, and the performance of the controllers is analyzed.
Hasanvand, Hamed; Mozafari, Babak; Arvan, Mohammad R; Amraee, Turaj
2015-11-01
This paper addresses the application of a static Var compensator (SVC) to improve the damping of interarea oscillations. Optimal location and size of SVC are defined using bifurcation and modal analysis to satisfy its primary application. Furthermore, the best-input signal for damping controller is selected using Hankel singular values and right half plane-zeros. The proposed approach is aimed to design a robust PI controller based on interval plants and Kharitonov's theorem. The objective here is to determine the stability region to attain robust stability, the desired phase margin, gain margin, and bandwidth. The intersection of the resulting stability regions yields the set of kp-ki parameters. In addition, optimal multiobjective design of PI controller using particle swarm optimization (PSO) algorithm is presented. The effectiveness of the suggested controllers in damping of local and interarea oscillation modes of a multimachine power system, over a wide range of loading conditions and system configurations, is confirmed through eigenvalue analysis and nonlinear time domain simulation. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
In-Flight Pitot-Static Calibration
NASA Technical Reports Server (NTRS)
Foster, John V. (Inventor); Cunningham, Kevin (Inventor)
2016-01-01
A GPS-based pitot-static calibration system uses global output-error optimization. High data rate measurements of static and total pressure, ambient air conditions, and GPS-based ground speed measurements are used to compute pitot-static pressure errors over a range of airspeed. System identification methods rapidly compute optimal pressure error models with defined confidence intervals.
Effect of aspirin in pregnant women is dependent on increase in bleeding time.
Dumont, A; Flahault, A; Beaufils, M; Verdy, E; Uzan, S
1999-01-01
Randomized trials with low-dose aspirin to prevent preeclampsia and intrauterine growth restriction have yielded conflicting results. In particular, 3 recent large trials were not conclusive. Study designs, however, varied greatly regarding selection of patients, dose of aspirin, and timing of treatment, all of which can be determinants of the results. Retrospectively analyzing the conditions associated with failure or success of aspirin may therefore help to draw up new hypotheses and prepare for more specific randomized trials. We studied a historical cohort of 187 pregnant women who were considered at high risk for preeclampsia, intrauterine growth restriction, or both and were therefore treated with low-dose aspirin between 1989 and 1994. Various epidemiologic, clinical, and laboratory data were extracted from the files. Univariate and multivariate analyses were performed to search for independent parameters associated with the outcome of pregnancy. Age, parity, weight, height, and race had no influence on the outcome. The success rate was higher when treatment was given because of previous poor pregnancy outcomes than when it was given for other indications, and the patients with successful therapy had started aspirin earlier than had those with therapy failure (17.7 vs 20.0 weeks' gestation, P =.04). After multivariate analysis an increase in Ivy bleeding time after 10 days of treatment by >2 minutes was an independent predictor of a better outcome (odds ratio 0.22, 95% confidence interval 0.09-0.51). Borderline statistical significance was observed for aspirin initiation before 17 weeks' gestation (odds ratio 0.44, 95% confidence interval 0.18-1. 08). Abnormal uterine artery Doppler velocimetric scan at 20-24 weeks' gestation (odds ratio 3.31, 95% confidence interval 1.41-7.7), abnormal umbilical artery Doppler velocimetric scan after 26 weeks' gestation (odds ratio 37.6, 95% confidence interval 3.96-357), and use of antihypertensive therapy (odds ratio 6.06, 95% confidence interval 2.45-15) were independent predictors of poor outcome. Efficacy of aspirin seems optimal when bleeding time increases >/=2 minutes with treatment, indicating a more powerful antiplatelet effect. This suggests that the dose of aspirin should be adjusted according to a biologic marker of the antiplatelet effect. A prospective trial is warranted to test this hypothesis.
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs are only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to the solutions of the design problem that are least sensitive to variations in the input random variables.
Tosun, Tuğçe; Gür, Ezgi; Balcı, Fuat
2016-01-01
Animals can shape their timed behaviors based on experienced probabilistic relations in a nearly optimal fashion. On the other hand, it is not clear if they adopt these timed decisions by making computations based on previously learnt task parameters (time intervals, locations, and probabilities) or if they gradually develop their decisions based on trial and error. To address this question, we tested mice in the timed-switching task, which required them to anticipate when (after a short or long delay) and at which of the two delay locations a reward would be presented. The probability of short trials differed between test groups in two experiments. Critically, we first trained mice on relevant task parameters by signaling the active trial with a discriminative stimulus and delivered the corresponding reward after the associated delay without any response requirement (without inducing switching behavior). During the test phase, both options were presented simultaneously to characterize the emergence and temporal characteristics of the switching behavior. Mice exhibited timed-switching behavior starting from the first few test trials, and their performance remained stable throughout testing in the majority of the conditions. Furthermore, as the probability of the short trial increased, mice waited longer before switching from the short to long location (experiment 1). These behavioral adjustments were in directions predicted by reward maximization. These results suggest that rather than gradually adjusting their time-dependent choice behavior, mice abruptly adopted temporal decision strategies by directly integrating their previous knowledge of task parameters into their timed behavior, supporting the model-based representational account of temporal risk assessment. PMID:26733674
Moore, Myles N; Schultz, Martin G; Nelson, Mark R; Black, J Andrew; Dwyer, Nathan B; Hoban, Ella; Jose, Matthew D; Kosmala, Wojciech; Przewlocka-Kosmala, Monika; Zachwyc, Jowita; Otahal, Petr; Picone, Dean S; Roberts-Thomson, Philip; Veloudi, Panagiota; Sharman, James E
2018-02-09
Automated office blood pressure (AOBP) involving repeated, unobserved blood pressure (BP) readings during one clinic visit is recommended for in-office diagnosis and assessment of hypertension. However, the optimal AOBP protocol to determine BP control in the least amount of time with the fewest BP readings is yet to be determined and was the aim of this study. One hundred and eighty-nine patients (mean age 62.8 ± 12.1 years; 50.3% female) with treated hypertension referred to specialist clinics at 2 sites underwent AOBP in a quiet room alone. Eight BP measurements were taken starting immediately after sitting and then at 2-minute intervals (15 minutes total). The optimal AOBP protocol was defined by the smallest mean difference and highest intraclass correlation coefficient (ICC) compared with daytime ambulatory BP (ABP). The same BP device (Mobil-o-graph, IEM) was used for both AOBP and daytime ABP. Average 15-minute AOBP and daytime ABP were 134 ± 22/82 ± 13 and 137 ± 17/83 ± 11 mm Hg, respectively. The optimal AOBP protocol was derived within a total duration of 6 minutes from the average of 2 measures started after 2 and 4 minutes of seated rest (systolic BP: mean difference (95% confidence interval) 0.004(-2.21, 2.21) mm Hg, P = 1.0; ICC = 0.81; diastolic BP: mean difference 0.37(-0.90, 1.63) mm Hg, P = 0.57; ICC = 0.86). AOBP measures taken after 8 minutes tended to underestimate daytime ABP (whether as a single BP or the average of more than 1 BP reading). Only 2 AOBP readings taken over 6 minutes (excluding an initial reading immediately after sitting) may be needed to be comparable with daytime ABP. © American Journal of Hypertension, Ltd 2017. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Carter, Michael J; Ste-Marie, Diane M
2017-03-01
The learning advantages of self-controlled knowledge-of-results (KR) schedules compared to yoked schedules have been linked to the optimization of the informational value of the KR received for the enhancement of one's error-detection capabilities. This suggests that information-processing activities that occur after motor execution, but prior to receiving KR (i.e., the KR-delay interval) may underlie self-controlled KR learning advantages. The present experiment investigated whether self-controlled KR learning benefits would be eliminated if an interpolated activity was performed during the KR-delay interval. Participants practiced a waveform matching task that required two rapid elbow extension-flexion reversals in one of four groups using a factorial combination of choice (self-controlled, yoked) and KR-delay interval (empty, interpolated). The waveform had specific spatial and temporal constraints, and an overall movement time goal. The results indicated that the self-controlled + empty group had superior retention and transfer scores compared to all other groups. Moreover, the self-controlled + interpolated and yoked + interpolated groups did not differ significantly in retention and transfer; thus, the interpolated activity eliminated the typically found learning benefits of self-controlled KR. No significant differences were found between the two yoked groups. We suggest the interpolated activity interfered with information-processing activities specific to self-controlled KR conditions that occur during the KR-delay interval and that these activities are vital for reaping the associated learning benefits. These findings add to the growing evidence that challenge the motivational account of self-controlled KR learning advantages and instead highlights informational factors associated with the KR-delay interval as an important variable for motor learning under self-controlled KR schedules.
Measurement of cardiac output from dynamic pulmonary circulation time CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yee, Seonghwan, E-mail: Seonghwan.Yee@Beaumont.edu; Scalzetti, Ernest M.
Purpose: To introduce a method of estimating cardiac output from the dynamic pulmonary circulation time CT that is primarily used to determine the optimal time window of CT pulmonary angiography (CTPA). Methods: Dynamic pulmonary circulation time CT series, acquired for eight patients, were retrospectively analyzed. The dynamic CT series was acquired, prior to the main CTPA, in cine mode (1 frame/s) for a single slice at the level of the main pulmonary artery covering the cross sections of ascending aorta (AA) and descending aorta (DA) during the infusion of iodinated contrast. The time series of contrast changes obtained for DA,more » which is the downstream of AA, was assumed to be related to the time series for AA by the convolution with a delay function. The delay time constant in the delay function, representing the average time interval between the cross sections of AA and DA, was determined by least square error fitting between the convoluted AA time series and the DA time series. The cardiac output was then calculated by dividing the volume of the aortic arch between the cross sections of AA and DA (estimated from the single slice CT image) by the average time interval, and multiplying the result by a correction factor. Results: The mean cardiac output value for the six patients was 5.11 (l/min) (with a standard deviation of 1.57 l/min), which is in good agreement with the literature value; the data for the other two patients were too noisy for processing. Conclusions: The dynamic single-slice pulmonary circulation time CT series also can be used to estimate cardiac output.« less
Lim, Meng-Hui; Teoh, Andrew Beng Jin; Toh, Kar-Ann
2013-06-01
Biometric discretization is a key component in biometric cryptographic key generation. It converts an extracted biometric feature vector into a binary string via typical steps such as segmentation of each feature element into a number of labeled intervals, mapping of each interval-captured feature element onto a binary space, and concatenation of the resulted binary output of all feature elements into a binary string. Currently, the detection rate optimized bit allocation (DROBA) scheme is one of the most effective biometric discretization schemes in terms of its capability to assign binary bits dynamically to user-specific features with respect to their discriminability. However, we learn that DROBA suffers from potential discriminative feature misdetection and underdiscretization in its bit allocation process. This paper highlights such drawbacks and improves upon DROBA based on a novel two-stage algorithm: 1) a dynamic search method to efficiently recapture such misdetected features and to optimize the bit allocation of underdiscretized features and 2) a genuine interval concealment technique to alleviate crucial information leakage resulted from the dynamic search. Improvements in classification accuracy on two popular face data sets vindicate the feasibility of our approach compared with DROBA.
Owen, Lauren; Scholey, Andrew B; Finnegan, Yvonne; Hu, Henglong; Sünram-Lea, Sandra I
2012-04-01
Previous research has identified a number of factors that appear to moderate the behavioural response to glucose administration. These include physiological state, dose, types of cognitive tasks used and level of cognitive demand. Another potential moderating factor is the length of the fasting interval prior to a glucose load. Therefore, we aimed to examine the effect of glucose dose and fasting interval on mood and cognitive function. The current study utilised a double-blind, placebo-controlled, balanced, six period crossover design to examine potential interactions between length of fasting interval (2 versus 12 hours) and optimal dose for cognition enhancement. Results demonstrated that the higher dose (60 g) increased working memory performance following an overnight fast, whereas the lower dose (25 g) enhanced working memory performance following a 2-h fast. The data suggest that optimal glucose dosage may differ under different conditions of depleted blood glucose resources. In addition, glucoregulation was observed to be a moderating factor. However, further research is needed to develop a model of the moderating and mediating factors under which glucose facilitation is best achieved.
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans
2015-02-01
The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm, on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this relative improvement decreases with increasing number of sample points and input parameter dimensions. Since the computational time and efforts for generating the sample designs in the two approaches are identical, the use of midpoint LHS as the initial design in OLHS is thus recommended.
NASA Astrophysics Data System (ADS)
Fu, Z. H.; Zhao, H. J.; Wang, H.; Lu, W. T.; Wang, J.; Guo, H. C.
2017-11-01
Economic restructuring, water resources management, population planning and environmental protection are subjects to inner uncertainties of a compound system with objectives which are competitive alternatives. Optimization model and water quality model are usually used to solve problems in a certain aspect. To overcome the uncertainty and coupling in reginal planning management, an interval fuzzy program combined with water quality model for regional planning and management has been developed to obtain the absolutely ;optimal; solution in this study. The model is a hybrid methodology of interval parameter programming (IPP), fuzzy programing (FP), and a general one-dimensional water quality model. The method extends on the traditional interval parameter fuzzy programming method by integrating water quality model into the optimization framework. Meanwhile, as an abstract concept, water resources carrying capacity has been transformed into specific and calculable index. Besides, unlike many of the past studies about water resource management, population as a significant factor has been considered. The results suggested that the methodology was applicable for reflecting the complexities of the regional planning and management systems within the planning period. The government policy makers could establish effective industrial structure, water resources utilization patterns and population planning, and to better understand the tradeoffs among economic, water resources, population and environmental objectives.
Ouyang, Qin; Chen, Quansheng; Zhao, Jiewen
2016-02-05
The approach presented herein reports the application of near infrared (NIR) spectroscopy, in contrast with human sensory panel, as a tool for estimating Chinese rice wine quality; concretely, to achieve the prediction of the overall sensory scores assigned by the trained sensory panel. Back propagation artificial neural network (BPANN) combined with adaptive boosting (AdaBoost) algorithm, namely BP-AdaBoost, as a novel nonlinear algorithm, was proposed in modeling. First, the optimal spectra intervals were selected by synergy interval partial least square (Si-PLS). Then, BP-AdaBoost model based on the optimal spectra intervals was established, called Si-BP-AdaBoost model. These models were optimized by cross validation, and the performance of each final model was evaluated according to correlation coefficient (Rp) and root mean square error of prediction (RMSEP) in prediction set. Si-BP-AdaBoost showed excellent performance in comparison with other models. The best Si-BP-AdaBoost model was achieved with Rp=0.9180 and RMSEP=2.23 in the prediction set. It was concluded that NIR spectroscopy combined with Si-BP-AdaBoost was an appropriate method for the prediction of the sensory quality in Chinese rice wine. Copyright © 2015 Elsevier B.V. All rights reserved.
Global Positioning System (GPS) Precipitable Water in Forecasting Lightning at Spaceport Canaveral
NASA Technical Reports Server (NTRS)
Kehrer, Kristen C.; Graf, Brian; Roeder, William
2006-01-01
This paper evaluates the use of precipitable water (PW) from Global Positioning System (GPS) in lightning prediction. Additional independent verification of an earlier model is performed. This earlier model used binary logistic regression with the following four predictor variables optimally selected from a candidate list of 23 candidate predictors: the current precipitable water value for a given time of the day, the change in GPS-PW over the past 9 hours, the KIndex, and the electric field mill value. This earlier model was not optimized for any specific forecast interval, but showed promise for 6 hour and 1.5 hour forecasts. Two new models were developed and verified. These new models were optimized for two operationally significant forecast intervals. The first model was optimized for the 0.5 hour lightning advisories issued by the 45th Weather Squadron. An additional 1.5 hours was allowed for sensor dwell, communication, calculation, analysis, and advisory decision by the forecaster. Therefore the 0.5 hour advisory model became a 2 hour forecast model for lightning within the 45th Weather Squadron advisory areas. The second model was optimized for major ground processing operations supported by the 45th Weather Squadron, which can require lightning forecasts with a lead-time of up to 7.5 hours. Using the same 1.5 lag as in the other new model, this became a 9 hour forecast model for lightning within 37 km (20 NM)) of the 45th Weather Squadron advisory areas. The two new models were built using binary logistic regression from a list of 26 candidate predictor variables: the current GPS-PW value, the change of GPS-PW over 0.5 hour increments from 0.5 to 12 hours, and the K-index. The new 2 hour model found the following for predictors to be statistically significant, listed in decreasing order of contribution to the forecast: the 0.5 hour change in GPS-PW, the 7.5 hour change in GPS-PW, the current GPS-PW value, and the KIndex. The new 9 hour forecast model found the following five independent variables to be statistically significant, listed in decreasing order of contribution to the forecast: the current GPSPW value, the 8.5 hour change in GPS-PW, the 3.5 hour change in GPS-PW, the 12 hour change in GPS-PW, and the K-Index. In both models, the GPS-PW parameters had better correlation to the lightning forecast than the K-Index, a widely used thunderstorm index. Possible future improvements to this study are discussed.
Foster, J D; Ewings, P; Falk, S; Cooper, E J; Roach, H; West, N P; Williams-Yesson, B A; Hanna, G B; Francis, N K
2016-10-01
The optimal time of rectal resection after long-course chemoradiotherapy (CRT) remains unclear. A feasibility study was undertaken for a multi-centre randomized controlled trial evaluating the impact of the interval after chemoradiotherapy on the technical complexity of surgery. Patients with rectal cancer were randomized to either a 6- or 12-week interval between CRT and surgery between June 2012 and May 2014 (ISRCTN registration number: 88843062). For blinded technical complexity assessment, the Observational Clinical Human Reliability Analysis technique was used to quantify technical errors enacted within video recordings of operations. Other measured outcomes included resection completeness, specimen quality, radiological down-staging, tumour cell density down-staging and surgeon-reported technical complexity. Thirty-one patients were enrolled: 15 were randomized to 6 and 16-12 weeks across 7 centres. Fewer eligible patients were identified than had been predicted. Of 23 patients who underwent resection, mean 12.3 errors were observed per case at 6 weeks vs. 10.7 at 12 weeks (p = 0.401). Other measured outcomes were similar between groups. The feasibility of measurement of operative performance of rectal cancer surgery as an endpoint was confirmed in this exploratory study. Recruitment of sufficient numbers of patients represented a challenge, and a proportion of patients did not proceed to resection surgery. These results suggest that interval after CRT may not substantially impact upon surgical technical performance.
Lee, I J; Kim, Y I; Kim, K W; Kim, D H; Ryoo, I; Lee, M W; Chung, J W
2012-11-01
This study was designed to evaluate the extent of the radiofrequency ablation zone in relation to the time interval between transcatheter arterial embolisation (TAE) and radiofrequency ablation (RFA) and, ultimately, to determine the optimal strategy of combining these two therapies for hepatocellular carcinoma. 15 rabbits were evenly divided into three groups: Group A was treated with RFA alone; Group B was treated with TAE immediately followed by RFA; and Group C was treated with TAE followed by RFA 5 days later. All animals underwent perfusion CT (PCT) scans immediately after RFA. Serum liver transaminases were measured to evaluate acute liver damage. Animals were euthanised for pathological analysis of ablated tissues 10 days after RFA. Non-parametric analyses were conducted to compare PCT indices, the RFA zone and liver transaminase levels among the three experimental groups. Group B showed a significantly larger ablation zone than the other two groups. Arterial liver perfusion and hepatic perfusion index represented well the perfusion decrease after TAE on PCT. Although Group B showed the most elevated liver transaminase levels at 1 day post RFA, the enzymes decreased to levels that were not different from the other groups at 10 days post-RFA. When combined TAE and RFA therapy is considered, TAE should be followed by RFA as quickly as possible, as it can be performed safely without serious hepatic deterioration, despite the short interval between the two procedures.
Kishore, Amit; Vail, Andy; Majid, Arshad; Dawson, Jesse; Lees, Kennedy R; Tyrrell, Pippa J; Smith, Craig J
2014-02-01
Atrial fibrillation (AF) confers a high risk of recurrent stroke, although detection methods and definitions of paroxysmal AF during screening vary. We therefore undertook a systematic review and meta-analysis to determine the frequency of newly detected AF using noninvasive or invasive cardiac monitoring after ischemic stroke or transient ischemic attack. Prospective observational studies or randomized controlled trials of patients with ischemic stroke, transient ischemic attack, or both, who underwent any cardiac monitoring for a minimum of 12 hours, were included after electronic searches of multiple databases. The primary outcome was detection of any new AF during the monitoring period. We prespecified subgroup analysis of selected (prescreened or cryptogenic) versus unselected patients and according to duration of monitoring. A total of 32 studies were analyzed. The overall detection rate of any AF was 11.5% (95% confidence interval, 8.9%-14.3%), although the timing, duration, method of monitoring, and reporting of diagnostic criteria used for paroxysmal AF varied. Detection rates were higher in selected (13.4%; 95% confidence interval, 9.0%-18.4%) than in unselected patients (6.2%; 95% confidence interval, 4.4%-8.3%). There was substantial heterogeneity even within specified subgroups. Detection of AF was highly variable, and the review was limited by small sample sizes and marked heterogeneity. Further studies are required to inform patient selection, optimal timing, methods, and duration of monitoring for detection of AF/paroxysmal AF.
The selection of the optimal baseline in the front-view monocular vision system
NASA Astrophysics Data System (ADS)
Xiong, Bincheng; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen
2018-03-01
In the front-view monocular vision system, the accuracy of solving the depth field is related to the length of the inter-frame baseline and the accuracy of image matching result. In general, a longer length of the baseline can lead to a higher precision of solving the depth field. However, at the same time, the difference between the inter-frame images increases, which increases the difficulty in image matching and the decreases matching accuracy and at last may leads to the failure of solving the depth field. One of the usual practices is to use the tracking and matching method to improve the matching accuracy between images, but this algorithm is easy to cause matching drift between images with large interval, resulting in cumulative error in image matching, and finally the accuracy of solving the depth field is still very low. In this paper, we propose a depth field fusion algorithm based on the optimal length of the baseline. Firstly, we analyze the quantitative relationship between the accuracy of the depth field calculation and the length of the baseline between frames, and find the optimal length of the baseline by doing lots of experiments; secondly, we introduce the inverse depth filtering technique for sparse SLAM, and solve the depth field under the constraint of the optimal length of the baseline. By doing a large number of experiments, the results show that our algorithm can effectively eliminate the mismatch caused by image changes, and can still solve the depth field correctly in the large baseline scene. Our algorithm is superior to the traditional SFM algorithm in time and space complexity. The optimal baseline obtained by a large number of experiments plays a guiding role in the calculation of the depth field in front-view monocular.
A dynamic optimization model of the diel vertical distribution of a pelagic planktivorous fish
NASA Astrophysics Data System (ADS)
Rosland, Rune; Giske, Jarl
A stochastic dynamic optimization model for the diel depth distribution of juveniles and adults of the mesopelagic planktivore Maurolicus muelleri (Gmelin) is developed and used for a winter situation. Observations from Masfjorden, western Norway, reveal differences in vertical distribution, growth and mortality between juveniles and adults in January. Juveniles stay within the upper 100m with high feeding rates, while adults stay within the 100-150m zone with very low feeding rates during the diel cycle. The difference in depth profitability is assumed to be caused by age-dependent processes, and are calculated from a mechanistic model for visual feeding. The environment is described as a set of habitats represented by discrete depth intervals along the vertical axis, differing with respect to light intensity, food abundance, predation risk and temperature. The short time interval (24h) allows fitness to be linearly related to growth (feeding), assuming that growth increases the future reproductive output of the fish. Optimal depth position is calculated from balancing feeding opportunity against mortality risk, where the fitness reward gained by feeding is weighted against the danger of being killed by a predator. A basic run is established, and the model is validated by comparing predictions and observations. The sensitivity for different parameter values is also tested. The modelled vertical distributions and feeding patterns of juvenile and adult fish correspond well with the observations, and the assumption of age differences in mortality-feeding trade-offs seems adequate to explain the different depth profitability of the two age groups. The results indicate a preference for crepuscular feeding activity of the juveniles, and the vertical distribution of zooplankton seems to be the most important environmental factor regulating the adult depth position during the winter months in Masfjorden.
Uchida, Thomas K.; Sherman, Michael A.; Delp, Scott L.
2015-01-01
Impacts are instantaneous, computationally efficient approximations of collisions. Current impact models sacrifice important physical principles to achieve that efficiency, yielding qualitative and quantitative errors when applied to simultaneous impacts in spatial multibody systems. We present a new impact model that produces behaviour similar to that of a detailed compliant contact model, while retaining the efficiency of an instantaneous method. In our model, time and configuration are fixed, but the impact is resolved into distinct compression and expansion phases, themselves comprising sliding and rolling intervals. A constrained optimization problem is solved for each interval to compute incremental impulses while respecting physical laws and principles of contact mechanics. We present the mathematical model, algorithms for its practical implementation, and examples that demonstrate its effectiveness. In collisions involving materials of various stiffnesses, our model can be more than 20 times faster than integrating through the collision using a compliant contact model. This work extends the use of instantaneous impact models to scientific and engineering applications with strict accuracy requirements, where compliant contact models would otherwise be required. An open-source implementation is available in Simbody, a C++ multibody dynamics library widely used in biomechanical and robotic applications. PMID:27547093
NASA Astrophysics Data System (ADS)
Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue
2018-01-01
An efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for the scalar wave modeling is advantageous in terms of the storage saving for the system of linear equations and the flexibility for arbitrary directional sampling intervals. However, using a LU-decomposition-based direct solver to solve its resulting system of linear equations is very costly for both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BI-CGSTAB iterative solver fit for the average-derivative optimal scheme. The choice of preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for the convergence. Furthermore, we find that for the computation with unequal directional sampling interval, the anisotropic smoothing in the multigrid precondition may affect the convergence rate of this iterative solver. Successful numerical applications of this iterative solver for the homogenous and heterogeneous models in 2D and 3D are presented where the significant reduction of computer memory and the improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that the unequal directional sampling interval will weaken the advantage of this multigrid-preconditioned iterative solver in the computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for a reasonable control of directional sampling interval in the discretization.
High resolution data acquisition
Thornton, G.W.; Fuller, K.R.
1993-04-06
A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock, pulse train, and analog circuitry for generating a triangular wave synchronously with the pulse train (as seen in diagram on patent). The triangular wave has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter counts the clock pulse train during the interval to form a gross event interval time. A computer then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
High resolution data acquisition
Thornton, Glenn W.; Fuller, Kenneth R.
1993-01-01
A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
Birth Spacing of Pregnant Women in Nepal: A Community-Based Study.
Karkee, Rajendra; Lee, Andy H
2016-01-01
Optimal birth spacing has health advantages for both mother and child. In developing countries, shorter birth intervals are common and associated with social, cultural, and economic factors, as well as a lack of family planning. This study investigated the first birth interval after marriage and preceding interbirth interval in Nepal. A community-based prospective cohort study was conducted in the Kaski district of Nepal. Information on birth spacing, demographic, and obstetric characteristics was obtained from 701 pregnant women using a structured questionnaire. Logistic regression analyses were performed to ascertain factors associated with short birth spacing. About 39% of primiparous women gave their first child birth within 1 year of marriage and 23% of multiparous women had short preceding interbirth intervals (<24 months). The average birth spacing among the multiparous group was 44.9 (SD 21.8) months. Overall, short birth spacing appeared to be inversely associated with advancing maternal age. For the multiparous group, Janajati and lower caste women, and those whose newborn was female, were more likely to have short birth spacing. The preceding interbirth interval was relatively long in the Kaski district of Nepal and tended to be associated with maternal age, caste, and sex of newborn infant. Optimal birth spacing programs should target Janajati and lower caste women, along with promotion of gender equality in society.
McDevitt, Joseph L; Acosta-Torres, Stefany; Zhang, Ning; Hu, Tianshen; Odu, Ayobami; Wang, Jijia; Xi, Yin; Lamus, Daniel; Miller, David S; Pillai, Anil K
2017-07-01
To estimate the least costly routine exchange frequency for percutaneous nephrostomies (PCNs) placed for malignant urinary obstruction, as measured by annual hospital charges, and to estimate the financial impact of patient compliance. Patients with PCNs placed for malignant urinary obstruction were studied from 2011 to 2013. Exchanges were classified as routine or due to 1 of 3 complication types: mechanical (tube dislodgment), obstruction, or infection. Representative cases were identified, and median representative charges were used as inputs for the model. Accelerated failure time and Markov chain Monte Carlo models were used to estimate distribution of exchange types and annual hospital charges under different routine exchange frequency and compliance scenarios. Long-term PCN management was required in 57 patients, with 87 total exchange encounters. Median representative hospital charges for pyelonephritis and obstruction were 11.8 and 9.3 times greater, respectively, than a routine exchange. The projected proportion of routine exchanges increased and the projected proportion of infection-related exchanges decreased when moving from a 90-day exchange with 50% compliance to a 60-day exchange with 75% compliance, and this was associated with a projected reduction in annual charges. Projected cost reductions resulting from increased compliance were generally greater than reductions resulting from changes in exchange frequency. This simulation model suggests that the optimal routine exchange interval for PCN exchange in patients with malignant urinary obstruction is approximately 60 days and that the degree of reduction in charges likely depends more on patient compliance than exact exchange interval. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.
Emery, Jon D; Walter, Fiona M; Gray, Vicky; Sinclair, Craig; Howting, Denise; Bulsara, Max; Bulsara, Caroline; Webster, Andrew; Auret, Kirsten; Saunders, Christobel; Nowak, Anna; Holman, C D'Arcy
2013-06-01
Previous studies have focused on the treatment received by rural cancer patients and have not examined their diagnostic pathways as reasons for poorer outcomes in rural Australia. To compare and explore symptom appraisal and help-seeking behaviour in patients with breast, lung, prostate or colorectal cancer from rural Western Australia (WA). A mixed-methods study of people recently diagnosed with breast, lung, prostate or colorectal cancer from rural WA. The time from first symptom to diagnosis (i.e. total diagnostic interval, TDI) was calculated from interviews and medical records. Sixty-six participants were recruited (24 breast, 20 colorectal, 14 prostate and 8 lung cancer patients). There was a highly significant difference in time from symptom onset to seeking help between cancers (P = 0.006). Geometric mean symptom appraisal for colorectal cancer was significantly longer than that for breast and lung cancers [geometric mean differences: 2.58 (95% confidence interval, CI: 0.64-4.53), P = 0.01; 3.97 (1.63-6.30), P = 0.001, respectively]. There was a significant overall difference in arithmetic mean TDI (P = 0.046); breast cancer TDI was significantly shorter than colorectal or prostate cancer TDI [mean difference : 266.3 days (95% CI: 45.9-486.8), P = 0.019; 277.0 days, (32.1-521.9), P = 0.027, respectively]. These differences were explained by the nature and personal interpretation of symptoms, perceived as well as real problems of access to health care, optimism, stoicism, machismo, fear, embarrassment and competing demands. Longer symptom appraisal was observed for colorectal cancer. Participants defined core characteristics of rural Australians as optimism, stoicism and machismo. These features, as well as access to health care, contribute to later presentation of cancer.
Stirling, Aaron D; Moran, Neil R; Kelly, Michael E; Ridgway, Paul F; Conlon, Kevin C
2017-10-01
Using revised Atlanta classification defined outcomes, we compare absolute values in C-reactive protein (CRP), with interval changes in CRP, for severity stratification in acute pancreatitis (AP). A retrospective study of all first incidence AP was conducted over a 5-year period. Interval change in CRP values from admission to day 1, 2 and 3 was compared against the absolute values. Receiver-operator characteristic (ROC) curve and likelihood ratios (LRs) were used to compare ability to predict severe and mild disease. 337 cases of first incidence AP were included in our analysis. ROC curve analysis demonstrated the second day as the most useful time for repeat CRP measurement. A CRP interval change >90 mg/dL at 48 h (+LR 2.15, -LR 0.26) was equivalent to an absolute value of >150 mg/dL within 48 h (+LR 2.32, -LR 0.25). The optimal cut-off for absolute CRP based on new, more stringent definition of severity was >190 mg/dL (+LR 2.72, -LR 0.24). Interval change in CRP is a comparable measure to absolute CRP in the prognostication of AP severity. This study suggests a rise of >90 mg/dL from admission or an absolute value of >190 mg/dL at 48 h predicts severe disease with the greatest accuracy. Copyright © 2017 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kurniati, Devi; Hoyyi, Abdul; Widiharih, Tatik
2018-05-01
Time series data is a series of data taken or measured based on observations at the same time interval. Time series data analysis is used to perform data analysis considering the effect of time. The purpose of time series analysis is to know the characteristics and patterns of a data and predict a data value in some future period based on data in the past. One of the forecasting methods used for time series data is the state space model. This study discusses the modeling and forecasting of electric energy consumption using the state space model for univariate data. The modeling stage is began with optimal Autoregressive (AR) order selection, determination of state vector through canonical correlation analysis, estimation of parameter, and forecasting. The result of this research shows that modeling of electric energy consumption using state space model of order 4 with Mean Absolute Percentage Error (MAPE) value 3.655%, so the model is very good forecasting category.
Dominion. A game exploring information exploitation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hobbs, Jacob Aaron
FlipIt is a game theoretic framework published in 2012[1] to investigate optimal strategies for managing security resources in response to Advanced Persistent Threats. It is a two-player game wherein a resource is controlled by exactly one player at any time. A player may move at any time to capture the resource, incurring a move cost, and is informed of the last time their opponent has moved only upon completing their move. Thus, moves may be wasted and takeover is considered \\stealthy", with regard to the other player. The game is played for an unlimited period of time, and the goalmore » of each player is to maximize the amount of time they are in control of the resource minus their total move cost, normalized by the current length of play. Marten Van Dijk and others[1] provided an analysis of various player strategies and proved optimal results for certain subclasses of players. We extend their work by providing a reformulation of the original game, wherein the optimal player strategies can be solved exactly, rather than only for certain subclasses. We call this reformulation Dominion, and place it within a broader framework of stealthy move games. We de ne Dominion to occur over a nite time scale (from 0 to 1), and give each player a certain number of moves to make within the time frame. Their expected score in this new scenario is the expected amount of time they have control, and the point of the game is to dominate as much of the unit interval as possible. We show how Dominion can be treated as a two player, simultaneous, constant sum, unit square game, where the gradient of the bene t curves for the players are linear and possibly discontinuous. We derive Nash equilibria for a basic version of Dominion, and then further explore the roles of information asymmetry in its variants. We extend these results to FlipIt and other cyber security applications.« less
Measuring the EMS patient access time interval and the impact of responding to high-rise buildings.
Morrison, Laurie J; Angelini, Mark P; Vermeulen, Marian J; Schwartz, Brian
2005-01-01
To measure the patient access time interval and characterize its contribution to the total emergency medical services (EMS) response time interval; to compare the patient access time intervals for patients located three or more floors above ground with those less than three floors above or below ground, and specifically in the apartment subgroup; and to identify barriers that significantly impede EMS access to patients in high-rise apartments. An observational study of all patients treated by an emergency medical technician paramedics (EMT-P) crew was conducted using a trained independent observer to collect time intervals and identify potential barriers to access. Of 118 observed calls, 25 (21%) originated from patients three or more floors above ground. The overall median and 90th percentile (95% confidence interval) patient access time intervals were 1.61 (1.27, 1.91) and 3.47 (3.08, 4.05) minutes, respectively. The median interval was 2.73 (2.22, 3.03) minutes among calls from patients located three or more stories above ground compared with 1.25 (1.07, 1.55) minutes among those at lower levels. The patient access time interval represented 23.5% of the total EMS response time interval among calls originating less than three floors above or below ground and 32.2% of those located three or more stories above ground. The most frequently encountered barriers to access included security code entry requirements, lack of directional signs, and inability to fit the stretcher into the elevator. The patient access time interval is significantly long and represents a substantial component of the total EMS response time interval, especially among ambulance calls originating three or more floors above ground. A number of barriers appear to contribute to delayed paramedic access.
Modulation of human time processing by subthalamic deep brain stimulation.
Wojtecki, Lars; Elben, Saskia; Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons
2011-01-01
Timing in the range of seconds referred to as interval timing is crucial for cognitive operations and conscious time processing. According to recent models of interval timing basal ganglia (BG) oscillatory loops are involved in time interval recognition. Parkinsońs disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions depending on stimulation frequency by affecting subcortical-cortical oscillatory loops. Thus, for the understanding of BG-involvement in interval timing it is of interest whether STN-DBS can modulate timing in a frequency dependent manner by interference with oscillatory time recognition processes. We examined production and reproduction of 5 and 15 second intervals and millisecond timing in a double blind, randomised, within-subject repeated-measures design of 12 PD-patients applying no, 10-Hz- and ≥ 130-Hz-STN-DBS compared to healthy controls. We found under(re-)production of the 15-second interval and a significant enhancement of this under(re-)production by 10-Hz-stimulation compared to no stimulation, ≥ 130-Hz-STN-DBS and controls. Milliseconds timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of BG in general and of the STN in particular in the cognitive representation of time intervals in the range of multiple seconds.
Modulation of Human Time Processing by Subthalamic Deep Brain Stimulation
Timmermann, Lars; Reck, Christiane; Maarouf, Mohammad; Jörgens, Silke; Ploner, Markus; Südmeyer, Martin; Groiss, Stefan Jun; Sturm, Volker; Niedeggen, Michael; Schnitzler, Alfons
2011-01-01
Timing in the range of seconds referred to as interval timing is crucial for cognitive operations and conscious time processing. According to recent models of interval timing basal ganglia (BG) oscillatory loops are involved in time interval recognition. Parkinsońs disease (PD) is a typical disease of the basal ganglia that shows distortions in interval timing. Deep brain stimulation (DBS) of the subthalamic nucleus (STN) is a powerful treatment of PD which modulates motor and cognitive functions depending on stimulation frequency by affecting subcortical-cortical oscillatory loops. Thus, for the understanding of BG-involvement in interval timing it is of interest whether STN-DBS can modulate timing in a frequency dependent manner by interference with oscillatory time recognition processes. We examined production and reproduction of 5 and 15 second intervals and millisecond timing in a double blind, randomised, within-subject repeated-measures design of 12 PD-patients applying no, 10-Hz- and ≥130-Hz-STN-DBS compared to healthy controls. We found under(re-)production of the 15-second interval and a significant enhancement of this under(re-)production by 10-Hz-stimulation compared to no stimulation, ≥130-Hz-STN-DBS and controls. Milliseconds timing was not affected. We provide first evidence for a frequency-specific modulatory effect of STN-DBS on interval timing. Our results corroborate the involvement of BG in general and of the STN in particular in the cognitive representation of time intervals in the range of multiple seconds. PMID:21931767
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulation/estimations was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the Full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block diagonal FIM optimal designs when assuming true parameter values. However, the FO approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO Full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
The Optimal Timing of Stage-2-Palliation After the Norwood Operation.
Meza, James M; Hickey, Edward; McCrindle, Brian; Blackstone, Eugene; Anderson, Brett; Overman, David; Kirklin, James K; Karamlou, Tara; Caldarone, Christopher; Kim, Richard; DeCampli, William; Jacobs, Marshall; Guleserian, Kristine; Jacobs, Jeffrey P; Jaquiss, Robert
2018-01-01
The effect of the timing of stage-2-palliation (S2P) on survival through single ventricle palliation remains unknown. This study investigated the optimal timing of S2P that minimizes pre-S2P attrition and maximizes post-S2P survival. The Congenital Heart Surgeons' Society's critical left ventricular outflow tract obstruction cohort was used. Survival analysis was performed using multiphase parametric hazard analysis. Separate risk factors for death after the Norwood and after S2P were identified. Based on the multivariable models, infants were stratified as low, intermediate, or high risk. Cumulative 2-year, post-Norwood survival was predicted. Optimal timing was determined using conditional survival analysis and plotted as 2-year, post-Norwood survival versus age at S2P. A Norwood operation was performed in 534 neonates from 21 institutions. The S2P was performed in 71%, at a median age of 5.1 months (IQR: 4.3 to 6.0), and 22% died after Norwood. By 5 years after S2P, 10% of infants had died. For low- and intermediate-risk infants, performing S2P after age 3 months was associated with 89% ± 3% and 82% ± 3% 2-year survival, respectively. Undergoing an interval cardiac reoperation or moderate-severe right ventricular dysfunction before S2P were high-risk features. Among high-risk infants, 2-year survival was 63% ± 5%, and even lower when S2P was performed before age 6 months. Performing S2P after age 3 months may optimize survival of low- and intermediate-risk infants. High-risk infants are unlikely to complete three-stage palliation, and early S2P may increase their risk of mortality. We infer that early referral for cardiac transplantation may increase their chance of survival. Copyright © 2018 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
Nguyen, Oanh Kieu; Makam, Anil N; Clark, Christopher; Zhang, Song; Das, Sandeep R; Halm, Ethan A
2018-04-17
Readmissions after hospitalization for acute myocardial infarction (AMI) are common. However, the few currently available AMI readmission risk prediction models have poor-to-modest predictive ability and are not readily actionable in real time. We sought to develop an actionable and accurate AMI readmission risk prediction model to identify high-risk patients as early as possible during hospitalization. We used electronic health record data from consecutive AMI hospitalizations from 6 hospitals in north Texas from 2009 to 2010 to derive and validate models predicting all-cause nonelective 30-day readmissions, using stepwise backward selection and 5-fold cross-validation. Of 826 patients hospitalized with AMI, 13% had a 30-day readmission. The first-day AMI model (the AMI "READMITS" score) included 7 predictors: renal function, elevated brain natriuretic peptide, age, diabetes mellitus, nonmale sex, intervention with timely percutaneous coronary intervention, and low systolic blood pressure, had an optimism-corrected C-statistic of 0.73 (95% confidence interval, 0.71-0.74) and was well calibrated. The full-stay AMI model, which included 3 additional predictors (use of intravenous diuretics, anemia on discharge, and discharge to postacute care), had an optimism-corrected C-statistic of 0.75 (95% confidence interval, 0.74-0.76) with minimally improved net reclassification and calibration. Both AMI models outperformed corresponding multicondition readmission models. The parsimonious AMI READMITS score enables early prospective identification of high-risk AMI patients for targeted readmissions reduction interventions within the first 24 hours of hospitalization. A full-stay AMI readmission model only modestly outperformed the AMI READMITS score in terms of discrimination, but surprisingly did not meaningfully improve reclassification. © 2018 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
An approach to solve group-decision-making problems with ordinal interval numbers.
Fan, Zhi-Ping; Liu, Yang
2010-10-01
The ordinal interval number is a form of uncertain preference information in group decision making (GDM), while it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers when every ranking ordinal in an ordinal interval number is thought to be uniformly and independently distributed in its interval. First, we give the definition of possibility degree on comparing two ordinal interval numbers and the related theory analysis. Then, to rank alternatives, by comparing multiple ordinal interval numbers, a collective expectation possibility degree matrix on pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is also presented to rank alternatives by solving the model. Finally, two examples are used to illustrate the use of the proposed approach.
Valls, Joan; Castellà, Gerard; Dyba, Tadeusz; Clèries, Ramon
2015-06-01
Predicting the future burden of cancer is a key issue for health services planning, where a method for selecting the predictive model and the prediction base is a challenge. A method, named here Goodness-of-Fit optimal (GoF-optimal), is presented to determine the minimum prediction base of historical data to perform 5-year predictions of the number of new cancer cases or deaths. An empirical ex-post evaluation exercise for cancer mortality data in Spain and cancer incidence in Finland using simple linear and log-linear Poisson models was performed. Prediction bases were considered within the time periods 1951-2006 in Spain and 1975-2007 in Finland, and then predictions were made for 37 and 33 single years in these periods, respectively. The performance of three fixed different prediction bases (last 5, 10, and 20 years of historical data) was compared to that of the prediction base determined by the GoF-optimal method. The coverage (COV) of the 95% prediction interval and the discrepancy ratio (DR) were calculated to assess the success of the prediction. The results showed that (i) models using the prediction base selected through GoF-optimal method reached the highest COV and the lowest DR and (ii) the best alternative strategy to GoF-optimal was the one using the base of prediction of 5-years. The GoF-optimal approach can be used as a selection criterion in order to find an adequate base of prediction. Copyright © 2015 Elsevier Ltd. All rights reserved.
Optimization of a Multi-Product Intra-Supply Chain System with Failure in Rework.
Chiu, Singa Wang; Chen, Shin-Wei; Chang, Chih-Kai; Chiu, Yuan-Shyi Peter
2016-01-01
Globalization has created tremendous opportunities, but also made business environment highly competitive and turbulent. To gain competitive advantage, management of present-day transnational firms always seeks options to trim down various transaction and coordination costs, especially in the area of controllable intra-supply chain system. This study investigates a multi-product intra-supply chain system with failure in rework. To achieve maximum machine utilization, multiple products are fabricated in succession on a single machine. During the process, production of some defective items is inevitable. Reworking of nonconforming items is used to reduce the quality cost in production and achieving the goal of lower overall production cost. Because reworks are sometimes unsuccessful, failures in rework are also considered in this study. Finished goods for each product are transported to the sales offices when the entire production lot is quality assured after rework. A multi-delivery policy is used, wherein fixed quantity n installments of the finished lot are transported at fixed intervals during delivery time. The objective is to jointly determine the common production cycle time and the number of deliveries needed to minimize the long-term expected production-inventory-delivery costs for the problem. With the help of a mathematical model along with optimization technique, the optimal production-shipment policy is obtained. We have used a numerical example to demonstrate applicability of the result of our research.
Optimization of a Multi–Product Intra-Supply Chain System with Failure in Rework
2016-01-01
Globalization has created tremendous opportunities, but also made business environment highly competitive and turbulent. To gain competitive advantage, management of present-day transnational firms always seeks options to trim down various transaction and coordination costs, especially in the area of controllable intra-supply chain system. This study investigates a multi–product intra-supply chain system with failure in rework. To achieve maximum machine utilization, multiple products are fabricated in succession on a single machine. During the process, production of some defective items is inevitable. Reworking of nonconforming items is used to reduce the quality cost in production and achieving the goal of lower overall production cost. Because reworks are sometimes unsuccessful, failures in rework are also considered in this study. Finished goods for each product are transported to the sales offices when the entire production lot is quality assured after rework. A multi-delivery policy is used, wherein fixed quantity n installments of the finished lot are transported at fixed intervals during delivery time. The objective is to jointly determine the common production cycle time and the number of deliveries needed to minimize the long–term expected production–inventory–delivery costs for the problem. With the help of a mathematical model along with optimization technique, the optimal production–shipment policy is obtained. We have used a numerical example to demonstrate applicability of the result of our research. PMID:27918588
A sequential solution for anisotropic total variation image denoising with interval constraints
NASA Astrophysics Data System (ADS)
Xu, Jingyan; Noo, Frédéric
2017-09-01
We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficient in the patient body. Our results are simple yet seem unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
Optimal go/no-go ratios to maximize false alarms.
Young, Michael E; Sutherland, Steven C; McCoy, Anthony W
2018-06-01
Despite the ubiquity of go/no-go tasks in the study of behavioral inhibition, there is a lack of evidence regarding the impact of key design characteristics, including the go/no-go ratio, intertrial interval, and number of types of go stimuli, on the production of different response classes of central interest. In the present study we sought to empirically determine the optimal conditions to maximize the production of a rare outcome of considerable interest to researchers: false alarms. As predicted, the shortest intertrial intervals (450 ms), intermediate go/no-go ratios (2:1 to 4:1), and the use of multiple types of go stimuli produced the greatest numbers of false alarms. These results are placed within the context of behavioral changes during learning.
Huffman, Jeff C.; Beale, Eleanor E.; Celano, Christopher M.; Beach, Scott R.; Belcher, Arianna M.; Moore, Shannon V.; Suarez, Laura; Motiwala, Shweta R.; Gandhi, Parul U.; Gaggin, Hanna; Januzzi, James L.
2015-01-01
Background Positive psychological constructs, such as optimism, are associated with beneficial health outcomes. However, no study has separately examined the effects of multiple positive psychological constructs on behavioral, biological, and clinical outcomes after an acute coronary syndrome (ACS). Accordingly, we aimed to investigate associations of baseline optimism and gratitude with subsequent physical activity, prognostic biomarkers, and cardiac rehospitalizations in post-ACS patients. Methods and Results Participants were enrolled during admission for ACS and underwent assessments at baseline (2 weeks post-ACS) and follow-up (6 months later). Associations between baseline positive psychological constructs and subsequent physical activity/biomarkers were analyzed using multivariable linear regression. Associations between baseline positive constructs and 6-month rehospitalizations were assessed via multivariable Cox regression. Overall, 164 participants enrolled and completed the baseline 2-week assessments. Baseline optimism was significantly associated with greater physical activity at 6 months (n=153; β=102.5; 95% confidence interval [13.6-191.5]; p=.024), controlling for baseline activity and sociodemographic, medical, and negative psychological covariates. Baseline optimism was also associated with lower rates of cardiac readmissions at 6 months (N=164), controlling for age, gender, and medical comorbidity (hazard ratio=.92; 95% confidence interval [.86-.98]; p=.006). There were no significant relationships between optimism and biomarkers. Gratitude was minimally associated with post-ACS outcomes. Conclusions Post-ACS optimism, but not gratitude, was prospectively and independently associated with superior physical activity and fewer cardiac readmissions. Whether interventions that target optimism can successfully increase optimism or improve cardiovascular outcomes in post-ACS patients is not yet known, but can be tested in future studies. Clinical Trial Registration URL: http://www.clinicaltrials.gov. Unique identifier: NCT01709669. PMID:26646818
Optimism and Cause-Specific Mortality: A Prospective Cohort Study.
Kim, Eric S; Hagan, Kaitlin A; Grodstein, Francine; DeMeo, Dawn L; De Vivo, Immaculata; Kubzansky, Laura D
2017-01-01
Growing evidence has linked positive psychological attributes like optimism to a lower risk of poor health outcomes, especially cardiovascular disease. It has been demonstrated in randomized trials that optimism can be learned. If associations between optimism and broader health outcomes are established, it may lead to novel interventions that improve public health and longevity. In the present study, we evaluated the association between optimism and cause-specific mortality in women after considering the role of potential confounding (sociodemographic characteristics, depression) and intermediary (health behaviors, health conditions) variables. We used prospective data from the Nurses' Health Study (n = 70,021). Dispositional optimism was measured in 2004; all-cause and cause-specific mortality rates were assessed from 2006 to 2012. Using Cox proportional hazard models, we found that a higher degree of optimism was associated with a lower mortality risk. After adjustment for sociodemographic confounders, compared with women in the lowest quartile of optimism, women in the highest quartile had a hazard ratio of 0.71 (95% confidence interval: 0.66, 0.76) for all-cause mortality. Adding health behaviors, health conditions, and depression attenuated but did not eliminate the associations (hazard ratio = 0.91, 95% confidence interval: 0.85, 0.97). Associations were maintained for various causes of death, including cancer, heart disease, stroke, respiratory disease, and infection. Given that optimism was associated with numerous causes of mortality, it may provide a valuable target for new research on strategies to improve health. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Huffman, Jeff C; Beale, Eleanor E; Celano, Christopher M; Beach, Scott R; Belcher, Arianna M; Moore, Shannon V; Suarez, Laura; Motiwala, Shweta R; Gandhi, Parul U; Gaggin, Hanna K; Januzzi, James L
2016-01-01
Positive psychological constructs, such as optimism, are associated with beneficial health outcomes. However, no study has separately examined the effects of multiple positive psychological constructs on behavioral, biological, and clinical outcomes after an acute coronary syndrome (ACS). Accordingly, we aimed to investigate associations of baseline optimism and gratitude with subsequent physical activity, prognostic biomarkers, and cardiac rehospitalizations in post-ACS patients. Participants were enrolled during admission for ACS and underwent assessments at baseline (2 weeks post-ACS) and follow-up (6 months later). Associations between baseline positive psychological constructs and subsequent physical activity/biomarkers were analyzed using multivariable linear regression. Associations between baseline positive constructs and 6-month rehospitalizations were assessed via multivariable Cox regression. Overall, 164 participants enrolled and completed the baseline 2-week assessments. Baseline optimism was significantly associated with greater physical activity at 6 months (n=153; β=102.5; 95% confidence interval, 13.6-191.5; P=0.024), controlling for baseline activity and sociodemographic, medical, and negative psychological covariates. Baseline optimism was also associated with lower rates of cardiac readmissions at 6 months (n=164), controlling for age, sex, and medical comorbidity (hazard ratio, 0.92; 95% confidence interval, [0.86-0.98]; P=0.006). There were no significant relationships between optimism and biomarkers. Gratitude was minimally associated with post-ACS outcomes. Post-ACS optimism, but not gratitude, was prospectively and independently associated with superior physical activity and fewer cardiac readmissions. Whether interventions that target optimism can successfully increase optimism or improve cardiovascular outcomes in post-ACS patients is not yet known, but can be tested in future studies. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01709669. © 2015 American Heart Association, Inc.
Welker, A; Wolcke, B; Schleppers, A; Schmeck, S B; Focke, U; Gervais, H W; Schmeck, J
2010-10-01
The introduction of the diagnosis-related groups reimbursement system has increased cost pressures. Due to the interaction of many different professional groups, analysis and optimization of internal coordination and scheduling in the operating room (OR) is mandatory. The aim of this study was to analyze the processes at a university hospital in order to optimize strategies by identifying potential weak points. Over a period 6 weeks before and 4 weeks after intervention processes time intervals in the OR of a tertiary care hospital (university hospital) were documented in a structured data collection sheet. The main reason for lack of efficiency of labor was underused OR utilization. Multifactorial reasons, particularly in the management of perioperative interfaces, led to vacant ORs. A significant deficit was in the use of OR capacity at the end of the daily OR schedule. After harmonization of working hours of different staff groups and implementation of several other changes an increase in efficiency could be verified. These results indicate that optimization of perioperative processes considerably contribute to the success of OR organization. Additionally, the implementation of standard operating procedures and a generally accepted OR statute are mandatory. In this way an efficient OR management can contribute to the economic success of a hospital.
Pi, Erxu; Mantri, Nitin; Ngai, Sai Ming; Lu, Hongfei; Du, Liqun
2013-01-01
Temperature is one of the most significant environmental factors that affects germination of grass seeds. Reliable prediction of the optimal temperature for seed germination is crucial for determining the suitable regions and favorable sowing timing for turf grass cultivation. In this study, a back-propagation-artificial-neural-network-aided dual quintic equation (BP-ANN-QE) model was developed to improve the prediction of the optimal temperature for seed germination. This BP-ANN-QE model was used to determine optimal sowing times and suitable regions for three Cynodon dactylon cultivars (C. dactylon, ‘Savannah’ and ‘Princess VII’). Prediction of the optimal temperature for these seeds was based on comprehensive germination tests using 36 day/night (high/low) temperature regimes (both ranging from 5/5 to 40/40°C with 5°C increments). Seed germination data from these temperature regimes were used to construct temperature-germination correlation models for estimating germination percentage with confidence intervals. Our tests revealed that the optimal high/low temperature regimes required for all the three bermudagrass cultivars are 30/5, 30/10, 35/5, 35/10, 35/15, 35/20, 40/15 and 40/20°C; constant temperatures ranging from 5 to 40°C inhibited the germination of all three cultivars. While comparing different simulating methods, including DQEM, Bisquare ANN-QE, and BP-ANN-QE in establishing temperature based germination percentage rules, we found that the R2 values of germination prediction function could be significantly improved from about 0.6940–0.8177 (DQEM approach) to 0.9439–0.9813 (BP-ANN-QE). These results indicated that our BP-ANN-QE model has better performance than the rests of the compared models. Furthermore, data of the national temperature grids generated from monthly-average temperature for 25 years were fit into these functions and we were able to map the germination percentage of these C. dactylon cultivars in the national scale of China, and suggested the optimum sowing regions and times for them. PMID:24349278
Almenning, Ida; Rieber-Mohn, Astrid; Lundgren, Kari Margrethe; Shetelig Løvvik, Tone; Garnæs, Kirsti Krohn; Moholdt, Trine
2015-01-01
Polycystic ovary syndrome is a common endocrinopathy in reproductive-age women, and associates with insulin resistance. Exercise is advocated in this disorder, but little knowledge exists on the optimal exercise regimes. We assessed the effects of high intensity interval training and strength training on metabolic, cardiovascular, and hormonal outcomes in women with polycystic ovary syndrome. Three-arm parallel randomized controlled trial. Thirty-one women with polycystic ovary syndrome (age 27.2 ± 5.5 years; body mass index 26.7 ± 6.0 kg/m2) were randomly assigned to high intensity interval training, strength training, or a control group. The exercise groups exercised three times weekly for 10 weeks. The main outcome measure was change in homeostatic assessment of insulin resistance (HOMA-IR). HOMA-IR improved significantly only after high intensity interval training, by -0.83 (95% confidence interval [CI], -1.45, -0.20), equal to 17%, with between-group difference (p = 0.014). After high intensity interval training, high-density lipoprotein cholesterol increased by 0.2 (95% CI, 0.02, 0.5) mmol/L, with between group difference (p = 0.04). Endothelial function, measured as flow-mediated dilatation of the brachial artery, increased significantly after high intensity interval training, by 2.0 (95% CI, 0.1, 4.0) %, between-group difference (p = 0.08). Fat percentage decreased significantly after both exercise regimes, without changes in body weight. After strength training, anti-Müllarian hormone was significantly reduced, by -14.8 (95% CI, -21.2, -8.4) pmol/L, between-group difference (p = 0.04). There were no significant changes in high-sensitivity C-reactive protein, adiponectin or leptin in any group. High intensity interval training for ten weeks improved insulin resistance, without weight loss, in women with polycystic ovary syndrome. Body composition improved significantly after both strength training and high intensity interval training. This pilot study indicates that exercise training can improve the cardiometabolic profile in polycystic ovary syndrome in the absence of weight loss. ClinicalTrial.gov NCT01919281.
Department of Defense Precise Time and Time Interval program improvement plan
NASA Technical Reports Server (NTRS)
Bowser, J. R.
1981-01-01
The United States Naval Observatory is responsible for ensuring uniformity in precise time and time interval operations including measurements, the establishment of overall DOD requirements for time and time interval, and the accomplishment of objectives requiring precise time and time interval with minimum cost. An overview of the objectives, the approach to the problem, the schedule, and a status report, including significant findings relative to organizational relationships, current directives, principal PTTI users, and future requirements as currently identified by the users are presented.
Jensen, Morten Hasselstrøm; Christensen, Toke Folke; Tarnow, Lise; Seto, Edmund; Dencker Johansen, Mette; Hejlesen, Ole Kristian
2013-07-01
Hypoglycemia is a potentially fatal condition. Continuous glucose monitoring (CGM) has the potential to detect hypoglycemia in real time and thereby reduce time in hypoglycemia and avoid any further decline in blood glucose level. However, CGM is inaccurate and shows a substantial number of cases in which the hypoglycemic event is not detected by the CGM. The aim of this study was to develop a pattern classification model to optimize real-time hypoglycemia detection. Features such as time since last insulin injection and linear regression, kurtosis, and skewness of the CGM signal in different time intervals were extracted from data of 10 male subjects experiencing 17 insulin-induced hypoglycemic events in an experimental setting. Nondiscriminative features were eliminated with SEPCOR and forward selection. The feature combinations were used in a Support Vector Machine model and the performance assessed by sample-based sensitivity and specificity and event-based sensitivity and number of false-positives. The best model was composed by using seven features and was able to detect 17 of 17 hypoglycemic events with one false-positive compared with 12 of 17 hypoglycemic events with zero false-positives for the CGM alone. Lead-time was 14 min and 0 min for the model and the CGM alone, respectively. This optimized real-time hypoglycemia detection provides a unique approach for the diabetes patient to reduce time in hypoglycemia and learn about patterns in glucose excursions. Although these results are promising, the model needs to be validated on CGM data from patients with spontaneous hypoglycemic events.
Mao, Fangjie; Zhou, Guomo; Li, Pingheng; Du, Huaqiang; Xu, Xiaojun; Shi, Yongjun; Mo, Lufeng; Zhou, Yufeng; Tu, Guoqing
2017-04-15
The selective cutting method currently used in Moso bamboo forests has resulted in a reduction of stand productivity and carbon sequestration capacity. Given the time and labor expense involved in addressing this problem manually, simulation using an ecosystem model is the most suitable approach. The BIOME-BGC model was improved to suit managed Moso bamboo forests, which was adapted to include age structure, specific ecological processes and management measures of Moso bamboo forest. A field selective cutting experiment was done in nine plots with three cutting intensities (high-intensity, moderate-intensity and low-intensity) during 2010-2013, and biomass of these plots was measured for model validation. Then four selective cutting scenarios were simulated by the improved BIOME-BGC model to optimize the selective cutting timings, intervals, retained ages and intensities. The improved model matched the observed aboveground carbon density and yield of different plots, with a range of relative error from 9.83% to 15.74%. The results of different selective cutting scenarios suggested that the optimal selective cutting measure should be cutting 30% culms of age 6, 80% culms of age 7, and all culms thereafter (above age 8) in winter every other year. The vegetation carbon density and harvested carbon density of this selective cutting method can increase by 74.63% and 21.5%, respectively, compared with the current selective cutting measure. The optimized selective cutting measure developed in this study can significantly promote carbon density, yield, and carbon sink capacity in Moso bamboo forests. Copyright © 2017 Elsevier Ltd. All rights reserved.
Study of switching electric circuits with DC hybrid breaker, one stage
NASA Astrophysics Data System (ADS)
Niculescu, T.; Marcu, M.; Popescu, F. G.
2016-06-01
The paper presents a method of extinguishing the electric arc that occurs between the contacts of direct current breakers. The method consists of using an LC type extinguishing group to be optimally sized. From this point of view is presented a theoretical approach to the phenomena that occurs immediately after disconnecting the load and the specific diagrams are drawn. Using these, the elements extinguishing group we can choose. At the second part of the paper there is presented an analyses of the circuit switching process by decomposing the process in particular time sequences. For every time interval there was conceived a numerical simulation model in MATLAB-SIMULINK medium which integrates the characteristic differential equation and plots the capacitor voltage variation diagram and the circuit dumping current diagram.
Analysis of Effluent Gases During the CCVD Growth of Multi Wall Carbon Nanotubes from Acetylene
NASA Technical Reports Server (NTRS)
Schmitt, T. C.; Biris, A. S.; Miller, D. W.; Biris, A. R.; Lupu, D.; Trigwell, S.; Rahman, Z. U.
2005-01-01
Catalytic chemical vapor deposition was used to grow multi-walled carbon nanotubes on a Fe:Co:CaCO3 catalyst from acetylene. The influent and effluent gases were analyzed by gas chromatography and mass spectrometry at different time intervals during the nanotubes growth process in order to better understand and optimize the overall reaction. A large number of byproducts were identified and it was found that the number and the level for some of the carbon byproducts significantly increased over time. The CaCO3 catalytic support thermally decomposed into CaO and CO2 resulting in a mixture of two catalysts for growing the nanotubes, which were found to have outer diameters belonging to two main groups 8 to 35 nm and 40 to 60 nm, respectively.
End-to-End Flow Control for Visual-Haptic Communication under Bandwidth Change
NASA Astrophysics Data System (ADS)
Yashiro, Daisuke; Tian, Dapeng; Yakoh, Takahiro
This paper proposes an end-to-end flow controller for visual-haptic communication. A visual-haptic communication system transmits non-real-time packets, which contain large-size visual data, and real-time packets, which contain small-size haptic data. When the transmission rate of visual data exceeds the communication bandwidth, the visual-haptic communication system becomes unstable owing to buffer overflow. To solve this problem, an end-to-end flow controller is proposed. This controller determines the optimal transmission rate of visual data on the basis of the traffic conditions, which are estimated by the packets for haptic communication. Experimental results confirm that in the proposed method, a short packet-sending interval and a short delay are achieved under bandwidth change, and thus, high-precision visual-haptic communication is realized.
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Jamshid; Mahdizadeh, Kourosh; Afshar, Abbas
2004-08-01
Application of stochastic dynamic programming (SDP) models to reservoir optimization calls for state variables discretization. As an important variable discretization of reservoir storage volume has a pronounced effect on the computational efforts. The error caused by storage volume discretization is examined by considering it as a fuzzy state variable. In this approach, the point-to-point transitions between storage volumes at the beginning and end of each period are replaced by transitions between storage intervals. This is achieved by using fuzzy arithmetic operations with fuzzy numbers. In this approach, instead of aggregating single-valued crisp numbers, the membership functions of fuzzy numbers are combined. Running a simulated model with optimal release policies derived from fuzzy and non-fuzzy SDP models shows that a fuzzy SDP with a coarse discretization scheme performs as well as a classical SDP having much finer discretized space. It is believed that this advantage in the fuzzy SDP model is due to the smooth transitions between storage intervals which benefit from soft boundaries.
Karaismailoğlu, Eda; Dikmen, Zeliha Günnur; Akbıyık, Filiz; Karaağaoğlu, Ahmet Ergun
2018-04-30
Background/aim: Myoglobin, cardiac troponin T, B-type natriuretic peptide (BNP), and creatine kinase isoenzyme MB (CK-MB) are frequently used biomarkers for evaluating risk of patients admitted to an emergency department with chest pain. Recently, time- dependent receiver operating characteristic (ROC) analysis has been used to evaluate the predictive power of biomarkers where disease status can change over time. We aimed to determine the best set of biomarkers that estimate cardiac death during follow-up time. We also obtained optimal cut-off values of these biomarkers, which differentiates between patients with and without risk of death. A web tool was developed to estimate time intervals in risk. Materials and methods: A total of 410 patients admitted to the emergency department with chest pain and shortness of breath were included. Cox regression analysis was used to determine an optimal set of biomarkers that can be used for estimating cardiac death and to combine the significant biomarkers. Time-dependent ROC analysis was performed for evaluating performances of significant biomarkers and a combined biomarker during 240 h. The bootstrap method was used to compare statistical significance and the Youden index was used to determine optimal cut-off values. Results : Myoglobin and BNP were significant by multivariate Cox regression analysis. Areas under the time-dependent ROC curves of myoglobin and BNP were about 0.80 during 240 h, and that of the combined biomarker (myoglobin + BNP) increased to 0.90 during the first 180 h. Conclusion: Although myoglobin is not clinically specific to a cardiac event, in our study both myoglobin and BNP were found to be statistically significant for estimating cardiac death. Using this combined biomarker may increase the power of prediction. Our web tool can be useful for evaluating the risk status of new patients and helping clinicians in making decisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tavani, M.; Donnarumma, I.; Argan, A.
We report the results of an extensive search through the AGILE data for a gamma-ray counterpart to the LIGO gravitational-wave (GW) event GW150914. Currently in spinning mode, AGILE has the potential of cover 80% of the sky with its gamma-ray instrument, more than 100 times a day. It turns out that AGILE came within a minute of the event time of observing the accessible GW150914 localization region. Interestingly, the gamma-ray detector exposed ∼65% of this region during the 100 s time intervals centered at −100 and +300 s from the event time. We determine a 2 σ flux upper limitmore » in the band 50 MeV–10 GeV, UL = 1.9 × 10{sup −8} erg cm{sup −2} s{sup −1}, obtained ∼300 s after the event. The timing of this measurement is the fastest ever obtained for GW150914, and significantly constrains the electromagnetic emission of a possible high-energy counterpart. We also carried out a search for a gamma-ray precursor and delayed emission over five timescales ranging from minutes to days: in particular, we obtained an optimal exposure during the interval −150/−30 s. In all these observations, we do not detect a significant signal associated with GW150914. We do not reveal the weak transient source reported by Fermi -GBM 0.4 s after the event time. However, even though a gamma-ray counterpart of the GW150914 event was not detected, the prospects for future AGILE observations of GW sources are decidedly promising.« less
Detection of Chlorophyll and Leaf Area Index Dynamics from Sub-weekly Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Houborg, Rasmus; McCabe, Matthew F.; Angel, Yoseline; Middleton, Elizabeth M.
2016-01-01
Temporally rich hyperspectral time-series can provide unique time critical information on within-field variations in vegetation health and distribution needed by farmers to effectively optimize crop production. In this study, a dense time series of images were acquired from the Earth Observing-1 (EO-1) Hyperion sensor over an intensive farming area in the center of Saudi Arabia. After correction for atmospheric effects, optimal links between carefully selected explanatory hyperspectral vegetation indices and target vegetation characteristics were established using a machine learning approach. A dataset of in-situ measured leaf chlorophyll (Chll) and leaf area index (LAI), collected during five intensive field campaigns over a variety of crop types, were used to train the rule-based predictive models. The ability of the narrow-band hyperspectral reflectance information to robustly assess and discriminate dynamics in foliar biochemistry and biomass through empirical relationships were investigated. This also involved evaluations of the generalization and reproducibility of the predictions beyond the conditions of the training dataset. The very high temporal resolution of the satellite retrievals constituted a specifically intriguing feature that facilitated detection of total canopy Chl and LAI dynamics down to sub-weekly intervals. The study advocates the benefits associated with the availability of optimum spectral and temporal resolution spaceborne observations for agricultural management purposes.
NASA Astrophysics Data System (ADS)
Yusriski, R.; Sukoyo; Samadhi, T. M. A. A.; Halim, A. H.
2016-02-01
In the manufacturing industry, several identical parts can be processed in batches, and setup time is needed between two consecutive batches. Since the processing times of batches are not always fixed during a scheduling period due to learning and deterioration effects, this research deals with batch scheduling problems with simultaneous learning and deterioration effects. The objective is to minimize total actual flow time, defined as a time interval between the arrival of all parts at the shop and their common due date. The decision variables are the number of batches, integer batch sizes, and the sequence of the resulting batches. This research proposes a heuristic algorithm based on the Lagrange Relaxation. The effectiveness of the proposed algorithm is determined by comparing the resulting solutions of the algorithm to the respective optimal solution obtained from the enumeration method. Numerical experience results show that the average of difference among the solutions is 0.05%.
A uniform management approach to optimize outcome in fetal growth restriction.
Seravalli, Viola; Baschat, Ahmet A
2015-06-01
A uniform approach to the diagnosis and management of fetal growth restriction (FGR) consistently produces better outcome, prevention of unanticipated stillbirth, and appropriate timing of delivery. Early-onset and late-onset FGR represent two distinct clinical phenotypes of placental dysfunction. Management challenges in early-onset FGR revolve around prematurity and coexisting maternal hypertensive disease, whereas in late-onset disease failure of diagnosis or surveillance leading to unanticipated stillbirth is the primary issue. Identifying the surveillance tests that have the highest predictive accuracy for fetal acidemia and establishing the appropriate monitoring interval to detect fetal deterioration is a high priority. Copyright © 2015 Elsevier Inc. All rights reserved.
Characterization and evaluation of an aeolian-photovoltaic system in operation
NASA Astrophysics Data System (ADS)
Bonfatti, F.; Calzolari, P. U.; Cardinali, G. C.; Vivanti, G.; Zani, A.
Data management, analysis techniques and results of performance monitoring of a prototype combined photovoltaic (PV)-wind turbine farm power plant in northern Italy are reported. Emphasis is placed on the PV I-V characteristics and irradiance and cell temperatures. Automated instrumentation monitors and records meteorological data and generator variables such as voltages, currents, output, battery electrolyte temperature, etc. Analysis proceeds by automated selection of I-V data for specific intervals of the year when other variables can be treated as constants. The technique permits characterization of generator performance, adjusting the power plant set points for optimal output, and tracking performance degradation over time.
Yamaguchi, Tetsuo; Kitai, Takeshi; Miyamoto, Takamichi; Kagiyama, Nobuyuki; Okumura, Takahiro; Kida, Keisuke; Oishi, Shogo; Akiyama, Eiichi; Suzuki, Satoshi; Yamamoto, Masayoshi; Yamaguchi, Junji; Iwai, Takamasa; Hijikata, Sadahiro; Masuda, Ryo; Miyazaki, Ryoichi; Hara, Nobuhiro; Nagata, Yasutoshi; Nozato, Toshihiro; Matsue, Yuya
2018-04-15
Guideline-directed medical therapy (GDMT) is recommended for patients with heart failure with reduced ejection fraction (HFrEF). However, the prognostic impact of medication optimization at the time of discharge in patients hospitalized with heart failure (HF) is unclear. We analyzed 534 patients (73 ± 13 years old) with HFrEF. The status of GDMT at the time of discharge (prescription of angiotensin converting enzyme inhibitor [ACE-I]/angiotensin receptor blocker [ARB] and β blocker [BB]) and its association with 1-year all-cause mortality and HF readmission were investigated. Patients were divided into 3 groups: those treated with both ACE-I/ARB and BB (Both group: n = 332, 62%), either ACE-I/ARB or BB (Either group: n = 169, 32%), and neither ACE-I/ARB nor BB (None group: n = 33, 6%), respectively. One-year mortality, but not 1-year HF readmission rate, was significantly different in the 3 groups, in favor of the Either and Both groups. A favorable impact of being on GDMT at the time of discharge on 1-year mortality was retained even after adjustment for covariates (Either group: hazard ratio [HR] 0.44, 95% confidence interval [CI] 0.21 to 0.90, p = 0.025 and Both group: HR 0.29, 95% CI 0.13-0.65, p = 0.002, vs None group). For 1-year HF readmission, no such association was found. In conclusion, optimization of GDMT before the time of discharge was associated with a lower 1-year mortality, but not with HF readmission rate, in patients hospitalized with HFrEF. Copyright © 2018 Elsevier Inc. All rights reserved.
Katritsis, Demosthenes G; Siontis, George C M; Kastrati, Adnan; van't Hof, Arnoud W J; Neumann, Franz-Josef; Siontis, Konstantinos C M; Ioannidis, John P A
2011-01-01
An invasive approach is superior to medical management for the treatment of patients with acute coronary syndromes without ST-segment elevation (NSTE-ACS), but the optimal timing of coronary angiography and subsequent intervention, if indicated, has not been settled. We conducted a meta-analysis of randomized trials addressing the optimal timing (early vs. delayed) of coronary angiography in NSTE-ACS. Four trials with 4013 patients were eligible (ABOARD, ELISA, ISAR-COOL, TIMACS), and data for longer follow-up periods than those published became available for this meta-analysis by the ELISA and ISAR-COOL investigators. The median time from admission or randomization to coronary angiography ranged from 1.16 to 14 h in the early and 20.8-86 h in the delayed strategy group. No statistically significant difference of risk of death [random effects risk ratio (RR) 0.85, 95% confidence interval (CI) 0.64-1.11] or myocardial infarction (MI) (RR 0.94, 95% CI 0.61-1.45) was detected between the two strategies. Early intervention significantly reduced the risk for recurrent ischaemia (RR 0.59, 95% CI 0.38-0.92, P = 0.02) and the duration of hospital stay (by 28%, 95% CI 22-35%, P < 0.001). Furthermore, decreased major bleeding events (RR 0.78, 95% CI 0.57-1.07, P = 0.13), and less major events (death, MI, or stroke) (RR 0.91, 95% CI 0.82-1.01, P = 0.09) were observed with the early strategy but these differences were not nominally significant. Early coronary angiography and potential intervention reduces the risk of recurrent ischaemia, and shortens hospital stay in patients with NSTE-ACS.
NASA Astrophysics Data System (ADS)
Shamarokov, A. S.; Zorin, V. M.; Dai, Fam Kuang
2016-03-01
At the current stage of development of nuclear power engineering, high demands on nuclear power plants (NPP), including on their economy, are made. In these conditions, improving the quality of NPP means, in particular, the need to reasonably choose the values of numerous managed parameters of technological (heat) scheme. Furthermore, the chosen values should correspond to the economic conditions of NPP operation, which are postponed usually a considerable time interval from the point of time of parameters' choice. The article presents the technique of optimization of controlled parameters of the heat circuit of a steam turbine plant for the future. Its particularity is to obtain the results depending on a complex parameter combining the external economic and operating parameters that are relatively stable under the changing economic environment. The article presents the results of optimization according to this technique of the minimum temperature driving forces in the surface heaters of the heat regeneration system of the steam turbine plant of a K-1200-6.8/50 type. For optimization, the collector-screen heaters of high and low pressure developed at the OAO All-Russia Research and Design Institute of Nuclear Power Machine Building, which, in the authors' opinion, have the certain advantages over other types of heaters, were chosen. The optimality criterion in the task was the change in annual reduced costs for NPP compared to the version accepted as the baseline one. The influence on the decision of the task of independent variables that are not included in the complex parameter was analyzed. An optimization task was decided using the alternating-variable descent method. The obtained values of minimum temperature driving forces can guide the design of new nuclear plants with a heat circuit, similar to that accepted in the considered task.
Performance analysis of a laser propelled interorbital tansfer vehicle
NASA Technical Reports Server (NTRS)
Minovitch, M. A.
1976-01-01
Performance capabilities of a laser-propelled interorbital transfer vehicle receiving propulsive power from one ground-based transmitter was investigated. The laser transmits propulsive energy to the vehicle during successive station fly-overs. By applying a series of these propulsive maneuvers, large payloads can be economically transferred between low earth orbits and synchronous orbits. Operations involving the injection of large payloads onto escape trajectories are also studied. The duration of each successive engine burn must be carefully timed so that the vehicle reappears over the laser station to receive additional propulsive power within the shortest possible time. The analytical solution for determining these time intervals is presented, as is a solution to the problem of determining maximum injection payloads. Parameteric computer analysis based on these optimization studies is presented. The results show that relatively low beam powers, on the order of 50 MW to 60 MW, produce significant performance capabilities.
Demarteau, Nadia; Breuer, Thomas; Standaert, Baudouin
2012-04-01
Screening and vaccination against human papillomavirus (HPV) can protect against cervical cancer. Neither alone can provide 100% protection. Consequently it raises the important question about the most efficient combination of screening at specified time intervals and vaccination to prevent cervical cancer. Our objective was to identify the mix of cervical cancer prevention strategies (screening and/or vaccination against HPV) that achieves maximum reduction in cancer cases within a fixed budget. We assessed the optimal mix of strategies for the prevention of cervical cancer using an optimization program. The evaluation used two models. One was a Markov cohort model used as the evaluation model to estimate the costs and outcomes of 52 different prevention strategies. The other was an optimization model in which the results of each prevention strategy of the previous model were entered as input data. The latter model determined the combination of the different prevention options to minimize cervical cancer under budget, screening coverage and vaccination coverage constraints. We applied the model in two countries with different healthcare organizations, epidemiology, screening practices, resource settings and treatment costs: the UK and Brazil. 100,000 women aged 12 years and above across the whole population over a 1-year period at steady state were included. The intervention was papanicolaou (Pap) smear screening programmes and/or vaccination against HPV with the bivalent HPV 16/18 vaccine (Cervarix® [Cervarix is a registered trademark of the GlaxoSmithKline group of companies]). The main outcome measures were optimal distribution of the population between different interventions (screening, vaccination, screening plus vaccination and no screening or vaccination) with the resulting number of cervical cancer and associated costs. In the base-case analysis (= same budget as today), the optimal prevention strategy would be, after introducing vaccination with a coverage rate of 80% in girls aged 12 years and retaining screening coverage at pre-vaccination levels (65% in the UK, 50% in Brazil), to increase the screening interval to 6 years (from 3) in the UK and to 5 years (from 3) in Brazil. This would result in a reduction of cervical cancer by 41% in the UK and by 54% in Brazil from pre-vaccination levels with no budget increase. Sensitivity analysis shows that vaccination alone at 80% coverage with no screening would achieve a cervical cancer reduction rate of 20% in the UK and 43% in Brazil compared with the pre-vaccination situation with a budget reduction of 30% and 14%, respectively. In both countries, the sharp reduction in cervical cancer is seen when the vaccine coverage rate exceeds the maximum screening coverage rate, or when screening coverage rate exceeds the maximum vaccine coverage rate, while maintaining the budget. As with any model, there are limitations to the value of predictions depending upon the assumptions made in each model. Spending the same budget that was used for screening and treatment of cervical cancer in the pre-vaccination era, results of the optimization program show that it would be possible to substantially reduce the number of cases by implementing an optimal combination of HPV vaccination (80% coverage) and screening at pre-vaccination coverage (65% UK, 50% Brazil) while extending the screening interval to every 6 years in the UK and 5 years in Brazil.
Ratio-based lengths of intervals to improve fuzzy time series forecasting.
Huarng, Kunhuang; Yu, Tiffany Hui-Kuang
2006-04-01
The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.
Martinon, Alice; Cronin, Ultan P; Wilkinson, Martin G
2012-01-01
In this article, four types of standards were assessed in a SYBR Green-based real-time PCR procedure for the quantification of Staphylococcus aureus (S. aureus) in DNA samples. The standards were purified S. aureus genomic DNA (type A), circular plasmid DNA containing a thermonuclease (nuc) gene fragment (type B), DNA extracted from defined populations of S. aureus cells generated by Fluorescence Activated Cell Sorting (FACS) technology with (type C) or without purification of DNA by boiling (type D). The optimal efficiency of 2.016 was obtained on Roche LightCycler(®) 4.1. software for type C standards, whereas the lowest efficiency (1.682) corresponded to type D standards. Type C standards appeared to be more suitable for quantitative real-time PCR because of the use of defined populations for construction of standard curves. Overall, Fieller Confidence Interval algorithm may be improved for replicates having a low standard deviation in Cycle Threshold values such as found for type B and C standards. Stabilities of diluted PCR standards stored at -20°C were compared after 0, 7, 14 and 30 days and were lower for type A or C standards compared with type B standards. However, FACS generated standards may be useful for bacterial quantification in real-time PCR assays once optimal storage and temperature conditions are defined.
Friis, C; Rothman, J P; Burcharth, J; Rosenberg, J
2018-06-01
Endoscopic retrograde cholangiopancreatography followed by laparoscopic cholecystectomy is often used as definitive treatment for common bile duct stones. The aim of this study was to investigate the optimal time interval between endoscopic retrograde cholangiopancreatography and laparoscopic cholecystectomy. PubMed and Embase were searched for studies comparing different time delays between endoscopic retrograde cholangiopancreatography and laparoscopic cholecystectomy. Observational studies and randomized controlled trials were included. Primary outcome was conversion rate from laparoscopic to open cholecystectomy and secondary outcomes were complications, mortality, operating time, and length of stay. A total of 14 studies with a total of 1930 patients were included. The pooled estimate revealed an increase from a 4.2% conversion rate when laparoscopic cholecystectomy was performed within 24 h of endoscopic retrograde cholangiopancreatography to 7.6% for 24-72 h delay to 12.3% when performed within 2 weeks, to 12.3% for 2-6 weeks, and to a 14% conversion rate when operation was delayed more than 6 weeks. According to this systematic review, it is preferable to perform cholecystectomy within 24 h of endoscopic retrograde cholangiopancreatography to reduce conversion rate. Early laparoscopic cholecystectomy does not increase mortality, perioperative complications, or length of stay and on the contrary it reduces the risk of reoccurrence and progression of disease in the delay between endoscopic retrograde cholangiopancreatography and laparoscopic cholecystectomy.
ERIC Educational Resources Information Center
Suzuki, Yuichi
2017-01-01
This study examined optimal learning schedules for second language (L2) acquisition of a morphological structure. Sixty participants studied the simple and complex morphological rules of a novel miniature language system so as to use them for oral production. They engaged in four training sessions in either shorter spaced (3.3-day interval) or…
Interresponse Time Structures in Variable-Ratio and Variable-Interval Schedules
ERIC Educational Resources Information Center
Bowers, Matthew T.; Hill, Jade; Palya, William L.
2008-01-01
The interresponse-time structures of pigeon key pecking were examined under variable-ratio, variable-interval, and variable-interval plus linear feedback schedules. Whereas the variable-ratio and variable-interval plus linear feedback schedules generally resulted in a distinct group of short interresponse times and a broad distribution of longer…
Ramos, Joyce S; Dalleck, Lance C; Ramos, Maximiano V; Borrani, Fabio; Roberts, Llion; Gomersall, Sjaan; Beetham, Kassia S; Dias, Katrin A; Keating, Shelley E; Fassett, Robert G; Sharman, James E; Coombes, Jeff S
2016-10-01
Decreased aortic reservoir function leads to a rise in aortic reservoir pressure that is an independent predictor of cardiovascular events. Although there is evidence that high-intensity interval training (HIIT) would be useful to improve aortic reservoir pressure, the optimal dose of high-intensity exercise to improve aortic reservoir function has yet to be investigated. Therefore, this study compared the effect of different volumes of HIIT and moderate-intensity continuous training (MICT) on aortic reservoir pressure in participants with the metabolic syndrome (MetS). Fifty individuals with MetS were randomized into one of the following 16-week training programs: MICT [n = 17, 30 min at 60-70% peak heart rate (HRpeak), five times/week]; 4 × 4-min high-intensity interval training (4HIIT) (n = 15, 4 × 4 min bouts at 85-95% HRpeak, interspersed with 3 min of active recovery at 50-70% HRpeak, three times/week); and 1 × 4-min high-intensity interval training (1HIIT) (n = 18, 1 × 4 min bout at 85-95% HRpeak, three times/week). Aortic reservoir pressure was calculated from radial applanation tonometry. Although not statistically significant, there was a trend for a small-to-medium group × time interaction effect on aortic reservoir pressure, indicating a positive adaptation following 1HIIT compared with 4HIIT and MICT [F (2,46) = 2.9, P = 0.07, η = 0.06]. This is supported by our within-group analysis wherein only 1HIIT significantly decreased aortic reservoir pressure from pre to postintervention (pre-post: 1HIIT 33 ± 16 to 31 ± 13, P = 0.03; MICT 29 ± 9-28 ± 8, P = 0.78; 4HIIT 28 ± 10-30 ± 9 mmHg, P = 0.10). Three sessions of 4 min of high-intensity exercise per week (12 min/week) was sufficient to improve aortic reservoir pressure, and thus may be a time-efficient exercise modality for reducing cardiovascular risk in individuals with MetS.
Successful Optimization of Adalimumab Therapy in Refractory Uveitis Due to Behçet's Disease.
Martín-Varillas, José Luis; Calvo-Río, Vanesa; Beltrán, Emma; Sánchez-Bursón, Juan; Mesquida, Marina; Adán, Alfredo; Hernandez, María Victoria; Garfella, Marisa Hernández; Pascual, Elia Valls; Martínez-Costa, Lucía; Sellas-Fernández, Agustí; Cordero-Coma, Miguel; Díaz-Llopis, Manuel; Gallego, Roberto; Salom, David; Ortego, Norberto; García-Serrano, José L; Callejas-Rubio, José-Luis; Herreras, José M; García-Aparicio, Ángel; Maíz, Olga; Blanco, Ana; Torre, Ignacio; Díaz-Valle, David; Pato, Esperanza; Aurrecoechea, Elena; Caracuel, Miguel A; Gamero, Fernando; Minguez, Enrique; Carrasco-Cubero, Carmen; Olive, Alejandro; Vázquez, Julio; Ruiz-Moreno, Oscar; Manero, Javier; Muñoz-Fernández, Santiago; Martinez, Myriam Gandía; Rubio-Romero, Esteban; Toyos-Sáenz de Miera, F Javier; López Longo, Francisco Javier; Nolla, Joan M; Revenga, Marcelino; González-Vela, Carmen; Loricera, Javier; Atienza-Mateo, Belén; Demetrio-Pablo, Rosalía; Hernández, José Luis; González-Gay, Miguel A; Blanco, Ricardo
2018-03-27
To assess efficacy, safety, and cost-effectiveness of adalimumab (ADA) therapy optimization in a large series of patients with uveitis due to Behçet disease (BD) who achieved remission after the use of this biologic agent. Open-label multicenter study of ADA-treated patients with BD uveitis refractory to conventional immunosuppressants. Sixty-five of 74 patients with uveitis due to BD, who achieved remission after a median ADA duration of 6 (range, 3-12) months. ADA was optimized in 23 (35.4%) of them. This biologic agent was maintained at a dose of 40 mg/subcutaneously/2 weeks in the remaining 42 patients. After remission, based on a shared decision between the patient and the treating physician, ADA was optimized. When agreement between patient and physician was reached, optimization was performed by prolonging the ADA dosing interval progressively. Comparison between optimized and nonoptimized patients was performed. Efficacy, safety, and cost-effectiveness in optimized and nonoptimized groups. To determine efficacy, intraocular inflammation (anterior chamber cells, vitritis, and retinal vasculitis), macular thickness, visual acuity, and the sparing effect of glucocorticoids were assessed. No demographic or ocular differences were found at the time of ADA onset between the optimized and the nonoptimized groups. Most ocular outcomes were similar after a mean ± standard deviation follow-up of 34.7±13.3 and 26±21.3 months in the optimized and nonoptimized groups, respectively. However, relevant adverse effects were only seen in the nonoptimized group (lymphoma, pneumonia, severe local reaction at the injection site, and bacteremia by Escherichia coli, 1 each). Moreover, the mean ADA treatment costs were lower in the optimized group than in the nonoptimized group (6101.25 euros/patient/year vs. 12 339.48; P < 0.01). ADA optimization in BD uveitis refractory to conventional therapy is effective, safe, and cost-effective. Copyright © 2018 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Stock optimizing: maximizing reinforcers per session on a variable-interval schedule.
Silberberg, A; Bauman, R; Hursh, S
1993-01-01
In Experiment 1, 2 monkeys earned their daily food ration by pressing a key that delivered food according to a variable-interval 3-min schedule. In Phases 1 and 4, sessions ended after 3 hr. In Phases 2 and 3, sessions ended after a fixed number of responses that reduced food intake and body weights from levels during Phases 1 and 4. Monkeys responded at higher rates and emitted more responses per food delivery when the food earned in a session was reduced. In Experiment 2, monkeys earned their daily food ration by depositing tokens into the response panel. Deposits delivered food according to a variable-interval 3-min schedule. When the token supply was unlimited (Phases 1, 3, and 5), sessions ended after 3 hr. In Phases 2 and 4, sessions ended after 150 tokens were deposited, resulting in a decrease in food intake and body weight. Both monkeys responded at lower rates and emitted fewer responses per food delivery when the food earned in a session was reduced. Experiment 1's results are consistent with a strength account, according to which the phases that reduced body weights increased food's value and therefore increased subjects' response rates. The results of Experiment 2 are consistent with an optimizing strategy, because lowering response rates when food is restricted defends body weight on variable-interval schedules. These contrasting results may be attributed to the discriminability of the contingency between response number and the end of a session being greater in Experiment 2 than in Experiment 1. In consequence, subjects lowered their response rates in order to increase the number of reinforcers per session (stock optimizing). PMID:8454960
Parallel satellite orbital situational problems solver for space missions design and control
NASA Astrophysics Data System (ADS)
Atanassov, Atanas Marinov
2016-11-01
Solving different scientific problems for space applications demands implementation of observations, measurements or realization of active experiments during time intervals in which specific geometric and physical conditions are fulfilled. The solving of situational problems for determination of these time intervals when the satellite instruments work optimally is a very important part of all activities on every stage of preparation and realization of space missions. The elaboration of universal, flexible and robust approach for situation analysis, which is easily portable toward new satellite missions, is significant for reduction of missions' preparation times and costs. Every situation problem could be based on one or more situation conditions. Simultaneously solving different kinds of situation problems based on different number and types of situational conditions, each one of them satisfied on different segments of satellite orbit requires irregular calculations. Three formal approaches are presented. First one is related to situation problems description that allows achieving flexibility in situation problem assembling and presentation in computer memory. The second formal approach is connected with developing of situation problem solver organized as processor that executes specific code for every particular situational condition. The third formal approach is related to solver parallelization utilizing threads and dynamic scheduling based on "pool of threads" abstraction and ensures a good load balance. The developed situation problems solver is intended for incorporation in the frames of multi-physics multi-satellite space mission's design and simulation tools.
Effect of aeration interval on oxygen consumption and GHG emission during pig manure composting.
Zeng, Jianfei; Yin, Hongjie; Shen, Xiuli; Liu, Ning; Ge, Jinyi; Han, Lujia; Huang, Guangqun
2018-02-01
To verify the optimal aeration interval for oxygen supply and consumption and investigate the effect of aeration interval on GHG emission, reactor-scale composting was conducted with different aeration intervals (0, 10, 30 and 50 min). Although O 2 was sufficiently supplied during aeration period, it could be consumed to <10 vol% only when the aeration interval was 50 min, indicating that an aeration interval more than 50 min would be inadvisable. Compared to continuous aeration, reductions of the total CH 4 and N 2 O emissions as well as the total GHG emission equivalent by 22.26-61.36%, 8.24-49.80% and 12.36-53.20%, respectively, was achieved through intermittent aeration. Specifically, both the total CH 4 and N 2 O emissions as well as the total GHG emission equivalent were inversely proportional to the duration of aeration interval (R 2 > 0.902), suggesting that lengthening the duration of aeration interval to some extent could effectively reduce GHG emission. Copyright © 2017 Elsevier Ltd. All rights reserved.
A novel approach based on preference-based index for interval bilevel linear programming problem.
Ren, Aihong; Wang, Yuping; Xue, Xingsi
2017-01-01
This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only through normal variation of interval number and chance-constrained programming. With the consideration of different preferences of different decision makers, the concept of the preference level that the interval objective function is preferred to a target interval is defined based on the preference-based index. Then a preference-based deterministic bilevel programming problem is constructed in terms of the preference level and the order relation [Formula: see text]. Furthermore, the concept of a preference δ -optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.
Not All Prehospital Time is Equal: Influence of Scene Time on Mortality
Brown, Joshua B.; Rosengart, Matthew R.; Forsythe, Raquel M.; Reynolds, Benjamin R.; Gestring, Mark L.; Hallinan, William M.; Peitzman, Andrew B.; Billiar, Timothy R.; Sperry, Jason L.
2016-01-01
Background Trauma is time-sensitive and minimizing prehospital (PH) time is appealing. However, most studies have not linked increasing PH time with worse outcomes, as raw PH times are highly variable. It is unclear whether specific PH time patterns affect outcomes. Our objective was to evaluate the association of PH time interval distribution with mortality. Methods Patients transported by EMS in the Pennsylvania trauma registry 2000-2013 with total prehospital time (TPT)≥20min were included. TPT was divided into three PH time intervals: response, scene, and transport time. The number of minutes in each PH time interval was divided by TPT to determine the relative proportion each interval contributed to TPT. A prolonged interval was defined as any one PH interval contributing ≥50% of TPT. Patients were classified by prolonged PH interval or no prolonged PH interval (all intervals<50% of TPT). Patients were matched for TPT and conditional logistic regression determined the association of mortality with PH time pattern, controlling for confounders. PH interventions were explored as potential mediators, and prehospital triage criteria used identify patients with time-sensitive injuries. Results There were 164,471 patients included. Patients with prolonged scene time had increased odds of mortality (OR 1.21; 95%CI 1.02–1.44, p=0.03). Prolonged response, transport, and no prolonged interval were not associated with mortality. When adjusting for mediators including extrication and PH intubation, prolonged scene time was no longer associated with mortality (OR 1.06; 0.90–1.25, p=0.50). Together these factors mediated 61% of the effect between prolonged scene time and mortality. Mortality remained associated with prolonged scene time in patients with hypotension, penetrating injury, and flail chest. Conclusions Prolonged scene time is associated with increased mortality. PH interventions partially mediate this association. Further study should evaluate whether these interventions drive increased mortality because they prolong scene time or by another mechanism, as reducing scene time may be a target for intervention. Level of Evidence IV, prognostic study PMID:26886000
Ou, Guoliang; Tan, Shukui; Zhou, Min; Lu, Shasha; Tao, Yinghui; Zhang, Zuo; Zhang, Lu; Yan, Danping; Guan, Xingliang; Wu, Gang
2017-12-15
An interval chance-constrained fuzzy land-use allocation (ICCF-LUA) model is proposed in this study to support solving land resource management problem associated with various environmental and ecological constraints at a watershed level. The ICCF-LUA model is based on the ICCF (interval chance-constrained fuzzy) model which is coupled with interval mathematical model, chance-constrained programming model and fuzzy linear programming model and can be used to deal with uncertainties expressed as intervals, probabilities and fuzzy sets. Therefore, the ICCF-LUA model can reflect the tradeoff between decision makers and land stakeholders, the tradeoff between the economical benefits and eco-environmental demands. The ICCF-LUA model has been applied to the land-use allocation of Wujiang watershed, Guizhou Province, China. The results indicate that under highly land suitable conditions, optimized area of cultivated land, forest land, grass land, construction land, water land, unused land and landfill in Wujiang watershed will be [5015, 5648] hm 2 , [7841, 7965] hm 2 , [1980, 2056] hm 2 , [914, 1423] hm 2 , [70, 90] hm 2 , [50, 70] hm 2 and [3.2, 4.3] hm 2 , the corresponding system economic benefit will be between 6831 and 7219 billion yuan. Consequently, the ICCF-LUA model can effectively support optimized land-use allocation problem in various complicated conditions which include uncertainties, risks, economic objective and eco-environmental constraints. Copyright © 2017 Elsevier Ltd. All rights reserved.
Functional near infrared spectroscopy for awake monkey to accelerate neurorehabilitation study
NASA Astrophysics Data System (ADS)
Kawaguchi, Hiroshi; Higo, Noriyuki; Kato, Junpei; Matsuda, Keiji; Yamada, Toru
2017-02-01
Functional near-infrared spectroscopy (fNIRS) is suitable for measuring brain functions during neurorehabilitation because of its portability and less motion restriction. However, it is not known whether neural reconstruction can be observed through changes in cerebral hemodynamics. In this study, we modified an fNIRS system for measuring the motor function of awake monkeys to study cerebral hemodynamics during neurorehabilitation. Computer simulation was performed to determine the optimal fNIRS source-detector interval for monkey motor cortex. Accurate digital phantoms were constructed based on anatomical magnetic resonance images. Light propagation based on the diffusion equation was numerically calculated using the finite element method. The source-detector pair was placed on the scalp above the primary motor cortex. Four different interval values (10, 15, 20, 25 mm) were examined. The results showed that the detected intensity decreased and the partial optical path length in gray matter increased with an increase in the source-detector interval. We found that 15 mm is the optimal interval for the fNIRS measurement of monkey motor cortex. The preliminary measurement was performed on a healthy female macaque monkey using fNIRS equipment and custom-made optodes and optode holder. The optodes were attached above bilateral primary motor cortices. Under the awaking condition, 10 to 20 trials of alternated single-sided hand movements for several seconds with intervals of 10 to 30 s were performed. Increases and decreases in oxy- and deoxyhemoglobin concentration were observed in a localized area in the hemisphere contralateral to the moved forelimb.
Panek, Petr; Prochazka, Ivan
2007-09-01
This article deals with the time interval measurement device, which is based on a surface acoustic wave (SAW) filter as a time interpolator. The operating principle is based on the fact that a transversal SAW filter excited by a short pulse can generate a finite signal with highly suppressed spectra outside a narrow frequency band. If the responses to two excitations are sampled at clock ticks, they can be precisely reconstructed from a finite number of samples and then compared so as to determine the time interval between the two excitations. We have designed and constructed a two-channel time interval measurement device which allows independent timing of two events and evaluation of the time interval between them. The device has been constructed using commercially available components. The experimental results proved the concept. We have assessed the single-shot time interval measurement precision of 1.3 ps rms that corresponds to the time of arrival precision of 0.9 ps rms in each channel. The temperature drift of the measured time interval on temperature is lower than 0.5 ps/K, and the long term stability is better than +/-0.2 ps/h. These are to our knowledge the best values reported for the time interval measurement device. The results are in good agreement with the error budget based on the theoretical analysis.
Interval Timing Accuracy and Scalar Timing in C57BL/6 Mice
Buhusi, Catalin V.; Aziz, Dyana; Winslow, David; Carter, Rickey E.; Swearingen, Joshua E.; Buhusi, Mona C.
2010-01-01
In many species, interval timing behavior is accurate—appropriate estimated durations—and scalar—errors vary linearly with estimated durations. While accuracy has been previously examined, scalar timing has not been yet clearly demonstrated in house mice (Mus musculus), raising concerns about mouse models of human disease. We estimated timing accuracy and precision in C57BL/6 mice, the most used background strain for genetic models of human disease, in a peak-interval procedure with multiple intervals. Both when timing two intervals (Experiment 1) or three intervals (Experiment 2), C57BL/6 mice demonstrated varying degrees of timing accuracy. Importantly, both at individual and group level, their precision varied linearly with the subjective estimated duration. Further evidence for scalar timing was obtained using an intraclass correlation statistic. This is the first report of consistent, reliable scalar timing in a sizable sample of house mice, thus validating the PI procedure as a valuable technique, the intraclass correlation statistic as a powerful test of the scalar property, and the C57BL/6 strain as a suitable background for behavioral investigations of genetically engineered mice modeling disorders of interval timing. PMID:19824777
Kowalik, Grzegorz T; Knight, Daniel S; Steeden, Jennifer A; Tann, Oliver; Odille, Freddy; Atkinson, David; Taylor, Andrew; Muthurangu, Vivek
2015-02-01
To develop a real-time phase contrast MR sequence with high enough temporal resolution to assess cardiac time intervals. The sequence utilized spiral trajectories with an acquisition strategy that allowed a combination of temporal encoding (Unaliasing by fourier-encoding the overlaps using the temporal dimension; UNFOLD) and parallel imaging (Sensitivity encoding; SENSE) to be used (UNFOLDed-SENSE). An in silico experiment was performed to determine the optimum UNFOLD filter. In vitro experiments were carried out to validate the accuracy of time intervals calculation and peak mean velocity quantification. In addition, 15 healthy volunteers were imaged with the new sequence, and cardiac time intervals were compared to reference standard Doppler echocardiography measures. For comparison, in silico, in vitro, and in vivo experiments were also carried out using sliding window reconstructions. The in vitro experiments demonstrated good agreement between real-time spiral UNFOLDed-SENSE phase contrast MR and the reference standard measurements of velocity and time intervals. The protocol was successfully performed in all volunteers. Subsequent measurement of time intervals produced values in keeping with literature values and good agreement with the gold standard echocardiography. Importantly, the proposed UNFOLDed-SENSE sequence outperformed the sliding window reconstructions. Cardiac time intervals can be successfully assessed with UNFOLDed-SENSE real-time spiral phase contrast. Real-time MR assessment of cardiac time intervals may be beneficial in assessment of patients with cardiac conditions such as diastolic dysfunction. © 2014 Wiley Periodicals, Inc.
Ouyang, Qin; Zhao, Jiewen; Pan, Wenxiu; Chen, Quansheng
2016-01-01
A portable and low-cost spectral analytical system was developed and used to monitor real-time process parameters, i.e. total sugar content (TSC), alcohol content (AC) and pH during rice wine fermentation. Various partial least square (PLS) algorithms were implemented to construct models. The performance of a model was evaluated by the correlation coefficient (Rp) and the root mean square error (RMSEP) in the prediction set. Among the models used, the synergy interval PLS (Si-PLS) was found to be superior. The optimal performance by the Si-PLS model for the TSC was Rp = 0.8694, RMSEP = 0.438; the AC was Rp = 0.8097, RMSEP = 0.617; and the pH was Rp = 0.9039, RMSEP = 0.0805. The stability and reliability of the system, as well as the optimal models, were verified using coefficients of variation, most of which were found to be less than 5%. The results suggest this portable system is a promising tool that could be used as an alternative method for rapid monitoring of process parameters during rice wine fermentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
Heller, Melina; Vitali, Luciano; Oliveira, Marcone Augusto Leal; Costa, Ana Carolina O; Micke, Gustavo Amadeu
2011-07-13
The present study aimed to develop a methodology using capillary electrophoresis for the determination of sinapaldehyde, syringaldehyde, coniferaldehyde, and vanillin in whiskey samples. The main objective was to obtain a screening method to differentiate authentic samples from seized samples suspected of being false using the phenolic aldehydes as chemical markers. The optimized background electrolyte was composed of 20 mmol L(-1) sodium tetraborate with 10% MeOH at pH 9.3. The study examined two kinds of sample stacking, using a long-end injection mode: normal sample stacking (NSM) and sample stacking with matrix removal (SWMR). In SWMR, the optimized injection time of the samples was 42 s (SWMR42); at this time, no matrix effects were observed. Values of r were >0.99 for the both methods. The LOD and LOQ were better than 100 and 330 mg mL(-1) for NSM and better than 22 and 73 mg L(-1) for SWMR. The CE-UV reliability in the aldehyde analysis in the real sample was compared statistically with LC-MS/MS methodology, and no significant differences were found, with a 95% confidence interval between the methodologies.
Chiu, Singa Wang; Huang, Chao-Chih; Chiang, Kuo-Wei; Wu, Mei-Fang
2015-01-01
Transnational companies, operating in extremely competitive global markets, always seek to lower different operating costs, such as inventory holding costs in their intra- supply chain system. This paper incorporates a cost reducing product distribution policy into an intra-supply chain system with multiple sales locations and quality assurance studied by [Chiu et al., Expert Syst Appl, 40:2669-2676, (2013)]. Under the proposed cost reducing distribution policy, an added initial delivery of end items is distributed to multiple sales locations to meet their demand during the production unit's uptime and rework time. After rework when the remaining production lot goes through quality assurance, n fixed quantity installments of finished items are then transported to sales locations at a fixed time interval. Mathematical modeling and optimization techniques are used to derive closed-form optimal operating policies for the proposed system. Furthermore, the study demonstrates significant savings in stock holding costs for both the production unit and sales locations. Alternative of outsourcing product delivery task to an external distributor is analyzed to assist managerial decision making in potential outsourcing issues in order to facilitate further reduction in operating costs.
VARIABLE TIME-INTERVAL GENERATOR
Gross, J.E.
1959-10-31
This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.
Opatz, Chad C.; Dinicola, Richard S.
2018-05-21
Operable Unit 2, Area 8, at Naval Base Kitsap, Keyport is the site of a former chrome-plating facility that released metals (primarily chromium and cadmium), chlorinated volatile organic compounds, and petroleum compounds into the local environment. To ensure long-term protectiveness, as stipulated in the Fourth Five-Year Review for the site, Naval Facilities Engineering Command Northwest collaborated with the U.S. Environmental Protection Agency, the Washington State Department of Ecology, and the Suquamish Tribe, to collect data to monitor the contamination left in place and to ensure the site does not pose a risk to human health or the environment. To support these efforts, refined information was needed on the interaction of fresh groundwater with seawater in response to the up-to 13-ft tidal fluctuations at this nearshore site adjacent to Port Orchard Bay. The information was analyzed to meet the primary objective of this investigation, which was to determine the optimal time during the semi-diurnal and the neap-spring tidal cycles to sample groundwater for freshwater contaminants in Area 8 monitoring wells.Groundwater levels and specific conductance in five monitoring wells, along with marine water-levels (tidal levels) in Port Orchard Bay, were monitored every 15 minutes during a 3-week duration to determine how nearshore groundwater responds to tidal forcing. Time series data were collected from October 24, 2017, to November 16, 2017, a period that included neap and spring tides. Vertical profiles of specific conductance were also measured once in the screened interval of each well prior to instrument deployment to determine if a freshwater/saltwater interface was present in the well during that particular time.The vertical profiles of specific conductance were measured only one time during an ebbing tide at approximately the top, middle, and bottom of the saturated thickness within the screened interval of each well. The landward-most well, MW8-8, was completely freshwater, while one of the most seaward wells, MW8-9, was completely saline. A distinct saltwater interface was measured in the three other shallow wells (MW8-11, MW8-12, and MW8-14), with the topmost groundwater occurring fresh underlain by higher conductivity water.Lag times between minimum spring-tide level and minimum groundwater levels in wells ranged from about 2 to 4.5 hours in the less-than 20-ft deep wells screened across the water table, and was about 7 hours for the single 48-ft deep well screened below the water table. Those lag times were surprisingly long considering the wells are all located within 200-ft of the shoreline and the local geology is largely coarse-grained glacial outwash deposits. Various manmade subsurface features, such as slurry walls and backfilled excavations, likely influence and confuse the connectivity between seawater and groundwater.The specific-conductance time-series data showed clear evidence of substantial saltwater intrusion into the screened intervals of most shallow wells. Unexpectedly, the intrusion was associated with the neap part of the tidal cycle around November 13–16, when relatively low barometric pressure and high southerly winds led to the highest high and low tides measured during the monitoring period. The data consistently indicated that the groundwater had the lowest specific conductance (was least mixed with seawater) during the prior neap tides around October 30, the same period when the shallow groundwater levels were lowest. Although the specific conductance response is somewhat different between wells, the data do suggest that it is the heights of the actual high-high and low-low tides, regardless of whether or not they occur during the neap or spring part of the cycle, that allows seawater intrusion into the nearshore aquifer at Area 8.With all the data taken into consideration, the optimal time for sampling the shallow monitoring wells at Area 8 would be centered on a 2–5-hour period following the predicted low-low tide during neap tide, with due consideration of local atmospheric pressure and wind conditions that have the potential to generate tides that can be substantially higher than those predicted from lunar-solar tidal forces. The optimal time for sampling the deeper monitoring wells at Area 8 would be during the 6–8-hour period following a predicted low-low tide, also during the neap tide part of the tidal cycle. The specific time window to sample each well following a low tide can be found in table 5. Those periods are when groundwater in the wells is most fresh and least diluted by seawater intrusion. In addition to timing, consideration should be given to collecting undisturbed samples from the top of the screened interval (or top of the water table if below the top of the interval) to best characterize contaminant concentrations in freshwater. A downhole conductivity probe could be used to identify the saltwater interface, above which would be the ideal depth for sampling.
Robust guaranteed-cost adaptive quantum phase estimation
NASA Astrophysics Data System (ADS)
Roy, Shibdas; Berry, Dominic W.; Petersen, Ian R.; Huntington, Elanor H.
2017-05-01
Quantum parameter estimation plays a key role in many fields like quantum computation, communication, and metrology. Optimal estimation allows one to achieve the most precise parameter estimates, but requires accurate knowledge of the model. Any inevitable uncertainty in the model parameters may heavily degrade the quality of the estimate. It is therefore desired to make the estimation process robust to such uncertainties. Robust estimation was previously studied for a varying phase, where the goal was to estimate the phase at some time in the past, using the measurement results from both before and after that time within a fixed time interval up to current time. Here, we consider a robust guaranteed-cost filter yielding robust estimates of a varying phase in real time, where the current phase is estimated using only past measurements. Our filter minimizes the largest (worst-case) variance in the allowable range of the uncertain model parameter(s) and this determines its guaranteed cost. It outperforms in the worst case the optimal Kalman filter designed for the model with no uncertainty, which corresponds to the center of the possible range of the uncertain parameter(s). Moreover, unlike the Kalman filter, our filter in the worst case always performs better than the best achievable variance for heterodyne measurements, which we consider as the tolerable threshold for our system. Furthermore, we consider effective quantum efficiency and effective noise power, and show that our filter provides the best results by these measures in the worst case.
NASA Technical Reports Server (NTRS)
Greenberg, Albert G.; Lubachevsky, Boris D.; Nicol, David M.; Wright, Paul E.
1994-01-01
Fast, efficient parallel algorithms are presented for discrete event simulations of dynamic channel assignment schemes for wireless cellular communication networks. The driving events are call arrivals and departures, in continuous time, to cells geographically distributed across the service area. A dynamic channel assignment scheme decides which call arrivals to accept, and which channels to allocate to the accepted calls, attempting to minimize call blocking while ensuring co-channel interference is tolerably low. Specifically, the scheme ensures that the same channel is used concurrently at different cells only if the pairwise distances between those cells are sufficiently large. Much of the complexity of the system comes from ensuring this separation. The network is modeled as a system of interacting continuous time automata, each corresponding to a cell. To simulate the model, conservative methods are used; i.e., methods in which no errors occur in the course of the simulation and so no rollback or relaxation is needed. Implemented on a 16K processor MasPar MP-1, an elegant and simple technique provides speedups of about 15 times over an optimized serial simulation running on a high speed workstation. A drawback of this technique, typical of conservative methods, is that processor utilization is rather low. To overcome this, new methods were developed that exploit slackness in event dependencies over short intervals of time, thereby raising the utilization to above 50 percent and the speedup over the optimized serial code to about 120 times.
Parker, H L; Tucker, E; Blackshaw, E; Hoad, C L; Marciani, L; Perkins, A; Menne, D; Fox, M
2017-11-01
Current investigations of stomach function are based on small test meals that do not reliably induce symptoms and analysis techniques that rarely detect clinically relevant dysfunction. This study presents the reference intervals of the modular "Nottingham test meal" (NTM) for assessment of gastric function by gamma scintigraphy (GSc) in a representative population of healthy volunteers (HVs) stratified for age and sex. The NTM comprises 400 mL liquid nutrient (0.75 kcal/mL) and an optional solid component (12 solid agar-beads (0 kcal). Filling and dyspeptic sensations were documented by 100 mm visual analogue scale (VAS). Gamma scintigraphy parameters that describe early and late phase Gastric emptying (GE) were calculated from validated models. Gastric emptying (GE) of the liquid component was measured in 73 HVs (male 34; aged 45±20). The NTM produced normal postprandial fullness (VAS ≥30 in 41/74 subjects). Dyspeptic symptoms were rare (VAS ≥30 in 2/74 subjects). Gastric emptying half-time with the Liquid- and Solid-component -NTM was median 44 (95% reference interval 28-78) minutes and 162 (144-193) minutes, respectively. Gastric accommodation was assessed by the ratio of the liquid-NTM retained in the proximal:total stomach and by Early phase emptying assessed by gastric volume after completing the meal (GCV0). No consistent effect of anthropometric measures on GE parameters was present. Reference intervals are presented for GSc measurements of gastric motor and sensory function assessed by the NTM. Studies involving patients are required to determine whether the reference interval range offers optimal diagnostic sensitivity and specificity. © 2017 The Authors. Neurogastroenterology & Motility Published by John Wiley & Sons Ltd.
Research on Taxiway Path Optimization Based on Conflict Detection
Zhou, Hang; Jiang, Xinxin
2015-01-01
Taxiway path planning is one of the effective measures to make full use of the airport resources, and the optimized paths can ensure the safety of the aircraft during the sliding process. In this paper, the taxiway path planning based on conflict detection is considered. Specific steps are shown as follows: firstly, make an improvement on A * algorithm, the conflict detection strategy is added to search for the shortest and safe path in the static taxiway network. Then, according to the sliding speed of aircraft, a time table for each node is determined and the safety interval is treated as the constraint to judge whether there is a conflict or not. The intelligent initial path planning model is established based on the results. Finally, make an example in an airport simulation environment, detect and relieve the conflict to ensure the safety. The results indicate that the model established in this paper is effective and feasible. Meanwhile, make comparison with the improved A*algorithm and other intelligent algorithms, conclude that the improved A*algorithm has great advantages. It could not only optimize taxiway path, but also ensure the safety of the sliding process and improve the operational efficiency. PMID:26226485
Research on Taxiway Path Optimization Based on Conflict Detection.
Zhou, Hang; Jiang, Xinxin
2015-01-01
Taxiway path planning is one of the effective measures to make full use of the airport resources, and the optimized paths can ensure the safety of the aircraft during the sliding process. In this paper, the taxiway path planning based on conflict detection is considered. Specific steps are shown as follows: firstly, make an improvement on A * algorithm, the conflict detection strategy is added to search for the shortest and safe path in the static taxiway network. Then, according to the sliding speed of aircraft, a time table for each node is determined and the safety interval is treated as the constraint to judge whether there is a conflict or not. The intelligent initial path planning model is established based on the results. Finally, make an example in an airport simulation environment, detect and relieve the conflict to ensure the safety. The results indicate that the model established in this paper is effective and feasible. Meanwhile, make comparison with the improved A*algorithm and other intelligent algorithms, conclude that the improved A*algorithm has great advantages. It could not only optimize taxiway path, but also ensure the safety of the sliding process and improve the operational efficiency.
Krall, Scott P; Cornelius, Angela P; Addison, J Bruce
2014-03-01
To analyze the correlation between the many different emergency department (ED) treatment metric intervals and determine if the metrics directly impacted by the physician correlate to the "door to room" interval in an ED (interval determined by ED bed availability). Our null hypothesis was that the cause of the variation in delay to receiving a room was multifactorial and does not correlate to any one metric interval. We collected daily interval averages from the ED information system, Meditech©. Patient flow metrics were collected on a 24-hour basis. We analyzed the relationship between the time intervals that make up an ED visit and the "arrival to room" interval using simple correlation (Pearson Correlation coefficients). Summary statistics of industry standard metrics were also done by dividing the intervals into 2 groups, based on the average ED length of stay (LOS) from the National Hospital Ambulatory Medical Care Survey: 2008 Emergency Department Summary. Simple correlation analysis showed that the doctor-to-discharge time interval had no correlation to the interval of "door to room (waiting room time)", correlation coefficient (CC) (CC=0.000, p=0.96). "Room to doctor" had a low correlation to "door to room" CC=0.143, while "decision to admitted patients departing the ED time" had a moderate correlation of 0.29 (p <0.001). "New arrivals" (daily patient census) had a strong correlation to longer "door to room" times, 0.657, p<0.001. The "door to discharge" times had a very strong correlation CC=0.804 (p<0.001), to the extended "door to room" time. Physician-dependent intervals had minimal correlation to the variation in arrival to room time. The "door to room" interval was a significant component to the variation in "door to discharge" i.e. LOS. The hospital-influenced "admit decision to hospital bed" i.e. hospital inpatient capacity, interval had a correlation to delayed "door to room" time. The other major factor affecting department bed availability was the "total patients per day." The correlation to the increasing "door to room" time also reflects the effect of availability of ED resources (beds) on the patient evaluation time. The time that it took for a patient to receive a room appeared more dependent on the system resources, for example, beds in the ED, as well as in the hospital, than on the physician.
Huang, Naiyan; Cheng, Gang; Li, Xiaosong; Gu, Ying; Liu, Fanguang; Zhong, Qiuhai; Wang, Ying; Zen, Jin; Qiu, Haixia; Chen, Hongxia
2008-06-01
We established mathematical models of photodynamic therapy (PDT) on port wine stains (PWS) to observe the effect of drug-light-interval (DLI) and optimize light dose. The mathematical simulations included determining (1) the distribution of laser light by Monte Carlo model, (2) the change of photosensitizer concentration in PWS vessels by a pharmacokinetics equation, (3) the change of photosensitizer distribution in tissue outside the vessels by a diffuse equation and photobleaching equation, and (4) the change of tissue oxygen concentration by the Fick's law with a consideration of the oxygen consumption during PDT. The concentration of singlet oxygen in the tissue model was calculated by the finite difference method. To validate those models, a PWS lesion of the same patient was divided into two areas and subjected to different DLIs and treated with different energy density. The color of lesion was assessed 8-12 weeks later. The simulation indicated the singlet oxygen concentration of the second treatment area (DLI=40 min) was lower than that of the first treatment area (DLI=0 min). However, it would be increased to a level similar to that of the first treatment area if the light irradiation time of the second treatment area was prolonged from 40 min to 55 min. Clinical results were consistent with the results predicted by the mathematical models. The mathematical models established in this study are helpful to optimize clinical protocol.
Intact interval timing in circadian CLOCK mutants.
Cordes, Sara; Gallistel, C R
2008-08-28
While progress has been made in determining the molecular basis for the circadian clock, the mechanism by which mammalian brains time intervals measured in seconds to minutes remains a mystery. An obvious question is whether the interval-timing mechanism shares molecular machinery with the circadian timing mechanism. In the current study, we trained circadian CLOCK +/- and -/- mutant male mice in a peak-interval procedure with 10 and 20-s criteria. The mutant mice were more active than their wild-type littermates, but there were no reliable deficits in the accuracy or precision of their timing as compared with wild-type littermates. This suggests that expression of the CLOCK protein is not necessary for normal interval timing.
Effectiveness of breast cancer screening policies in countries with medium-low incidence rates.
Kong, Qingxia; Mondschein, Susana; Pereira, Ana
2018-02-05
Chile has lower breast cancer incidence rates compared to those in developed countries. Our public health system aims to perform 10 biennial screening mammograms in the age group of 50 to 69 years by 2020. Using a dynamic programming model, we have found the optimal ages to perform 10 screening mammograms that lead to the lowest lifetime death rate and we have evaluated a set of fixed inter-screening interval policies. The optimal ages for the 10 mammograms are 43, 47, 51, 54, 57, 61, 65, 68, 72, and 76 years, and the most effective fixed inter-screening is every four years after the 40 years. Both policies respectively reduce lifetime death rate in 6.4% and 5.7% and the cost of saving one life in 17% and 9.3% compared to the 2020 Chilean policy. Our findings show that two-year inter-screening interval policies are less effective in countries with lower breast cancer incidence; thus we recommend screening policies with a wider age range and larger inter-screening intervals for Chile.