Rangabhashiyam, S; Nandagopal, M S Giri; Nakkeeran, E; Selvaraju, N
2016-07-01
Packed bed column studies were carried out to evaluate the performance of chemically modified adsorbents for the sequestration of hexavalent chromium from synthetic and electroplating industrial effluent. The effects of parameters such as bed height (3-9 cm), inlet flow rate (5-15 mL/min), and influent Cr(VI) concentration (50-200 mg/L) on the percentage removal of Cr(VI) and the adsorption capacity of the adsorbents in a packed bed column were investigated. The breakthrough time increased with increasing bed height and decreased with increasing inlet flow rate and influent Cr(VI) concentration. The Thomas, Adams-Bohart, Yoon-Nelson, and bed depth service time (BDST) column models were fitted to the experimental data. The Yoon-Nelson and BDST models showed good agreement with the experimental data under all the studied parameter conditions. The results of the present study indicate that chemically modified Swietenia mahagoni shell can be used as an adsorbent for the removal of Cr(VI) from industrial wastewater in a packed bed column.
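A minimal sketch of how breakthrough data of this kind are commonly fitted to the Thomas model by nonlinear least squares; all data and operating values (C0, Q, m) below are invented placeholders, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

C0, Q, m = 100.0, 0.010, 5.0   # influent mg/L, flow L/min, bed mass g (assumed)

def thomas(t, k_th, q0):
    """Thomas model: Ct/C0 = 1/(1 + exp(k_th*q0*m/Q - k_th*C0*t))."""
    return 1.0 / (1.0 + np.exp(k_th * q0 * m / Q - k_th * C0 * t))

t = np.linspace(0.0, 1200.0, 40)                            # min
rng = np.random.default_rng(0)
y = thomas(t, 2e-4, 120.0) + rng.normal(0, 0.01, t.size)    # synthetic Ct/C0

(k_th, q0), _ = curve_fit(thomas, t, y, p0=[1e-4, 100.0])
print(k_th, q0)   # rate constant L/(mg*min), bed capacity mg/g
```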
Huiberts, Laura M; Smolders, Karin C H J; de Kort, Yvonne A W
2016-10-01
This study investigated diurnal non-image forming (NIF) effects of illuminance level on physiological arousal in parallel to NIF effects on vigilance and working memory performance. We employed a counterbalanced within-subjects design in which thirty-nine participants (mean age=21.2; SD=2.1; 11 male) completed three 90-min sessions (165 vs. 600 vs. 1700 lx at eye level) either in the morning (N=18) or afternoon (N=21). During each session, participants completed four measurement blocks (incl. one baseline block) each consisting of a 10-min Psychomotor Vigilance Task (PVT) and a Backwards Digit-Span Task (BDST) including easy trials (4-6 digits) and difficult trials (7-8 digits). Heart rate (HR), skin conductance level (SCL) and systolic blood pressure (SBP) were measured continuously. The results revealed significant improvements in performance on the BDST difficult trials under 1700 lx vs. 165 lx (p=0.01), while illuminance level did not affect performance on the PVT and BDST easy trials. Illuminance level impacted HR and SCL, but not SBP. In the afternoon sessions, HR was significantly higher under 1700 lx vs. 165 lx during PVT performance (p=0.05), while during BDST performance, HR was only slightly higher under 600 vs. 165 lx (p=0.06). SCL was significantly higher under 1700 lx vs. 165 lx during performance on BDST easy trials (p=0.02) and showed similar, but nonsignificant trends during the PVT and BDST difficult trials. Although both physiology and performance were affected by illuminance level, no consistent pattern emerged with respect to parallel changes in physiology and performance. Rather, physiology and performance seemed to be affected independently, via unique pathways.
Uzdevenes, Chad G; Gao, Chi; Sandhu, Amandeep K; Yagiz, Yavuz; Gu, Liwei
2018-03-24
Muscadine grape pomace, a by-product of juicing and wine-making, contains significant amounts of anthocyanin 3,5-diglucosides, known to be beneficial to human health. The objective of this research was to use mathematical modeling to investigate the adsorption/desorption characteristics of these anthocyanins from muscadine grape pomace on Amberlite FPX66 resin in a fixed bed column. Anthocyanins were extracted using hot water and ultrasound, and the extracts were loaded onto a resin column at five bed depths (5, 6, 8, 10 and 12 cm) using three flow rates (4, 6 and 8 mL min(-1)). It was found that adsorption on the column fitted the bed depth service time (BDST) model and the empty bed residence time (EBRT) model. Desorption was achieved by eluting the column with ethanol at four concentrations (25, 40, 55 and 70% v/v) and could be described with an empirical sigmoid model. The breakthrough curves of anthocyanins fitted the BDST model for all three flow rates, with R2 values of 0.983, 0.992 and 0.984, respectively. The EBRT model was successfully employed to find the operating lines, which allow for column scale-up while still achieving results similar to those found in a laboratory operation. Desorption with 40% (v/v) ethanol achieved the highest anthocyanin recovery rate of 79.6%. The mathematical models established in this study can be used in designing a pilot/industrial-scale column for the separation and concentration of anthocyanins from muscadine juice pomace.
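A hedged sketch of the BDST calculation underlying such scale-up work: service times at a fixed breakthrough fraction are regressed linearly on bed depth, and the slope and intercept yield the bed capacity and rate constant. Every number below (service times, concentrations, velocity) is an assumption for illustration.

```python
import numpy as np

# Hypothetical service times (min) at several bed depths (cm), read off
# the breakthrough curves at a fixed breakthrough fraction Cb/C0.
Z = np.array([5.0, 6.0, 8.0, 10.0, 12.0])
t = np.array([38.0, 55.0, 90.0, 124.0, 160.0])

C0, Cb = 500.0, 50.0   # influent and breakthrough concentrations (mg/L), assumed
u = 1.2                # linear flow velocity (cm/min), assumed

# BDST: t = (N0/(C0*u))*Z - (1/(k*C0))*ln(C0/Cb - 1)
slope, intercept = np.polyfit(Z, t, 1)
N0 = slope * C0 * u                             # bed capacity (mg per L of bed)
k = -np.log(C0 / Cb - 1.0) / (intercept * C0)   # rate constant (L/(mg*min))
Z0 = -intercept / slope                         # critical bed depth (cm)
print(N0, k, Z0)
```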
Chen, Dan; Zhou, Jun; Wang, Hongyu; Yang, Kai
2018-01-01
There is an increasing need to explore effective and clean approaches for hazardous contaminant removal from wastewaters. In this work, a novel bead adsorbent, polyvinyl alcohol-graphene oxide (PVA-GO) macroporous hydrogel bead, was prepared as filter media for the removal of p-nitrophenol (PNP), the dye methylene blue (MB), and the heavy metal U(VI) from aqueous solution. Batch and fixed-bed column experiments were carried out to evaluate the adsorption capacities of PNP, MB, and U(VI) on this bead. In batch experiments, the maximum adsorption capacities of PNP, MB, and U(VI) reached 347.87, 422.90, and 327.55 mg/g, respectively. In the fixed-bed column experiments, the adsorption capacities of PNP, MB, and U(VI) decreased as the initial concentration increased from 100 to 400 mg/L, and also decreased with increasing flow rate. In addition, the maximum adsorption capacity of PNP decreased as pH increased from 3 to 9, while MB and U(VI) showed the opposite tendency. Furthermore, the bed depth service time (BDST) model showed good linear relationships for the adsorption of all three contaminants in this fixed-bed column, indicating that the BDST model can effectively evaluate and optimize the fixed-bed adsorption of hazardous contaminants from wastewaters on PVA-GO macroporous hydrogel beads.
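For fixed-bed runs like these, the column adsorption capacity is usually obtained by integrating the area between the influent concentration and the breakthrough curve; a minimal sketch with invented data:

```python
import numpy as np
from scipy.integrate import trapezoid

t  = np.array([0, 30, 60, 90, 120, 150, 180], dtype=float)   # min
Ct = np.array([0, 2, 10, 45, 80, 95, 99], dtype=float)       # effluent mg/L
C0, Q, m = 100.0, 10.0, 2.0   # influent mg/L, flow mL/min, bed mass g (assumed)

# Mass adsorbed = Q * integral of (C0 - Ct) dt; /1000 converts mL*mg/L to mg
q_ads = trapezoid(C0 - Ct, t) * Q / 1000.0 / m
print(q_ads, "mg adsorbed per g of bed")
```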
Column study of chromium(VI) adsorption from electroplating industry by coconut coir pith.
Suksabye, Parinda; Thiravetyan, Paitip; Nakbanpote, Woranan
2008-12-15
The removal of Cr(VI) from electroplating wastewater by coir pith was investigated in a fixed-bed column. The experiments were conducted to study the effect of important parameters such as bed depth (40-60 cm) and flow rate (10-30 mL min(-1)). At Ct/C0 = 0.05, the breakthrough volume increased as the flow rate decreased or the bed depth increased, due to an increase in empty bed contact time (EBCT). The bed depth service time (BDST) model fit the experimental data well in the initial region of the breakthrough curve, while simulation of the whole curve by non-linear regression was effective using the Thomas model. The adsorption capacity estimated from the BDST model decreased with increasing flow rate: 16.40 mg cm(-3) (137.91 mg Cr(VI) g(-1) coir pith) at a flow rate of 10 mL min(-1), and 14.05 mg cm(-3) (118.20 mg Cr(VI) g(-1) coir pith) at 30 mL min(-1). At the highest bed depth (60 cm) and the lowest flow rate (10 mL min(-1)), the maximum adsorption reached 201.47 mg Cr(VI) g(-1) adsorbent according to the Thomas model. After the adsorption studies, the column was regenerated by eluting chromium with 2 M HNO3. The desorption of Cr(III) in each of three cycles was about 67-70%. Desorption did not reach 100% in any cycle because Cr(V), formed through the reduction of Cr(VI), remained in the coir pith, possibly bound to glucose units in the cellulose fraction; this Cr(V) complex cannot be desorbed into solution. Evidence of a Cr(V) signal was observed by electron spin resonance (ESR) in coir pith and in alpha-cellulose and holocellulose extracted from coir pith.
Han, Xiuli; Wang, Wei; Ma, Xiaojian
2011-01-01
The adsorption potential of lotus leaf to remove methylene blue (MB) from aqueous solution was investigated in batch and fixed-bed column experiments. The Langmuir, Freundlich, Temkin and Koble-Corrigan isotherm models were employed to describe the adsorption behavior. The analysis indicated that the equilibrium data were represented almost perfectly by the Temkin isotherm, and the Langmuir saturation adsorption capacity of lotus leaf was found to be 239.6 mg g(-1) at 303 K. In the fixed-bed column experiments, the effects of flow rate, influent concentration and bed height on the breakthrough characteristics of adsorption were discussed. The Thomas and bed depth service time (BDST) models were applied to the column experimental data to determine the characteristic parameters of column adsorption. Both models were found suitable for describing the dynamic behavior of MB adsorption onto the lotus leaf powder column.
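A minimal sketch of fitting the Langmuir isotherm mentioned here by nonlinear least squares; the equilibrium data are synthetic, chosen only to echo the magnitude of the reported 239.6 mg g(-1) capacity.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qm, KL):
    """Langmuir isotherm: qe = qm*KL*Ce / (1 + KL*Ce)."""
    return qm * KL * Ce / (1.0 + KL * Ce)

Ce = np.array([2.0, 5.0, 10.0, 25.0, 60.0, 120.0])        # equilibrium mg/L
qe = np.array([55.0, 105.0, 150.0, 200.0, 228.0, 236.0])  # uptake mg/g

(qm, KL), _ = curve_fit(langmuir, Ce, qe, p0=[240.0, 0.05])
print(qm, KL)   # saturation capacity (mg/g) and affinity constant (L/mg)
```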
García-Sánchez, J J; Solache-Ríos, M; Martínez-Miranda, V; Solís Morelos, C
2013-10-01
The purpose of this work was to evaluate the potential of aluminum-modified iron oxides, in continuous flow, for the removal of fluoride ions from aqueous solutions and drinking water. The breakthrough curves obtained for fluoride ion adsorption from aqueous solutions and drinking water were fitted to the Thomas, Bohart-Adams, and bed depth service time (BDST) models. The adsorption capacities at breakthrough, the Thomas model constant, the kinetic constant and the saturation concentration were determined. The results show that, in general, the adsorption efficiency decreases as the bed depth increases, a behavior indicating that adsorption is controlled by mass transport resistance. The adsorption capacity for fluoride ions by CP-Al is higher for aqueous fluoride solutions than for drinking water.
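A sketch of the linearized Bohart-Adams fit over the initial portion of a breakthrough curve (commonly taken as Ct/C0 below roughly 0.15); the data and operating values are assumptions, not values from the study.

```python
import numpy as np

# Early-breakthrough data (Ct/C0 < 0.15), hypothetical values
t     = np.array([10., 20., 30., 40., 50.])        # min
ratio = np.array([0.01, 0.02, 0.045, 0.08, 0.14])  # Ct/C0

C0 = 5.0    # influent fluoride (mg/L), assumed
Z  = 10.0   # bed depth (cm), assumed
U0 = 1.5    # linear flow velocity (cm/min), assumed

# Bohart-Adams: ln(Ct/C0) = k_AB*C0*t - k_AB*N0*Z/U0, linear in t
slope, intercept = np.polyfit(t, np.log(ratio), 1)
k_AB = slope / C0                     # rate constant L/(mg*min)
N0   = -intercept * U0 / (k_AB * Z)   # saturation concentration (mg/L)
print(k_AB, N0)
```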
Microcolumn studies of dye adsorption onto manganese oxides modified diatomite.
Al-Ghouti, M A; Khraisheh, M A M; Ahmad, M N; Allen, S J
2007-07-19
The small test column (microcolumn) method described here cannot fully replace the analysis of large columns. The procedure is, however, suitable for speeding up the determination of the adsorption parameters of a dye onto an adsorbent, and for speeding up the initial screening of a large adsorbent collection, which can be tedious if several adsorbents and adsorption conditions must be tested. The performance of methylene blue (MB), a basic dye, Cibacron reactive black (RB) and Cibacron reactive yellow (RY) was predicted in this way, and the influence of initial dye concentration and other adsorption conditions on the adsorption behaviour was demonstrated. On the basis of the experimental results, it can be concluded that the adsorption of RY onto manganese oxides modified diatomite (MOMD) exhibited a characteristic "S" shape and can be simulated effectively by the Thomas model. The adsorption capacity increased as the initial dye concentration increased. The increase in dye uptake capacity with increasing adsorbent mass in the column was due to the increased adsorbent surface area, which provided more binding sites for adsorption. High flow rates reduced the time that RY in solution was in contact with the MOMD, allowing less time for adsorption to occur and leading to an early breakthrough of RY. A rapid decrease in column adsorption capacity was observed with increasing particle size, with an average 56% reduction in capacity resulting from an increase in particle size from 106-250 μm to 250-500 μm. The experimental data correlated well with data calculated using the Thomas equation and the bed depth service time (BDST) equation. It may therefore be concluded that the Thomas and BDST equations can accurately predict the effects of variation in dye concentration, adsorbent mass, flow rate and particle size. In general, the adsorption capacities obtained in a batch system are maximum values and are considerably higher than those obtained in a fixed bed.
Shanmugaprakash, M; Sivakumar, V
2015-12-01
The present work analyzes the potential of defatted pongamia oil cake (DPOC) for the biosorption of Zn(II) ions from aqueous solutions in both batch and column modes. Batch experiments were conducted to evaluate the optimal pH and the effects of adsorbent dosage, initial Zn(II) ion concentration and contact time. The biosorption equilibrium and kinetics data for Zn(II) ions onto DPOC were studied in detail using several models; the Freundlich isotherm and the pseudo-second-order kinetic model explained the equilibrium and kinetic data best. The calculated thermodynamic parameters showed that the biosorption of Zn(II) ions was exothermic and spontaneous in nature. Batch desorption studies showed that maximum Zn(II) recovery occurred using 0.1 M EDTA. The bed depth service time (BDST) and Thomas models were successfully employed to evaluate the model parameters in column mode. The results indicate that DPOC can be applied as an effective and eco-friendly biosorbent for the removal of Zn(II) ions from polluted wastewater.
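A hedged sketch of the linearized pseudo-second-order fit referred to here, t/qt = 1/(k2*qe^2) + t/qe; the uptake data are invented.

```python
import numpy as np

t  = np.array([5., 10., 20., 40., 60., 90., 120.])        # contact time, min
qt = np.array([8.1, 13.0, 18.5, 23.0, 25.0, 26.3, 26.9])  # uptake, mg/g

# Linear form: t/qt = 1/(k2*qe**2) + t/qe
slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope            # equilibrium uptake (mg/g)
k2 = slope**2 / intercept   # rate constant (g/(mg*min))
print(qe, k2)
```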
Wu, Fei; Zhang, Kai-Qiang; Bai, Bo; Wang, Hong-Lun; Suo, You-Rui
2015-02-01
In this work, the adsorption potential of TiO2@yeast composite microspheres to remove Fluorescent Whitening Agent VBL (FWA-VBL) from aqueous solution was investigated using a fixed-bed adsorption column. The effects of pH (2.0-8.0), bed height (1-3 cm), inlet concentration (20-80 mg L(-1)) and feed flow rate (5-11 mL min(-1)) on the breakthrough characteristics of the adsorption system were determined. The results showed that the highest bed capacity of 223.80 mg g(-1) was obtained at pH 2.0, 80 mg L(-1) inlet dye concentration, 1.0 cm bed height and 5 mL min(-1) flow rate. The adsorption data were fitted to three well-established fixed-bed adsorption models, namely the BDST, Thomas and Yoon-Nelson models. The data fitted all three models well, with correlation coefficients R2 > 0.980 under the different conditions. The TiO2@yeast composite microspheres showed good regeneration ability and could be reused four times.
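A minimal sketch of the linearized Yoon-Nelson fit, ln(Ct/(C0-Ct)) = k_YN*t - tau*k_YN, where tau is the time to 50% breakthrough; the breakthrough fractions below are synthetic.

```python
import numpy as np

t     = np.array([60., 120., 180., 240., 300.])   # min
ratio = np.array([0.05, 0.20, 0.50, 0.80, 0.95])  # Ct/C0

y = np.log(ratio / (1.0 - ratio))
k_YN, intercept = np.polyfit(t, y, 1)
tau = -intercept / k_YN   # time for 50% breakthrough (min)
print(k_YN, tau)          # here tau recovers 180 min by construction
```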
NASA Astrophysics Data System (ADS)
Koner, S.; Adak, A.
2012-09-01
A fixed bed column study was conducted for the removal of 2,4-dichlorophenoxyacetic acid (2,4-D), a widely used herbicide, from synthetically prepared wastewater using surfactant-modified silica gel waste (SMSGW) as the adsorbing media. The adsorbing media was prepared by treating silica gel waste (SGW) with a cationic surfactant. Removal was due to adsolubilization of 2,4-D molecules within the admicelles formed on the SGW surface. Columns of 2.5 cm diameter with bed heights of 20, 30 and 40 cm were used in the study. Column design parameters such as the depth of the exchange zone, the time required for the exchange zone to move its own height, the adsorption rate constant and the adsorption capacity constant were calculated using the BDST model. SMSGW was found to be a very efficient media for the removal of 2,4-D from wastewater. The column design parameters were modeled for different field conditions to predict the duration of column runs for practical application.
Executive Function: Comparing Bilingual and Monolingual Iranian University Students
ERIC Educational Resources Information Center
Kazemeini, Toktam; Fadardi, Javad Salehi
2016-01-01
The study aimed to examine whether Kurdish-Persian early Bilingual university students (EBL) and Persian Monolingual university students (ML) differ on tasks of executive function (EF). Thirty male EBL and 30 male ML students from Ferdowsi University of Mashhad completed a Persian Stroop Color-Word task (SCWT), Backward Digit Span Test (BDST),…
Saha, Papita Das; Chakraborty, Sagnik; Chowdhury, Shamik
2012-04-01
In this study, batch and fixed-bed column experiments were performed to investigate the biosorption potential of Artocarpus heterophyllus (jackfruit) leaf powder (JLP) to remove crystal violet (CV) from aqueous solutions. Batch biosorption studies were carried out as a function of solution pH, contact time, initial dye concentration and temperature. The biosorption equilibrium data showed excellent fit to the Langmuir isotherm model, with a maximum monolayer biosorption capacity of 43.39 mg g(-1) at pH 7.0, initial dye concentration = 50 mg L(-1), temperature = 293 K and contact time = 120 min. According to the Dubinin-Radushkevich (D-R) isotherm model, the biosorption of CV by JLP was chemisorption. The biosorption kinetics followed the pseudo-second-order kinetic model. Thermodynamic analysis revealed that biosorption of CV from aqueous solution by JLP was a spontaneous and exothermic process. To ascertain the practical applicability of the biosorbent, fixed-bed column studies were also performed. The breakthrough time increased with increasing bed height and decreased with increasing flow rate. The Thomas model as well as the BDST model showed good agreement with the experimental results for all the process parameters studied. It can be concluded that JLP is a promising biosorbent for removal of CV from aqueous solutions.
Unuabonah, Emmanuel I; Olu-Owolabi, Bamidele I; Fasuyi, Esther I; Adebowale, Kayode O
2010-07-15
Kaolinite clay was treated with polyvinyl alcohol to produce a novel water-stable composite called a polymer-clay composite adsorbent. The modified adsorbent was found to have a maximum adsorption capacity of 20,400 ± 13 mg/L (1236 mg/g) and a maximum adsorption rate constant of approximately 7.45×10(-3) ± 0.0002 L/(min mg) at 50% breakthrough. Increasing bed height increased both the breakpoint and the exhaustion point of the composite adsorbent. The time for the movement of the mass transfer zone (delta) down the column was found to increase with increasing bed height; the presence of preadsorbed electrolyte and regeneration reduced this time. Increased initial Cd(2+) concentration, the presence of preadsorbed electrolyte, and regeneration of the composite adsorbent reduced the volume of effluent treated. Premodification of the composite adsorbent with Ca- and Na-electrolytes reduced the rate of Cd(2+) adsorption onto the composite and lowered the breakthrough time of the adsorbent. Regeneration and re-adsorption studies showed a decrease in the bed volume treated at both the breakpoint and the exhaustion point of the regenerated bed. The experimental data showed stronger fits to the bed depth service time (BDST) model than to the Thomas model.
Nethaji, S; Sivasamy, A; Kumar, R Vimal; Mandal, A B
2013-06-01
Char was obtained from lotus seed biomass by a simple single-step acid treatment process. It was used as an adsorbent for the removal of malachite green (MG) dye from simulated dye bath effluent. The adsorbent was characterized for its surface morphology, surface functionalities, and zero point charge. Batch studies were carried out by varying parameters such as initial aqueous pH, adsorbent dosage, adsorbent particle size, and initial adsorbate concentration. The Langmuir and Freundlich isotherms were used to test the isotherm data, and the Freundlich isotherm fitted the data best. Thermodynamic studies were carried out and the thermodynamic parameters ∆G, ∆H, and ∆S were evaluated. Adsorption kinetics were studied and the data were tested with the pseudo-first-order, pseudo-second-order, and intraparticle diffusion models. Adsorption of MG was not governed solely by intraparticle diffusion; film diffusion also played a major role. Continuous experiments were also conducted using a microcolumn; the spent adsorbent was regenerated using ethanol and reused for three cycles in the column to determine the reusability of the regenerated adsorbent. The column data were modeled with the Adams-Bohart, bed depth service time (BDST), and Yoon-Nelson models for all three cycles.
Adsorption of Methyl Tertiary Butyl Ether on Granular Zeolites: Batch and Column Studies
Abu-Lail, Laila; Bergendahl, John A.; Thompson, Robert W.
2010-01-01
Methyl tertiary butyl ether (MTBE) has been shown to be readily removed from water with powdered zeolites, but the passage of water through fixed beds of very small powdered zeolites produces high friction losses not encountered in flow through larger-sized granular materials. In this study, equilibrium and kinetic adsorption of MTBE onto granular zeolites, a coconut shell granular activated carbon (CS-1240), and a commercial carbon adsorbent (CCA) sample was evaluated. In addition, the effect of natural organic matter (NOM) on MTBE adsorption was evaluated. Batch adsorption experiments determined that ZSM-5 was the most effective granular zeolite for MTBE adsorption. Further equilibrium and kinetic experiments verified that granular ZSM-5 is superior to CS-1240 and CCA in removing MTBE from water. No competitive-adsorption effects between NOM and MTBE were observed for adsorption to granular ZSM-5 or CS-1240; however, there was competition between NOM and MTBE for adsorption onto the CCA granules. Fixed-bed adsorption experiments for longer run times were performed using granular ZSM-5. The bed depth service time (BDST) model was used to analyze the breakthrough data. PMID:20153106
Pesticide (acephate) removal by GAC: a case study.
Banerjee, G; Kumar, B
2002-04-01
Pesticides are persistent pollutants that require utmost attention in agricultural pollution control. They usually accumulate in the food chain and hence are hazardous in nature. The present study reports the performance of granular activated carbon (GAC) in removing acephate contained in the effluent of a nearby pesticide manufacturing industry. In the batch study, the optimum GAC dose was found to be 85 g/litre for almost 100% removal of acephate from its initial concentration of 2.9 mg/litre, the level found in the industrial effluent under treatment. The adsorption data were represented closely by the Langmuir isotherm. The equilibrium time was found to be 80 minutes. The adsorptive capacity of GAC for acephate was of the order of 0.04614 mg/g. A column system was devised and designed based on the bed depth service time (BDST) approach, with experimental values of the constants 'a' and 'b' of 6.125 and 47.75, respectively.
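Taking the reported BDST constants at face value, the design line t = a*Z - b can be used directly; a tiny sketch (the units of a and b are not stated in the abstract, so the depth and time units below are nominal assumptions):

```python
# Reported BDST constants for the GAC column (units not given in the abstract)
a, b = 6.125, 47.75

def service_time(Z):
    """BDST design line: predicted service time for bed depth Z."""
    return a * Z - b

Z0 = b / a                 # critical bed depth where service time is zero
print(service_time(15.0))  # ~44.1 time units for a 15-unit-deep bed
print(Z0)                  # ~7.8 depth units
```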
Jafari, Maryam; Rahimi, Mahmood Reza; Ghaedi, Mehrorang; Javadian, Hamedreza; Asfaram, Arash
2017-12-01
A continuous adsorption process was used for the removal of azure II (AZ II) and auramine O (AO) from aqueous solutions using Pinus eldarica stalk activated carbon (PES-AC). The effects of initial dye concentration, flow rate, bed height and contact time on the removal percentages of AO and AZ II were evaluated and optimized by central composite design (CCD) at the optimum pH of 7.0. ZnO nanoparticles loaded on activated carbon were also used to remove AO and AZ II at pH 7.0 under otherwise optimum conditions. Breakthrough curves were obtained at different flow rates, initial dye concentrations and bed heights, and the experimental data were fitted by the Thomas, Adams-Bohart and Yoon-Nelson models. The main fixed-bed column parameters, including the adsorption capacity at the breakthrough point (qb), the adsorption capacity at the saturation point (qs), the mass transfer zone (MTZ), the total removal percentage (R%), and the empty bed contact time (EBCT), were calculated. The removal percentages calculated for AZ II and AO were in the ranges 51.6-61.1% and 40.6-61.6%, respectively. The bed adsorption capacity (N0) and critical bed depth (Z0) were obtained from the BDST model.
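Two of the column parameters listed here reduce to one-line formulas; a small sketch with hypothetical inputs (MTZ = Z*(1 - tb/ts), EBCT = bed volume / flow rate):

```python
def mtz(Z, t_b, t_s):
    """Mass transfer zone length from bed depth Z and the breakthrough
    (t_b) and saturation (t_s) times."""
    return Z * (1.0 - t_b / t_s)

def ebct(bed_volume_mL, flow_mL_min):
    """Empty bed contact time (min)."""
    return bed_volume_mL / flow_mL_min

print(mtz(10.0, 45.0, 180.0))   # hypothetical inputs: 7.5 cm
print(ebct(15.0, 5.0))          # hypothetical inputs: 3.0 min
```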
Bertoni, Fernando A; Medeot, Anabela C; González, Juan C; Sala, Luis F; Bellú, Sebastián E
2015-05-15
Spongomorpha pacifica biomass was evaluated as a new sorbent for Mo(VI) removal from aqueous solution. The maximum sorption capacity was found to be 1.28×10(6) ± 1×10(4) mg kg(-1) at 20°C and pH 2.0. Sorption kinetics and equilibrium followed the pseudo-first-order and Langmuir adsorption isotherm models, respectively. FTIR analysis revealed that carboxyl and hydroxyl groups were mainly responsible for the sorption of Mo(VI). SEM images showed that morphological changes occur at the biomass surface after Mo(VI) sorption. Activation parameters and the mean free energies obtained with the Dubinin-Radushkevich isotherm model demonstrate that the sorption mechanism was chemical sorption. Thermodynamic parameters demonstrate that the sorption process was spontaneous and endothermic and that the driving force was entropic. The isosteric heat of sorption decreased with surface loading, indicating that S. pacifica has an energetically non-homogeneous surface. Experimental breakthrough curves were simulated by the Thomas and modified dose-response models. The bed depth service time (BDST) model was employed to scale up the continuous sorption experiments. The critical bed depth Z0 was determined to be 1.7 cm. S. pacifica biomass proved to be a good sorbent for Mo(VI) and can be used in the continuous treatment of effluent polluted with molybdate ions.
Mishra, Ashutosh; Tripathi, Brahma Dutt; Rai, Ashwani Kumar
2016-10-01
The present study represents the first attempt to investigate the biosorption potential of Fenton-modified Hydrilla verticillata dried biomass (FMB) for removing chromium(VI) and nickel(II) ions from wastewater using an up-flow packed-bed column reactor. The effects of different packed-bed column parameters such as bed height, flow rate, influent metal ion concentration and particle size were examined. The column experiments showed that the highest bed height (25 cm), lowest flow rate (10 mL min(-1)), lowest influent metal concentration (5 mg L(-1)) and smallest particle size range (0.25-0.50 mm) were most favourable for biosorption. The maximum biosorption capacities of FMB for chromium(VI) and nickel(II) removal were estimated to be 89.32 and 87.18 mg g(-1), respectively. The breakthrough curves were analyzed using the bed depth service time (BDST) and Thomas models, and the experimental results agreed with both models. Column regeneration experiments were also carried out using 0.1 M HNO3, and the results revealed good reusability of FMB over ten cycles of sorption and desorption. The performance of the FMB-packed column in treating secondary effluent was also tested under identical experimental conditions and demonstrated a significant reduction in chromium(VI) and nickel(II) ion concentrations after biosorption.
Fixed-bed operation for manganese removal from water using chitosan/bentonite/MnO composite beads.
Muliwa, Anthony M; Leswifi, Taile Y; Maity, Arjun; Ochieng, Aoyi; Onyango, Maurice S
2018-04-24
In the present study, a new composite adsorbent, chitosan/bentonite/manganese oxide (CBMnO) beads cross-linked with tetraethyl orthosilicate (TEOS), was applied in a fixed-bed column for the removal of Mn(II) from water. The adsorbent was characterised by scanning electron microscopy (SEM), Fourier transform infrared (FT-IR) spectroscopy, N2 adsorption-desorption and X-ray photoelectron spectroscopy (XPS), and the point of zero charge (pHpzc) was determined. The extent of Mn(II) breakthrough was investigated by varying bed mass, flow rate and influent concentration, and by using real environmental water samples. The column dynamics showed a strong dependence of the breakthrough curves on the process conditions. The breakthrough time (tb), bed exhaustion time (ts), bed capacity (qe) and overall bed efficiency (R%) increased with increasing bed mass, but decreased with increasing influent flow rate and concentration. Non-linear regression suggested that the Thomas model effectively described the breakthrough curves, while large-scale column performance could be estimated by the bed depth service time (BDST) model. Experiments with environmental water revealed that coexisting ions had little impact on Mn(II) removal; with an initial concentration of 38.5 mg/L and a 5.0 g bed, a breakthrough capacity (qb) of 6.0 mg/g, 4.0 L of total treated water and 651 processed bed volumes were achieved. The exhausted bed could be regenerated with 0.001 M nitric acid solution within 1 h, and the sorbent could be reused twice without significant loss of capacity. The findings suggest that CBMnO composite beads can provide an efficient scavenging pathway for Mn(II) in polluted water.
Aziz, Abdul Shukor Abdul; Manaf, Latifah Abd; Man, Hasfalina Che; Kumar, Nadavala Siva
2014-01-01
This paper investigates the adsorption characteristics of palm oil boiler mill fly ash (POFA), derived from an agricultural waste material, in removing Cd(II) and Cu(II) from aqueous solution via column studies. Performance is described through the breakthrough curve concept under relevant operating conditions such as column bed depth (1, 1.5, and 2 cm) and influent metal concentration (5, 10, and 20 mg/L). The Cd(II) and Cu(II) uptake mechanism is particularly bed depth- and concentration-dependent, favoring higher bed depth and lower influent metal concentration. The highest bed capacities of 34.91 mg Cd(II)/g and 21.93 mg Cu(II)/g of POFA were achieved at 20 mg/L influent metal concentration, 2 cm column bed depth, and a flow rate of 5 mL/min. The whole breakthrough curves for both metal ions were best described by the Thomas and Yoon-Nelson models, but the initial region of the breakthrough for Cd(II) was better described by the BDST model. The results illustrate that POFA could be utilized effectively for the removal of Cd(II) and Cu(II) ions from aqueous solution in a fixed-bed column system.
Chan, Michelle W C; Yip, James T H; Lee, Tatia M C
2004-01-01
The purpose of the present study was to investigate whether patients with different subtypes of schizophrenia are differentially impaired on measures of attention. Forty-eight patients with schizophrenia (19 paranoid and 29 nonparanoid) and 48 healthy controls (matched on chronological age, sex, and years of education) were administered five measures of attention: the Stroop Color-Word Test (SCWT; Stroop, 1935), the Digit Vigilance Test (DVT; Lewis, 1992), the Symbol Digit Modalities Test (SDMT; Smith, 1982), the Backward Digit Span Test (BDST; Wechsler, 1987), and the Color Trails Test (CTT; D'Elia et al., 1996), assessing selective attention, sustained attention, and switching attention, with the latter two tests assessing attentional control processing. Results showed that patients with schizophrenia performed more poorly on the SCWT, the DVT, and the SDMT relative to their healthy counterparts. Furthermore, patients with different subtypes of schizophrenia had different degrees of attentional impairment: while patients with paranoid schizophrenia performed worse on the SCWT, those with nonparanoid schizophrenia performed worse on the SDMT. These findings suggest that patients with paranoid and nonparanoid schizophrenia may have different profiles with respect to their performance on measures of attention.
Liao, Bolin; Zhang, Yunong; Jin, Long
2016-02-01
In this paper, a new Taylor-type numerical differentiation formula is first presented to discretize the continuous-time Zhang neural network (ZNN) and obtain higher computational accuracy. Based on the Taylor-type formula, two Taylor-type discrete-time ZNN models (termed the Taylor-type discrete-time ZNNK and ZNNU models) are then proposed and discussed for performing online dynamic equality-constrained quadratic programming. For comparison, Euler-type discrete-time ZNN models (called the Euler-type discrete-time ZNNK and ZNNU models) and Newton iteration, with interesting links being found, are also presented. It is proved herein that the steady-state residual errors of the proposed Taylor-type discrete-time ZNN models, the Euler-type discrete-time ZNN models, and Newton iteration have the patterns O(h³), O(h²), and O(h), respectively, with h denoting the sampling gap. Numerical experiments, including application examples, are carried out, and the results further substantiate the theoretical findings and the efficacy of the Taylor-type discrete-time ZNN models. Finally, comparisons with a Taylor-type discrete-time derivative model and other Lagrange-type discrete-time ZNN models for dynamic equality-constrained quadratic programming substantiate the superiority of the proposed Taylor-type discrete-time ZNN models once again.
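Not the ZNN models themselves, but a quick numerical illustration of how such truncation orders are checked: a backward (Euler-type) difference is O(h), while a three-point backward (Taylor-type) formula is O(h^2).

```python
import numpy as np

# Compare truncation errors of two discretization formulas for f'(t):
# Euler-type: (f(t) - f(t-h)) / h                       -> O(h)
# Taylor-type: (3f(t) - 4f(t-h) + f(t-2h)) / (2h)       -> O(h^2)
f, df = np.sin, np.cos
t = 1.0
for h in [0.1, 0.05, 0.025]:
    euler  = (f(t) - f(t - h)) / h
    taylor = (3*f(t) - 4*f(t - h) + f(t - 2*h)) / (2*h)
    print(h, abs(euler - df(t)), abs(taylor - df(t)))
# Halving h roughly halves the Euler error but quarters the Taylor error.
```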
Consistency of internal fluxes in a hydrological model running at multiple time steps
NASA Astrophysics Data System (ADS)
Ficchi, Andrea; Perrin, Charles; Andréassian, Vazken
2016-04-01
Improving hydrological models remains a difficult task and many avenues can be explored, among them improved spatial representation, the search for more robust parametrizations, better formulation of some processes, and the modification of model structures by trial-and-error. Several past works indicate that model parameters and structure can depend on the modelling time step, and there is thus some rationale in investigating how a model behaves across various modelling time steps to find solutions for improvement. Here we analyse the impact of the data time step on the consistency of the internal fluxes of a rainfall-runoff model run at various time steps, using a large data set of 240 catchments. To this end, fine-time-step hydro-climatic information at sub-hourly resolution is used as input to a parsimonious rainfall-runoff model (GR) that is run at eight different model time steps (from 6 minutes to one day). The initial structure of the tested model (i.e. the baseline) corresponds to the daily model GR4J (Perrin et al., 2003), adapted to run at variable sub-daily time steps. The modelled fluxes considered are interception, actual evapotranspiration and intercatchment groundwater flows. Observations of these fluxes are not available, but the comparison of modelled fluxes at multiple time steps gives additional information for model identification. The joint analysis of flow simulation performance and of the consistency of internal fluxes at different time steps provides guidance for identifying the model components that should be improved. Our analysis indicates that the baseline model structure has to be modified at sub-daily time steps to warrant the consistency and realism of the modelled fluxes. For the baseline model improvement, particular attention is devoted to the interception component, whose output flux showed the strongest sensitivity to the modelling time step. The dependency of the optimal model complexity on the time step is also analysed. References: Perrin, C., Michel, C., Andréassian, V., 2003. Improvement of a parsimonious model for streamflow simulation. Journal of Hydrology, 279(1-4): 275-289. DOI:10.1016/S0022-1694(03)00225-7
Time-dependent oral absorption models
NASA Technical Reports Server (NTRS)
Higaki, K.; Yamashita, S.; Amidon, G. L.
2001-01-01
The plasma concentration-time profiles following oral administration of drugs are often irregular and cannot be interpreted easily with conventional models based on first- or zero-order absorption kinetics and lag time. Six new models were developed using a time-dependent absorption rate coefficient, ka(t), wherein the time dependency was varied to account for dynamic processes in the gastrointestinal tract such as changes in fluid absorption or secretion, in absorption surface area, and in motility with time. In the present study, the plasma concentration profiles of propranolol obtained in human subjects following oral dosing were analyzed using the newly derived mass-balance-based models and compared with the conventional models. Nonlinear regression analysis indicated that the conventional compartment model including lag time (CLAG model) could not predict the rapid initial increase in plasma concentration after dosing, and the predicted Cmax values were much lower than those observed. In contrast, all models with the time-dependent absorption rate coefficient ka(t) were superior to the CLAG model in predicting plasma concentration profiles. Based on Akaike's Information Criterion (AIC), the fluid absorption model without lag time (FA model) exhibited the best overall fit to the data. The two-phase model including lag time (TPLAG model) was also found to be a good model judging from the sums of squares; it described the irregular plasma concentration profiles with time and frequently predicted Cmax values satisfactorily. A comparison of the absorption rate profiles also suggested that the TPLAG model is better at predicting irregular absorption kinetics than the FA model. In conclusion, the incorporation of a time-dependent absorption rate coefficient ka(t) allows the prediction of nonlinear absorption characteristics in a more reliable manner.
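A generic one-compartment sketch of the idea of a time-dependent absorption rate coefficient ka(t); the exponential decay of ka and all parameter values are assumptions for illustration, not one of the paper's six models.

```python
import numpy as np
from scipy.integrate import solve_ivp

ka0, kdec = 1.5, 0.4   # 1/h; ka assumed to decay with time (illustrative)
ke, V     = 0.3, 40.0  # elimination rate (1/h) and volume (L), assumed
dose      = 80.0       # mg in the gut at t = 0

def ka(t):
    return ka0 * np.exp(-kdec * t)   # assumed time dependency

def rhs(t, y):
    a_gut, conc = y
    return [-ka(t) * a_gut, ka(t) * a_gut / V - ke * conc]

sol = solve_ivp(rhs, (0.0, 12.0), [dose, 0.0], dense_output=True)
t = np.linspace(0.0, 12.0, 50)
cp = sol.sol(t)[1]   # plasma concentration (mg/L) over time
print(cp.max())
```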
Forecasting Hourly Water Demands With Seasonal Autoregressive Models for Real-Time Application
NASA Astrophysics Data System (ADS)
Chen, Jinduan; Boccelli, Dominic L.
2018-02-01
Consumer water demands are not typically measured at temporal or spatial scales adequate to support real-time decision making, and recent approaches for estimating unobserved demands using observed hydraulic measurements are generally not capable of forecasting demands and uncertainty information. While time series modeling has shown promise for representing total system demands, these models have generally not been evaluated at spatial scales appropriate for representative real-time modeling. This study investigates the use of a double-seasonal time series model to capture daily and weekly autocorrelations in both total system demands and regional aggregated demands at a scale that would capture demand variability across a distribution system. Emphasis was placed on the ability to forecast demands and quantify uncertainties, with results compared to traditional time series pattern-based demand models as well as nonseasonal and single-seasonal time series models. Additional research included the implementation of an adaptive-parameter estimation scheme to update the time series model when unobserved changes occurred in the system. For two case studies, results showed that (1) for the smaller-scale aggregated water demands, the log-transformed time series model resulted in improved forecasts, (2) the double-seasonal model outperformed other models in terms of forecasting errors, and (3) the adaptive adjustment of parameters during forecasting improved the accuracy of the generated prediction intervals. These results illustrate the capabilities of time series modeling to forecast both water demands and uncertainty estimates at spatial scales commensurate with real-time modeling applications and provide a foundation for developing a real-time integrated demand-hydraulic model.
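statsmodels' SARIMAX handles a single seasonal period, so one hedged way to approximate the double (daily plus weekly) seasonality described here is to model the daily cycle seasonally and feed the weekly cycle in as Fourier-term regressors; everything below is synthetic and only a sketch of the idea.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic hourly demand with daily (24 h) and weekly (168 h) cycles
rng = np.random.default_rng(0)
n = 24 * 7 * 8
t = np.arange(n)
y = (10 + 2*np.sin(2*np.pi*t/24) + 1.5*np.sin(2*np.pi*t/168)
     + rng.normal(0, 0.3, n))

def fourier(t, period, K=2):
    cols = []
    for k in range(1, K + 1):
        cols += [np.sin(2*np.pi*k*t/period), np.cos(2*np.pi*k*t/period)]
    return np.column_stack(cols)

X = fourier(t, 168)   # weekly cycle as exogenous Fourier terms
res = SARIMAX(y, exog=X, order=(1, 0, 1),
              seasonal_order=(1, 1, 1, 24)).fit(disp=False)

h = 24
Xf = fourier(np.arange(n, n + h), 168)
fc = res.get_forecast(h, exog=Xf)
print(fc.predicted_mean[:4])   # point forecasts
print(fc.conf_int()[:4])       # prediction intervals
```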
Piecewise multivariate modelling of sequential metabolic profiling data.
Rantalainen, Mattias; Cloarec, Olivier; Ebbels, Timothy M D; Lundstedt, Torbjörn; Nicholson, Jeremy K; Holmes, Elaine; Trygg, Johan
2008-02-19
Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. A supervised multivariate modelling approach with the objective of modelling the time-related variation in the data for short and sparsely sampled time series is described. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models is estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time-related changes are successfully modelled and predicted. The proposed method is effective for modelling and prediction of short and multivariate time series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.
Multiple Indicator Stationary Time Series Models.
ERIC Educational Resources Information Center
Sivo, Stephen A.
2001-01-01
Discusses the propriety and practical advantages of specifying multivariate time series models in the context of structural equation modeling for time series and longitudinal panel data. For time series data, the multiple indicator model specification improves on classical time series analysis. For panel data, the multiple indicator model…
Competing risks models and time-dependent covariates
Barnett, Adrian; Graves, Nick
2008-01-01
New statistical models for analysing survival data in an intensive care unit context have recently been developed. Two models that offer significant advantages over standard survival analyses are competing risks models and multistate models. Wolkewitz and colleagues used a competing risks model to examine survival times for nosocomial pneumonia and mortality. Their model was able to incorporate time-dependent covariates and so examine how risk factors that changed with time affected the chances of infection or death. We briefly explain how an alternative modelling technique (using logistic regression) can more fully exploit time-dependent covariates for this type of data. PMID:18423067
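A minimal sketch of the discrete-time survival idea mentioned here: expand subjects into person-period rows and fit a logistic regression with a time-dependent covariate. The data-generating numbers are invented.

```python
import numpy as np
import statsmodels.api as sm

# Person-period data: one row per subject per day in the ICU, with a
# time-dependent covariate (current infection status) and a 0/1 event
# indicator for that day. All values are synthetic.
rng = np.random.default_rng(1)
rows = []
for subj in range(200):
    infected = 0
    for day in range(1, 31):
        infected = int(infected or (rng.random() < 0.05))
        p_event = 0.01 + 0.04 * infected   # hazard raised by infection
        event = int(rng.random() < p_event)
        rows.append((day, infected, event))
        if event:
            break   # subject leaves the risk set after the event

day, infected, event = map(np.array, zip(*rows))
X = sm.add_constant(np.column_stack([day, infected]))
fit = sm.Logit(event, X).fit(disp=False)
print(fit.params)   # [intercept, day, infected] on the log-odds scale
```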
NASA Astrophysics Data System (ADS)
Lute, A. C.; Luce, Charles H.
2017-11-01
The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modelling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
Modeling nonstationarity in space and time.
Shand, Lyndsay; Li, Bo
2017-09-01
We propose to model a spatio-temporal random field that has nonstationary covariance structure in both space and time domains by applying the concept of the dimension expansion method in Bornn et al. (2012). Simulations are conducted for both separable and nonseparable space-time covariance models, and the model is also illustrated with a streamflow dataset. Both simulation and data analyses show that modeling nonstationarity in both space and time can improve the predictive performance over stationary covariance models or models that are nonstationary in space but stationary in time.
NASA Astrophysics Data System (ADS)
Okutani, Iwao; Mitsui, Tatsuro; Nakada, Yusuke
This paper puts forward neuron-type models, i.e., a neural network model, a wavelet neuron model and a three-layered wavelet neuron model (WV3), for estimating travel time between signalized intersections in order to facilitate the adaptive setting of traffic signal parameters such as green time and offset. Model validation tests using simulated data reveal that, compared to the other models, the WV3 model learns very fast and produces more accurate estimates of travel time. It is also shown that up-link information obtainable from optical beacons, i.e., the travel time observed during the previous cycle, is a crucial input variable to the models: when up-link information is employed as input, there is no substantial difference between the changes of estimated and simulated travel times as green time or offset changes, whereas a large discrepancy appears when it is not employed.
Predicting long-term catchment nutrient export: the use of nonlinear time series models
NASA Astrophysics Data System (ADS)
Valent, Peter; Howden, Nicholas J. K.; Szolgay, Jan; Komornikova, Magda
2010-05-01
After the Second World War, nitrate concentrations in European water bodies changed significantly as a result of increased nitrogen fertilizer use and changes in land use. In recent decades, however, as a consequence of the implementation of nitrate-reducing measures in Europe, nitrate concentrations in water bodies have slowly decreased. As a result, the mean and variance of the observed time series also change with time (nonstationarity and heteroscedasticity). In order to detect changes and properly describe the behaviour of such time series by time series analysis, linear models (such as autoregressive (AR), moving average (MA) and autoregressive moving average (ARMA) models) are no longer suitable. Time series with sudden changes in statistical characteristics can cause various problems in the calibration of traditional water quality models and thus give biased predictions. Proper statistical analysis of these non-stationary and heteroscedastic time series, with the aim of detecting and subsequently explaining the variations in their statistical characteristics, requires the use of nonlinear time series models. This information can then be used to improve the building and calibration of conceptual water quality models, or to select the right calibration periods in order to produce reliable predictions. The objective of this contribution is to analyze two long time series of nitrate concentrations from the rivers Ouse and Stour with advanced nonlinear statistical modelling techniques and to compare their performance with traditional linear models of the ARMA class in order to identify changes in the time series characteristics. The time series were analysed with nonlinear models with multiple regimes, represented by self-exciting threshold autoregressive (SETAR) and Markov-switching (MSW) models. The analysis showed that, based on the residual sum of squares (RSS) in both datasets, SETAR and MSW models described the time series better than models of the ARMA class. In most cases the relative improvement of SETAR models over first-order AR models was low, ranging between 1% and 4%, with the exception of the three-regime model for the River Stour time series, where the improvement was 48.9%. In comparison, the relative improvement of MSW models was between 44.6% and 52.5% for two-regime models and from 60.4% to 75% for three-regime models. However, visual assessment of the models plotted against the original datasets showed that, despite a high RSS value, some ARMA models could describe the analyzed time series better than AR, MA and SETAR models with lower RSS values. In both datasets MSW models provided a very good visual fit, describing most of the extreme values.
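A toy illustration of the SETAR model class used in the study: a two-regime SETAR(1) fitted by grid-searching the threshold that minimizes the residual sum of squares. The simulated series and its threshold at zero are assumptions for demonstration, not the nitrate models themselves.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
y = np.zeros(n)
for t in range(1, n):   # simulate: AR coefficient flips sign at threshold 0
    phi = 0.8 if y[t-1] <= 0.0 else -0.8
    y[t] = phi * y[t-1] + rng.normal(0, 0.5)

x, z = y[:-1], y[1:]    # lag-1 regressor and response
best_rss, best_thr = np.inf, None
for thr in np.quantile(x, np.linspace(0.15, 0.85, 60)):
    rss = 0.0
    for mask in (x <= thr, x > thr):   # fit an AR(1) per regime by OLS
        X = np.column_stack([np.ones(mask.sum()), x[mask]])
        beta, *_ = np.linalg.lstsq(X, z[mask], rcond=None)
        rss += np.sum((z[mask] - X @ beta) ** 2)
    if rss < best_rss:
        best_rss, best_thr = rss, thr
print("estimated threshold:", round(best_thr, 2))   # should be near 0
```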
Self-organising mixture autoregressive model for non-stationary time series modelling.
Ni, He; Yin, Hujun
2008-12-01
Modelling non-stationary time series has been a difficult task for both parametric and nonparametric methods. One promising solution is to combine the flexibility of nonparametric models with the simplicity of parametric models. In this paper, the self-organising mixture autoregressive (SOMAR) network is adopted as such a mixture model. It breaks time series into underlying segments and at the same time fits local linear regressive models to the clusters of segments. In this way, a globally non-stationary time series is represented by a dynamic set of local linear regressive models. Neural gas is used to give the mixture model a more flexible structure. Furthermore, a new similarity measure is introduced into the self-organising network to better quantify the similarity of time series segments. The network can be used naturally in modelling and forecasting non-stationary time series. Experiments on artificial and benchmark time series (e.g. Mackey-Glass) and real-world data (e.g. sunspot numbers and Forex rates) are presented, and the results show that the proposed SOMAR network is effective and superior to other similar approaches.
A Rescorla-Wagner drift-diffusion model of conditioning and timing
Alonso, Eduardo
2017-01-01
Computational models of classical conditioning have made significant contributions to the theoretic understanding of associative learning, yet they still struggle when the temporal aspects of conditioning are taken into account. Interval timing models have contributed a rich variety of time representations and provided accurate predictions for the timing of responses, but they usually have little to say about associative learning. In this article we present a unified model of conditioning and timing that is based on the influential Rescorla-Wagner conditioning model and the more recently developed Timing Drift-Diffusion model. We test the model by simulating 10 experimental phenomena and show that it can provide an adequate account of 8, and a partial account of the other 2. We argue that the model accounts for more phenomena in the chosen set than other models of similar scope: CSC-TD, MS-TD, Learning to Time and Modular Theory. A comparison and analysis of the mechanisms in these models is provided, with a focus on the types of time representation and associative learning rule used. PMID:29095819
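The associative half of such a model is easy to sketch: the classic Rescorla-Wagner update dV = alpha*beta*(lambda - V), shown here for repeated CS-US pairings with illustrative parameters (this is not the unified model itself).

```python
# Rescorla-Wagner update over a block of acquisition trials
alpha, beta = 0.3, 0.8   # CS salience and US learning rate (illustrative)
V = 0.0                  # associative strength of the CS
for trial in range(20):
    lambda_us = 1.0                    # US present on every trial
    V += alpha * beta * (lambda_us - V)
print(round(V, 3))                     # V approaches the asymptote 1.0
```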
Bayesian dynamic modeling of time series of dengue disease case counts.
Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander
2017-07-01
The aim of this study was to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015, and to evaluate the models' short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Beyond the computational challenges, interpreting the results requires a complete analysis of the dengue time series with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
The Crossover Time as an Evaluation of Ocean Models Against Persistence
NASA Astrophysics Data System (ADS)
Phillipson, L. M.; Toumi, R.
2018-01-01
A new ocean evaluation metric, the crossover time, is defined as the time it takes for a numerical model to equal the performance of persistence. As an example, the average crossover time calculated using the Lagrangian separation distance (the distance between simulated trajectories and observed drifters) for the global MERCATOR ocean model analysis is found to be about 6 days. Conversely, the model forecast has an average crossover time longer than 6 days, suggesting limited skill in Lagrangian predictability by the current generation of global ocean models. The crossover time of the velocity error is less than 3 days, which is similar to the average decorrelation time of the observed drifters. The crossover time is a useful measure to quantify future ocean model improvements.
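A hedged sketch of computing the crossover time from error-versus-lead-time curves: find where the model's error first drops to the persistence error, interpolating linearly between lead times. The error values are invented and the code assumes the model starts worse and eventually crosses.

```python
import numpy as np

# Lead times (days) and mean Lagrangian separation distance (km) for a
# model and for persistence; numbers are illustrative only.
lead        = np.arange(1.0, 11.0)
model_err   = np.array([18, 35, 55, 78, 103, 131, 160, 192, 225, 260.])
persist_err = np.array([15, 33, 56, 82, 110, 140, 171, 204, 238, 272.])

d = model_err - persist_err        # > 0 while persistence is better
i = int(np.argmax(d <= 0))         # first lead where the model catches up
t_cross = lead[i-1] + (lead[i] - lead[i-1]) * d[i-1] / (d[i-1] - d[i])
print(f"crossover time ~ {t_cross:.1f} days")
```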
Resource utilization model for the algorithm to architecture mapping model
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Patel, Rakesh R.
1993-01-01
The analytical model for resource utilization and the variable node time and conditional node model for the enhanced ATAMM model for a real-time data flow architecture are presented in this research. The Algorithm To Architecture Mapping Model, ATAMM, is a Petri net based graph theoretic model developed at Old Dominion University, and is capable of modeling the execution of large-grained algorithms on a real-time data flow architecture. Using the resource utilization model, the resource envelope may be obtained directly from a given graph and, consequently, the maximum number of required resources may be evaluated. The node timing diagram for one iteration period may be obtained using the analytical resource envelope. The variable node time model, which describes the change in resource requirement for the execution of an algorithm under node time variation, is useful to expand the applicability of the ATAMM model to heterogeneous architectures. The model also describes a method of detecting the presence of resource limited mode and its subsequent prevention. Graphs with conditional nodes are shown to be reduced to equivalent graphs with time varying nodes and, subsequently, may be analyzed using the variable node time model to determine resource requirements. Case studies are performed on three graphs for the illustration of applicability of the analytical theories.
Joint modeling of longitudinal data and discrete-time survival outcome.
Qiu, Feiyou; Stein, Catherine M; Elston, Robert C
2016-08-01
A predictive joint shared parameter model is proposed for discrete time-to-event and longitudinal data. A discrete survival model with frailty and a generalized linear mixed model for the longitudinal data are joined to predict the probability of events. This joint model focuses on predicting discrete time-to-event outcome, taking advantage of repeated measurements. We show that the probability of an event in a time window can be more precisely predicted by incorporating the longitudinal measurements. The model was investigated by comparison with a two-step model and a discrete-time survival model. Results from both a study on the occurrence of tuberculosis and simulated data show that the joint model is superior to the other models in discrimination ability, especially as the latent variables related to both survival times and the longitudinal measurements depart from 0.
Modeling seasonal variation of hip fracture in Montreal, Canada.
Modarres, Reza; Ouarda, Taha B M J; Vanasse, Alain; Orzanco, Maria Gabriela; Gosselin, Pierre
2012-04-01
Investigating the association between climate variables and hip fracture incidence is an important public health issue. This study examined and modeled the seasonal variation of monthly population-based hip fracture rate (HFr) time series. The seasonal ARIMA (SARIMA) time series modeling approach was used to model monthly HFr time series of female and male patients aged 40-74 and 75+ in Montreal, Québec, Canada, over the period 1993-2004. The correlation coefficients between meteorological variables such as temperature, snow depth, rainfall depth and day length and HFr are significant. The nonparametric Mann-Kendall test for trend assessment and the nonparametric Levene's and Wilcoxon's tests for checking the difference in HFr before and after a change point were also used. The seasonality in HFr showed a sharp difference between winter and summer. The trend assessment showed decreasing trends in HFr for both female and male groups, and the nonparametric tests also indicated a significant change in mean HFr. A seasonal ARIMA model was applied to HFr time series without trend, and a time-trend ARIMA model (TT-ARIMA) was developed and fitted to HFr time series with a significant trend. Multi-criteria evaluation showed the adequacy of the SARIMA and TT-ARIMA models for modeling seasonal hip fracture time series without and with significant trend, respectively. In the time series analysis of HFr in the Montreal region, the effects of the seasonal variation of climate variables on hip fracture are clear. The seasonal ARIMA model is useful for modeling HFr time series without trend; for time series with a significant trend, the TT-ARIMA model should be applied. Copyright © 2011 Elsevier Inc. All rights reserved.
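For a series like HFr, a seasonal ARIMA fit is straightforward with statsmodels. The sketch below uses synthetic monthly data with a winter peak and a slow decline; the deterministic linear trend term stands in for the TT-ARIMA component, and the model orders are illustrative rather than those selected in the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly hip-fracture rates (per 100,000), 1993-2004
idx = pd.date_range("1993-01", periods=144, freq="MS")
rng = np.random.default_rng(0)
t = np.arange(144)
hfr = (20 - 0.02 * t                       # slow decreasing trend
       + 4 * np.cos(2 * np.pi * t / 12)    # winter peak
       + rng.normal(0, 1, 144))
y = pd.Series(hfr, index=idx)

# Seasonal ARIMA with a deterministic linear trend ("t") included
model = SARIMAX(y, order=(1, 0, 0), seasonal_order=(1, 0, 0, 12), trend="t")
res = model.fit(disp=False)
print(res.params)              # AR, seasonal AR, trend and variance estimates
print(res.forecast(steps=12))  # one-year-ahead forecast
```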
ERIC Educational Resources Information Center
Henson, James M.; Reise, Steven P.; Kim, Kevin H.
2007-01-01
The accuracy of structural model parameter estimates in latent variable mixture modeling was explored with a 3 (sample size) × 3 (exogenous latent mean difference) × 3 (endogenous latent mean difference) × 3 (correlation between factors) × 3 (mixture proportions) factorial design. In addition, the efficacy of several…
A Flexible Latent Trait Model for Response Times in Tests
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jorg-Tobias
2012-01-01
Latent trait models for response times in tests have become popular recently. One challenge for response time modeling is the fact that the distribution of response times can differ considerably even in similar tests. In order to reduce the need for tailor-made models, a model is proposed that unifies two popular approaches to response time…
Evaluation of Fast-Time Wake Vortex Prediction Models
NASA Technical Reports Server (NTRS)
Proctor, Fred H.; Hamilton, David W.
2009-01-01
Current fast-time wake models are reviewed and three basic types are defined. Predictions from several of the fast-time models are compared. Previous statistical evaluations of the APA-Sarpkaya and D2P fast-time models are discussed. Root Mean Square errors between fast-time model predictions and Lidar wake measurements are examined for a 24 hr period at Denver International Airport. Shortcomings in current methodology for evaluating wake errors are also discussed.
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-10-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
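A sketch of the first adaptation: instead of assuming exponentially distributed rescaled ISIs, the reference distribution is estimated by simulating from the fitted model and compared with a two-sample KS test. The Bernoulli spike model and bin width are invented for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)

def rescaled_isis(spikes, p):
    """Naive continuous-time rescaling applied to a binned spike train:
    accumulate -log(1 - p_t) over the bins spanning each interspike interval."""
    cum = np.cumsum(-np.log1p(-p))          # per-bin hazard contributions
    idx = np.flatnonzero(spikes)
    return np.diff(cum[idx])

# A Bernoulli "fitted model": time-varying spike probability per 1-ms bin
T = 50_000
p = 0.02 + 0.015 * np.sin(2 * np.pi * np.arange(T) / 1000)

observed = rng.random(T) < p                # data generated exactly by the model
z_obs = rescaled_isis(observed, p)

# Adaptation 1: estimate the reference distribution by simulating from the
# fitted model instead of assuming the rescaled ISIs are exponential.
z_ref = np.concatenate([rescaled_isis(rng.random(T) < p, p) for _ in range(20)])
print(ks_2samp(z_obs, z_ref))               # should not reject: model is correct
```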
2012-01-01
Background: Time-course gene expression data such as yeast cell cycle data may be periodically expressed. To cluster such data, currently used Fourier series approximations of periodic gene expression have been found inadequate for modeling the complexity of the time-course data, partly because they ignore the dependence between the expression measurements over time and the correlation among gene expression profiles. We further investigate the advantages and limitations of available models in the literature and propose a new mixture model with first-order autoregressive random effects for the clustering of time-course gene-expression profiles. Some simulations and real examples are given to demonstrate the usefulness of the proposed models. Results: We illustrate the applicability of our new model using synthetic and real time-course datasets. We show that our model outperforms existing models in providing more reliable and robust clustering of time-course data. Our model provides superior results when genetic profiles are correlated, and comparable results when the correlation between the gene profiles is weak. In the applications to real time-course data, relevant clusters of coregulated genes are obtained, which are supported by gene-function annotation databases. Conclusions: Our new model, under our extension of the EMMIX-WIRE procedure, is more reliable and robust for clustering time-course data because it adopts a random effects model that allows for the correlation among observations at different time points. It postulates gene-specific random effects with an autocorrelation variance structure that models coregulation within the clusters. The developed R package is flexible in its specification of the random effects through user-input parameters, enabling improved modelling and consequent clustering of time-course data. PMID:23151154
Rubinstein, Justin L.; Ellsworth, William L.; Chen, Kate Huihsuan; Uchida, Naoki
2012-01-01
The behavior of individual events in repeating earthquake sequences in California, Taiwan and Japan is better predicted by a model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. Given that repeating earthquakes are highly regular in both inter-event time and seismic moment, the time- and slip-predictable models seem ideally suited to explain their behavior. Taken together with evidence from the companion manuscript that shows similar results for laboratory experiments, we conclude that the short-term predictions of the time- and slip-predictable models should be rejected in favor of earthquake models that assume either fixed slip or fixed recurrence interval. This implies that the elastic rebound model underlying the time- and slip-predictable models offers no additional value in describing earthquake behavior in an event-to-event sense, but its value in a long-term sense cannot be determined. These models likely fail because they rely on assumptions that oversimplify the earthquake cycle. We note that the times and slips of these events are predicted quite well by fixed slip and fixed recurrence models, so in some sense they are time- and slip-predictable. While fixed recurrence and slip models better predict repeating earthquake behavior than the time- and slip-predictable models, we observe a correlation between slip and the preceding recurrence time for many repeating earthquake sequences in Parkfield, California. This correlation is not found in other regions, and the sequences with the correlative slip-predictable behavior are not distinguishable from nearby earthquake sequences that do not exhibit this behavior.
Li, Yi; Chen, Yuren
2016-12-30
To make driving assistance systems more humanized, this study focused on predicting and assisting drivers' perception-response time on mountain highway curves. Field tests were conducted to collect real-time driving data and driver vision information. A driver-vision lane model quantified curve elements in the driver's vision. A multinomial log-linear model was established to predict perception-response time from traffic/road environment information, the driver-vision lane model, and the vehicle's mechanical status (in the preceding second). A corresponding assistance model showed a positive impact on drivers' perception-response times on mountain highway curves. Model results revealed that the driver-vision lane model and visual elements have an important influence on drivers' perception-response time. Compared with passive roadside safety infrastructure, proper visual geometry design, timely visual guidance, and the completeness of a curve's visual information are significant factors for drivers' perception-response time.
Immersed boundary lattice Boltzmann model based on multiple relaxation times
NASA Astrophysics Data System (ADS)
Lu, Jianhua; Han, Haifeng; Shi, Baochang; Guo, Zhaoli
2012-01-01
As an alternative to the single relaxation time (SRT) lattice Boltzmann model, the multiple relaxation time (MRT) lattice Boltzmann model introduces much less numerical boundary slip if a special relationship between the relaxation time parameters is chosen. On the other hand, most current versions of the immersed boundary lattice Boltzmann method, first introduced by Feng and improved by many other authors, suffer from numerical boundary slip, as investigated by Le and Zhang. To reduce this numerical boundary slip, an immersed boundary lattice Boltzmann model based on multiple relaxation times is proposed in this paper. A special formula is given relating two relaxation time parameters in the model. A rigorous analysis and numerical experiments show that the numerical boundary slip is reduced dramatically by the present model compared to the single-relaxation-time-based model.
van Rosmalen, Joost; Toy, Mehlika; O'Mahony, James F
2013-08-01
Markov models are a simple and powerful tool for analyzing the health and economic effects of health care interventions. These models are usually evaluated in discrete time using cohort analysis. The use of discrete time assumes that changes in health states occur only at the end of a cycle period. Discrete-time Markov models only approximate the process of disease progression, as clinical events typically occur in continuous time. The approximation can yield biased cost-effectiveness estimates for Markov models with long cycle periods and if no half-cycle correction is made. The purpose of this article is to present an overview of methods for evaluating Markov models in continuous time. These methods use mathematical results from stochastic process theory and control theory. The methods are illustrated using an applied example on the cost-effectiveness of antiviral therapy for chronic hepatitis B. The main result is a mathematical solution for the expected time spent in each state in a continuous-time Markov model. It is shown how this solution can account for age-dependent transition rates and discounting of costs and health effects, and how the concept of tunnel states can be used to account for transition rates that depend on the time spent in a state. The applied example shows that the continuous-time model yields more accurate results than the discrete-time model but does not require much computation time and is easily implemented. In conclusion, continuous-time Markov models are a feasible alternative to cohort analysis and can offer several theoretical and practical advantages.
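The central quantity, the expected time spent in each state of a continuous-time Markov model, can be sketched with a matrix exponential. Assuming a constant generator Q and a continuous discount rate r, the discounted occupancy over [0, T] is the integral of exp(-r*t)*expm(Q*t) dt, which equals inv(A) @ (expm(A*T) - I) with A = Q - r*I. The three-state generator below is invented and far simpler than the hepatitis B model in the article.

```python
import numpy as np
from scipy.linalg import expm

# Generator for a 3-state illness-death model (rates per year, invented):
# 0 = chronic disease, 1 = progression, 2 = death (absorbing); rows sum to 0.
Q = np.array([[-0.12, 0.10, 0.02],
              [0.00, -0.15, 0.15],
              [0.00, 0.00, 0.00]])

def discounted_occupancy(Q, horizon, rate):
    """Expected discounted time in each state over [0, horizon]:
    A^{-1} (expm(A*T) - I) with A = Q - rate*I; discounting shifts Q's
    zero eigenvalue, making A invertible."""
    A = Q - rate * np.eye(Q.shape[0])
    return np.linalg.solve(A, expm(A * horizon) - np.eye(Q.shape[0]))

occ = discounted_occupancy(Q, horizon=20.0, rate=0.03)
print(occ[0])   # discounted years in each state, starting from state 0
```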
Liu, Zitao; Hauskrecht, Milos
2017-11-01
Building an accurate predictive model of clinical time series for a patient is critical for understanding the patient's condition, its dynamics, and optimal patient management. Unfortunately, this process is not straightforward. First, patient-specific variation is typically large, and population-based models derived or learned from many different patients are often unable to support accurate predictions for each individual patient. Moreover, the time series observed for one patient at any point in time may be too short and insufficient to learn a high-quality patient-specific model from the patient's own data alone. To address these problems we propose, develop and experiment with a new adaptive forecasting framework for building multivariate clinical time series models for a patient and for supporting patient-specific predictions. The framework relies on an adaptive model switching approach that at any point in time selects the most promising time series model out of a pool of many possible models, and consequently combines the advantages of population, patient-specific and short-term individualized predictive models. We demonstrate that the adaptive model switching framework is a very promising approach for personalized time series prediction, and that it is able to outperform predictions based on pure population and patient-specific models, as well as other patient-specific model adaptation strategies.
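A toy sketch of the switching idea: at each time point, predict with whichever model in the pool had the smallest recent error. The two-model pool and error window are illustrative stand-ins for the paper's pool of population, patient-specific and individualized models.

```python
import numpy as np

def switch_predict(y, models, window=8):
    """At each step, predict with whichever model in the pool had the
    smallest mean absolute error over the last `window` observations."""
    preds = np.array([m(y) for m in models])   # one-step predictions per model
    errors = np.abs(preds - y)
    out = np.empty(len(y))
    out[0] = preds[0, 0]                       # no history yet: default model
    for t in range(1, len(y)):
        lo = max(0, t - window)
        best = np.argmin(errors[:, lo:t].mean(axis=1))
        out[t] = preds[best, t]
    return out

# Pool: a "population" constant model (in-sample mean, for simplicity) and
# a "patient-specific" last-value model.
population_mean = lambda y: np.full_like(y, y.mean())
last_value = lambda y: np.concatenate(([y[0]], y[:-1]))
y = 50 + np.cumsum(np.random.default_rng(3).normal(0, 1, 120))
print(np.mean(np.abs(switch_predict(y, [population_mean, last_value]) - y)))
```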
Safety analytics for integrating crash frequency and real-time risk modeling for expressways.
Wang, Ling; Abdel-Aty, Mohamed; Lee, Jaeyoung
2017-07-01
To find crash contributing factors, there have been numerous crash frequency and real-time safety studies, but such studies have been conducted independently. Until this point, no researcher has simultaneously analyzed crash frequency and real-time crash risk to test whether integrating them could better explain crash occurrence. Therefore, this study aims at integrating crash frequency and real-time safety analyses using expressway data. A Bayesian integrated model and a non-integrated model were built: the integrated model linked the crash frequency and real-time models by adding the logarithm of the estimated expected crash frequency to the real-time model; the non-integrated model independently estimated the crash frequency and the real-time crash risk. The results showed that the integrated model outperformed the non-integrated model, as it provided much better model results for both the crash frequency and the real-time models. This result indicated that the added component, the logarithm of the expected crash frequency, successfully linked the two models and provided useful information to both. This study also uncovered a few variables that are not typically included in crash frequency analysis. For example, the average daily standard deviation of speed, aggregated from speeds at 1-min intervals, had a positive effect on crash frequency. In conclusion, this study suggested a methodology to improve the crash frequency and real-time models by integrating them, and it might inspire future researchers to understand crash mechanisms better. Copyright © 2017 Elsevier Ltd. All rights reserved.
Xu, Haiyang; Wang, Ping
2016-01-01
In order to verify the real-time reliability of an unmanned aerial vehicle (UAV) flight control system and comply with airworthiness certification standards, we propose a model-based integration framework for the modeling and verification of timing properties. Building on the advantages of MARTE, this framework uses class diagrams to create the static model of the software system and state charts to create the dynamic model. According to the defined transformation rules, the MARTE model can be transformed into a formal integrated model, and the different parts of the model can be verified using existing formal tools. For the real-time specifications of the software system, we also propose a generating algorithm for temporal logic formulas, which can automatically extract real-time properties from time-sensitive live sequence charts (TLSC). Finally, we modeled the simplified flight control system of a UAV to check its real-time properties. The results showed that the framework can be used to create the system model, as well as to precisely analyze and verify the real-time reliability of the UAV flight control system.
Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes
Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian
2015-01-01
Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses addressing the cases on road with multiple bus routes is proposed in this paper, based on support vector machines (SVMs) and Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm can adjust bus travel times with the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with the real-world data on road with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction performance among all the five models proposed in the study in terms of prediction accuracy on road with multiple bus routes. PMID:26294903
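A compact sketch of the two-stage design: an SVR supplies baseline travel times from historical features, and a scalar Kalman filter tracks the deviation between that baseline and the latest observed travel times. The features, noise variances and random-walk deviation model are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)

# Hypothetical history: [hour of day, segment length (km)] -> minutes
X = np.column_stack([rng.integers(6, 22, 400), rng.uniform(1, 5, 400)])
y = 2.5 * X[:, 1] + 3 * np.isin(X[:, 0], [8, 9, 17, 18]) + rng.normal(0, 1, 400)
svm = SVR(C=10.0, epsilon=0.5).fit(X, y)

def kalman_adjust(baseline, observed, q=1.0, r=4.0):
    """Scalar Kalman filter on the deviation between observed travel
    times and the SVM baseline (deviation modeled as a random walk)."""
    d, p = 0.0, 1.0                      # deviation estimate and its variance
    out = []
    for b, z in zip(baseline, observed):
        p = p + q                        # predict: deviation may drift
        k = p / (p + r)                  # gain; update with innovation
        d = d + k * (z - (b + d))
        p = (1 - k) * p
        out.append(b + d)
    return np.array(out)

base = svm.predict(np.array([[8, 3.0]] * 10))
obs = base + rng.normal(1.5, 0.5, 10)    # buses currently running slower
print(kalman_adjust(base, obs)[-1])      # adjusted travel time estimate
```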
Fuzzy time-series based on Fibonacci sequence for stock price forecasting
NASA Astrophysics Data System (ADS)
Chen, Tai-Liang; Cheng, Ching-Hsue; Jong Teoh, Hia
2007-07-01
Time-series models have been utilized to make reasonably accurate predictions in the areas of stock price movements, academic enrollments, weather, etc. To improve the forecasting performance of fuzzy time-series models, this paper proposes a new model which incorporates the concept of the Fibonacci sequence, the framework of Song and Chissom's model and the weighted method of Yu's model. This paper employs a 5-year period of TSMC (Taiwan Semiconductor Manufacturing Company) stock price data and a 13-year period of TAIEX (Taiwan Stock Exchange Capitalization Weighted Stock Index) stock index data as experimental datasets. By comparing our forecasting performance with that of Chen's (Forecasting enrollments based on fuzzy time-series. Fuzzy Sets Syst. 81 (1996) 311-319), Yu's (Weighted fuzzy time-series models for TAIEX forecasting. Physica A 349 (2004) 609-624) and Huarng's (The application of neural networks to forecast fuzzy time series. Physica A 336 (2006) 481-491) models, we conclude that the proposed model surpasses these conventional fuzzy time-series models in accuracy.
NASA Astrophysics Data System (ADS)
Kelleher, Christa A.; Shaw, Stephen B.
2018-02-01
Recent research has found that hydrologic modeling over decadal time periods often requires time-variant model parameters. Most prior work has focused on assessing time variance in model parameters conceptualizing watershed features and functions. In this paper, we assess whether adding a time-variant scalar to potential evapotranspiration (PET) can be used in place of time-variant parameters. Using the HBV hydrologic model and four different simple but common PET methods (Hamon, Priestley-Taylor, Oudin, and Hargreaves), we simulated 60+ years of daily discharge on four rivers in New York state. Allowing all ten model parameters to vary in time achieved good model fits in terms of daily NSE and long-term water balance. However, allowing single model parameters to vary in time - including a scalar on PET - achieved nearly equivalent model fits across PET methods. Overall, varying a PET scalar in time is likely more physically consistent with known biophysical controls on PET than varying parameters conceptualizing innate watershed properties such as wilting point and field capacity. This work suggests that the seeming need for time variance in innate watershed parameters may be due to overly simple evapotranspiration formulations that do not account for all factors controlling evapotranspiration over long time periods.
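As a sketch of the idea, a PET estimate can be multiplied by a time-variant scalar before entering the hydrologic model. The code assumes one common form of the Hamon equation (coefficient 29.8 with a Magnus saturation vapour pressure); the temperature, daylength and scalar series are invented.

```python
import numpy as np

def hamon_pet(temp_c, daylength_hr):
    """One common form of the Hamon PET equation (mm/day):
    PET = 29.8 * D * e_s / (T + 273.2), with saturation vapour pressure
    e_s (kPa) from the Magnus approximation."""
    es = 0.611 * np.exp(17.27 * temp_c / (temp_c + 237.3))
    return 29.8 * daylength_hr * es / (temp_c + 273.2)

days = np.arange(365)
temp = 10 + 12 * np.sin(2 * np.pi * (days - 100) / 365)   # hypothetical deg C
daylen = 12 + 3 * np.sin(2 * np.pi * (days - 80) / 365)   # hypothetical hours

# Time-variant scalar on PET (a multi-decadal application would use one
# value per year; here a smooth within-record example):
scalar = 1.0 + 0.1 * np.sin(2 * np.pi * days / 365)
pet_adjusted = scalar * hamon_pet(temp, daylen)
print(pet_adjusted.mean())   # mean adjusted PET, mm/day
```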
Modeling the long-term evolution of space debris
Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.
2017-03-07
A space object modeling system that models the evolution of space debris is provided. The modeling system simulates interaction of space objects at simulation times throughout a simulation period. The modeling system includes a propagator that calculates the position of each object at each simulation time based on orbital parameters. The modeling system also includes a collision detector that, for each pair of objects at each simulation time, performs a collision analysis. When the distance between objects satisfies a conjunction criterion, the modeling system calculates a local minimum distance between the pair of objects based on a curve fitting to identify a time of closest approach at the simulation times and calculating the position of the objects at the identified time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
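A minimal sketch of the refinement step: when sampled separations dip below a conjunction threshold, a local quadratic curve fit yields the time of closest approach between simulation times and the corresponding minimum distance. The threshold, fitting window and sample values are illustrative.

```python
import numpy as np

def refined_min_distance(t, sep, threshold=5.0):
    """When sampled separations dip below a conjunction threshold (km), fit
    a parabola to the nearby samples and return its analytic minimum as
    (time of closest approach, minimum distance)."""
    i = int(np.argmin(sep))
    if sep[i] > threshold:
        return None                              # no conjunction detected
    sl = slice(max(0, i - 2), min(len(t), i + 3))
    a, b, c = np.polyfit(t[sl], sep[sl], 2)      # local quadratic fit
    t_star = np.clip(-b / (2 * a), t[sl][0], t[sl][-1])
    return t_star, np.polyval([a, b, c], t_star)

# Hypothetical separation samples (km) around one simulation time step (s)
t = np.linspace(0, 60, 13)
sep = 0.02 * (t - 31.4) ** 2 + 1.2               # true minimum between samples
print(refined_min_distance(t, sep))              # ~ (31.4, 1.2)
```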
Takemura, Kazuhisa; Murakami, Hajime
2016-01-01
A probability weighting function (w(p)) is considered to be a nonlinear function of probability (p) in behavioral decision theory. This study proposes a psychophysical model of probability weighting functions derived from a hyperbolic time discounting model and a geometric distribution. The aim of the study is to view probability weighting functions from the perspective of the decision maker's waiting time. Since the expected value of a geometrically distributed random variable X is 1/p, we formulated the probability weighting function of the expected value model for hyperbolic time discounting as w(p) = (1 - k log p)^(-1). Moreover, a probability weighting function is derived from Loewenstein and Prelec's (1992) generalized hyperbolic time discounting model. The latter model is proved to be equivalent to the hyperbolic-logarithmic weighting function considered by Prelec (1998) and Luce (2001). In this study, we derive a model from the generalized hyperbolic time discounting model assuming Fechner's (1860) psychophysical law of time and a geometric distribution of trials. In addition, we develop median models of hyperbolic time discounting and generalized hyperbolic time discounting. To assess the fit of each model, a psychological experiment was conducted to estimate the probability weighting and value functions at the level of the individual participant. The participants were 50 university students. The results of the individual analysis indicated that the expected value model of generalized hyperbolic discounting fitted better than previous probability weighting decision-making models. The theoretical implications of this finding are discussed.
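The proposed weighting function is easy to evaluate and to fit to elicited weights. In the sketch below the observed weights are invented, and k is estimated by nonlinear least squares as a stand-in for the study's individual-level fitting.

```python
import numpy as np
from scipy.optimize import curve_fit

def w(p, k):
    """Probability weighting function from hyperbolic time discounting of
    a geometric waiting time: w(p) = 1 / (1 - k * ln(p))."""
    return 1.0 / (1.0 - k * np.log(p))

# Hypothetical median weights elicited from one participant
p = np.array([0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.99])
w_obs = np.array([0.11, 0.18, 0.24, 0.38, 0.55, 0.71, 0.85, 0.98])

k_hat, cov = curve_fit(w, p, w_obs, p0=[1.0])
print(k_hat, np.sqrt(np.diag(cov)))   # estimate and its standard error
```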
Local Stability of AIDS Epidemic Model Through Treatment and Vertical Transmission with Time Delay
NASA Astrophysics Data System (ADS)
Novi W, Cascarilla; Lestari, Dwi
2016-02-01
This study explains the stability of a model of the spread of AIDS that includes treatment and vertical transmission. A person infected with HIV takes time to develop AIDS; because this progression can be delayed, the resulting model is a model with time delay. The model takes the form of a nonlinear differential equation with time delay, SIPTA (susceptible-infected-pre AIDS-treatment-AIDS). Analysis of the SIPTA model yields a disease-free equilibrium point and an endemic equilibrium point. The disease-free equilibrium point, with and without time delay, is locally asymptotically stable if the basic reproduction number is less than one. The endemic equilibrium point is locally asymptotically stable if the time delay is less than a critical delay value, unstable if the time delay exceeds that value, and a bifurcation occurs when the time delay equals the critical value.
The Spectrum of Mathematical Models.
ERIC Educational Resources Information Center
Karplus, Walter J.
1983-01-01
Mathematical modeling problems encountered in many disciplines are discussed in terms of the modeling process and applications of models. The models are classified according to three types of abstraction: continuous-space-continuous-time, discrete-space-continuous-time, and discrete-space-discrete-time. Limitations in different kinds of modeling…
NASA Astrophysics Data System (ADS)
Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken
2016-07-01
Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
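A small sketch of the input-aggregation step: 6-min gauge data are resampled to coarser modeling time steps, conserving the rainfall total while losing sub-step intensity. The synthetic rain series is illustrative.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

# One day of hypothetical 6-min rain-gauge data (mm per time step)
idx = pd.date_range("2016-07-01", periods=240, freq="6min")
rain = pd.Series(np.where(rng.random(240) < 0.1,
                          rng.exponential(0.8, 240), 0.0), index=idx)

# Totals are conserved under aggregation, but sub-step intensity is lost.
for step in ["6min", "30min", "1h", "3h", "1D"]:
    agg = rain.resample(step).sum()
    print(f"{step:>5}: total {agg.sum():5.2f} mm, max per step {agg.max():5.2f} mm")
```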
Sensitivity of tsunami evacuation modeling to direction and land cover assumptions
Schmidtlein, Mathew C.; Wood, Nathan J.
2015-01-01
Although anisotropic least-cost-distance (LCD) modeling is becoming a common tool for estimating pedestrian-evacuation travel times out of tsunami hazard zones, there has been insufficient attention paid to understanding model sensitivity behind the estimates. To support tsunami risk-reduction planning, we explore two aspects of LCD modeling as it applies to pedestrian evacuations and use the coastal community of Seward, Alaska, as our case study. First, we explore the sensitivity of modeling to the direction of movement by comparing standard safety-to-hazard evacuation times to hazard-to-safety evacuation times for a sample of 3985 points in Seward's tsunami-hazard zone. Safety-to-hazard evacuation times slightly overestimated hazard-to-safety evacuation times but the strong relationship to the hazard-to-safety evacuation times, slightly conservative bias, and shorter processing times of the safety-to-hazard approach make it the preferred approach. Second, we explore how variations in land cover speed conservation values (SCVs) influence model performance using a Monte Carlo approach with one thousand sets of land cover SCVs. The LCD model was relatively robust to changes in land cover SCVs with the magnitude of local model sensitivity greatest in areas with higher evacuation times or with wetland or shore land cover types, where model results may slightly underestimate travel times. This study demonstrates that emergency managers should be concerned not only with populations in locations with evacuation times greater than wave arrival times, but also with populations with evacuation times lower than but close to expected wave arrival times, particularly if they are required to cross wetlands or beaches.
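A toy sketch of the second experiment: travel time along a fixed evacuation path is recomputed under Monte Carlo draws of land-cover speed conservation values (SCVs). The path segments, cover types, SCV ranges and base walking speed are all invented.

```python
import numpy as np

rng = np.random.default_rng(29)

# Hypothetical evacuation path: segment lengths (m) and land-cover types
segments = np.array([120.0, 400.0, 250.0, 80.0])
cover = ["road", "wetland", "beach", "road"]
base_speed = 1.22                                   # m/s slow-walk assumption

# Monte Carlo over speed conservation values: the fraction of base speed
# preserved on each cover type, drawn from plausible (invented) ranges.
ranges = {"road": (0.9, 1.0), "wetland": (0.4, 0.8), "beach": (0.5, 0.9)}
times = []
for _ in range(1000):
    scv = {c: rng.uniform(*r) for c, r in ranges.items()}
    speeds = np.array([base_speed * scv[c] for c in cover])
    times.append(np.sum(segments / speeds) / 60.0)  # minutes

print(f"evacuation time: median {np.median(times):.1f} min, "
      f"95th percentile {np.percentile(times, 95):.1f} min")
```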
Dynamic Modeling from Flight Data with Unknown Time Skews
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2016-01-01
A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.
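As a simplified stand-in for the paper's combined reconstruction and optimization approach, a relative time skew between two channels can be estimated by maximizing their cross-correlation over integer sample lags.

```python
import numpy as np

def estimate_skew(reference, skewed, dt, max_lag=200):
    """Estimate the relative time skew between two measured channels by
    maximizing their cross-correlation over integer sample lags."""
    a = (reference - reference.mean()) / reference.std()
    b = (skewed - skewed.mean()) / skewed.std()
    lags = np.arange(-max_lag, max_lag + 1)
    cc = [np.sum(a[max(0, -l):len(a) - max(0, l)] *
                 b[max(0, l):len(b) - max(0, -l)]) for l in lags]
    return lags[int(np.argmax(cc))] * dt

rng = np.random.default_rng(11)
t = np.arange(0, 60, 0.02)                        # 50 Hz flight data
pitch_rate = np.sin(0.5 * t) + 0.05 * rng.normal(size=t.size)
elevator = np.roll(pitch_rate, 37)                # unknown skew: 0.74 s
print(estimate_skew(pitch_rate, elevator, dt=0.02))   # ~ 0.74
```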
An Integrated Model of Patient and Staff Satisfaction Using Queuing Theory
Komashie, Alexander; Mousavi, Ali; Clarkson, P. John; Young, Terry
2015-01-01
This paper investigates the connection between patient satisfaction, waiting time, staff satisfaction, and service time. It uses a variety of models to enable improvement against experiential and operational health service goals. Patient satisfaction levels are estimated using a model based on waiting (waiting times). Staff satisfaction levels are estimated using a model based on the time spent with patients (service time). An integrated model of patient and staff satisfaction, the effective satisfaction level model, is then proposed (using queuing theory). This links patient satisfaction, waiting time, staff satisfaction, and service time, connecting two important concepts, namely, experience and efficiency in care delivery and leading to a more holistic approach in designing and managing health services. The proposed model will enable healthcare systems analysts to objectively and directly relate elements of service quality to capacity planning. Moreover, as an instrument used jointly by healthcare commissioners and providers, it affords the prospect of better resource allocation. PMID:27170899
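A toy sketch of the queuing backbone: for an M/M/1 queue the mean queue wait is Wq = rho/(mu - lambda), which can then feed a satisfaction curve. The exponential satisfaction mapping below is purely illustrative and is not the authors' effective satisfaction level model.

```python
import numpy as np

def mm1_times(arrival_rate, service_rate):
    """M/M/1 queue: mean wait in queue Wq = rho/(mu - lambda) and mean
    time in system W = Wq + 1/mu, valid for utilisation rho < 1."""
    rho = arrival_rate / service_rate
    assert rho < 1, "queue is unstable"
    wq = rho / (service_rate - arrival_rate)
    return wq, wq + 1.0 / service_rate

# Hypothetical clinic: 4 patients/hour arriving, 5 patients/hour served
wq, w = mm1_times(4.0, 5.0)

# Purely illustrative satisfaction curve: falls with queue waiting time
patient_satisfaction = np.exp(-2.0 * wq)
print(f"Wq = {wq:.2f} h, W = {w:.2f} h, patient sat ~ {patient_satisfaction:.2f}")
```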
Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.
2017-07-14
A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
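A minimal sketch of the regression step, with a random forest mapping cheap error indicators to the QoI error and the locality (classification/clustering) stage skipped; the indicator features and error relationship are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(13)

# Hypothetical training set: rows = (parameter instance, time instance),
# columns = cheap error indicators produced by the surrogate; the target is
# the true surrogate error in the QoI, computed offline by running both the
# high-fidelity and surrogate models.
n = 2000
indicators = rng.normal(size=(n, 10))
qoi_error = (0.8 * indicators[:, 0] ** 2
             + 0.3 * indicators[:, 1] * indicators[:, 2]
             + 0.05 * rng.normal(size=n))

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(indicators[:1500], qoi_error[:1500])

# Use 1: correct the surrogate's QoI prediction at each time instance by
# subtracting the predicted error.
print("held-out R^2:", rf.score(indicators[1500:], qoi_error[1500:]))
```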
Ignoring Individual Differences in Times of Assessment in Growth Curve Modeling
ERIC Educational Resources Information Center
Coulombe, Patrick; Selig, James P.; Delaney, Harold D.
2016-01-01
Researchers often collect longitudinal data to model change over time in a phenomenon of interest. Inevitably, there will be some variation across individuals in specific time intervals between assessments. In this simulation study of growth curve modeling, we investigate how ignoring individual differences in time points when modeling change over…
NASA Astrophysics Data System (ADS)
Liang, Yingjie; Chen, Wen
2018-04-01
The mean squared displacement (MSD) of traditional ultraslow diffusion is a logarithmic function of time. Recently, the continuous time random walk model has been employed to characterize this ultraslow diffusion dynamics by connecting the heavy-tailed logarithmic function and its variation as the asymptotic waiting time density. In this study we investigate the limiting waiting time density of a general ultraslow diffusion model via the inverse Mittag-Leffler function, whose special cases include the traditional logarithmic ultraslow diffusion model. The MSD of the general ultraslow diffusion model is analytically derived as an inverse Mittag-Leffler function, and is observed to increase even more slowly than that of the logarithmic model. Very long waiting times occur with higher probability under the inverse Mittag-Leffler density than under the power-law or logarithmic models. Monte Carlo simulations of the one-dimensional sample path of a single particle are also performed. The results show that the inverse Mittag-Leffler waiting time density is effective in depicting general ultraslow random motion.
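A sketch of a CTRW simulation for the logarithmic special case mentioned above: waiting times with survival function 1/ln(t) for t >= e are sampled by inversion, and the resulting MSD grows roughly like ln(t). The jump law and particle count are illustrative, and the inverse Mittag-Leffler density itself is not sampled here.

```python
import numpy as np

rng = np.random.default_rng(17)

def ctrw_msd(sample_waits, t_obs, n_particles=5_000, n_jumps=200):
    """MSD of a 1-D CTRW with unit +/-1 jumps and heavy-tailed waiting
    times, evaluated at the observation times t_obs."""
    x2 = np.zeros_like(t_obs)
    for _ in range(n_particles):
        arrival = np.cumsum(sample_waits(n_jumps))       # jump epochs
        steps = rng.choice((-1, 1), size=n_jumps)
        pos = np.concatenate(([0], np.cumsum(steps)))
        x2 += pos[np.searchsorted(arrival, t_obs)] ** 2  # position at each t
    return x2 / n_particles

# Logarithmic ultraslow model: survival P(T > t) = 1/ln(t) for t >= e,
# sampled by inversion as T = exp(1/U); exponents are clipped to avoid
# floating-point overflow on extremely long (effectively infinite) waits.
waits = lambda n: np.exp(np.minimum(1.0 / rng.random(n), 700.0))
t_obs = np.logspace(1, 6, 6)
for t_o, m in zip(t_obs, ctrw_msd(waits, t_obs)):
    print(f"t = {t_o:9.0f}   MSD = {m:6.2f}")
```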
Adaptive time-variant models for fuzzy-time-series forecasting.
Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching
2010-12-01
A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experiment results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.
Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support
NASA Astrophysics Data System (ADS)
Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar
This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including the pipelined datapath, memory hierarchy and branch delay model, is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using an MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real time and showed less than 10% timing error compared to board measurements.
Nonstandard working schedules and health: the systematic search for a comprehensive model.
Merkus, Suzanne L; Holte, Kari Anne; Huysmans, Maaike A; van Mechelen, Willem; van der Beek, Allard J
2015-10-23
Theoretical models on shift work fall short of describing relevant health-related pathways associated with the broader concept of nonstandard working schedules. Shift work models neither combine relevant working time characteristics applicable to nonstandard schedules nor include the role of rest periods and recovery in the development of health complaints. Therefore, this paper aimed to develop a comprehensive model on nonstandard working schedules to address these shortcomings. A literature review was conducted using a systematic search and selection process. Two searches were performed: one associating the working time characteristics time-of-day and working time duration with health, and one associating recovery after work with health. Data extracted from the models were used to develop a comprehensive model on nonstandard working schedules and health. For models on the working time characteristics, the search strategy yielded 3044 references, of which 26 met the inclusion criteria and contained 22 distinct models. For models on recovery after work, the search strategy yielded 896 references, of which seven met the inclusion criteria and contained seven distinct models. Of the models on the working time characteristics, three combined time-of-day with working time duration, 18 were on time-of-day (i.e. shift work), and one was on working time duration. The model developed in the paper has a comprehensive approach to working hours and other work-related risk factors and proposes that they should be balanced by positive non-work factors to maintain health. The physiological processes leading to health complaints are circadian disruption, sleep deprivation, and activation, which should be counterbalanced by (re-)entrainment, restorative sleep, and recovery, respectively, to maintain health. A comprehensive model on nonstandard working schedules and health was developed. The model proposes that work and non-work, as well as their associated physiological processes, need to be balanced to maintain good health. The model gives researchers a useful overview of the various risk factors and pathways associated with health that should be considered when studying any form of nonstandard working schedule.
Modeling stream fish distributions using interval-censored detection times.
Ferreira, Mário; Filipe, Ana Filipa; Bardos, David C; Magalhães, Maria Filomena; Beja, Pedro
2016-08-01
Controlling for imperfect detection is important for developing species distribution models (SDMs). Occupancy-detection models based on the time needed to detect a species can be used to address this problem, but this is hindered when times to detection are not known precisely. Here, we extend the time-to-detection model to deal with detections recorded in time intervals and illustrate the method using a case study on stream fish distribution modeling. We collected electrofishing samples of six fish species across a Mediterranean watershed in Northeast Portugal. Based on a Bayesian hierarchical framework, we modeled the probability of water presence in stream channels, and the probability of species occupancy conditional on water presence, in relation to environmental and spatial variables. We also modeled time-to-first detection conditional on occupancy in relation to local factors, using modified interval-censored exponential survival models. Posterior distributions of occupancy probabilities derived from the models were used to produce species distribution maps. Simulations indicated that the modified time-to-detection model provided unbiased parameter estimates despite interval-censoring. There was a tendency for spatial variation in detection rates to be primarily influenced by depth and, to a lesser extent, stream width. Species occupancies were consistently affected by stream order, elevation, and annual precipitation. Bayesian P-values and AUCs indicated that all models had adequate fit and high discrimination ability, respectively. Mapping of predicted occupancy probabilities showed widespread distribution by most species, but uncertainty was generally higher in tributaries and upper reaches. The interval-censored time-to-detection model provides a practical solution to model occupancy-detection when detections are recorded in time intervals. This modeling framework is useful for developing SDMs while controlling for variation in detection rates, as it uses simple data that can be readily collected by field ecologists.
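The interval-censoring device is simple to write down: if detection times are exponential with rate lambda but only recorded as falling in (a, b], each detection contributes exp(-lambda*a) - exp(-lambda*b) to the likelihood. A minimal maximum-likelihood sketch with invented intervals follows (the paper itself embeds this in a Bayesian hierarchical occupancy model).

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical electrofishing passes: each detection is known only to lie
# in a time interval (a, b] in minutes; sites with no detection are
# right-censored at the total sampling time.
intervals = np.array([(0, 5), (5, 10), (0, 5), (10, 15), (5, 10)], float)
censored_at = np.array([15.0, 15.0, 15.0])

def neg_loglik(lam):
    """Interval-censored exponential likelihood: P(a < T <= b) equals
    exp(-lam*a) - exp(-lam*b); right-censoring contributes exp(-lam*c)."""
    if lam <= 0:
        return np.inf
    a, b = intervals[:, 0], intervals[:, 1]
    ll = np.sum(np.log(np.exp(-lam * a) - np.exp(-lam * b)))
    ll += np.sum(-lam * censored_at)
    return -ll

res = minimize_scalar(neg_loglik, bounds=(1e-6, 2.0), method="bounded")
print("detection rate per minute:", round(res.x, 4))
```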
Time prediction of failure a type of lamps by using general composite hazard rate model
NASA Astrophysics Data System (ADS)
Riaman; Lesmana, E.; Subartini, B.; Supian, S.
2018-03-01
This paper discusses estimation of a basic survival model to obtain the average predicted failure time of a type of lamp. The estimate is for a parametric model, the general composite hazard rate model. The underlying random-time model is the exponential distribution, used as the basis, which has a constant hazard function. We discuss an example of survival model estimation for a composite hazard function using the exponential model as its basis. The model is estimated by estimating its parameters through the construction of the survival function and the empirical cumulative distribution function. The fitted model is then used to predict the average failure time for the lamp type: the data are grouped into several intervals, the average failure value is computed for each interval, and the average failure time of the model is calculated on each interval. The p-value obtained from the goodness-of-fit test is 0.3296.
Time Series Modelling of Syphilis Incidence in China from 2005 to 2012
Zhang, Xingyu; Zhang, Tao; Pei, Jiao; Liu, Yuanyuan; Li, Xiaosong; Medrano-Gracia, Pau
2016-01-01
Background: The infection rate of syphilis in China has increased dramatically in recent decades, becoming a serious public health concern. Early prediction of syphilis is therefore of great importance for health planning and management. Methods: In this paper, we analyzed surveillance time series data for primary, secondary, tertiary, congenital and latent syphilis in mainland China from 2005 to 2012. Seasonality and long-term trend were explored with decomposition methods. Autoregressive integrated moving average (ARIMA) was used to fit a univariate time series model of syphilis incidence. A separate multi-variable time series for each syphilis type was also tested using an autoregressive integrated moving average model with exogenous variables (ARIMAX). Results: The syphilis incidence rates increased three-fold from 2005 to 2012. All syphilis time series showed strong seasonality and an increasing long-term trend. Both ARIMA and ARIMAX models fitted and estimated syphilis incidence well. All univariate time series showed the highest goodness-of-fit with the ARIMA(0,0,1)×(0,1,1) model. Conclusion: Time series analysis was an effective tool for modelling the historical and future incidence of syphilis in China. The ARIMAX model showed superior performance to the ARIMA model for the modelling of syphilis incidence. Time series correlations existed between the models for primary, secondary, tertiary, congenital and latent syphilis. PMID:26901682
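With statsmodels, the ARIMAX variant is obtained by passing an exogenous series to a seasonal ARIMA. The sketch below uses the selected (0,0,1)×(0,1,1) orders, assuming a 12-month seasonal period, on synthetic monthly series; the exogenous linkage between syphilis types is invented.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(2)
idx = pd.date_range("2005-01", periods=96, freq="MS")
t = np.arange(96)

# Hypothetical monthly incidence: rising trend plus seasonality
primary = 1.0 + 0.03 * t + 0.3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.1, 96)
latent = 0.5 * primary + rng.normal(0, 0.1, 96)   # related exogenous series

y = pd.Series(latent, index=idx)
x = pd.Series(primary, index=idx)

# ARIMA(0,0,1)x(0,1,1,12); passing `exog` makes it the ARIMAX variant
arimax = SARIMAX(y, exog=x, order=(0, 0, 1), seasonal_order=(0, 1, 1, 12))
res = arimax.fit(disp=False)
print(res.aic)
# Forecasting needs future exog values; the last 6 months are a stand-in.
print(res.forecast(steps=6, exog=x.iloc[-6:]))
```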
Consequences of Base Time for Redundant Signals Experiments
Townsend, James T.; Honey, Christopher
2007-01-01
We report analytical and computational investigations into the effects of base time on the diagnosticity of two popular theoretical tools in the redundant signals literature: (1) the race model inequality and (2) the capacity coefficient. We show analytically and without distributional assumptions that the presence of base time decreases the sensitivity of both of these measures to model violations. We further use simulations to investigate the statistical power of model selection tools based on the race model inequality, both with and without base time. Base time decreases statistical power and biases the race model test toward conservatism. The magnitude of this biasing effect increases as we increase the proportion of total reaction time variance contributed by base time. We marshal empirical evidence to suggest that the proportion of reaction time variance contributed by base time is relatively small, and that the effects of base time on the diagnosticity of our model-selection tools are therefore likely to be minor. However, uncertainty remains concerning the magnitude and even the definition of base time. Experimentalists should continue to be alert to situations in which base time may contribute a large proportion of the total reaction time variance. PMID:18670591
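For readers unfamiliar with the race model inequality, here is a minimal numpy sketch that checks Miller's bound F_AB(t) ≤ F_A(t) + F_B(t) on simulated reaction times that include an additive base-time component; all distributions and the base-time share are invented for illustration.

```python
# Hedged sketch: empirical check of the race model inequality on simulated
# reaction times with a shared additive base-time stage.
import numpy as np

rng = np.random.default_rng(1)
n = 20000
base = rng.normal(0.15, 0.02, n)             # base time (s), assumed small variance
dA = rng.exponential(0.25, n)                # decision time, channel A
dB = rng.exponential(0.30, n)                # decision time, channel B
rt_A, rt_B = base + dA, base + dB
rt_AB = base + np.minimum(dA, dB)            # parallel race on the decision stage

def ecdf(sample, t):
    return np.searchsorted(np.sort(sample), t, side="right") / len(sample)

ts = np.linspace(0.1, 0.8, 20)
violation = ecdf(rt_AB, ts) - (ecdf(rt_A, ts) + ecdf(rt_B, ts))
print(np.max(violation))   # <= 0 up to sampling noise: race model not rejected
```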
Predictors of nursing home residents' time to hospitalization.
O'Malley, A James; Caudry, Daryl J; Grabowski, David C
2011-02-01
To model the predictors of the time to first acute hospitalization for nursing home residents and, accounting for previous hospitalizations, to model the predictors of time between subsequent hospitalizations. Merged file from New York State for the period 1998-2004 consisting of nursing home information from the minimum dataset and hospitalization information from the Statewide Planning and Research Cooperative System. Accelerated failure time models were used to estimate the model parameters and predict survival times. The models were fit to observations from 50 percent of the nursing homes and validated on the remaining observations. Pressure ulcers and facility-level deficiencies were associated with a decreased time to first hospitalization, while the presence of advance directives and facility staffing was associated with an increased time. The predictors in the time-to-first-hospitalization model had effects of similar magnitude in predicting the time between subsequent hospitalizations. This study provides novel evidence suggesting modifiable patient and nursing home characteristics are associated with the time to first hospitalization and time to subsequent hospitalizations for nursing home residents. © Health Research and Educational Trust.
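A minimal sketch of an accelerated failure time fit in this spirit, using the lifelines library on a fabricated data frame; the column names and effect signs are illustrative assumptions, not the study's data.

```python
# Hedged sketch: Weibull accelerated failure time model on invented
# nursing-home-style covariates. All names and effect signs are assumed.
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "pressure_ulcer": rng.integers(0, 2, n),
    "advance_directive": rng.integers(0, 2, n),
    "staffing": rng.normal(3.5, 0.5, n),
})
# shorter times with ulcers, longer with directives/staffing (assumed signs)
scale = np.exp(5 - 0.4 * df.pressure_ulcer
               + 0.3 * df.advance_directive + 0.1 * df.staffing)
df["time_to_hosp"] = rng.weibull(1.2, n) * scale
df["hospitalized"] = 1    # treat all spells as observed events for simplicity

aft = WeibullAFTFitter()
aft.fit(df, duration_col="time_to_hosp", event_col="hospitalized")
aft.print_summary()       # acceleration factors per covariate
```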
Comparing an annual and daily time-step model for predicting field-scale phosphorus loss
USDA-ARS?s Scientific Manuscript database
Numerous models exist for describing phosphorus (P) losses from agricultural fields. The complexity of these models varies considerably, ranging from simple empirically based annual time-step models to more complex process-based daily time-step models. While better accuracy is often assumed with more...
Chan, Kwun Chuen Gary; Wang, Mei-Cheng
2017-01-01
Recurrent event processes with marker measurements are typically studied with forward time models starting from an initial event. Interestingly, the processes could exhibit important terminal behavior during a time period before occurrence of the failure event. A natural and direct way to study recurrent events prior to a failure event is to align the processes using the failure event as the time origin and to examine the terminal behavior by a backward time model. This paper studies regression models for backward recurrent marker processes by counting time backward from the failure event. A three-level semiparametric regression model is proposed for jointly modeling the time to a failure event, the backward recurrent event process, and the marker observed at the time of each backward recurrent event. The first level is a proportional hazards model for the failure time, the second level is a proportional rate model for the recurrent events occurring before the failure event, and the third level is a proportional mean model for the marker given the occurrence of a recurrent event backward in time. By jointly modeling the three components, estimating equations can be constructed for marked counting processes to estimate the target parameters in the three-level regression models. Large sample properties of the proposed estimators are studied and established. The proposed models and methods are illustrated by a community-based AIDS clinical trial to examine the terminal behavior of frequencies and severities of opportunistic infections among HIV infected individuals in the last six months of life.
Perperoglou, Aris
2016-12-10
Flexible survival models are needed when modelling data from long-term follow-up studies. In many cases, the assumption of proportionality imposed by a Cox model will not be valid. Instead, a model that can identify time varying effects of fixed covariates can be used. Although there are several approaches that deal with this problem, it is not always straightforward how to choose which covariates should be modelled as having time varying effects and which not. At the same time, it is up to the researcher to define appropriate time functions that describe the dynamic pattern of the effects. In this work, we suggest a model that can deal with both fixed and time varying effects and uses simple hypothesis tests to distinguish which covariates do have dynamic effects. The model is an extension of the parsimonious reduced rank model of rank 1. As such, the number of parameters is kept low, and thus, a flexible set of time functions, such as b-splines, can be used. The basic theory is illustrated along with an efficient fitting algorithm. The proposed method is applied to a dataset of breast cancer patients and compared with a multivariate fractional polynomials approach for modelling time-varying effects. Copyright © 2016 John Wiley & Sons, Ltd.
Hu, Wenfa; He, Xinhua
2014-01-01
The time, quality, and cost are three important but contradictive objectives in a building construction project. It is a tough challenge for project managers to optimize them since they are different parameters. This paper presents a time-cost-quality optimization model that enables managers to optimize multiobjectives. The model is from the project breakdown structure method where task resources in a construction project are divided into a series of activities and further into construction labors, materials, equipment, and administration. The resources utilized in a construction activity would eventually determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is finally generated based on correlations between construction activities. A genetic algorithm tool is applied in the model to solve the comprehensive nonlinear time-cost-quality problems. Building of a three-storey house is an example to illustrate the implementation of the model, demonstrate its advantages in optimizing trade-off of construction time, cost, and quality, and help make a winning decision in construction practices. The computational time-cost-quality curves in visual graphics from the case study prove traditional cost-time assumptions reasonable and also prove this time-cost-quality trade-off model sophisticated.
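The following toy sketch illustrates the general idea of a genetic algorithm searching over per-activity resource levels with a scalarized time-cost-quality objective; the response functions, weights, and operators are invented and far simpler than the paper's trade-off model.

```python
# Hedged sketch: toy genetic algorithm over per-activity resource levels,
# scalarizing invented time, cost and quality response functions.
import numpy as np

rng = np.random.default_rng(3)
n_act, pop_size, n_gen = 5, 60, 200

def evaluate(x):                    # x in [0,1]^n_act: resource intensity
    time = np.sum(10.0 / (1.0 + 4.0 * x), axis=1)   # more resources -> faster
    cost = np.sum(100.0 * x, axis=1)                # ... but more expensive
    quality = np.mean(0.6 + 0.4 * x, axis=1)        # ... and higher quality
    return time / 50 + cost / 500 - quality         # assumed scalar weights

pop = rng.random((pop_size, n_act))
for _ in range(n_gen):
    fit = evaluate(pop)
    parents = pop[np.argsort(fit)[: pop_size // 2]]           # truncation selection
    cut = rng.integers(1, n_act, pop_size // 2)
    kids = np.where(np.arange(n_act) < cut[:, None],
                    parents, parents[::-1])                    # one-point crossover
    kids = np.clip(kids + rng.normal(0, 0.05, kids.shape), 0, 1)  # mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmin(evaluate(pop))]
print("best resource plan:", np.round(best, 2))
```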
Borup, Morten; Grum, Morten; Mikkelsen, Peter Steen
2013-01-01
When an online runoff model is updated from system measurements, the requirements of the precipitation input change. Using rain gauge data as precipitation input, there will be a displacement between the time when the rain hits the gauge and the time when the rain hits the actual catchment, due to the time it takes for the rain cell to travel from the rain gauge to the catchment. Since this time displacement is not present for system measurements, the data assimilation scheme might already have updated the model to include the impact from the particular rain cell when the rain data is forced upon the model, which therefore will end up including the same rain twice in the model run. This paper compares the forecast accuracy of updated models when using time-displaced rain input to that of rain input with constant biases. This is done using a simple time-area model and historic rain series that are either displaced in time or affected with a bias. The results show that for a 10 minute forecast, time displacements of 5 and 10 minutes compare to biases of 60 and 100%, respectively, independent of the catchment's time of concentration.
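A minimal numpy sketch of the comparison described above: a rain series is routed through a simple time-area (unit hydrograph) response, and the runoff error from a time displacement is compared with that from a constant bias; the series and the response function are synthetic.

```python
# Hedged sketch: compare runoff error from time-displaced rain input with
# that from biased rain input, using a toy time-area response.
import numpy as np

rng = np.random.default_rng(4)
rain = rng.gamma(0.3, 2.0, 500)                  # synthetic 1-min rain series
ua = np.ones(10) / 10                            # 10-min time-area response

def runoff(p):
    return np.convolve(p, ua)[: len(p)]

ref = runoff(rain)
shifted = runoff(np.roll(rain, 5))               # 5-step time displacement
biased = runoff(rain * 1.6)                      # 60% constant bias
print("RMSE displaced:", np.sqrt(np.mean((shifted - ref) ** 2)))
print("RMSE biased:   ", np.sqrt(np.mean((biased - ref) ** 2)))
```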
NASA Astrophysics Data System (ADS)
Mattern, Nancy; Schau, Candace
2002-04-01
Four causal models describing the longitudinal relationships between attitudes and achievement have been proposed in the literature. These models feature: (a) cross-effects over time between attitudes and achievement, (b) influence of achievement predominant over time, (c) influence of attitudes predominant over time, or (d) no cross-effects over time between attitudes and achievement. In an examination of the causal relationships over time between attitudes toward science and science achievement for White rural seventh- and eighth-grade students, the cross-effects model was the best fitting model form for students overall. However, when examined by gender, the no cross-effects model exhibited the most accurate fit for White rural middle-school girls, whereas a new model called the no attitudes-path model exhibited the best fit for these boys.
A Box-Cox normal model for response times.
Klein Entink, R H; van der Linden, W J; Fox, J-P
2009-11-01
The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset of the Medical College Admission Test where the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterion (DIC) is developed that allows for comparing model fit between models with different transformation parameters. Showing an enhanced description of the shape of the response time distributions, its application in an educational measurement context is discussed at length.
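As a small illustration of the transformation family discussed above, this sketch compares a plain log transform with a maximum likelihood Box-Cox fit on skewed synthetic response times using scipy; it is not the paper's model, which embeds the transformation in a joint framework for responses and response times.

```python
# Hedged sketch: log transform vs. maximum likelihood Box-Cox transform on
# synthetic skewed response times; a shift makes the plain log imperfect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
rt = rng.lognormal(mean=0.0, sigma=0.35, size=2000) + 0.3   # shifted lognormal

log_rt = np.log(rt)
bc_rt, lam = stats.boxcox(rt)          # lambda estimated by maximum likelihood
print("estimated lambda:", lam)
print("Shapiro W, log transform:   ", stats.shapiro(log_rt).statistic)
print("Shapiro W, Box-Cox transform:", stats.shapiro(bc_rt).statistic)
```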
NASA Astrophysics Data System (ADS)
Meng, Qingxin; Hu, Xiangyun; Pan, Heping; Xi, Yufei
2018-04-01
We propose an algorithm for calculating all-time apparent resistivity from transient electromagnetic induction logging. The algorithm is based on the whole-space transient electric field expression of the uniform model and Halley's optimisation. In trial calculations for uniform models, the all-time algorithm is shown to have high accuracy. We use the finite-difference time-domain method to simulate the transient electromagnetic field in radial two-layer models without wall rock and convert the simulation results to apparent resistivity using the all-time algorithm. The time-varying apparent resistivity reflects the radially layered geoelectrical structure of the models and the apparent resistivity of the earliest time channel follows the true resistivity of the inner layer; however, the apparent resistivity at larger times reflects the comprehensive electrical characteristics of the inner and outer layers. To accurately identify the outer layer resistivity based on the series relationship model of the layered resistance, the apparent resistivity and diffusion depth of the different time channels are approximately replaced by related model parameters; that is, we propose an apparent resistivity correction algorithm. By correcting the time-varying apparent resistivity of radial two-layer models, we show that the correction results reflect the radially layered electrical structure and the corrected resistivities of the larger time channels follow the outer layer resistivity. The transient electromagnetic fields of radially layered models with wall rock are simulated to obtain the 2D time-varying profiles of the apparent resistivity and corrections. The results suggest that the time-varying apparent resistivity and correction results reflect the vertical and radial geoelectrical structures. For models with small wall-rock effect, the correction removes the effect of the low-resistance inner layer on the apparent resistivity of the larger time channels.
Strum, David P; May, Jerrold H; Sampson, Allan R; Vargas, Luis G; Spangler, William E
2003-01-01
Variability inherent in the duration of surgical procedures complicates surgical scheduling. Modeling the duration and variability of surgeries might improve time estimates. Accurate time estimates are important operationally to improve utilization, reduce costs, and identify surgeries that might be considered outliers. Surgeries with multiple procedures are difficult to model because they are difficult to segment into homogenous groups and because they are performed less frequently than single-procedure surgeries. The authors studied, retrospectively, 10,740 surgeries each with exactly two CPTs and 46,322 surgical cases with only one CPT from a large teaching hospital to determine if the distribution of dual-procedure surgery times fit more closely a lognormal or a normal model. The authors tested model goodness of fit to their data using Shapiro-Wilk tests, studied factors affecting the variability of time estimates, and examined the impact of coding permutations (ordered combinations) on modeling. The Shapiro-Wilk tests indicated that the lognormal model is statistically superior to the normal model for modeling dual-procedure surgeries. Permutations of component codes did not appear to differ significantly with respect to total procedure time and surgical time. To improve individual models for infrequent dual-procedure surgeries, permutations may be reduced and estimates may be based on the longest component procedure and type of anesthesia. The authors recommend use of the lognormal model for estimating surgical times for surgeries with two component procedures. Their results help legitimize the use of log transforms to normalize surgical procedure times prior to hypothesis testing using linear statistical models. Multiple-procedure surgeries may be modeled using the longest (statistically most important) component procedure and type of anesthesia.
Parametric, nonparametric and parametric modelling of a chaotic circuit time series
NASA Astrophysics Data System (ADS)
Timmer, J.; Rust, H.; Horbelt, W.; Voss, H. U.
2000-09-01
The determination of a differential equation underlying a measured time series is a frequently arising task in nonlinear time series analysis. In the validation of a proposed model one often faces the dilemma that it is hard to decide whether possible discrepancies between the time series and model output are caused by an inappropriate model or by bad estimates of parameters in a correct type of model, or both. We propose a combination of parametric modelling based on Bock's multiple shooting algorithm and nonparametric modelling based on optimal transformations as a strategy to test proposed models and, if rejected, suggest and test new ones. We exemplify this strategy on an experimental time series from a chaotic circuit where we obtain an extremely accurate reconstruction of the observed attractor.
NASA Astrophysics Data System (ADS)
Ha, Taesung
A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequence identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors is significantly different from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for estimating the human error probability (HEP) of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin Hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned by statistical goodness-of-fit tests. The HEP for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected due to its capability of incorporating uncertainties in the model itself and in its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential usefulness for quantifying model uncertainty through sensitivity analysis in the PRA model.
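A minimal Monte Carlo sketch of the reliability physics idea described above: the human error probability is the probability that the operators' performance time exceeds the available phenomenological time; both distributions below are invented placeholders.

```python
# Hedged sketch: HEP as P(performance time > phenomenological time),
# estimated by Monte Carlo with placeholder distributions.
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000
t_phenom = rng.lognormal(mean=np.log(30), sigma=0.3, size=n)  # minutes available
t_perform = rng.weibull(2.0, size=n) * 20                     # minutes needed

hep = np.mean(t_perform > t_phenom)   # failure: action completed too late
print(f"estimated HEP: {hep:.4f}")
```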
NASA Astrophysics Data System (ADS)
Engdahl, N.
2017-12-01
Backward in time (BIT) simulations of passive tracers are often used for capture zone analysis, source area identification, and generation of travel time and age distributions. The BIT approach has the potential to become an immensely powerful tool for direct inverse modeling but the necessary relationships between the processes modeled in the forward and backward models have yet to be formally established. This study explores the time reversibility of passive and reactive transport models in a variety of 2D heterogeneous domains using particle-based random walk methods for the transport and nonlinear reaction steps. Distributed forward models are used to generate synthetic observations that form the initial conditions for the backward in time models and we consider both linear-flood and point injections. The results for passive travel time distributions show that forward and backward models are not exactly equivalent but that the linear-flood BIT models are reasonable approximations. Point based BIT models fall within the travel time range of the forward models, though their distributions can be distinctive in some cases. The BIT approximation is not as robust when nonlinear reactive transport is considered and we find that this reaction system is only exactly reversible under uniform flow conditions. We use a series of simplified, longitudinally symmetric, but heterogeneous, domains to illustrate the causes of these discrepancies between the two model types. Many of the discrepancies arise because diffusion is a "self-adjoint" operator, which causes mass to spread in the forward and backward models. This allows particles to enter low velocity regions in both models, which has opposite effects in the forward and reverse models. It may be possible to circumvent some of these limitations using an anti-diffusion model to undo mixing when time is reversed, but this is beyond the capabilities of the existing Lagrangian methods.
Qin, Shanlin; Liu, Fawang; Turner, Ian W; Yu, Qiang; Yang, Qianqian; Vegh, Viktor
2017-04-01
To study the utility of fractional calculus in modeling gradient-recalled echo MRI signal decay in the normal human brain. We solved analytically the extended time-fractional Bloch equations resulting in five model parameters, namely, the amplitude, relaxation rate, order of the time-fractional derivative, frequency shift, and constant offset. Voxel-level temporal fitting of the MRI signal was performed using the classical monoexponential model, a previously developed anomalous relaxation model, and using our extended time-fractional relaxation model. Nine brain regions segmented from multiple echo gradient-recalled echo 7 Tesla MRI data acquired from five participants were then used to investigate the characteristics of the extended time-fractional model parameters. We found that the extended time-fractional model is able to fit the experimental data with smaller mean squared error than the classical monoexponential relaxation model and the anomalous relaxation model, which do not account for frequency shift. We were able to fit multiple echo time MRI data with high accuracy using the developed model. Parameters of the model likely capture information on microstructural and susceptibility-induced changes in the human brain. Magn Reson Med 77:1485-1494, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Linear system identification via backward-time observer models
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh
1993-01-01
This paper presents an algorithm to identify a state-space model of a linear system using a backward-time approach. The procedure consists of three basic steps. First, the Markov parameters of a backward-time observer are computed from experimental input-output data. Second, the backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) from which a backward-time state-space model is realized using the Eigensystem Realization Algorithm. Third, the obtained backward-time state space model is converted to the usual forward-time representation. Stochastic properties of this approach will be discussed. Experimental results are given to illustrate when and to what extent this concept works.
Wu, Liejun; Chen, Maoxue; Chen, Yongli; Li, Qing X.
2013-01-01
The gas holdup time (tM) is a dominant parameter in gas chromatographic retention models. The difference equation (DE) model proposed by Wu et al. (J. Chromatogr. A 2012, http://dx.doi.org/10.1016/j.chroma.2012.07.077) excluded tM. In the present paper, we propose that the relationship between the adjusted retention time t′Rz and carbon number z of n-alkanes follows a quadratic equation (QE) when an accurate tM is obtained. This QE model is the same as or better than the DE model for an accurate expression of the retention behavior of n-alkanes and model applications. The QE model covers a larger range of n-alkanes with better curve fittings than the linear equation (LE) model. The accuracy of the QE model was approximately 2–6 times better than the DE model and 18–540 times better than the LE model. Standard deviations of the QE model were approximately 2–3 times smaller than those of the DE model. PMID:22989489
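A minimal sketch of the QE fit described above: given an estimate of the holdup time, the adjusted retention times of n-alkanes are regressed on a quadratic in carbon number; the retention data and the holdup time are fabricated.

```python
# Hedged sketch: quadratic fit of adjusted retention time against carbon
# number z, given an assumed holdup time tM. Data are synthetic.
import numpy as np

z = np.arange(6, 16)                              # C6..C15 n-alkanes
tM = 0.95                                         # assumed holdup time (min)
tR = tM + 0.02 * z**2 + 0.15 * z + 0.3            # synthetic retention times
t_adj = tR - tM                                   # adjusted retention time

coeffs = np.polyfit(z, t_adj, deg=2)              # the QE model: quadratic in z
resid = t_adj - np.polyval(coeffs, z)
print("quadratic coefficients:", coeffs)
print("max residual:", np.abs(resid).max())
```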
Time on Your Hands: Modeling Time
ERIC Educational Resources Information Center
Finson, Kevin; Beaver, John
2007-01-01
Building physical models relative to a concept can be an important activity to help students develop and manipulate abstract ideas and mental models that often prove difficult to grasp. One such concept is "time". A method for helping students understand the cyclical nature of time involves the construction of a Time Zone Calculator through a…
Hot-bench simulation of the active flexible wing wind-tunnel model
NASA Technical Reports Server (NTRS)
Buttrill, Carey S.; Houck, Jacob A.
1990-01-01
Two simulations, one batch and one real-time, of an aeroelastically-scaled wind-tunnel model were developed. The wind-tunnel model was a full-span, free-to-roll model of an advanced fighter concept. The batch simulation was used to generate and verify the real-time simulation and to test candidate control laws prior to implementation. The real-time simulation supported hot-bench testing of a digital controller, which was developed to actively control the elastic deformation of the wind-tunnel model. Time scaling was required for hot-bench testing. The wind-tunnel model, the mathematical models for the simulations, the techniques employed to reduce the hot-bench time-scale factors, and the verification procedures are described.
Modelling of Vortex-Induced Loading on a Single-Blade Installation Setup
NASA Astrophysics Data System (ADS)
Skrzypiński, Witold; Gaunaa, Mac; Heinz, Joachim
2016-09-01
Vortex-induced integral loading fluctuations on a single suspended blade at various inflow angles were modelled in the present work by means of stochastic modelling methods. The reference time series were obtained by 3D DES CFD computations carried out on the DTU 10MW reference wind turbine blade. In the reference time series, the flapwise force component, Fx, showed both higher absolute values and variation than the chordwise force component, Fz, for every inflow angle considered. For this reason, the present paper focused on modelling Fx rather than Fz, although Fz could be modelled using exactly the same procedure. The reference time series were significantly different, depending on the inflow angle. This made the modelling of all the time series with a single and relatively simple engineering model challenging. In order to find model parameters, optimizations were carried out, based on the root-mean-square error between the Single-Sided Amplitude Spectra of the reference and modelled time series. In order to model well defined frequency peaks present at certain inflow angles, optimized sine functions were superposed on the stochastically modelled time series. The results showed that the modelling accuracy varied depending on the inflow angle. Nonetheless, the modelled and reference time series showed a satisfactory general agreement in terms of their visual and frequency characteristics. This indicated that the proposed method is suitable to model loading fluctuations on suspended blades.
NASA Astrophysics Data System (ADS)
Rubinstein, Justin L.; Ellsworth, William L.; Beeler, Nicholas M.; Kilgore, Brian D.; Lockner, David A.; Savage, Heather M.
2012-02-01
The behavior of individual stick-slip events observed in three different laboratory experimental configurations is better explained by a "memoryless" earthquake model with fixed inter-event time or fixed slip than it is by the time- and slip-predictable models for earthquake occurrence. We make similar findings in the companion manuscript for the behavior of natural repeating earthquakes. Taken together, these results allow us to conclude that the predictions of a characteristic earthquake model that assumes either fixed slip or fixed recurrence interval should be preferred to the predictions of the time- and slip-predictable models for all earthquakes. Given that the fixed slip and recurrence models are the preferred models for all of the experiments we examine, we infer that in an event-to-event sense the elastic rebound model underlying the time- and slip-predictable models does not explain earthquake behavior. This does not indicate that the elastic rebound model should be rejected in a long-term sense, but it should be rejected for short-term predictions. The time- and slip-predictable models likely offer worse predictions of earthquake behavior because they rely on assumptions that are too simple to explain the behavior of earthquakes. Specifically, the time-predictable model assumes a constant failure threshold and the slip-predictable model assumes that there is a constant minimum stress. There is experimental and field evidence that these assumptions are not valid for all earthquakes.
Wolfs, Vincent; Villazon, Mauricio Florencio; Willems, Patrick
2013-01-01
Applications such as real-time control, uncertainty analysis and optimization require an extensive number of model iterations. Full hydrodynamic sewer models are not sufficient for these applications due to the excessive computation time. Simplifications are consequently required. A lumped conceptual modelling approach results in a much faster calculation. The process of identifying and calibrating the conceptual model structure could, however, be time-consuming. Moreover, many conceptual models lack accuracy, or do not account for backwater effects. To overcome these problems, a modelling methodology was developed which is suited for semi-automatic calibration. The methodology is tested for the sewer system of the city of Geel in the Grote Nete river basin in Belgium, using both synthetic design storm events and long time series of rainfall input. A MATLAB/Simulink(®) tool was developed to guide the modeller through the step-wise model construction, reducing significantly the time required for the conceptual modelling process.
Molenaar, Dylan; Bolsinova, Maria
2017-05-01
In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.
Lindsey, J C; Ryan, L M
1994-01-01
The three-state illness-death model provides a useful way to characterize data from a rodent tumorigenicity experiment. Most parametrizations proposed recently in the literature assume discrete time for the death process and either discrete or continuous time for the tumor onset process. We compare these approaches with a third alternative that uses a piecewise continuous model on the hazards for tumor onset and death. All three models assume proportional hazards to characterize tumor lethality and the effect of dose on tumor onset and death rate. All of the models can easily be fitted using an Expectation Maximization (EM) algorithm. The piecewise continuous model is particularly appealing in this context because the complete data likelihood corresponds to a standard piecewise exponential model with tumor presence as a time-varying covariate. It can be shown analytically that differences between the parameter estimates given by each model are explained by varying assumptions about when tumor onsets, deaths, and sacrifices occur within intervals. The mixed-time model is seen to be an extension of the grouped data proportional hazards model [Mutat. Res. 24:267-278 (1981)]. We argue that the continuous-time model is preferable to the discrete- and mixed-time models because it gives reasonable estimates with relatively few intervals while still making full use of the available information. Data from the ED01 experiment illustrate the results. PMID:8187731
A travel time forecasting model based on change-point detection method
NASA Astrophysics Data System (ADS)
LI, Shupeng; GUANG, Xiaoping; QIAN, Yongsheng; ZENG, Junwei
2017-06-01
Travel time parameters obtained from road traffic sensor data play an important role in traffic management practice. A travel time forecasting model is proposed for urban road traffic sensor data based on the method of change-point detection in this paper. The first-order differential operation is used for preprocessing the actual loop data; a change-point detection algorithm is designed to classify the sequence of a large number of travel time data items into several patterns; then a travel time forecasting model is established based on the autoregressive integrated moving average (ARIMA) model. By computer simulation, different control parameters are chosen for the adaptive change-point search over the travel time series, which is divided into several sections of similar state. Then a linear weight function is used to fit the travel time sequence and to forecast travel time. The results show that the model has high accuracy in travel time forecasting.
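A rough sketch of such a pipeline: first-order differencing, a crude threshold scan for change points, and an ARIMA fit on the most recent regime; the thresholds, model orders, and data are assumptions, not the paper's calibrated procedure.

```python
# Hedged sketch: differencing-based change-point split of a travel-time
# series, then an ARIMA forecast on the current regime.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
tt = np.concatenate([rng.normal(60, 2, 150),         # free flow (s)
                     rng.normal(95, 4, 150)])        # congested regime

diff = np.diff(tt, n=1)                              # first-order differencing
cps = np.where(np.abs(diff) > 5 * diff.std())[0] + 1 # crude change points
segments = np.split(tt, cps)

last = segments[-1]                                  # current traffic state
res = ARIMA(last, order=(1, 1, 1)).fit()
print("change points:", cps)
print("next-interval travel time:", res.forecast(steps=3))
```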
Nonlinear Maps for Design of Discrete Time Models of Neuronal Network Dynamics
2016-02-29
… a neuronal model in the form of difference equations that generates neuronal states in discrete moments of time. In this approach, the time step can be made … We propose to use modern DSP ideas to develop new efficient approaches to the design of such discrete-time models for studies of large-scale neuronal …
Hazard Regression Models of Early Mortality in Trauma Centers
Clark, David E; Qian, Jing; Winchell, Robert J; Betensky, Rebecca A
2013-01-01
Background Factors affecting early hospital deaths after trauma may be different from factors affecting later hospital deaths, and the distribution of short and long prehospital times may vary among hospitals. Hazard regression (HR) models may therefore be more useful than logistic regression (LR) models for analysis of trauma mortality, especially when treatment effects at different time points are of interest. Study Design We obtained data for trauma center patients from the 2008–9 National Trauma Data Bank (NTDB). Cases were included if they had complete data for prehospital times, hospital times, survival outcome, age, vital signs, and severity scores. Cases were excluded if pulseless on admission, transferred in or out, or ISS<9. Using covariates proposed for the Trauma Quality Improvement Program and an indicator for each hospital, we compared LR models predicting survival at 8 hours after injury to HR models with survival censored at 8 hours. HR models were then modified to allow time-varying hospital effects. Results 85,327 patients in 161 hospitals met inclusion criteria. Crude hazards peaked initially, then steadily declined. When hazard ratios were assumed constant in HR models, they were similar to odds ratios in LR models associating increased mortality with increased age, firearm mechanism, increased severity, more deranged physiology, and estimated hospital-specific effects. However, when hospital effects were allowed to vary by time, HR models demonstrated that hospital outliers were not the same at different times after injury. Conclusions HR models with time-varying hazard ratios reveal inconsistencies in treatment effects, data quality, and/or timing of early death among trauma centers. HR models are generally more flexible than LR models, can be adapted for censored data, and potentially offer a better tool for analysis of factors affecting early death after injury. PMID:23036828
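A minimal sketch contrasting the two model classes on fabricated trauma-like data: a logistic regression for death by a fixed horizon versus a Cox proportional hazards fit on the censored survival times; covariates and effect sizes are invented.

```python
# Hedged sketch: logistic regression at a fixed 8-hour horizon vs. a Cox
# proportional hazards model on censored times, on synthetic data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)
n = 2000
age = rng.normal(45, 18, n).clip(16, 95)
iss = rng.integers(9, 45, n)
hazard = 0.002 * np.exp(0.02 * (age - 45) + 0.05 * (iss - 9))  # assumed effects
t_death = rng.exponential(1.0 / hazard)              # hours to death
df = pd.DataFrame({
    "age": age, "iss": iss,
    "time": np.minimum(t_death, 8.0),                # censor at 8 hours
    "died": (t_death <= 8.0).astype(int),
})

LogisticRegression().fit(df[["age", "iss"]], df["died"])   # LR at 8 h
cph = CoxPHFitter().fit(df, duration_col="time", event_col="died")
cph.print_summary()                                  # hazard ratios per covariate
```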
NASA Technical Reports Server (NTRS)
Stewart, Richard W.; Thompson, Anne M.; Owens, Melody A.; Herwehe, Jerold A.
1989-01-01
A major tropospheric loss of soluble species such as nitric acid results from scavenging by water droplets. Several theoretical formulations have been advanced which relate an effective time-independent loss rate for soluble species to statistical properties of precipitation such as the wet fraction and length of a precipitation cycle. In this paper, various 'effective' loss rates that have been proposed are compared with the results of detailed time-dependent model calculations carried out over a seasonal time scale. The model is a stochastic precipitation model coupled to a tropospheric photochemical model. The results of numerous time-dependent seasonal model runs are used to derive numerical values for the nitric acid residence time for several assumed sets of precipitation statistics. These values are then compared with the results obtained by utilizing theoretical 'effective' loss rates in time-independent models.
Linking Time and Space Scales in Distributed Hydrological Modelling - a case study for the VIC model
NASA Astrophysics Data System (ADS)
Melsen, Lieke; Teuling, Adriaan; Torfs, Paul; Zappa, Massimiliano; Mizukami, Naoki; Clark, Martyn; Uijlenhoet, Remko
2015-04-01
One of the famous paradoxes of the Greek philosopher Zeno of Elea (~450 BC) is the one with the arrow: If one shoots an arrow, and cuts its motion into such small time steps that at every step the arrow is standing still, the arrow is motionless, because a concatenation of non-moving parts does not create motion. Nowadays, this reasoning can be refuted easily, because we know that motion is a change in space over time, which thus by definition depends on both time and space. If one disregards time by cutting it into infinite small steps, motion is also excluded. This example shows that time and space are linked and therefore hard to evaluate separately. As hydrologists we want to understand and predict the motion of water, which means we have to look both in space and in time. In hydrological models we can account for space by using spatially explicit models. With increasing computational power and increased data availability from e.g. satellites, it has become easier to apply models at a higher spatial resolution. Increasing the resolution of hydrological models is also labelled as one of the 'Grand Challenges' in hydrology by Wood et al. (2011) and Bierkens et al. (2014), who call for global modelling at hyperresolution (~1 km and smaller). A literature survey on 242 peer-reviewed articles in which the Variable Infiltration Capacity (VIC) model was used showed that the spatial resolution at which the model is applied has decreased over the past 17 years: From 0.5 to 2 degrees when the model was just developed, to 1/8 and even 1/32 degree nowadays. On the other hand the literature survey showed that the time step at which the model is calibrated and/or validated remained the same over the last 17 years; mainly daily or monthly. Klemeš (1983) stresses the fact that space and time scales are connected, and therefore downscaling the spatial scale would also imply downscaling of the temporal scale. Is it worth the effort of downscaling your model from 1 degree to 1/24 degree, if in the end you only look at monthly runoff? In this study an attempt is made to link time and space scales in the VIC model, to study the added value of a higher spatial resolution-model for different time steps. In order to do this, four different VIC models were constructed for the Thur basin in North-Eastern Switzerland (1700 km²), a tributary of the Rhine: one lumped model, and three spatially distributed models with a resolution of respectively 1x1 km, 5x5 km, and 10x10 km. All models are run at an hourly time step and aggregated and calibrated for different time steps (hourly, daily, monthly, yearly) using a novel Hierarchical Latin Hypercube Sampling Technique (Vořechovský, 2014). For each time and space scale, several diagnostics, such as the Nash-Sutcliffe efficiency, the Kling-Gupta efficiency, and the quantiles of the discharge, are calculated in order to compare model performance over different time and space scales for extreme events like floods and droughts. Next to that, the effect of time and space scale on the parameter distribution can be studied. In the end we hope to find a link for optimal time and space scale combinations.
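A small sketch of the skill scores named above, computed after aggregating hourly simulated and observed discharge to coarser time steps; the NSE and KGE formulas are standard, while the discharge series are synthetic.

```python
# Hedged sketch: Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) efficiencies
# evaluated at several aggregation time steps on synthetic discharge.
import numpy as np

def nse(sim, obs):
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

rng = np.random.default_rng(10)
obs = np.clip(rng.gamma(2.0, 5.0, 24 * 365), 0.1, None)   # hourly discharge
sim = obs * rng.normal(1.0, 0.15, obs.size)               # imperfect model

for step, label in [(1, "hourly"), (24, "daily"), (24 * 30, "monthly")]:
    o = obs[: obs.size // step * step].reshape(-1, step).mean(axis=1)
    s = sim[: sim.size // step * step].reshape(-1, step).mean(axis=1)
    print(f"{label:8s} NSE={nse(s, o):.3f}  KGE={kge(s, o):.3f}")
```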
Sensitivity analysis of machine-learning models of hydrologic time series
NASA Astrophysics Data System (ADS)
O'Reilly, A. M.
2017-12-01
Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
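A minimal sketch of the perturbation analysis described above, with a stand-in callable in place of a trained MWA-ANN model; the forcing series and the "model" itself are fabricated.

```python
# Hedged sketch: forcing-response sensitivity by perturbing one input of a
# trained model (here a placeholder function) and recording the response
# change per unit perturbation.
import numpy as np

def model(rainfall_mwa, pumping_mwa):      # placeholder for a trained ANN
    return 10 + 0.8 * rainfall_mwa - 1.5 * pumping_mwa

rng = np.random.default_rng(11)
rain = rng.gamma(2.0, 1.5, 365)            # synthetic moving-window forcings
pump = rng.normal(2.0, 0.2, 365)

eps = 0.01                                 # small forcing perturbation
base = model(rain, pump)
sens_rain = (model(rain + eps, pump) - base) / eps
sens_pump = (model(rain, pump + eps) - base) / eps
print("mean sensitivity to rainfall:", sens_rain.mean())
print("mean sensitivity to pumping:", sens_pump.mean())
```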
Coiera, Enrico; Wang, Ying; Magrabi, Farah; Concha, Oscar Perez; Gallego, Blanca; Runciman, William
2014-05-21
Current prognostic models factor in patient and disease specific variables but do not consider cumulative risks of hospitalization over time. We developed risk models of the likelihood of death associated with cumulative exposure to hospitalization, based on time-varying risks of hospitalization over any given day, as well as day of the week. Model performance was evaluated alone, and in combination with simple disease-specific models. Patients admitted between 2000 and 2006 from 501 public and private hospitals in NSW, Australia were used for training and 2007 data for evaluation. The impact of hospital care delivered over different days of the week and/or times of the day was modeled by separating hospitalization risk into 21 separate time periods (morning, day, night across the days of the week). Three models were developed to predict death up to 7-days post-discharge: (1) a simple background risk model using age and gender; (2) a time-varying risk model for exposure to hospitalization (admission time, days in hospital); (3) disease-specific models (Charlson co-morbidity index, DRG). Combining these three generated a full model. Models were evaluated by accuracy, AUC, and Akaike and Bayesian information criteria. There was a clear diurnal rhythm to hospital mortality in the data set, peaking in the evening, as well as the well-known 'weekend effect' where mortality peaks with weekend admissions. Individual models had modest performance on the test data set (AUC 0.71, 0.79 and 0.79 respectively). The combined model, which included time-varying risk, however yielded an average AUC of 0.92. This model performed best for stays up to 7-days (93% of admissions), peaking at days 3 to 5 (AUC 0.94). Risks of hospitalization vary not just with the day of the week but also the time of the day, and can be used to make predictions about the cumulative risk of death associated with an individual's hospitalization. Combining disease-specific models with such time-varying estimates appears to result in robust predictive performance. Such risk exposure models should find utility both in enhancing standard prognostic models as well as estimating the risk of continuation of hospitalization.
Family-supportive work environments and psychological strain: a longitudinal test of two theories.
Odle-Dusseau, Heather N; Herleman, Hailey A; Britt, Thomas W; Moore, Dewayne D; Castro, Carl A; McGurk, Dennis
2013-01-01
Based on the Job Demands-Resources (JDR) model (E. Demerouti, A. B. Bakker, F. Nachreiner, & W. B. Schaufeli, 2001, The job demands-resources model of burnout. Journal of Applied Psychology, 86, 499-512) and Conservation of Resources (COR) theory (S. E. Hobfoll, 2002, Social and psychological resources and adaptation. Review of General Psychology, 6, 307-324), we tested three competing models that predict different directions of causation for relationships over time between family-supportive work environments (FSWE) and psychological strain, with two waves of data from a military sample. Results revealed support for both the JDR and COR theories, first in the static model where FSWE at Time 1 predicted psychological strain at Time 2 and when testing the opposite direction, where psychological strain at Time 1 predicted FSWE at Time 2. For change models, FSWE predicted changes in psychological strain across time, although the reverse causation model was not supported (psychological strain at Time 1 did not predict changes in FSWE). Also, changes in FSWE across time predicted psychological strain at Time 2, whereas changes in psychological strain did not predict FSWE at Time 2. Theoretically, these results are important for the work-family interface in that they demonstrate the application of a systems approach to studying work and family interactions, as support was obtained for both the JDR model with perceptions of FSWE predicting psychological strain (in both the static and change models), and for COR theory where psychological strain predicts FSWE across time.
NASA Technical Reports Server (NTRS)
Eckhardt, D. E., Jr.
1979-01-01
A model of a central processor (CPU) which services background applications in the presence of time-critical activity is presented. The CPU is viewed as an M/M/1 queueing system subject to periodic interrupts by a deterministic, time-critical process. The Laplace transform of the distribution of service times for the background applications is developed. The use of state-of-the-art queueing models for studying the background processing capability of time-critical computer systems is discussed, and the results of a model validation study which support this application of queueing models are presented.
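A rough sketch of the queueing view above: background jobs form an M/M/1 queue whose server loses a fixed share of capacity to a periodic time-critical task. The interrupts are approximated here by a scaled effective service rate, a simplification of the paper's Laplace-transform analysis; all rates are invented.

```python
# Hedged sketch: M/M/1 background queue with a reduced effective service
# rate standing in for periodic time-critical interrupts; Lindley's
# recursion is checked against the M/M/1 waiting-time formula.
import numpy as np

rng = np.random.default_rng(12)
lam, mu = 0.5, 1.2                 # arrivals/s, services/s (assumed)
availability = 0.8                 # CPU share left after periodic interrupts
mu_eff = mu * availability         # approximation: slower effective server

n = 200_000
inter = rng.exponential(1 / lam, n)
serv = rng.exponential(1 / mu_eff, n)

wait = np.empty(n)                 # Lindley recursion for queueing delays
wait[0] = 0.0
for i in range(1, n):
    wait[i] = max(0.0, wait[i - 1] + serv[i - 1] - inter[i])

print("simulated mean wait:", wait.mean())
print("M/M/1 theory lam/(mu(mu-lam)):", lam / (mu_eff * (mu_eff - lam)))
```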
Reverse time migration by Krylov subspace reduced order modeling
NASA Astrophysics Data System (ADS)
Basir, Hadi Mahdavi; Javaherian, Abdolrahim; Shomali, Zaher Hossein; Firouz-Abadi, Roohollah Dehghani; Gholamy, Shaban Ali
2018-04-01
Imaging is a key step in seismic data processing. To date, a myriad of advanced pre-stack depth migration approaches have been developed; however, reverse time migration (RTM) is still considered the high-end imaging algorithm. The main limitations associated with the performance cost of reverse time migration are the intensive computation of the forward and backward simulations, time consumption, and the memory allocation related to the imaging condition. Based on reduced order modeling, we propose an algorithm that can address all the aforementioned factors. Our proposed method benefits from the Krylov subspace method to compute certain mode shapes of the velocity model as an orthogonal basis for reduced order modeling. Reverse time migration by reduced order modeling lends itself to highly parallel computation and strongly reduces the memory requirement of reverse time migration. The synthetic model results showed that the suggested method can decrease the computational costs of reverse time migration by several orders of magnitude, compared with reverse time migration by the finite element method.
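For illustration, a minimal Arnoldi iteration that builds an orthonormal Krylov basis and a projected reduced operator, the kind of projection such reduced order modeling rests on; the operator here is a random symmetric stand-in, not a wave-equation system.

```python
# Hedged sketch: Arnoldi iteration producing an orthonormal Krylov basis Q
# and a small projected operator, illustrating Krylov-based model reduction.
import numpy as np

def arnoldi(A, b, m):
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt step
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        Q[:, j + 1] = v / H[j + 1, j]
    return Q[:, :m], H[:m, :m]

rng = np.random.default_rng(16)
A = rng.standard_normal((200, 200))
A = A + A.T                                   # symmetric stand-in operator
Q, H = arnoldi(A, rng.standard_normal(200), 20)
Ar = Q.T @ A @ Q                              # 20x20 reduced operator
print("orthonormality error:", np.abs(Q.T @ Q - np.eye(20)).max())
```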
Transient modeling in simulation of hospital operations for emergency response.
Paul, Jomon Aliyas; George, Santhosh K; Yi, Pengfei; Lin, Li
2006-01-01
Rapid estimates of hospital capacity after an event that may cause a disaster can assist disaster-relief efforts. Due to the dynamics of hospitals following such an event, it is necessary to accurately model the behavior of the system. A transient modeling approach using simulation and exponential functions is presented, along with its applications in an earthquake situation. The parameters of the exponential model are regressed using outputs from designed simulation experiments. The developed model is capable of representing transient patient waiting times during a disaster. Most importantly, the modeling approach allows real-time capacity estimation of hospitals of various sizes and capabilities. Further, this research analyzes the effects of priority-based routing of patients within the hospital on patient waiting times under various patient mixes. The model guides patients based on the severity of injuries and queues the patients requiring critical care depending on their remaining survivability time. The model also accounts for the impact of prehospital transport time on patient waiting time.
NASA Astrophysics Data System (ADS)
Kurniati, Devi; Hoyyi, Abdul; Widiharih, Tatik
2018-05-01
Time series data is a series of data taken or measured based on observations at the same time interval. Time series data analysis is used to perform data analysis considering the effect of time. The purpose of time series analysis is to know the characteristics and patterns of the data and to predict a data value in some future period based on data in the past. One of the forecasting methods used for time series data is the state space model. This study discusses the modeling and forecasting of electric energy consumption using the state space model for univariate data. The modeling stage begins with optimal autoregressive (AR) order selection, determination of the state vector through canonical correlation analysis, parameter estimation, and forecasting. The results of this research show that modeling electric energy consumption using a state space model of order 4 gives a Mean Absolute Percentage Error (MAPE) of 3.655%, which places the model in the very good forecasting category.
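A small sketch of the order selection and forecast scoring steps, using AIC-based AR order selection and MAPE on synthetic monthly consumption; this stands in for, and is simpler than, the canonical correlation state space construction in the paper.

```python
# Hedged sketch: AIC-driven AR order selection and MAPE scoring on a
# synthetic monthly electricity-consumption series.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg, ar_select_order

rng = np.random.default_rng(13)
n = 144
t = np.arange(n)
y = 100 + 0.3 * t + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, n)

train, test = y[:-12], y[-12:]
sel = ar_select_order(train, maxlag=8, ic="aic")
lags = sel.ar_lags if sel.ar_lags else [1]     # guard: fall back to AR(1)
res = AutoReg(train, lags=lags).fit()
fc = res.forecast(steps=12)

mape = 100 * np.mean(np.abs((test - fc) / test))
print("chosen lags:", lags, " MAPE: %.2f%%" % mape)
```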
An adaptive time-stepping strategy for solving the phase field crystal model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk
2013-09-15
In this work, we will propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the use of the proposed time step adaptivity can resolve not only the steady state solution, but also the dynamical development of the solution, efficiently and accurately. The numerical experiments demonstrate that the CPU time is significantly reduced for long time simulations.
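A minimal sketch of energy-based step adaptivity in this spirit: the step shrinks when the energy changes quickly, via dt = max(dt_min, dt_max / sqrt(1 + alpha |dE/dt|^2)), one common choice in the adaptive time-stepping literature; the energy trajectory below is a placeholder, not a PFC solve.

```python
# Hedged sketch: energy-derivative-based adaptive time step selection on a
# placeholder energy dissipation law standing in for a PFC simulation.
import numpy as np

def adaptive_dt(dE_dt, dt_min=1e-3, dt_max=1.0, alpha=1e3):
    # small steps while the energy decays fast, large steps near steady state
    return max(dt_min, dt_max / np.sqrt(1.0 + alpha * dE_dt**2))

ts, Es, t, E = [0.0], [1.0], 0.0, 1.0
while t < 50.0:
    dE_dt = -0.5 * E                 # placeholder energy dissipation law
    dt = adaptive_dt(dE_dt)
    E += dt * dE_dt                  # placeholder explicit energy update
    t += dt
    ts.append(t)
    Es.append(E)

print("steps taken:", len(ts))       # few large steps once E is nearly flat
```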
Modelling road accidents: An approach using structural time series
NASA Astrophysics Data System (ADS)
Junus, Noor Wahida Md; Ismail, Mohd Tahir
2014-09-01
In this paper, the trend of road accidents in Malaysia from 2001 to 2012 was modelled using a structural time series approach. The structural time series model was identified using a stepwise method, and the residuals for each model were tested. The best-fitted model was chosen based on the smallest Akaike Information Criterion (AIC) and prediction error variance. In order to check the quality of the model, a data validation procedure was performed by predicting the monthly number of road accidents for the year 2012. Results indicate that the best specification of the structural time series model to represent road accidents is the local level with a seasonal model.
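A minimal sketch of the selected model form, a local level with a seasonal component, fitted with statsmodels on synthetic monthly counts; the data and the 12-month period are assumptions.

```python
# Hedged sketch: local level + seasonal structural time series model fitted
# to synthetic monthly accident counts, compared by AIC as in the paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(14)
n = 144                                             # 2001-2012, monthly
level = np.cumsum(rng.normal(0, 2, n)) + 300        # slowly drifting level
season = 25 * np.sin(2 * np.pi * np.arange(n) / 12)
y = level + season + rng.normal(0, 5, n)

model = sm.tsa.UnobservedComponents(y, level="local level", seasonal=12)
res = model.fit(disp=False)
print("AIC:", res.aic)
print(res.forecast(steps=12))                       # next-year prediction
```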
An Integrated Modeling and Simulation Methodology for Intelligent Systems Design and Testing
2002-08-01
… simulation and actual execution. KEYWORDS: Model Continuity, Modeling, Simulation, Experimental Frame, Real Time Systems, Intelligent Systems. … the methodology for a stand-alone real time system. Then it will scale up to distributed real time systems. For both systems, step-wise simulation … Intelligent real time systems monitor, respond to, or control, an external environment. This environment is connected to the digital …
Simulation model of a gear synchronisation unit for application in a real-time HiL environment
NASA Astrophysics Data System (ADS)
Kirchner, Markus; Eberhard, Peter
2017-05-01
Gear shifting simulations using the multibody system approach and the finite-element method are standard in the development of transmissions. However, the corresponding models are typically large due to the complex geometries and numerous contacts, which causes long calculation times. The present work sets itself apart from these detailed shifting simulations by proposing a much simpler but powerful synchronisation model which can be computed in real-time while still being more realistic than a pure rigid multibody model. Therefore, the model is even used as part of a Hardware-in-the-Loop (HiL) test rig. The proposed real-time capable synchronisation model combines the rigid multibody system approach with a multiscale simulation approach. The multibody system approach is suitable for the description of the large motions. The multiscale simulation approach also uses the finite-element method, which is suitable for the analysis of the contact processes. An efficient contact search for the claws of a car transmission synchronisation unit is described in detail, which shortens the required calculation time of the model considerably. To further shorten the calculation time, the use of a complex pre-synchronisation model with a nonlinear contour is presented. The model has to provide realistic results with the time-step size of the HiL test rig. To reach this specification, a particularly adapted multirate method for the synchronisation model is shown. Measured results from test rigs are used to check the plausibility of the real-time capable synchronisation model. The simulation model is then also used in the HiL test rig for a transmission control unit.
Partition-based discrete-time quantum walks
NASA Astrophysics Data System (ADS)
Konno, Norio; Portugal, Renato; Sato, Iwao; Segawa, Etsuo
2018-04-01
We introduce a family of discrete-time quantum walks, called the two-partition model, based on two equivalence-class partitions of the computational basis, which establish the notion of local dynamics. This family encompasses most versions of unitary discrete-time quantum walks driven by two local operators studied in the literature, such as the coined model, Szegedy's model, and the 2-tessellable staggered model. We also analyze the connection of those models with the two-step coined model, which is driven by the square of the evolution operator of the standard discrete-time coined walk. We prove formally that the two-step coined model, an extension of Szegedy's model for multigraphs, and the 2-tessellable staggered model are unitarily equivalent. Then, selecting one specific model among those families is a matter of taste, not generality.
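For context, a minimal sketch of the standard discrete-time coined walk on a cycle, the baseline that the two-partition family generalizes; squaring its evolution operator gives the two-step coined model mentioned above.

```python
# Hedged sketch: Hadamard coined quantum walk on a cycle of N sites.
# One step is U = S (C x I): coin flip, then coin-conditioned shift.
import numpy as np

N = 64                                            # cycle length
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard coin

psi = np.zeros((2, N), dtype=complex)             # psi[c, x]: coin c, site x
psi[:, N // 2] = np.array([1, 1j]) / np.sqrt(2)   # balanced initial state

for _ in range(30):
    psi = H @ psi                                 # coin operator C (x) I
    psi[0] = np.roll(psi[0], 1)                   # shift coin-0 amplitude right
    psi[1] = np.roll(psi[1], -1)                  # shift coin-1 amplitude left

prob = np.sum(np.abs(psi) ** 2, axis=0)
print("total probability:", prob.sum())           # unitarity check: 1.0
print("spread (std):",
      np.sqrt(np.sum(prob * (np.arange(N) - N // 2) ** 2)))
```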
P.C. Stoy; M.C. Dietze; A.D. Richardson; R. Vargas; A.G. Barr; R.S. Anderson; M.A. Arain; I.T. Baker; T.A. Black; J.M. Chen; R.B. Cook; C.M. Gough; R.F. Grant; D.Y. Hollinger; R.C. Izaurralde; C.J. Kucharik; P. Lafleur; B.E. Law; S. Liu; E. Lokupitiya; Y. Luo; J. W. Munger; C. Peng; B. Poulter; D.T. Price; D. M. Ricciuto; W. J. Riley; A. K. Sahoo; K. Schaefer; C.R. Schwalm; H. Tian; H. Verbeeck; E. Weng
2013-01-01
Earth system processes exhibit complex patterns across time, as do the models that seek to replicate these processes. Model output may or may not be significantly related to observations at different times and on different frequencies. Conventional model diagnostics provide an aggregate view of model-data agreement, but usually do not identify the time and frequency...
Automated time activity classification based on global positioning system (GPS) tracking data.
Wu, Jun; Jiang, Chengsheng; Houston, Douglas; Baker, Dean; Delfino, Ralph
2011-11-14
Air pollution epidemiological studies are increasingly using global positioning system (GPS) to collect time-location data because they offer continuous tracking, high temporal resolution, and minimum reporting burden for participants. However, substantial uncertainties in the processing and classifying of raw GPS data create challenges for reliably characterizing time activity patterns. We developed and evaluated models to classify people's major time activity patterns from continuous GPS tracking data. We developed and evaluated two automated models to classify major time activity patterns (i.e., indoor, outdoor static, outdoor walking, and in-vehicle travel) based on GPS time activity data collected under free living conditions for 47 participants (N = 131 person-days) from the Harbor Communities Time Location Study (HCTLS) in 2008 and supplemental GPS data collected from three UC-Irvine research staff (N = 21 person-days) in 2010. Time activity patterns used for model development were manually classified by research staff using information from participant GPS recordings, activity logs, and follow-up interviews. We evaluated two models: (a) a rule-based model that developed user-defined rules based on time, speed, and spatial location, and (b) a random forest decision tree model. Indoor, outdoor static, outdoor walking and in-vehicle travel activities accounted for 82.7%, 6.1%, 3.2% and 7.2% of manually-classified time activities in the HCTLS dataset, respectively. The rule-based model classified indoor and in-vehicle travel periods reasonably well (Indoor: sensitivity > 91%, specificity > 80%, and precision > 96%; in-vehicle travel: sensitivity > 71%, specificity > 99%, and precision > 88%), but the performance was moderate for outdoor static and outdoor walking predictions. No striking differences in performance were observed between the rule-based and the random forest models. The random forest model was fast and easy to execute, but was likely less robust than the rule-based model under the condition of biased or poor quality training data. Our models can successfully identify indoor and in-vehicle travel points from the raw GPS data, but challenges remain in developing models to distinguish outdoor static points and walking. Accurate training data are essential in developing reliable models in classifying time-activity patterns.
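A sketch of the random-forest arm of such a comparison with scikit-learn; the features (speed, hour of day, distance to nearest building) and the synthetic labels are assumptions for illustration, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([
    rng.gamma(2.0, 2.0, n),        # speed (km/h), assumed feature
    rng.uniform(0, 24, n),         # hour of day, assumed feature
    rng.exponential(50.0, n),      # distance to nearest building (m), assumed
])
labels = ["indoor", "outdoor_static", "outdoor_walking", "in_vehicle"]
y = rng.choice(labels, n, p=[0.827, 0.061, 0.032, 0.08])  # HCTLS class mix

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))  # sensitivity = recall
```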
Average inactivity time model, associated orderings and reliability properties
NASA Astrophysics Data System (ADS)
Kayid, M.; Izadkhah, S.; Abouammoh, A. M.
2018-02-01
In this paper, we introduce and study a new model called the 'average inactivity time model'. This model is specifically applicable to handling heterogeneity in the failure time of a system in which some inactive items exist. We provide some bounds for the mean average inactivity time of a lifespan unit. In addition, we discuss some dependence structures between the average variable and the mixing variable in the model when the original random variable possesses certain aging behaviors. Based on the concept of the new model, we introduce and study a new stochastic order. Finally, to illustrate the concept of the model, some interesting reliability problems are addressed.
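For background, the standard (non-mixture) inactivity-time quantities that the new model averages over can be written as follows; this is textbook reliability notation, not the paper's new model itself.

```latex
% Inactivity time of a lifetime X with cdf F at inspection time t:
% X_t = (t - X | X <= t), with mean inactivity time
\[
  m(t) \;=\; \mathbb{E}\left[\,t - X \mid X \le t\,\right]
       \;=\; \frac{\int_0^{t} F(u)\,\mathrm{d}u}{F(t)}, \qquad F(t) > 0 .
\]
```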
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Fleckenstein, Matthias; Habibi, Saeid
2017-02-01
Model-order reduction and minimization of the CPU run-time while maintaining model accuracy are critical requirements for the real-time implementation of lithium-ion electrochemical battery models. In this paper, an isothermal, continuous, piecewise-linear, electrode-average model is developed using an optimal knot placement technique. The proposed model reduces the univariate nonlinear dependence of the electrode's open-circuit potential on the state of charge to continuous piecewise-linear regions. The parameterization experiments were chosen to provide a trade-off between extensive experimental characterization techniques and identifying all parameters purely through optimization. The model is then parameterized in each continuous piecewise-linear region. Applying the proposed technique cuts the CPU run-time by around 20% compared to the reduced-order electrode-average model. Finally, validation against real-time driving profiles (FTP-72, WLTP) demonstrates the ability of the model to predict the cell voltage accurately, with less than 2% error.
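A sketch of the core approximation: a nonlinear open-circuit-potential curve replaced by a continuous piecewise-linear fit over chosen knots. The OCV function and knot positions are illustrative assumptions, not the paper's identified values or its optimal knot-placement algorithm.

```python
import numpy as np

def ocv(soc):
    """Toy open-circuit potential vs. state of charge (V); assumed shape."""
    return 3.0 + 0.7 * soc + 0.15 * np.tanh(8 * (soc - 0.15))

knots = np.array([0.0, 0.1, 0.2, 0.5, 0.8, 1.0])   # assumed knot placement
table = ocv(knots)                                  # value stored per knot

def ocv_pwl(soc):
    """Continuous piecewise-linear OCV, evaluated by linear interpolation."""
    return np.interp(soc, knots, table)

soc = np.linspace(0, 1, 1001)
err = np.max(np.abs(ocv(soc) - ocv_pwl(soc)))
print(f"max PWL approximation error: {1000 * err:.1f} mV")
```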
Clinical time series prediction: Toward a hierarchical dynamical system framework.
Liu, Zitao; Hauskrecht, Milos
2015-09-01
Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient's condition, the dynamics of a disease, the effect of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes using the linear dynamical system. The experiments are conducted on complete blood count (CBC) panel data from 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of mean absolute prediction error and absolute percentage error. We tested our framework by first learning the time series model from data for the patients in the training set, and then using it to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of predictive accuracy: it achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline, and a 5.25% average accuracy improvement when only short-term predictions were considered. A hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. Copyright © 2014 Elsevier B.V. All rights reserved.
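A minimal sketch of the lower level of such a hierarchy: a single Gaussian process smoothing one irregularly sampled lab series (scikit-learn, toy data). The linear dynamical system layer that links successive GP sequences is omitted here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 10, 15))[:, None]    # irregular sampling times
y = np.sin(t).ravel() + rng.normal(0, 0.1, 15)  # toy lab values

gp = GaussianProcessRegressor(kernel=1.0 * RBF(1.0) + WhiteKernel(0.1))
gp.fit(t, y)
t_new = np.linspace(0, 10, 100)[:, None]
mean, sd = gp.predict(t_new, return_std=True)   # interpolated trajectory
print(f"mean predictive sd: {sd.mean():.3f}")
```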
Time-Centric Models For Designing Embedded Cyber-physical Systems
2009-10-09
Time-centric Models For Designing Embedded Cyber-physical Systems. John C. Eidson, Edward A. Lee, Slobodan Matic, Sanjit A. Seshia, Jia Zou, Electrical... implementations, such a uniform notion of time cannot be precisely realized. Time-triggered networks [10] and time synchronization [9] can be used to
NASA Astrophysics Data System (ADS)
Romano, M.; Mays, M. L.; Taktakishvili, A.; MacNeice, P. J.; Zheng, Y.; Pulkkinen, A. A.; Kuznetsova, M. M.; Odstrcil, D.
2013-12-01
Modeling coronal mass ejections (CMEs) is of great interest to the space weather research and forecasting communities. We present recent validation work on real-time CME arrival time predictions at different satellites using the WSA-ENLIL+Cone three-dimensional MHD heliospheric model available at the Community Coordinated Modeling Center (CCMC), performed by the Space Weather Research Center (SWRC). SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. The quality of model operation is evaluated by comparing its output to measurable parameters of interest such as the CME arrival time and geomagnetic storm strength. The Kp index is calculated from the relation given in Newell et al. (2007), using solar wind parameters predicted by the WSA-ENLIL+Cone model at Earth. The CME arrival time error is defined as the difference between the predicted arrival time and the observed in-situ CME shock arrival time at the ACE, STEREO A, or STEREO B spacecraft. This study includes all real-time WSA-ENLIL+Cone model simulations performed between June 2011 and 2013 (over 400 runs) at the CCMC/SWRC. We report hit, miss, false alarm, and correct rejection statistics for all three spacecraft. For hits we show the average absolute CME arrival time error, and the dependence of this error on CME input parameters such as speed, width, and direction. We also present the error in the predicted geomagnetic storm strength (using the Kp index) for Earth-directed CMEs.
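The verification bookkeeping described here reduces to simple contingency-table arithmetic; a sketch with placeholder numbers, not the study's actual counts:

```python
import numpy as np

# Placeholder contingency counts (the study reports these per spacecraft)
hits, misses, false_alarms, correct_rejections = 121, 27, 41, 230
pod = hits / (hits + misses)                 # probability of detection
far = false_alarms / (hits + false_alarms)   # false alarm ratio

# Predicted vs. observed shock arrival times (hours), hits only, made up
pred = np.array([12.1, 30.4, 55.0])
obs  = np.array([ 9.9, 34.2, 50.7])
mae_hours = np.mean(np.abs(pred - obs))      # average absolute arrival error
print(f"POD={pod:.2f}  FAR={far:.2f}  mean |dt|={mae_hours:.1f} h")
```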
Ouma, Paul O; Agutu, Nathan O; Snow, Robert W; Noor, Abdisalan M
2017-09-18
Precise quantification of health service utilisation is important for the estimation of disease burden and allocation of health resources. Current approaches to mapping health facility utilisation rely on spatial accessibility alone as the predictor. However, other spatially varying social, demographic and economic factors may affect the use of health services, and their exclusion can lead to inaccurate estimates of health facility utilisation. Here, we compare the accuracy of a univariate spatial model, developed only from estimated travel time, to a multivariate model that also includes relevant social, demographic and economic factors. A theoretical surface of travel time to the nearest public health facility was developed, and travel times were assigned to each child reported to have had fever in the Kenya demographic and health survey of 2014 (KDHS 2014). The relationship of child treatment-seeking for fever with travel time and with household and individual factors from the KDHS 2014 was determined using multilevel mixed modelling. Bayesian information criterion (BIC) and likelihood ratio tests (LRT) were carried out to measure how the selected factors improve the parsimony and goodness of fit of the travel-time model. Using the mixed model, a univariate spatial model of health facility utilisation was fitted with travel time as the predictor. The mixed model was also used to compute a multivariate spatial model of utilisation, using travel time and modelled surfaces of the selected household and individual factors as predictors. The univariate and multivariate spatial models were then compared using the area under the receiver operating characteristic curve (AUC) and a percent correct prediction (PCP) test. The best-fitting multivariate model had travel time, household wealth index and number of children in the household as predictors. These factors reduced the BIC of the travel-time model from 4008 to 2959, a change confirmed by the LRT. Although the two modelled probability surfaces were highly correlated (adjusted R² = 88%), the multivariate model had a better AUC than the univariate model (0.83 versus 0.73) and a better PCP (0.61 versus 0.45). Our study shows that a model that uses travel time as well as household- and individual-level socio-demographic factors results in a more accurate estimation of the use of health facilities for the treatment of childhood fever than one that relies on travel time alone.
A novel modeling approach to the mixing process in twin-screw extruders
NASA Astrophysics Data System (ADS)
Kennedy, Amedu Osaighe; Penlington, Roger; Busawon, Krishna; Morgan, Andy
2014-05-01
In this paper, a theoretical model of the mixing process in a self-wiping co-rotating twin-screw extruder, combining statistical techniques and mechanistic modelling, is proposed. The approach was to examine the mixing process in the local zones via the residence time distribution and the flow dynamics, from which predictive models of the mean residence time and mean time delay were determined. Increasing the feed rate at constant screw speed was found to narrow the residence time distribution curve, reduce the mean residence time and time delay, and increase the degree of fill. Increasing the screw speed at constant feed rate was found to narrow the residence time distribution curve, decrease the degree of fill in the extruder, and thus increase the time delay. An experimental investigation was also conducted to validate the modeling approach.
Preference as a Function of Active Interresponse Times: A Test of the Active Time Model
ERIC Educational Resources Information Center
Misak, Paul; Cleaveland, J. Mark
2011-01-01
In this article, we describe a test of the active time model for concurrent variable interval (VI) choice. The active time model (ATM) suggests that the time since the most recent response is one of the variables controlling choice in concurrent VI VI schedules of reinforcement. In our experiment, pigeons were trained in a multiple concurrent…
Susan E. Meyer; Phil S. Allen
2009-01-01
A principal goal of seed germination modelling for wild species is to predict germination timing under fluctuating field conditions. We coupled our previously developed hydrothermal time, thermal and hydrothermal afterripening time, and hydration-dehydration models for dormancy loss and germination with field seed zone temperature and water potential measurements from...
Dynamic Factor Analysis Models with Time-Varying Parameters
ERIC Educational Resources Information Center
Chow, Sy-Miin; Zu, Jiyun; Shifren, Kim; Zhang, Guangjian
2011-01-01
Dynamic factor analysis models with time-varying parameters offer a valuable tool for evaluating multivariate time series data with time-varying dynamics and/or measurement properties. We use the Dynamic Model of Activation proposed by Zautra and colleagues (Zautra, Potter, & Reich, 1997) as a motivating example to construct a dynamic factor…
Evaluating Model Fit for Growth Curve Models: Integration of Fit Indices from SEM and MLM Frameworks
ERIC Educational Resources Information Center
Wu, Wei; West, Stephen G.; Taylor, Aaron B.
2009-01-01
Evaluating overall model fit for growth curve models involves 3 challenging issues. (a) Three types of longitudinal data with different implications for model fit may be distinguished: balanced on time with complete data, balanced on time with data missing at random, and unbalanced on time. (b) Traditional work on fit from the structural equation…
A time-dependent probabilistic seismic-hazard model for California
Cramer, C.H.; Petersen, M.D.; Cao, T.; Toppozada, Tousson R.; Reichle, M.
2000-01-01
For the purpose of sensitivity testing and illuminating nonconsensus components of time-dependent models, the California Department of Conservation, Division of Mines and Geology (CDMG) has assembled a time-dependent version of its statewide probabilistic seismic hazard (PSH) model for California. The model incorporates available consensus information from within the earth-science community, except for a few faults or fault segments where consensus information is not available. For these latter faults, published information has been incorporated into the model. As in the 1996 CDMG/U.S. Geological Survey (USGS) model, the time-dependent models incorporate three multisegment ruptures: a 1906, an 1857, and a southern San Andreas earthquake. Sensitivity tests are presented to show the effect on hazard and expected damage estimates of (1) intrinsic (aleatory) sigma, (2) multisegment (cascade) vs. independent segment (no cascade) ruptures, and (3) time-dependence vs. time-independence. Results indicate that (1) differences in hazard and expected damage estimates between time-dependent and independent models increase with decreasing intrinsic sigma, (2) differences in hazard and expected damage estimates between full cascading and not cascading are insensitive to intrinsic sigma, (3) differences in hazard increase with increasing return period (decreasing probability of occurrence), and (4) differences in moment-rate budgets increase with decreasing intrinsic sigma and with the degree of cascading, but are within the expected uncertainty in PSH time-dependent modeling and do not always significantly affect hazard and expected damage estimates.
NASA Astrophysics Data System (ADS)
Amalia, E.; Moelyadi, M. A.; Ihsan, M.
2018-04-01
Air flow past a circular cylinder at a Reynolds number of 250,000 exhibits the Von Karman vortex street phenomenon, which can only be captured well with a suitable turbulence model. In this study, several turbulence models available in ANSYS Fluent 16.0 were tested for their ability to simulate the Von Karman vortex street, namely k-epsilon, SST k-omega, Reynolds Stress, Detached Eddy Simulation (DES), and Large Eddy Simulation (LES). In addition, the effect of time-step size on the accuracy of the CFD simulation was examined. The simulations were carried out using two-dimensional and three-dimensional models and then compared with experimental data. For the two-dimensional model, the Von Karman vortex street phenomenon was captured successfully using the SST k-omega turbulence model; for the three-dimensional model, it was captured using the Reynolds Stress turbulence model. The time-step size affects the smoothness of the drag coefficient curves over time, as well as the running time of the simulation: the smaller the time step, the smoother the resulting drag coefficient curves. In this study, a smaller time-step size also led to faster computation.
NASA Astrophysics Data System (ADS)
Engeland, Kolbjorn; Steinsland, Ingelin
2014-05-01
This study introduces a methodology for the construction of probabilistic inflow forecasts for multiple catchments and lead times, and investigates criteria for the evaluation of multivariate forecasts. A post-processing approach is used, and a Gaussian model is applied to transformed variables. The post-processing model has two main components: the mean model and the dependency model. The mean model is used to estimate the marginal distributions of forecasted inflow for each catchment and lead time, whereas the dependency model is used to estimate the full multivariate distribution of forecasts, i.e. the covariances between catchments and lead times. In operational situations, it is straightforward to use the models to sample inflow ensembles which inherit the dependencies between catchments and lead times. The methodology was tested and demonstrated in the river systems linked to the Ulla-Førre hydropower complex in southern Norway, where simultaneous probabilistic forecasts for five catchments and ten lead times were constructed. The methodology is flexible enough to utilize deterministic flow forecasts from a numerical hydrological model as well as statistical forecasts such as persistent forecasts and sliding-window climatology forecasts, and it handles variation in the relative weights of these forecasts with both catchment and lead time. When evaluating predictive performance in the original space using cross-validation, the case study found that it is important to include the persistent forecast for the initial lead times and the hydrological forecast for medium-term lead times; sliding-window climatology forecasts become more important for the latest lead times. Furthermore, operationally important features in this case study, such as heteroscedasticity and lead-time-varying dependency between lead times and between catchments, are captured. Two criteria were used for evaluating the added value of the dependency model. The first was the energy score (ES), a multi-dimensional generalization of the continuous ranked probability score (CRPS); ES was calculated for all lead times and catchments together, for each catchment across all lead times, and for each lead time across all catchments. The second criterion was the CRPS of forecasted inflows accumulated over several lead times and catchments. The results showed that ES was not very sensitive to a correct covariance structure, whereas the CRPS for accumulated flows was more suitable for evaluating the dependency model. This indicates that it is more appropriate to evaluate relevant univariate variables that depend on the dependency structure than to evaluate the multivariate forecast directly.
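A sketch of the energy score used here, estimated from an ensemble: ES = E‖X − y‖ − ½ E‖X − X′‖, the multivariate generalization of CRPS. Shapes and data are illustrative assumptions.

```python
import numpy as np

def energy_score(ens, obs):
    """ens: (m, d) ensemble of d-dimensional forecasts; obs: (d,) outcome."""
    m = ens.shape[0]
    term1 = np.mean(np.linalg.norm(ens - obs, axis=1))       # E||X - y||
    diffs = np.linalg.norm(ens[:, None, :] - ens[None, :, :], axis=2)
    term2 = diffs.sum() / (m * m)                            # E||X - X'||
    return term1 - 0.5 * term2

rng = np.random.default_rng(3)
obs = rng.normal(size=50)                 # e.g. 5 catchments x 10 lead times
ens = obs + rng.normal(0, 1, size=(200, obs.size))  # toy 200-member ensemble
print(f"ES = {energy_score(ens, obs):.3f}")
```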
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2008-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
Real-Time Dynamic Modeling - Data Information Requirements and Flight Test Results
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.; Smith, Mark S.
2010-01-01
Practical aspects of identifying dynamic models for aircraft in real time were studied. Topics include formulation of an equation-error method in the frequency domain to estimate non-dimensional stability and control derivatives in real time, data information content for accurate modeling results, and data information management techniques such as data forgetting, incorporating prior information, and optimized excitation. Real-time dynamic modeling was applied to simulation data and flight test data from a modified F-15B fighter aircraft, and to operational flight data from a subscale jet transport aircraft. Estimated parameter standard errors, prediction cases, and comparisons with results from a batch output-error method in the time domain were used to demonstrate the accuracy of the identified real-time models.
Brownian motion model with stochastic parameters for asset prices
NASA Astrophysics Data System (ADS)
Ching, Soo Huei; Hin, Pooi Ah
2013-09-01
The Brownian motion model may not be a completely realistic model for asset prices because in real asset prices the drift μ and volatility σ may change over time. Here we consider a model in which the parameter x = (μ, σ) is such that its value x(t + Δt) a short time Δt ahead of the present time t depends on the value of the asset price at time t + Δt, as well as on the present parameter value x(t) and m − 1 earlier parameter values, via a conditional distribution. Malaysian stock prices are used to compare the performance of the fixed-parameter Brownian motion model with that of the stochastic-parameter model.
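A toy contrast between the fixed-parameter model and a stochastic-parameter variant; the AR(1)-style parameter dynamics below are an illustrative assumption, not the conditional distribution estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
n, dt = 1000, 1 / 252
MU0, SIG0 = 0.08, 0.20                  # fixed-model parameters
mu, sigma = MU0, SIG0                   # stochastic-model parameter state
s_fixed, s_stoch = 100.0, 100.0

for _ in range(n):
    z1, z2 = rng.normal(size=2)
    # Fixed-parameter geometric Brownian motion step
    s_fixed *= np.exp((MU0 - 0.5 * SIG0**2) * dt + SIG0 * np.sqrt(dt) * z1)
    # Parameters mean-revert with small shocks (illustrative AR(1) dynamics)
    mu = 0.9 * mu + 0.1 * MU0 + 0.01 * rng.normal()
    sigma = abs(0.9 * sigma + 0.1 * SIG0 + 0.005 * rng.normal())
    s_stoch *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z2)

print(f"fixed-parameter terminal price:      {s_fixed:8.2f}")
print(f"stochastic-parameter terminal price: {s_stoch:8.2f}")
```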
Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness
NASA Astrophysics Data System (ADS)
Kusuma, K. K.; Maruf, A.
2016-02-01
Scheduling problems involving non-identical machines with low utilization and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is categorized as an integer linear programming model and uses a branch-and-bound algorithm as the solution method. Fixed delivery times are used as the main constraint, with different processing times for each job. The results of the proposed model show that the utilization of production machines can be increased with minimal tardiness when fixed delivery times are used as a constraint.
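A sketch of this type of integer linear program, reduced to a single machine for brevity and solved with PuLP's CBC branch-and-bound solver; the job data and the big-M disjunctive formulation are illustrative assumptions.

```python
import pulp

p = {1: 4, 2: 3, 3: 7}        # processing times (made up)
d = {1: 5, 2: 11, 3: 12}      # fixed delivery (due) times (made up)
jobs, M = list(p), sum(p.values())

prob = pulp.LpProblem("min_total_tardiness", pulp.LpMinimize)
C = pulp.LpVariable.dicts("completion", jobs, lowBound=0)
T = pulp.LpVariable.dicts("tardiness", jobs, lowBound=0)
y = pulp.LpVariable.dicts(
    "precedes", [(i, j) for i in jobs for j in jobs if i < j], cat="Binary")

prob += pulp.lpSum(T[j] for j in jobs)           # minimize total tardiness
for j in jobs:
    prob += C[j] >= p[j]                         # cannot finish before p_j
    prob += T[j] >= C[j] - d[j]                  # tardiness definition
for i in jobs:
    for j in jobs:
        if i < j:                                # disjunctive ordering, big-M
            prob += C[i] + p[j] <= C[j] + M * (1 - y[(i, j)])
            prob += C[j] + p[i] <= C[i] + M * y[(i, j)]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({j: C[j].value() for j in jobs},
      "total tardiness:", pulp.value(prob.objective))
```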
Does the dose-solubility ratio affect the mean dissolution time of drugs?
Lánský, P; Weiss, M
1999-09-01
The aim was to present a new model for describing drug dissolution and, on the basis of this model, to characterize the dissolution profile by the distribution function of the random dissolution time of a drug molecule, generalizing the classical first-order model. Instead of assuming a constant fractional dissolution rate, as in the classical model, the fractional dissolution rate is taken to be a decreasing function of the dissolved amount, controlled by the dose-solubility ratio. The differential equation derived from this assumption is solved, and the distribution measures (half-dissolution time, mean dissolution time, relative dispersion of the dissolution time, dissolution time density, and fractional dissolution rate) are calculated. Finally, instead of a monotonically decreasing fractional dissolution rate, a generalization resulting in a zero dissolution rate at the time origin is introduced. The behavior of the model divides into two regions defined by q, the ratio of the dose to the solubility level: q < 1 (complete dissolution of the dose; dissolution time) and q > 1 (saturation of the solution; saturation time). The singular case q = 1 is also treated; in this situation both the mean and the relative dispersion of the dissolution time increase to infinity. The model was successfully fitted to data (1). This empirical model is descriptive, without detailed physical reasoning behind its derivation. According to the model, the mean dissolution time is affected by the dose-solubility ratio. Although this prediction appears to be in accordance with preliminary applications, further validation based on more suitable experimental data is required.
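One concrete form consistent with this description (a reconstruction for illustration, not necessarily the authors' exact equation):

```latex
% Classical first-order model: constant fractional dissolution rate k,
% with x(t) the dissolved fraction of the dose
\[
  \frac{dx}{dt} \;=\; k\,(1 - x),
\]
% generalized model: fractional rate k(1/q - x) decreases with the
% dissolved fraction and is controlled by the dose-solubility ratio q
\[
  \frac{dx}{dt} \;=\; k\,(1 - x)\!\left(\frac{1}{q} - x\right).
\]
```

Under this form, for q < 1 both factors stay positive until x → 1 (complete dissolution); for q > 1 the solution saturates at x = 1/q; and at q = 1 the two roots coincide, making the mean dissolution time diverge, matching the singular case described above.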
Wang, Wei-Qing; Cheng, Hong-Yan; Song, Song-Quan
2013-01-01
The effects of temperature, storage time, and their combination on the germination of aspen (Populus tomentosa) seeds were investigated. Aspen seeds were germinated at 5 to 30°C at 5°C intervals after storage for a period of time at 28°C and 75% relative humidity. The effect of temperature on aspen seed germination could not be effectively described by the thermal time (TT) model, which underestimated the germination rate at 5°C and poorly predicted the time courses of germination at 10, 20, 25 and 30°C. A modified TT model (MTT), which assumed a two-phased linear relationship between germination rate and temperature, was more accurate in predicting the germination rate and percentage and had a higher likelihood of being correct than the TT model. The maximum lifetime threshold (MLT) model accurately described the effect of storage time on seed germination across all germination temperatures. An aging thermal time (ATT) model combining the TT and MLT models was developed to describe the effect of both temperature and storage time on seed germination. When the ATT model was applied to germination data across all temperatures and storage times, it produced a relatively poor fit. Adjusting the ATT model to separately fit germination data at low and high temperatures in the suboptimal range increased the model's accuracy in predicting seed germination. Both the MLT and ATT models indicate that the germination of aspen seeds has distinct physiological responses to temperature within the suboptimal range. PMID:23658654
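For reference, the thermal time model mentioned here has a standard form in the germination literature (conventional symbols, not necessarily the paper's notation):

```latex
% Thermal time (TT) model at suboptimal temperatures: the time t_g for
% fraction g of seeds to germinate satisfies a constant thermal time
% theta_T(g) above a base temperature T_b, so germination rate is linear in T
\[
  \theta_T(g) \;=\; (T - T_b)\,t_g
  \qquad\Longleftrightarrow\qquad
  \frac{1}{t_g} \;=\; \frac{T - T_b}{\theta_T(g)} .
\]
```

It is this single linear dependence of rate on temperature that the modified (two-phase) TT model relaxes.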
Emotional moments across time: a possible neural basis for time perception in the anterior insula
Craig, A.D. (Bud)
2009-01-01
A model of awareness based on interoceptive salience is described, which has an endogenous time base that might provide a basis for the human capacity to perceive and estimate time intervals in the range of seconds to subseconds. The model posits that the neural substrate for awareness across time is located in the anterior insular cortex, which fits with recent functional imaging evidence relevant to awareness and time perception. The time base in this model is adaptive and emotional, and thus it offers an explanation for some aspects of the subjective nature of time perception. This model does not describe the mechanism of the time base, but it suggests a possible relationship with interoceptive afferent activity, such as heartbeat-related inputs. PMID:19487195
Modeling Geodetic Processes with Levy α-Stable Distribution and FARIMA
NASA Astrophysics Data System (ADS)
Montillet, Jean-Philippe; Yu, Kegen
2015-04-01
In recent years the scientific community has been using the autoregressive moving average (ARMA) model to model the noise in global positioning system (GPS) time series (daily solutions). This work starts by investigating the limitations of the ARMA model, which is widely used in signal processing when the measurement noise is white. Since a typical GPS time series consists of geophysical signals (e.g., a seasonal signal) and stochastic processes (e.g., coloured and white noise), the ARMA model may be inappropriate. Therefore, the application of the fractional autoregressive integrated moving average (FARIMA) model is investigated. Simulation results using synthetic time series, as well as real GPS time series from a few selected stations around Australia, show that the FARIMA model fits the time series better than the other models when the coloured noise is larger than the white noise. The second part of this work focuses on fitting the GPS time series with the family of Levy α-stable distributions. Using this distribution, a hypothesis test is developed to effectively eliminate coarse outliers from GPS time series, achieving better performance than the rule of thumb of n standard deviations (with n chosen empirically).
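A sketch of the fractional differencing that distinguishes FARIMA from ARMA; the recursion for the weights of (1 − B)^d is standard, while the series below is synthetic.

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n weights of (1 - B)^d via w_k = -w_{k-1} * (d - k + 1) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = -w[k - 1] * (d - k + 1) / k
    return w

def frac_diff(x, d):
    """Apply (1 - B)^d to a series x, truncated at the series start."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[:k + 1][::-1] @ x[:k + 1] for k in range(len(x))])

rng = np.random.default_rng(5)
x = np.cumsum(rng.normal(size=500))   # toy nonstationary series
print(frac_diff(x, 0.4)[:5])          # slowly decaying weights: long memory
```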
Modeling and forecasting of KLCI weekly return using WT-ANN integrated model
NASA Astrophysics Data System (ADS)
Liew, Wei-Thong; Liong, Choong-Yeun; Hussain, Saiful Izzuan; Isa, Zaidi
2013-04-01
Forecasting weekly returns is one of the most challenging tasks in investment, since the time series are volatile and non-stationary. In this study, an integrated model of the wavelet transform and an artificial neural network, WT-ANN, is developed for modeling and forecasting the KLCI weekly return. First, the WT is applied to decompose the weekly return time series in order to eliminate noise. Then, a mathematical model of the time series is constructed using the ANN. The performance of the suggested model is evaluated by root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The results show that the WT-ANN model is a feasible and powerful model for time series modeling and prediction.
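A minimal sketch of such a WT-ANN pipeline using PyWavelets and scikit-learn; the wavelet family, threshold, lag window and network size are assumptions, not the paper's configuration.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
r = np.sin(np.linspace(0, 20, 520)) * 0.02 + rng.normal(0, 0.01, 520)  # toy returns

# 1) Wavelet decomposition, soft-threshold the detail coefficients, rebuild
coeffs = pywt.wavedec(r, "db4", level=3)
coeffs[1:] = [pywt.threshold(c, 0.01, mode="soft") for c in coeffs[1:]]
r_dn = pywt.waverec(coeffs, "db4")[: len(r)]

# 2) ANN on a window of 4 lagged denoised returns
lags = 4
X = np.column_stack([r_dn[i : len(r_dn) - lags + i] for i in range(lags)])
y = r_dn[lags:]
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ann.fit(X[:-52], y[:-52])             # hold out the last 52 weeks
pred = ann.predict(X[-52:])
rmse = np.sqrt(np.mean((pred - y[-52:]) ** 2))
print(f"hold-out RMSE: {rmse:.5f}")
```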
Lu, Fengbin; Qiao, Han; Wang, Shouyang; Lai, Kin Keung; Li, Yuze
2017-01-01
This paper proposes a new time-varying coefficient vector autoregressions (VAR) model, in which the coefficient is a linear function of dynamic lagged correlation. The proposed model allows for flexibility in choices of dynamic correlation models (e.g. dynamic conditional correlation generalized autoregressive conditional heteroskedasticity (GARCH) models, Markov-switching GARCH models and multivariate stochastic volatility models), which indicates that it can describe many types of time-varying causal effects. Time-varying causal relations between West Texas Intermediate (WTI) crude oil and the US Standard and Poor's 500 (S&P 500) stock markets are examined by the proposed model. The empirical results show that their causal relations evolve with time and display complex characters. Both positive and negative causal effects of the WTI on the S&P 500 in the subperiods have been found and confirmed by the traditional VAR models. Similar results have been obtained in the causal effects of S&P 500 on WTI. In addition, the proposed model outperforms the traditional VAR model. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Effects of Time Advance Mechanism on Simple Agent Behaviors in Combat Simulations
2011-12-01
modeling packages that illustrate the differences between discrete-time simulation (DTS) and discrete-event simulation (DES) methodologies. Many combat... discrete-event simulation (DES) models, often referred to as “next-event” (Law and Kelton 2000), or discrete time simulation (DTS), commonly referred to as “time-step.” DTS... Many combat models use DTS as their simulation time advance mechanism
Dynamical initial-state model for relativistic heavy-ion collisions
NASA Astrophysics Data System (ADS)
Shen, Chun; Schenke, Björn
2018-02-01
We present a fully three-dimensional model providing initial conditions for energy and net-baryon density distributions in heavy-ion collisions at arbitrary collision energy. The model includes the dynamical deceleration of participating nucleons or valence quarks, depending on the implementation. The duration of the deceleration continues until the string spanned between colliding participants is assumed to thermalize, which is either after a fixed proper time, or a fluctuating time depending on sampled final rapidities. Energy is deposited in space time along the string, which in general will span a range of space-time rapidities and proper times. We study various observables obtained directly from the initial-state model, including net-baryon rapidity distributions, two-particle rapidity correlations, as well as the rapidity decorrelation of the transverse geometry. Their dependence on the model implementation and parameter values is investigated. We also present the implementation of the model with 3+1-dimensional hydrodynamics, which involves the addition of source terms that deposit energy and net-baryon densities produced by the initial-state model at proper times greater than the initial time for the hydrodynamic simulation.
Bayesian spatiotemporal crash frequency models with mixture components for space-time interactions.
Cheng, Wen; Gill, Gurdiljot Singh; Zhang, Yongping; Cao, Zhong
2018-03-01
Traffic safety research has developed spatiotemporal models to explore variations in the spatial pattern of crash risk over time. Many studies have observed notable benefits from the inclusion of spatial and temporal correlation and their interactions. However, the safety literature lacks sufficient research comparing different temporal treatments and their interaction with the spatial component. This study developed four spatiotemporal models of varying complexity corresponding to different temporal treatments: (I) linear time trend; (II) quadratic time trend; (III) autoregressive-1 (AR-1); and (IV) time adjacency. Moreover, the study introduced a flexible two-component mixture for the space-time interaction, which allows greater flexibility than the traditional linear space-time interaction. The mixture component accommodates a global space-time interaction as well as departures from the overall spatial and temporal risk patterns. This study performed a comprehensive assessment of the mixture models based on diverse criteria pertaining to goodness of fit, cross-validation, and evaluation on in-sample data for predictive accuracy of crash estimates. The assessment of model performance in terms of goodness of fit clearly established the superiority of the time-adjacency specification; this specification is evidently more complex because it borrows information from neighbouring years, but the added parameters yielded a substantial advantage in posterior deviance, which in turn benefited overall fit to the crash data. Base models were also developed to compare the proposed mixture with traditional space-time components for each temporal model. The mixture models consistently outperformed the corresponding Base models owing to their much lower deviance. In the cross-validation comparison of predictive accuracy, the linear time trend model was judged the best, as it recorded the highest log pseudo marginal likelihood (LPML). Four other evaluation criteria were considered for typical validation using the same data as for model development. Under each criterion, observed crash counts were compared with three types of data: Bayesian estimated, normal predicted, and model replicated counts. The linear model again performed the best in most scenarios, except one case using model-replicated data and two cases involving prediction without random effects. These results indicate the mediocre performance of the linear trend when random effects are excluded from evaluation, which may be due to the flexible mixture space-time interaction efficiently absorbing the residual variability that escapes the predictable part of the model. The comparison of Base and mixture models in terms of prediction accuracy further bolstered the superiority of the mixture models, as the mixture models generated more precise estimated crash counts across all four models, suggesting that their advantages at the model-fit stage transfer to prediction accuracy. Finally, the residual analysis demonstrated the consistently superior performance of the random effect models, which validates the importance of incorporating correlation structures to account for unobserved heterogeneity. Copyright © 2017 Elsevier Ltd. All rights reserved.
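For orientation, the simplest of the four specifications, the linear time trend with a zone-specific space-time interaction, is often written as follows (conventional notation, assumed rather than taken from the paper):

```latex
% Poisson spatiotemporal crash model with linear time trend and a
% zone-specific trend departure (the space-time interaction)
\[
  Y_{it} \sim \mathrm{Poisson}\!\left(E_{it}\,\theta_{it}\right), \qquad
  \log \theta_{it} \;=\; \alpha + u_i + v_i + (\beta + \delta_i)\,t ,
\]
% u_i: structured (CAR) spatial effect; v_i: unstructured heterogeneity;
% beta: global time trend; delta_i: zone-specific space-time interaction,
% which the paper replaces with a flexible two-component mixture.
```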
Two-dimensional model of resonant electron collisions with diatomic molecules and molecular cations
NASA Astrophysics Data System (ADS)
Vana, Martin; Hvizdos, David; Houfek, Karel; Curik, Roman; Greene, Chris H.; Rescigno, Thomas N.; McCurdy, C. William
2016-05-01
A simple model for resonant collisions of electrons with diatomic molecules, with one electronic and one nuclear degree of freedom (a 2D model), which was solved numerically exactly within the time-independent approach, was used to probe the local complex potential approximation and the nonlocal approximation to the nuclear dynamics of these collisions. This model was reformulated in the time-dependent picture and extended to also model electron collisions with molecular cations, especially with H2+. The model enables an assessment of approximate methods, such as the boomerang model or frame transformation theory. We will present both time-dependent and time-independent results and show how the model can be used to extract deeper insight into the dynamics of resonant collisions.
Modeling Nonstationary Emotion Dynamics in Dyads using a Time-Varying Vector-Autoregressive Model.
Bringmann, Laura F; Ferrer, Emilio; Hamaker, Ellen L; Borsboom, Denny; Tuerlinckx, Francis
2018-01-01
Emotion dynamics are likely to arise in an interpersonal context. Standard methods to study emotions in interpersonal interaction are limited because stationarity is assumed, meaning that the dynamics, for example time-lagged relations, are invariant across time periods. However, this is generally an unrealistic assumption. Whether caused by an external event (e.g., divorce) or an internal one (e.g., rumination), emotion dynamics are prone to change. The semi-parametric time-varying vector-autoregressive (TV-VAR) model is based on well-studied generalized additive models, implemented in the software R. The TV-VAR can explicitly model changes in temporal dependency without pre-existing knowledge about the nature of the change. A simulation study is presented, showing that the TV-VAR model is superior to the standard time-invariant VAR model when the dynamics change over time. The TV-VAR model is applied to empirical data on daily feelings of positive affect (PA) from a single couple. Our analyses indicate reliable changes in the male's emotion dynamics over time, but not in the female's, which were not predicted by her own affect or that of her partner. This application illustrates the usefulness of the TV-VAR model for detecting changes in the dynamics of a system.
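A toy illustration of what the TV-VAR targets: a bivariate VAR(1) whose cross-lagged coefficient drifts over time, recovered here with a crude rolling-window OLS rather than the paper's penalized-spline (GAM) machinery.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 600
x = np.zeros((T, 2))
for t in range(1, T):
    b_t = 0.4 * np.sin(2 * np.pi * t / T)    # slowly changing cross coupling
    A = np.array([[0.3, b_t],
                  [0.1, 0.3]])
    x[t] = A @ x[t - 1] + rng.normal(0, 1, 2)

window = 100
est = []
for t in range(window, T):
    Y = x[t - window + 1 : t + 1, 0]         # predict series 0
    X = x[t - window : t]                    # from both lagged series
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    est.append(coef[1])                      # time-varying cross effect
print(np.round(est[::100], 2))               # drifts roughly like b_t
```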
Analyzing developmental processes on an individual level using nonstationary time series modeling.
Molenaar, Peter C M; Sinclair, Katerina O; Rovine, Michael J; Ram, Nilam; Corneal, Sherry E
2009-01-01
Individuals change over time, often in complex ways. Generally, studies of change over time have combined individuals into groups for analysis, which is inappropriate in most, if not all, studies of development. The authors explain how to identify appropriate levels of analysis (individual vs. group) and demonstrate how to estimate changes in developmental processes over time using a multivariate nonstationary time series model. They apply this model to describe the changing relationships between a biological son and father and a stepson and stepfather at the individual level. The authors also explain how to use an extended Kalman filter with iteration and smoothing estimator to capture how dynamics change over time. Finally, they suggest further applications of the multivariate nonstationary time series model and detail the next steps in the development of statistical models used to analyze individual-level data.
Joint Models of Longitudinal and Time-to-Event Data with More Than One Event Time Outcome: A Review.
Hickey, Graeme L; Philipson, Pete; Jorgensen, Andrea; Kolamunnage-Dona, Ruwanthi
2018-01-31
Methodological development and clinical application of joint models of longitudinal and time-to-event outcomes have grown substantially over the past two decades. However, much of this research has concentrated on a single longitudinal outcome and a single event time outcome. In clinical and public health research, patients who are followed up over time may often experience multiple, recurrent, or a succession of clinical events. Models that utilise such multivariate event time outcomes are quite valuable in clinical decision-making. We comprehensively review the literature for implementations of joint models involving more than a single event time per subject. We consider the distributional and modelling assumptions, including the association structure, estimation approaches, software implementations, and clinical applications. Research into this area is proving highly promising, but to date remains in its infancy.
Building occupancy simulation and data assimilation using a graph-based agent-oriented model
NASA Astrophysics Data System (ADS)
Rai, Sanish; Hu, Xiaolin
2018-07-01
Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and a data assimilation algorithm that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based and graph-based models. Agent-based models suffer from high computation costs when simulating large numbers of occupants, while graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimates of building occupancy from sensor data.
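A minimal sketch of the Sequential Monte Carlo idea behind such data assimilation, as a bootstrap particle filter tracking a single room's occupant count; the dynamics and sensor model are toy assumptions, not the paper's graph-based agent-oriented model.

```python
import numpy as np

rng = np.random.default_rng(8)
n_particles, T = 500, 40
true_occ = 10
particles = rng.integers(0, 30, n_particles).astype(float)

for t in range(T):
    true_occ = max(0, true_occ + rng.integers(-2, 3))     # people drift in/out
    sensor = true_occ + rng.normal(0, 2)                  # noisy count sensor

    particles += rng.integers(-2, 3, n_particles)         # propagate the model
    particles = np.clip(particles, 0, None)
    w = np.exp(-0.5 * ((sensor - particles) / 2.0) ** 2)  # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)       # resample
    particles = particles[idx]

print(f"true={true_occ}, estimated={particles.mean():.1f}")
```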
Immortal time bias in observational studies of time-to-event outcomes.
Jones, Mark; Fowler, Robert
2016-12-01
The purpose of the study is to show, through simulation and example, the magnitude and direction of immortal time bias when an inappropriate analysis is used. We compare four methods of analysis for observational studies of time-to-event outcomes: logistic regression, the standard Cox model, landmark analysis, and the time-dependent Cox model, using an example data set of patients critically ill with influenza and a simulation study. For the example data set, logistic regression, the standard Cox model, and landmark analysis all showed some evidence that treatment with oseltamivir protects against mortality in patients critically ill with influenza. However, when the time-dependent nature of treatment exposure is taken into account using a time-dependent Cox model, there is no longer evidence of a protective effect of treatment. The simulation study showed that, under various scenarios, the time-dependent Cox model consistently provides unbiased treatment effect estimates, whereas the standard Cox model leads to bias in favor of treatment. Logistic regression and landmark analysis may also lead to bias. To minimize the risk of immortal time bias in observational studies of survival outcomes, we strongly suggest that time-dependent exposures be included as time-dependent variables in hazard-based analyses. Copyright © 2016 Elsevier Inc. All rights reserved.
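A sketch of the recommended analysis using the lifelines library: treatment enters as a time-varying covariate over (start, stop] intervals, so pre-treatment person-time is not misattributed to the treated group. The data frame is made up.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Long format: one row per (id, interval); 'treated' switches on mid-follow-up
df = pd.DataFrame({
    "id":      [1, 1, 2, 3, 3, 4, 5, 5, 6],
    "start":   [0, 3, 0, 0, 3, 0, 0, 2, 0],
    "stop":    [3, 10, 6, 3, 8, 9, 2, 7, 5],
    "treated": [0, 1, 0, 0, 1, 0, 0, 1, 0],
    "event":   [0, 0, 1, 0, 1, 0, 0, 1, 1],  # event at end of last interval
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
ctv.print_summary()   # hazard ratio for 'treated', free of immortal time bias
```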
Characterization of Cement Thickening Time Properties and Modeling of Thickening Time
NASA Astrophysics Data System (ADS)
Coryell, Tyler Neil
A comprehensive model of cement thickening time, as applied in the oil field, that incorporates all the properties internal to the cement design has never been created. To address this issue, different variables were tested, including barite particle size, hydroxyethylcellulose (HEC) concentration, age (the cement's exposure to humidity), downhole temperature, and the particle size of the cement. Barite particle size was shown to have no significant effect on thickening time. The age of the sample was also shown to have no significant effect on thickening time, at least under our laboratory storage conditions. Testing of nano-sized cement particles currently suggests that smaller particles may increase thickening time; while such a result is not absent from other works, it is unusual. Because the evidence for nano-particle cement is inconclusive, that work is included as it stands but is not taken into consideration in our models. The downhole temperature and the HEC concentration are used to create our models. This research shows that creating a numerical model is a practical investment in our future understanding of cement's field use. Three model systems are used. The first uses equations to predict the time when thickening first begins and the thickness at that time. The second uses the expected rate of change to find a curvature defining the acceleration. The third improves on some scatter that could not be controlled in the second model by using the first derivative to find the point of maximum slope and the time at which it occurs. Using this maximum-slope point, the 'pumpable' time of the cement before it thickens can be estimated. All the models can be used in tandem to describe the cement thickening process; however, the most accurate combination is the first model with the third, i.e. using the direct model for when acceleration begins and the first-derivative model to find the end of the thickening time. All the models can be extended in future work to include a broader test matrix and other chemical additives for the base cement.
Milne, R K; Yeo, G F; Edeson, R O; Madsen, B W
1988-04-22
Stochastic models of ion channels have been based largely on Markov theory where individual states and transition rates must be specified, and sojourn-time densities for each state are constrained to be exponential. This study presents an approach based on random-sum methods and alternating-renewal theory, allowing individual states to be grouped into classes provided the successive sojourn times in a given class are independent and identically distributed. Under these conditions Markov models form a special case. The utility of the approach is illustrated by considering the effects of limited time resolution (modelled by using a discrete detection limit, xi) on the properties of observable events, with emphasis on the observed open-time (xi-open-time). The cumulants and Laplace transform for a xi-open-time are derived for a range of Markov and non-Markov models; several useful approximations to the xi-open-time density function are presented. Numerical studies show that the effects of limited time resolution can be extreme, and also highlight the relative importance of the various model parameters. The theory could form a basis for future inferential studies in which parameter estimation takes account of limited time resolution in single channel records. Appendixes include relevant results concerning random sums and a discussion of the role of exponential distributions in Markov models.
Nobre, Aline Araújo; Carvalho, Marilia Sá; Griep, Rosane Härter; Fonseca, Maria de Jesus Mendes da; Melo, Enirtes Caetano Prates; Santos, Itamar de Souza; Chor, Dora
2017-08-17
To compare two methodological approaches, the multinomial model and the zero-inflated gamma model, in evaluating the factors associated with the practice of, and amount of time spent on, leisure time physical activity. Data collected from 14,823 baseline participants in the Longitudinal Study of Adult Health (ELSA-Brasil, Estudo Longitudinal de Saúde do Adulto) were analysed. Regular leisure time physical activity was measured using the leisure time physical activity module of the International Physical Activity Questionnaire. The explanatory variables considered were gender, age, education level, and annual per capita family income. The main advantage of the zero-inflated gamma model over the multinomial model is that it estimates the mean time (minutes per week) spent on leisure time physical activity. For example, on average, men spent 28 minutes/week longer on leisure time physical activity than women did. The most sedentary groups were young women with low education level and income. The zero-inflated gamma model, which is rarely used in epidemiological studies, can give more appropriate answers in several situations. In our case, we obtained important information on the main determinants of the duration of leisure time physical activity. This information can help guide efforts towards the most vulnerable groups, since physical inactivity is associated with various diseases and even premature death.
Neal, Andrew; Kwantes, Peter J
2009-04-01
The aim of this article is to develop a formal model of conflict detection performance. Our model assumes that participants iteratively sample evidence regarding the state of the world and accumulate it over time. A decision is made when the evidence reaches a threshold that changes over time in response to the increasing urgency of the task. Two experiments were conducted to examine the effects of conflict geometry and timing on response proportions and response time. The model is able to predict the observed pattern of response times, including a nonmonotonic relationship between distance at point of closest approach and response time, as well as effects of angle of approach and relative velocity. The results demonstrate that evidence accumulation models provide a good account of performance on a conflict detection task. Evidence accumulation models are a form of dynamic signal detection theory, allowing for the analysis of response times as well as response proportions, and can be used for simulating human performance on dynamic decision tasks.
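A minimal sketch of the core mechanism, evidence accumulated in discrete time against a threshold that collapses with urgency, is shown below; the drift, noise, and collapse-rate values are illustrative, not the fitted parameters of the article's model.

```python
import numpy as np

rng = np.random.default_rng(7)

def detect_conflict(drift, b0=1.0, collapse=0.002, noise=0.1,
                    dt=0.01, t_max=30.0):
    """Accumulate noisy evidence; respond 'conflict' when the total
    crosses a threshold that collapses linearly as urgency grows.
    Returns (responded, response_time)."""
    x, t = 0.0, 0.0
    while t < t_max:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        b = max(b0 - collapse * t, 0.05)   # urgency-driven collapsing bound
        if x >= b:
            return True, t
    return False, t_max

rts = [detect_conflict(drift=0.08)[1] for _ in range(1000)]
print(f"mean RT {np.mean(rts):.2f} s")
```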
Network Reduction Algorithm for Developing Distribution Feeders for Real-Time Simulators: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagarajan, Adarsh; Nelson, Austin; Prabakar, Kumaraguru
As advanced grid-support functions (AGF) become more widely used in grid-connected photovoltaic (PV) inverters, utilities are increasingly interested in their impacts when implemented in the field. These effects can be understood by modeling feeders in real-time systems and testing PV inverters using power hardware-in-the-loop (PHIL) techniques. This paper presents a novel feeder model reduction algorithm using a Monte Carlo method that enables large feeders to be solved and operated on real-time computing platforms. Two Hawaiian Electric feeder models in Synergi Electric's load flow software were converted to reduced-order models in OpenDSS and subsequently implemented on the OPAL-RT real-time digital testing platform. Smart PV inverters were added to the real-time model with AGF responses modeled after characterizing commercially available hardware inverters. Finally, hardware inverters were tested in conjunction with the real-time model using PHIL techniques so that the effects of AGFs on the chosen feeders could be analyzed.
Luo, Yi; Zhang, Tao; Li, Xiao-song
2016-05-01
To explore the application of a fuzzy time series model based on fuzzy c-means clustering in forecasting the monthly incidence of Hepatitis E in mainland China. A predictive model (fuzzy time series method based on fuzzy c-means clustering) was developed using Hepatitis E incidence data in mainland China between January 2004 and July 2014. The incidence data from August 2014 to November 2014 were used to test the fitness of the predictive model. The forecasting results were compared with those resulting from traditional fuzzy time series models. The fuzzy time series model based on fuzzy c-means clustering had a mean squared error (MSE) of fitting of 0.0011 and an MSE of forecasting of 6.9775 × 10⁻⁴, compared with 0.0017 and 0.0014 from the traditional forecasting model. The results indicate that the fuzzy time series model based on fuzzy c-means clustering has a better performance in forecasting the incidence of Hepatitis E.
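For readers unfamiliar with the clustering step, a minimal fuzzy c-means implementation on a univariate incidence series is sketched below; the incidence values are invented, and a real fuzzy time series model would go on to fuzzify the series with these clusters and extract fuzzy logical relationships.

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means on 1-D data x: returns cluster centers and
    the fuzzy membership matrix u (n_points x c)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        um = u ** m
        centers = (um * x[:, None]).sum(0) / um.sum(0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))     # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Hypothetical monthly incidence series; clusters define fuzzy intervals.
incidence = np.array([0.18, 0.22, 0.25, 0.21, 0.19, 0.24,
                      0.30, 0.28, 0.20, 0.17, 0.23, 0.26])
centers, u = fuzzy_cmeans(incidence, c=3)
print(np.sort(centers))   # cluster centers -> fuzzy set midpoints
```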
Fast and accurate Monte Carlo sampling of first-passage times from Wiener diffusion models.
Drugowitsch, Jan
2016-02-11
We present a new, fast approach for drawing boundary-crossing samples from Wiener diffusion models. Diffusion models are widely applied to model choices and reaction times in two-choice decisions. Samples from these models can be used to simulate the choices and reaction times they predict. These samples, in turn, can be utilized to adjust the models' parameters to match observed behavior from humans and other animals. Usually, such samples are drawn by simulating a stochastic differential equation in discrete time steps, which is slow and leads to biases in the reaction time estimates. Our method instead exploits known expressions for first-passage time densities, which results in unbiased, exact samples and a hundred- to thousand-fold speed increase in typical situations. In its most basic form it is restricted to diffusion models with symmetric boundaries and non-leaky accumulation, but our approach can be extended to also handle asymmetric boundaries or to approximate leaky accumulation.
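For contrast, the slow baseline the paper improves on can be written in a few lines: a naive Euler-Maruyama simulation of a symmetric-bound diffusion, whose reaction times inherit a bias from the step size dt. Parameter values are illustrative; this is not the paper's fast sampler.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_trial(drift=0.5, bound=1.0, dt=1e-3, sigma=1.0):
    """Naive discrete-time simulation of a symmetric-bound Wiener
    diffusion: slow, and RTs are biased upward by the step size dt."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (x >= bound), t   # (choice, reaction time)

choices, rts = zip(*(simulate_trial() for _ in range(2000)))
print(f"P(upper) = {np.mean(choices):.3f}, mean RT = {np.mean(rts):.3f} s")
```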
Clinical time series prediction: towards a hierarchical dynamical system framework
Liu, Zitao; Hauskrecht, Milos
2014-01-01
Objective: Developing machine learning and data mining algorithms for building temporal models of clinical time series is important for understanding the patient condition, the dynamics of a disease, the effect of various patient management interventions, and clinical decision making. In this work, we propose and develop a novel hierarchical framework for modeling clinical time series data of varied length and with irregularly sampled observations. Materials and methods: Our hierarchical dynamical system framework for modeling clinical time series combines the advantages of two temporal modeling approaches: the linear dynamical system and the Gaussian process. We model the irregularly sampled clinical time series by using multiple Gaussian process sequences in the lower level of our hierarchical framework and capture the transitions between Gaussian processes by utilizing the linear dynamical system. The experiments are conducted on the complete blood count (CBC) panel data of 1000 post-surgical cardiac patients during their hospitalization. Our framework is evaluated and compared to multiple baseline approaches in terms of the mean absolute prediction error and the absolute percentage error. Results: We tested our framework by first learning the time series model from data for the patients in the training set, and then applying the model to predict future time series values for the patients in the test set. We show that our model outperforms multiple existing models in terms of its predictive accuracy. Our method achieved a 3.13% average prediction accuracy improvement on ten CBC lab time series when compared against the best performing baseline. A 5.25% average accuracy improvement was observed when only short-term predictions were considered. Conclusion: A new hierarchical dynamical system framework that lets us model irregularly sampled time series data is a promising new direction for modeling clinical time series and for improving their predictive performance. PMID:25534671
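The lower-level building block, Gaussian process regression over irregularly spaced time stamps, can be sketched with scikit-learn as below; the kernel choice and the synthetic "lab value" series are assumptions for illustration, not the paper's specification.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)

# Hypothetical irregularly sampled lab values (e.g., one CBC analyte).
t_obs = np.sort(rng.uniform(0, 10, 15))[:, None]        # days, uneven gaps
y_obs = np.sin(t_obs).ravel() + 0.1 * rng.standard_normal(15)

kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t_obs, y_obs)

t_grid = np.linspace(0, 12, 100)[:, None]   # includes a forecast window
mean, sd = gp.predict(t_grid, return_std=True)
print(mean[-1], sd[-1])   # prediction and uncertainty at day 12
```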
Airport Facility Queuing Model Validation
DOT National Transportation Integrated Search
1977-05-01
Criteria are presented for selection of analytic models to represent waiting times due to queuing processes. An existing computer model by M.F. Neuts which assumes general nonparametric distributions of arrivals per unit time and service times for a ...
Evaluation of nonlinearity and validity of nonlinear modeling for complex time series.
Suzuki, Tomoya; Ikeguchi, Tohru; Suzuki, Masuo
2007-10-01
Even if an original time series exhibits nonlinearity, it is not always effective to approximate the time series by a nonlinear model because such nonlinear models have high complexity from the viewpoint of information criteria. Therefore, we propose two measures to evaluate both the nonlinearity of a time series and validity of nonlinear modeling applied to it by nonlinear predictability and information criteria. Through numerical simulations, we confirm that the proposed measures effectively detect the nonlinearity of an observed time series and evaluate the validity of the nonlinear model. The measures are also robust against observational noises. We also analyze some real time series: the difference of the number of chickenpox and measles patients, the number of sunspots, five Japanese vowels, and the chaotic laser. We can confirm that the nonlinear model is effective for the Japanese vowel /a/, the difference of the number of measles patients, and the chaotic laser.
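The spirit of the proposed comparison can be illustrated by pitting a linear AR fit against a simple nonlinear (nearest-neighbour) predictor on a noisy logistic-map series; the paper's measures are more principled (nonlinear predictability plus information criteria), and everything below is an invented toy.

```python
import numpy as np

def embed(x, p):
    """Delay-embed series x: rows [x_t, ..., x_{t+p-1}] predict x_{t+p}."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

rng = np.random.default_rng(0)
x = np.empty(600)
x[0] = 0.4
for i in range(599):                       # noisy logistic map: nonlinear
    x[i + 1] = 3.9 * x[i] * (1 - x[i])
x += 0.01 * rng.standard_normal(600)

X, y = embed(x, p=3)
Xtr, ytr, Xte, yte = X[:400], y[:400], X[400:], y[400:]

# Linear benchmark: least-squares AR(3) with intercept.
A = np.column_stack([Xtr, np.ones(len(Xtr))])
coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
mse_lin = np.mean((np.column_stack([Xte, np.ones(len(Xte))]) @ coef - yte) ** 2)

# Nonlinear benchmark: k-nearest-neighbour (analog) prediction.
k = 10
d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
pred_nl = ytr[np.argsort(d, axis=1)[:, :k]].mean(axis=1)
mse_nl = np.mean((pred_nl - yte) ** 2)
print(f"linear MSE {mse_lin:.5f} vs nonlinear MSE {mse_nl:.5f}")
```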
The Simplest Complete Model of Choice Response Time: Linear Ballistic Accumulation
ERIC Educational Resources Information Center
Brown, Scott D.; Heathcote, Andrew
2008-01-01
We propose a linear ballistic accumulator (LBA) model of decision making and reaction time. The LBA is simpler than other models of choice response time, with independent accumulators that race towards a common response threshold. Activity in the accumulators increases in a linear and deterministic manner. The simplicity of the model allows…
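A single LBA trial is simple enough to simulate directly, which is part of the model's appeal; the parameter values below are invented, and the crude guard against negative drift rates sidesteps the full model's treatment of non-terminating accumulators.

```python
import numpy as np

rng = np.random.default_rng(11)

def lba_trial(v_means=(0.8, 0.6), A=0.5, b=1.0, s=0.25, t0=0.2):
    """One linear ballistic accumulator trial: each accumulator starts
    at Uniform(0, A), rises deterministically at a normally drawn rate,
    and the first to reach threshold b determines choice and RT."""
    starts = rng.uniform(0, A, size=len(v_means))
    drifts = rng.normal(v_means, s)
    drifts = np.where(drifts > 0, drifts, 1e-6)  # crude negative-rate guard
    times = (b - starts) / drifts
    k = int(np.argmin(times))
    return k, t0 + times[k]

choices, rts = zip(*(lba_trial() for _ in range(5000)))
print(f"P(choice 0) = {np.mean(np.array(choices) == 0):.3f}, "
      f"median RT = {np.median(rts):.3f} s")
```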
New Models for Forecasting Enrollments: Fuzzy Time Series and Neural Network Approaches.
ERIC Educational Resources Information Center
Song, Qiang; Chissom, Brad S.
Since university enrollment forecasting is very important, many different methods and models have been proposed by researchers. Two new methods for enrollment forecasting are introduced: (1) the fuzzy time series model; and (2) the artificial neural networks model. Fuzzy time series has been proposed to deal with forecasting problems within a…
Applying Kaplan-Meier to Item Response Data
ERIC Educational Resources Information Center
McNeish, Daniel
2018-01-01
Some IRT models can be equivalently modeled in alternative frameworks such as logistic regression. Logistic regression can also model time-to-event data, which concerns the probability of an event occurring over time. Using the relation between time-to-event models and logistic regression and the relation between logistic regression and IRT, this…
Hierarchical Diffusion Models for Two-Choice Response Times
ERIC Educational Resources Information Center
Vandekerckhove, Joachim; Tuerlinckx, Francis; Lee, Michael D.
2011-01-01
Two-choice response times are a common type of data, and much research has been devoted to the development of process models for such data. However, the practical application of these models is notoriously complicated, and flexible methods are largely nonexistent. We combine a popular model for choice response times--the Wiener diffusion…
Modelling endurance and resumption times for repetitive one-hand pushing.
Rose, Linda M; Beauchemin, Catherine A A; Neumann, W Patrick
2018-07-01
This study's objective was to develop models of endurance time (ET), as a function of load level (LL), and of resumption time (RT) after loading as a function of both LL and loading time (LT) for repeated loadings. Ten male participants with experience in construction work each performed 15 different one-handed repeated pushing tasks at shoulder height with varied exerted force and duration. These data were used to create regression models predicting ET and RT. It is concluded that power law relationships are most appropriate to use when modelling ET and RT. While the data the equations are based on are limited regarding number of participants, gender, postures, magnitude and type of exerted force, the paper suggests how this kind of modelling can be used in job design and in further research. Practitioner Summary: Adequate muscular recovery during work-shifts is important to create sustainable jobs. This paper describes mathematical modelling and presents models for endurance times and resumption times (an aspect of recovery need), based on data from an empirical study. The models can be used to help manage fatigue levels in job design.
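A power-law model of the kind described, ET = a * LL^b, reduces to ordinary linear regression after a log-log transform; the endurance-time values below are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical endurance-time observations at several load levels
# (load level as a fraction of maximum voluntary contraction, MVC).
LL = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7])          # load level
ET = np.array([420.0, 160.0, 75.0, 41.0, 24.0, 15.0])  # seconds

# Fit ET = a * LL**b by linear regression in log-log space.
b, log_a = np.polyfit(np.log(LL), np.log(ET), 1)
a = np.exp(log_a)
print(f"ET ~ {a:.1f} * LL^{b:.2f}")  # b < 0: heavier load, shorter endurance
```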
NASA Astrophysics Data System (ADS)
Yang, Q.; Wang, Y.; Zhang, J.; Delgado, J.
2017-05-01
Accurate and reliable groundwater level forecasting models can help ensure the sustainable use of a watershed's aquifers for urban and rural water supply. In this paper, three time series analysis methods, Holt-Winters (HW), integrated time series (ITS), and seasonal autoregressive integrated moving average (SARIMA), are explored to simulate the groundwater level in a coastal aquifer in China. The monthly groundwater table depth data collected in a long time series from 2000 to 2011 are simulated and compared across the three time series models. The error criteria are estimated using the coefficient of determination (R²), the Nash-Sutcliffe model efficiency coefficient (E), and the root-mean-squared error. The results indicate that all three models are accurate in reproducing the historical time series of groundwater levels. The comparisons of the three models show that the HW model is more accurate in predicting the groundwater levels than the SARIMA and ITS models. It is recommended that additional studies explore this proposed method, which can be used in turn to facilitate the development and implementation of more effective and sustainable groundwater management strategies.
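Of the three methods, SARIMA is the most readily reproduced; a sketch with statsmodels on a synthetic monthly depth series follows. The (1,0,1)x(1,1,1,12) orders and the synthetic seasonal signal are assumptions, not the identified model of the study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(5)
# Hypothetical monthly groundwater table depth (m) with a seasonal
# cycle plus noise, standing in for the observed 2000-2011 series.
idx = pd.date_range("2000-01", periods=144, freq="MS")
depth = 5 + 0.8 * np.sin(2 * np.pi * idx.month / 12) \
          + 0.2 * rng.standard_normal(144)
y = pd.Series(depth, index=idx)

# Seasonal ARIMA(1,0,1)x(1,1,1,12); orders are illustrative only.
model = SARIMAX(y, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)
print(fit.forecast(steps=12))   # one-year-ahead monthly forecast
```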
Time series modeling in traffic safety research.
Lavrenz, Steven M; Vlahogianni, Eleni I; Gkritza, Konstantina; Ke, Yue
2018-08-01
The use of statistical models for analyzing traffic safety (crash) data has been well-established. However, time series techniques have traditionally been underrepresented in the corresponding literature, due to challenges in data collection, along with a limited knowledge of proper methodology. In recent years, new types of high-resolution traffic safety data, especially in measuring driver behavior, have made time series modeling techniques an increasingly salient topic of study. Yet there remains a dearth of information to guide analysts in their use. This paper provides an overview of the state of the art in using time series models in traffic safety research, and discusses some of the fundamental techniques and considerations in classic time series modeling. It also presents ongoing and future opportunities for expanding the use of time series models, and explores newer modeling techniques, including computational intelligence models, which hold promise in effectively handling ever-larger data sets. The information contained herein is meant to guide safety researchers in understanding this broad area of transportation data analysis, and provide a framework for understanding safety trends that can influence policy-making. Copyright © 2017 Elsevier Ltd. All rights reserved.
Real time wave forecasting using wind time history and numerical model
NASA Astrophysics Data System (ADS)
Jain, Pooja; Deo, M. C.; Latha, G.; Rajendran, V.
Operational activities in the ocean like planning for structural repairs or fishing expeditions require real time prediction of waves over typical time durations of say a few hours. Such predictions can be made by using a numerical model or a time series model employing continuously recorded waves. This paper presents another option to do so, based on a different time series approach in which the input is in the form of preceding wind speed and wind direction observations. This would be useful for those stations where costly wave buoys are not deployed and instead only meteorological buoys measuring wind are moored. The technique employs alternative artificial intelligence approaches of an artificial neural network (ANN), genetic programming (GP) and model tree (MT) to carry out the time series modeling of wind to obtain waves. Wind observations at four offshore sites along the east coast of India were used. For calibration purposes the wave data was generated using a numerical model. The predicted waves obtained using the proposed time series models, when compared with the numerically generated waves, showed good resemblance in terms of the selected error criteria. Large differences across the chosen techniques of ANN, GP, MT were not noticed. Wave hindcasting at the same time step and the predictions over shorter lead times were better than the predictions over longer lead times. The proposed method is a cost-effective and convenient option when site-specific information is desired.
Residence-time framework for modeling multicomponent reactive transport in stream hyporheic zones
NASA Astrophysics Data System (ADS)
Painter, S. L.; Coon, E. T.; Brooks, S. C.
2017-12-01
Process-based models for transport and transformation of nutrients and contaminants in streams require tractable representations of solute exchange between the stream channel and biogeochemically active hyporheic zones. Residence-time based formulations provide an alternative to detailed three-dimensional simulations and have had good success in representing hyporheic exchange of non-reacting solutes. We extend the residence-time formulation for hyporheic transport to accommodate general multicomponent reactive transport. To that end, the integro-differential form of previous residence time models is replaced by an equivalent formulation based on a one-dimensional advection dispersion equation along the channel coupled at each channel location to a one-dimensional transport model in Lagrangian travel-time form. With the channel discretized for numerical solution, the associated Lagrangian model becomes a subgrid model representing an ensemble of streamlines that are diverted into the hyporheic zone before returning to the channel. In contrast to the previous integro-differential forms of the residence-time based models, the hyporheic flowpaths have semi-explicit spatial representation (parameterized by travel time), thus allowing coupling to general biogeochemical models. The approach has been implemented as a stream-corridor subgrid model in the open-source integrated surface/subsurface modeling software ATS. We use bedform-driven flow coupled to a biogeochemical model with explicit microbial biomass dynamics as an example to show that the subgrid representation is able to represent redox zonation in sediments and resulting effects on metal biogeochemical dynamics in a tractable manner that can be scaled to reach scales.
NASA Astrophysics Data System (ADS)
Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas
2010-05-01
In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields. However, because the driving large-scale fields are generally available at much lower frequency than the model time step (e.g., 6-hourly analyses), with a basic interpolation between fields, the optimum nudging time differs from zero while remaining smaller than the predictability time.
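Indiscriminate (Newtonian) nudging amounts to adding a linear relaxation term toward the driving field, as the toy step below shows; the states and the relaxation time tau are arbitrary, and the actual quasi-geostrophic tendencies are omitted.

```python
import numpy as np

def nudge_step(u, u_driving, dt, tau):
    """One explicit step of Newtonian (indiscriminate) nudging: relax the
    regional state u toward the driving field with relaxation time tau."""
    return u + dt * (-(u - u_driving) / tau)

u = np.array([1.0, 0.5, -0.3])       # regional model state (arbitrary)
u_big = np.array([0.8, 0.6, -0.1])   # coarse driving field, interpolated
for _ in range(100):
    u = nudge_step(u, u_big, dt=0.1, tau=5.0)
print(u)   # drawn toward the driving field at a rate set by tau
```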
Response moderation models for conditional dependence between response time and response accuracy.
Bolsinova, Maria; Tijmstra, Jesper; Molenaar, Dylan
2017-05-01
It is becoming more feasible and common to register response times in the application of psychometric tests. Researchers thus have the opportunity to jointly model response accuracy and response time, which provides users with more relevant information. The most common choice is to use the hierarchical model (van der Linden, 2007, Psychometrika, 72, 287), which assumes conditional independence between response time and accuracy, given a person's speed and ability. However, this assumption may be violated in practice if, for example, persons vary their speed or differ in their response strategies, leading to conditional dependence between response time and accuracy and confounding measurement. We propose six nested hierarchical models for response time and accuracy that allow for conditional dependence, and discuss their relationship to existing models. Unlike existing approaches, the proposed hierarchical models allow for various forms of conditional dependence in the model and allow the effect of continuous residual response time on response accuracy to be item-specific, person-specific, or both. Estimation procedures for the models are proposed, as well as two information criteria that can be used for model selection. Parameter recovery and usefulness of the information criteria are investigated using simulation, indicating that the procedure works well and is likely to select the appropriate model. Two empirical applications are discussed to illustrate the different types of conditional dependence that may occur in practice and how these can be captured using the proposed hierarchical models. © 2016 The British Psychological Society.
Ambient temperature and coronary heart disease mortality in Beijing, China: a time series study
2012-01-01
Background: Many studies have examined the association between ambient temperature and mortality. However, less evidence is available on the temperature effects on coronary heart disease (CHD) mortality, especially in China. In this study, we examined the relationship between ambient temperature and CHD mortality in Beijing, China during 2000 to 2011. In addition, we compared time series and time-stratified case-crossover models for the non-linear effects of temperature. Methods: We examined the effects of temperature on CHD mortality using both time series and time-stratified case-crossover models. We also assessed the effects of temperature on CHD mortality by subgroups: gender (female and male) and age (≥65 and <65). We used a distributed lag non-linear model to examine the non-linear effects of temperature on CHD mortality up to 15 lag days. We used the Akaike information criterion to assess the model fit for the two designs. Results: The time series models had a better model fit than the time-stratified case-crossover models. Both designs showed that the relationships between temperature and group-specific CHD mortality were non-linear. Extreme cold and hot temperatures significantly increased the risk of CHD mortality. Hot effects were acute and short-term, while cold effects were delayed by two days and lasted for five days. Older people and women were more sensitive to extreme cold and hot temperatures than younger people and men. Conclusions: This study suggests that time series models performed better than time-stratified case-crossover models according to the model fit, even though they produced similar non-linear effects of temperature on CHD mortality. In addition, our findings indicate that extreme cold and hot temperatures increase the risk of CHD mortality in Beijing, China, particularly for women and older people. PMID:22909034
Real-time simulation of three-dimensional shoulder girdle and arm dynamics.
Chadwick, Edward K; Blana, Dimitra; Kirsch, Robert F; van den Bogert, Antonie J
2014-07-01
Electrical stimulation is a promising technology for the restoration of arm function in paralyzed individuals. Control of the paralyzed arm under electrical stimulation, however, is a challenging problem that requires advanced controllers and command interfaces for the user. A real-time model describing the complex dynamics of the arm would allow user-in-the-loop type experiments where the command interface and controller could be assessed. Real-time models of the arm previously described have not included the ability to model the independently controlled scapula and clavicle, limiting their utility for clinical applications of this nature. The goal of this study therefore was to evaluate the performance and mechanical behavior of a real-time, dynamic model of the arm and shoulder girdle. The model comprises seven segments linked by eleven degrees of freedom and actuated by 138 muscle elements. Polynomials were generated to describe the muscle lines of action to reduce computation time, and an implicit, first-order Rosenbrock formulation of the equations of motion was used to increase simulation step-size. The model simulated flexion of the arm faster than real time, simulation time being 92% of actual movement time on standard desktop hardware. Modeled maximum isometric torque values agreed well with values from the literature, showing that the model simulates the moment-generating behavior of a real human arm. The speed of the model enables experiments where the user controls the virtual arm and receives visual feedback in real time. The ability to optimize potential solutions in simulation greatly reduces the burden on the user during development.
Studies in astronomical time series analysis. I - Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1981-01-01
Several random process models in the time domain are defined and discussed. Attention is given to the moving average model, the autoregressive model, and relationships between and combinations of these models. Consideration is then given to methods for investigating pulse structure, procedures of model construction, computational methods, and numerical experiments. A FORTRAN algorithm of time series analysis has been developed which is relatively stable numerically. Results of test cases are given to study the effect of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the light curve of the quasar 3C 272 is considered as an example.
Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects.
Dai, Tianjiao; Shete, Sanjay
2016-08-30
In a standard two-stage SMART design, the intermediate response to the first-stage intervention is measured at a fixed time point for all participants. Subsequently, responders and non-responders are re-randomized and the final outcome of interest is measured at the end of the study. To reduce the side effects and costs associated with first-stage interventions in a SMART design, we proposed a novel time-varying SMART design in which individuals are re-randomized to the second-stage interventions as soon as a pre-fixed intermediate response is observed. With this strategy, the duration of the first-stage intervention will vary. We developed a time-varying mixed effects model and a joint model that allows for modeling the outcomes of interest (intermediate and final) and the random durations of the first-stage interventions simultaneously. The joint model borrows strength from the survival sub-model in which the duration of the first-stage intervention (i.e., time to response to the first-stage intervention) is modeled. We performed a simulation study to evaluate the statistical properties of these models. Our simulation results showed that the two modeling approaches were both able to provide good estimations of the means of the final outcomes of all the embedded interventions in a SMART. However, the joint modeling approach was more accurate for estimating the coefficients of first-stage interventions and time of the intervention. We conclude that the joint modeling approach provides more accurate parameter estimates and a higher estimated coverage probability than the single time-varying mixed effects model, and we recommend the joint model for analyzing data generated from time-varying SMART designs. In addition, we showed that the proposed time-varying SMART design is cost-efficient and equally effective in selecting the optimal embedded adaptive intervention as the standard SMART design.
Time dependent turbulence modeling and analytical theories of turbulence
NASA Technical Reports Server (NTRS)
Rubinstein, R.
1993-01-01
By simplifying the direct interaction approximation (DIA) for turbulent shear flow, time dependent formulas are derived for the Reynolds stresses which can be included in two equation models. The Green's function is treated phenomenologically, however, following Smith and Yakhot, we insist on the short and long time limits required by DIA. For small strain rates, perturbative evaluation of the correlation function yields a time dependent theory which includes normal stress effects in simple shear flows. From this standpoint, the phenomenological Launder-Reece-Rodi model is obtained by replacing the Green's function by its long time limit. Eddy damping corrections to short time behavior initiate too quickly in this model; in contrast, the present theory exhibits strong suppression of eddy damping at short times. A time dependent theory for large strain rates is proposed in which large scales are governed by rapid distortion theory while small scales are governed by Kolmogorov inertial range dynamics. At short times and large strain rates, the theory closely matches rapid distortion theory, but at long times it relaxes to an eddy damping model.
Choosing colors for map display icons using models of visual search.
Shive, Joshua; Francis, Gregory
2013-04-01
We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.
Eliminating time dispersion from seismic wave modeling
NASA Astrophysics Data System (ADS)
Koene, Erik F. M.; Robertsson, Johan O. A.; Broggini, Filippo; Andersson, Fredrik
2018-04-01
We derive an expression for the error introduced by the second-order accurate temporal finite-difference (FD) operator, as present in the FD, pseudospectral and spectral element methods for seismic wave modeling applied to time-invariant media. The `time-dispersion' error speeds up the signal as a function of frequency and time step only. Time dispersion is thus independent of the propagation path, medium or spatial modeling error. We derive two transforms to either add or remove time dispersion from synthetic seismograms after a simulation. The transforms are compared to previous related work and demonstrated on wave modeling in acoustic as well as elastic media. In addition, an application to imaging is shown. The transforms enable accurate computation of synthetic seismograms at reduced cost, benefitting modeling applications in both exploration and global seismology.
Robust model predictive control for constrained continuous-time nonlinear systems
NASA Astrophysics Data System (ADS)
Sun, Tairen; Pan, Yongping; Zhang, Jun; Yu, Haoyong
2018-02-01
In this paper, a robust model predictive control (MPC) is designed for a class of constrained continuous-time nonlinear systems with bounded additive disturbances. The robust MPC consists of a nonlinear feedback control and a continuous-time model-based dual-mode MPC. The nonlinear feedback control guarantees that the actual trajectory is contained in a tube centred at the nominal trajectory. The dual-mode MPC is designed to ensure asymptotic convergence of the nominal trajectory to zero. This paper extends current results on discrete-time model-based tube MPC and linear-system model-based tube MPC to continuous-time nonlinear model-based tube MPC. The feasibility and robustness of the proposed robust MPC have been demonstrated by theoretical analysis and applications to a cart-damper-spring system and a one-link robot manipulator.
Modeling Non-Gaussian Time Series with Nonparametric Bayesian Model.
Xu, Zhiguang; MacEachern, Steven; Xu, Xinyi
2015-02-01
We present a class of Bayesian copula models whose major components are the marginal (limiting) distribution of a stationary time series and the internal dynamics of the series. We argue that these are the two features with which an analyst is typically most familiar, and hence that these are natural components with which to work. For the marginal distribution, we use a nonparametric Bayesian prior distribution along with a cdf-inverse cdf transformation to obtain large support. For the internal dynamics, we rely on the traditionally successful techniques of normal-theory time series. Coupling the two components gives us a family of (Gaussian) copula transformed autoregressive models. The models provide coherent adjustments of time scales and are compatible with many extensions, including changes in volatility of the series. We describe basic properties of the models, show their ability to recover non-Gaussian marginal distributions, and use a GARCH modification of the basic model to analyze stock index return series. The models are found to provide better fit and improved short-range and long-range predictions than Gaussian competitors. The models are extensible to a large variety of fields, including continuous time models, spatial models, models for multiple series, models driven by external covariate streams, and non-stationary models.
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
Applying MDA to SDR for Space to Model Real-time Issues
NASA Technical Reports Server (NTRS)
Blaser, Tammy M.
2007-01-01
NASA space communications systems have the challenge of designing SDRs with highly-constrained Size, Weight and Power (SWaP) resources. A study is being conducted to assess the effectiveness of applying the MDA Platform-Independent Model (PIM) and one or more Platform-Specific Models (PSM) specifically to address NASA space domain real-time issues. This paper will summarize our experiences with applying MDA to SDR for Space to model real-time issues. Real-time issues to be examined, measured, and analyzed are: meeting waveform timing requirements and efficiently applying Real-time Operating System (RTOS) scheduling algorithms, applying safety control measures, and SWaP verification. Real-time waveform algorithms benchmarked with the worst case environment conditions under the heaviest workload will drive the SDR for Space real-time PSM design.
NASA Astrophysics Data System (ADS)
Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng
2017-06-01
A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were obtained.
NASA Astrophysics Data System (ADS)
Alekseev, M. V.; Vozhakov, I. S.; Lezhnin, S. I.; Pribaturin, N. A.
2017-09-01
A comparative numerical simulation of supercritical fluid outflow using the thermodynamic-equilibrium and non-equilibrium relaxation models of phase transition, for different relaxation times, has been performed. The model with a fixed relaxation time, based on the experimentally determined radius of liquid droplets, was compared with a model with dynamically changing relaxation time, calculated by formula (7) and depending on local parameters. It is shown that the relaxation time varies significantly depending on the thermodynamic conditions of the two-phase medium in the course of the outflow. The application of the proposed model with dynamic relaxation time leads to qualitatively correct results. The model can be used for both vaporization and condensation processes. It is shown that the model can be improved on the basis of processing experimental data on the distribution of the droplet sizes formed during the break-up of the liquid jet.
Two States Mapping Based Time Series Neural Network Model for Compensation Prediction Residual Error
NASA Astrophysics Data System (ADS)
Jung, Insung; Koo, Lockjo; Wang, Gi-Nam
2008-11-01
The objective of this paper was to design a human bio-signal data prediction system that decreases prediction error, using a two-state mapping-based time series neural network BP (back-propagation) model. Neural network models trained in a supervised manner with the error back-propagation algorithm have been widely applied in industry for time series prediction systems. However, such models still leave a residual error between the real value and the prediction result. Therefore, we designed a two-state neural network model that compensates for the residual error, which could be used in the prevention of sudden death and of metabolic syndrome diseases such as hypertension and obesity. We determined that most of the simulation cases were satisfied by the two-state mapping-based time series prediction model. In particular, small-sample-size time series were predicted more accurately than with the standard MLP model.
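One generic reading of the two-stage idea, train a second network on the first network's residuals and add the two predictions, can be sketched with scikit-learn; the lag-embedded sinusoidal "bio-signal" and the network sizes are assumptions, not the paper's architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
# Hypothetical bio-signal: lagged values predict the next sample.
t = np.arange(1500) * 0.02
sig = np.sin(2 * np.pi * 0.5 * t) + 0.05 * rng.standard_normal(len(t))
p = 5
X = np.column_stack([sig[i:len(sig) - p + i] for i in range(p)])
y = sig[p:]
Xtr, ytr, Xte, yte = X[:1000], y[:1000], X[1000:], y[1000:]

# Stage 1: standard back-propagation time series predictor.
net1 = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                    random_state=0).fit(Xtr, ytr)
# Stage 2: a second network trained on stage-1 residuals.
resid = ytr - net1.predict(Xtr)
net2 = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                    random_state=1).fit(Xtr, resid)

pred = net1.predict(Xte) + net2.predict(Xte)   # compensated prediction
print(np.mean((net1.predict(Xte) - yte) ** 2),  # single-stage MSE
      np.mean((pred - yte) ** 2))               # compensated MSE
```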
Score Estimating Equations from Embedded Likelihood Functions under Accelerated Failure Time Model
Ning, Jing; Qin, Jing; Shen, Yu
2014-01-01
SUMMARY The semiparametric accelerated failure time (AFT) model is one of the most popular models for analyzing time-to-event outcomes. One appealing feature of the AFT model is that the observed failure time data can be transformed to identically independent distributed random variables without covariate effects. We describe a class of estimating equations based on the score functions for the transformed data, which are derived from the full likelihood function under commonly used semiparametric models such as the proportional hazards or proportional odds model. The methods of estimating regression parameters under the AFT model can be applied to traditional right-censored survival data as well as more complex time-to-event data subject to length-biased sampling. We establish the asymptotic properties and evaluate the small sample performance of the proposed estimators. We illustrate the proposed methods through applications in two examples. PMID:25663727
Development and Validation of a New Air Carrier Block Time Prediction Model and Methodology
NASA Astrophysics Data System (ADS)
Litvay, Robyn Olson
Commercial airline operations rely on predicted block times as the foundation for critical, successive decisions that include fuel purchasing, crew scheduling, and airport facility usage planning. Small inaccuracies in the predicted block times have the potential to result in huge financial losses and, with profit margins for airline operations currently almost nonexistent, potentially negate any possible profit. Although optimization techniques have produced many models targeting airline operations, the challenge of accurately predicting and quantifying variables months in advance remains elusive. The objective of this work is the development of an airline block time prediction model and methodology that is practical, easily implemented, and easily updated. Actual U.S. domestic flight data from a major airline were used to develop a model that predicts airline block times with increased accuracy and smaller variance of the actual times from the predicted times. This reduction in variance represents tens of millions of dollars (U.S.) per year in operational cost savings for an individual airline. A new methodology for block time prediction is constructed using a regression model as the base, as it has both deterministic and probabilistic components, together with historic block time distributions. The estimation of block times for commercial domestic airline operations requires a probabilistic, general model that can be easily customized for a specific airline's network. As individual block times vary by season, by day, and by time of day, the challenge is to make general, long-term estimations representing the average actual block times while minimizing the variation. Predictions of block times for the third-quarter months of July and August of 2011 were calculated using this new model. The resulting actual block times were obtained from the Research and Innovative Technology Administration, Bureau of Transportation Statistics (Airline On-time Performance Data, 2008-2011) for comparison and analysis. Future block times are shown to be predicted with greater accuracy, without exception and network-wide, for a major U.S. domestic airline.
Kennedy, Curtis E; Turley, James P
2011-10-24
Thousands of children experience cardiac arrest events every year in pediatric intensive care units. Most of these children die. Cardiac arrest prediction tools are used as part of medical emergency team evaluations to identify patients in standard hospital beds that are at high risk for cardiac arrest. There are no models to predict cardiac arrest in pediatric intensive care units though, where the risk of an arrest is 10 times higher than for standard hospital beds. Current tools are based on a multivariable approach that does not characterize deterioration, which often precedes cardiac arrests. Characterizing deterioration requires a time series approach. The purpose of this study is to propose a method that will allow for time series data to be used in clinical prediction models. Successful implementation of these methods has the potential to bring arrest prediction to the pediatric intensive care environment, possibly allowing for interventions that can save lives and prevent disabilities. We reviewed prediction models from nonclinical domains that employ time series data, and identified the steps that are necessary for building predictive models using time series clinical data. We illustrate the method by applying it to the specific case of building a predictive model for cardiac arrest in a pediatric intensive care unit. Time course analysis studies from genomic analysis provided a modeling template that was compatible with the steps required to develop a model from clinical time series data. The steps include: 1) selecting candidate variables; 2) specifying measurement parameters; 3) defining data format; 4) defining time window duration and resolution; 5) calculating latent variables for candidate variables not directly measured; 6) calculating time series features as latent variables; 7) creating data subsets to measure model performance effects attributable to various classes of candidate variables; 8) reducing the number of candidate features; 9) training models for various data subsets; and 10) measuring model performance characteristics in unseen data to estimate their external validity. We have proposed a ten step process that results in data sets that contain time series features and are suitable for predictive modeling by a number of methods. We illustrated the process through an example of cardiac arrest prediction in a pediatric intensive care setting.
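Step 6 of the process, turning a raw time series into window features, might look like the sketch below: level, spread, and trend over a trailing window, computed on an invented heart-rate series. The variable choices and window width are illustrative only.

```python
import numpy as np

def window_features(t, v, t_end, width):
    """Summarize one vital-sign series over the trailing window
    [t_end - width, t_end]: level (mean), spread (std), trend (slope)."""
    m = (t >= t_end - width) & (t <= t_end)
    if m.sum() < 2:
        return np.array([np.nan, np.nan, np.nan])
    slope = np.polyfit(t[m], v[m], 1)[0]
    return np.array([v[m].mean(), v[m].std(), slope])

# Hypothetical heart-rate samples over 12 hours before an event.
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 12, 60))                   # hours
hr = 110 + 3.0 * t + rng.standard_normal(60) * 5.0    # deteriorating trend
print(window_features(t, hr, t_end=12.0, width=4.0))  # feeds the classifier
```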
An approach to checking case-crossover analyses based on equivalence with time-series methods.
Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L
2008-03-01
The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model, be it time-series or case-crossover, fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.
Transient Turbine Engine Modeling with Hardware-in-the-Loop Power Extraction (PREPRINT)
2008-07-01
Furthermore, it must be compatible with a real-time operating system that is capable of running the simulation. For some models, especially those that use... the problem of interfacing the engine/control model to a real-time operating system and associated lab hardware becomes a problem of interfacing these... model in real time. This requires the use of a real-time operating system and a compatible I/O (input/output) board. Figure 1 illustrates the HIL...
Nonlinear Maps for Design of Discrete-Time Models of Neuronal Network Dynamics
2016-03-31
... simulations is to design a neuronal model in the form of difference equations that generates neuronal states in discrete moments of time. In this... responsive firing patterns. We propose to use modern DSP ideas to develop new efficient approaches to the design of such discrete-time models for...
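A well-known example of such a difference-equation neuron is the Rulkov map, sketched below; the parameter values are typical textbook choices and are offered only as an illustration of the discrete-time modeling style the abstract describes, not the project's own model.

```python
import numpy as np

def rulkov(n, alpha=4.3, mu=0.001, sigma=-0.6, x0=-1.0, y0=-3.0):
    """Iterate the Rulkov map, a two-variable difference equation that
    reproduces spiking/bursting dynamics in discrete time."""
    x = np.empty(n)
    y = np.empty(n)
    x[0], y[0] = x0, y0
    for i in range(n - 1):
        x[i + 1] = alpha / (1.0 + x[i] ** 2) + y[i]   # fast variable
        y[i + 1] = y[i] - mu * (x[i] - sigma)         # slow variable
    return x, y

x, y = rulkov(5000)
print(f"membrane-like variable range: [{x.min():.2f}, {x.max():.2f}]")
```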
Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini
2014-12-01
The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales and as these evolve in time. In this paper, a wavelet-based multiscale performance measure for hydrological models is proposed and tested (i.e., Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real case studies investigated in this study included simulation results from both the process-based Soil Water Assessment Tool (SWAT) model and statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the Wavelet-Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and the Fraser River (Canada) were used. The study also explored the effect of the choice of wavelet in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are more reliable measures than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical), and ii) help in model calibration.
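A rough analogue of the proposed measure can be computed with PyWavelets: decompose observed and simulated series with an undecimated transform and evaluate Nash-Sutcliffe efficiency on each detail level. The Haar wavelet, level count, and synthetic series below are assumptions, not the paper's setup.

```python
import numpy as np
import pywt

def multiscale_nse(obs, sim, wavelet="haar", levels=4):
    """Nash-Sutcliffe efficiency computed on each level of a stationary
    (a trous style, undecimated) wavelet decomposition."""
    co = pywt.swt(obs, wavelet, level=levels)
    cs = pywt.swt(sim, wavelet, level=levels)
    out = []
    for (_, do), (_, ds) in zip(co, cs):   # detail coefficients per level
        nse = 1 - np.sum((do - ds) ** 2) / np.sum((do - do.mean()) ** 2)
        out.append(nse)
    return out

rng = np.random.default_rng(9)
obs = np.sin(np.linspace(0, 20 * np.pi, 512)) + 0.1 * rng.standard_normal(512)
sim = obs + 0.2 * rng.standard_normal(512)   # a "model" with fast-scale error
print(multiscale_nse(obs, sim))              # NSE per wavelet level
```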
Linear system identification via backward-time observer models
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Phan, Minh Q.
1992-01-01
Presented here is an algorithm to compute the Markov parameters of a backward-time observer for a backward-time model from experimental input and output data. The backward-time observer Markov parameters are decomposed to obtain the backward-time system Markov parameters (backward-time pulse response samples) for the backward-time system identification. The identified backward-time system Markov parameters are used in the Eigensystem Realization Algorithm to identify a backward-time state-space model, which can be easily converted to the usual forward-time representation. If one reverses time in the model to be identified, what were damped true system modes become modes with negative damping, growing as the reversed time increases. On the other hand, the noise modes in the identification still maintain the property that they are stable. The shift from positive damping to negative damping of the true system modes allows one to distinguish these modes from noise modes. Experimental results are given to illustrate when and to what extent this concept works.
Incorporating Response Times in Item Response Theory Models of Reading Comprehension Fluency
ERIC Educational Resources Information Center
Su, Shiyang
2017-01-01
With the online assessment becoming mainstream and the recording of response times becoming straightforward, the importance of response times as a measure of psychological constructs has been recognized and the literature of modeling times has been growing during the last few decades. Previous studies have tried to formulate models and theories to…
Using Response Times for Item Selection in Adaptive Testing
ERIC Educational Resources Information Center
van der Linden, Wim J.
2008-01-01
Response times on items can be used to improve item selection in adaptive testing provided that a probabilistic model for their distribution is available. In this research, the author used a hierarchical modeling framework with separate first-level models for the responses and response times and a second-level model for the distribution of the…
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
ERIC Educational Resources Information Center
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
Modeling turbidity and flow at daily steps in karst using ARIMA/ARFIMA-GARCH error models
NASA Astrophysics Data System (ADS)
Massei, N.
2013-12-01
Hydrological and physico-chemical variations recorded at karst springs usually reflect highly non-linear processes, and the corresponding time series are then very often highly non-linear as well. Among others, turbidity, an important parameter for water quality and management, is a very complex response of karst systems to rain events, involving direct transfer of particles from point-source recharge as well as resuspension of particles previously deposited and stored within the system. For these reasons, turbidity has so far not been handled well in karst hydrological models. Most of the time, the modeling approaches involve stochastic linear models such as ARIMA-type models and their derivatives (ARMA, ARMAX, ARIMAX, ARFIMA...). Yet, linear models usually fail to represent the whole (stochastic) process variability, and their residuals still contain useful information that can be used either to understand the whole variability or to enhance short-term predictability and forecasting. Model residuals are actually not i.i.d., which can be identified by the fact that squared residuals still present clear and significant serial correlation. Indeed, high (low) amplitudes are followed in time by high (low) amplitudes, which can be seen in residual time series as periods during which amplitudes are higher (lower) than the mean amplitude. This is known as the ARCH effect (AutoRegressive Conditional Heteroskedasticity), and the corresponding non-linear process affecting the residuals of a linear model can be modeled using ARCH or generalized ARCH (GARCH) non-linear modeling, approaches that are very well known in econometrics. Here we investigated the capability of ARIMA-GARCH error models to represent a ~20-yr daily turbidity time series recorded at a karst spring used for the water supply of the city of Le Havre (Upper Normandy, France). ARIMA and ARFIMA models were used to represent the mean behavior of the time series, and the residuals clearly presented a pronounced ARCH effect, as confirmed by Ljung-Box and McLeod-Li tests. We then identified and fitted GARCH models to the residuals of the ARIMA and ARFIMA models in order to model the conditional variance and volatility of the turbidity time series. The results showed that serial correlation was successfully removed from the final standardized residuals of the GARCH model, and hence that the ARIMA-GARCH error model is consistent for modeling such time series. The approach also improved short-term (e.g., a few steps ahead) turbidity forecasting.
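In outline, the two-stage fit described here is reproducible with standard libraries; the sketch below pairs statsmodels (ARIMA mean model) with the arch package (GARCH error model) on a synthetic series, since the Le Havre turbidity record is not reproduced here, and the model orders are illustrative.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox
from arch import arch_model

rng = np.random.default_rng(42)
y = 0.05 * rng.standard_normal(1000).cumsum() + rng.standard_normal(1000)

mean_fit = ARIMA(y, order=(1, 0, 1)).fit()            # stage 1: mean behavior
resid = mean_fit.resid
print(acorr_ljungbox(resid**2, lags=[10]))             # ARCH effect if significant

garch = arch_model(resid, vol="GARCH", p=1, q=1).fit(disp="off")
print(acorr_ljungbox(garch.std_resid**2, lags=[10]))   # should now be removed
```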
Buonaccorsi, Giovanni A; Roberts, Caleb; Cheung, Sue; Watson, Yvonne; O'Connor, James P B; Davies, Karen; Jackson, Alan; Jayson, Gordon C; Parker, Geoff J M
2006-09-01
The quantitative analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) data is subject to model fitting errors caused by motion during the time-series data acquisition. However, the time-varying features that occur as a result of contrast enhancement can confound motion correction techniques based on conventional registration similarity measures. We have therefore developed a heuristic, locally controlled tracer kinetic model-driven registration procedure, in which the model accounts for contrast enhancement, and applied it to the registration of abdominal DCE-MRI data at high temporal resolution. Using severely motion-corrupted data sets that had been excluded from analysis in a clinical trial of an antiangiogenic agent, we compared the results obtained when using different models to drive the tracer kinetic model-driven registration with those obtained when using a conventional registration against the time series mean image volume. Using tracer kinetic model-driven registration, it was possible to improve model fitting by reducing the sum of squared errors but the improvement was only realized when using a model that adequately described the features of the time series data. The registration against the time series mean significantly distorted the time series data, as did tracer kinetic model-driven registration using a simpler model of contrast enhancement. When an appropriate model is used, tracer kinetic model-driven registration influences motion-corrupted model fit parameter estimates and provides significant improvements in localization in three-dimensional parameter maps. This has positive implications for the use of quantitative DCE-MRI for example in clinical trials of antiangiogenic or antivascular agents.
Zhang, Zhen; Yan, Peng; Jiang, Huan; Ye, Peiqing
2014-09-01
In this paper, we consider discrete time-varying internal model-based control design for high-precision tracking of complicated reference trajectories generated by time-varying systems. Based on a novel parallel time-varying internal model structure, asymptotic tracking conditions for the design of internal model units are developed, and a low-order robust time-varying stabilizer is further synthesized. In a discrete-time setting, the high-precision tracking control architecture is deployed on a Voice Coil Motor (VCM) actuated servo gantry system, where numerical simulations and real-time experimental results are provided, achieving tracking errors of around 3.5‰ for frequency-varying signals. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Real-time GIS data model and sensor web service platform for environmental data management.
Gong, Jianya; Geng, Jing; Chen, Zeqiang
2015-01-09
Effective environmental data management is important for human health. In the past, it typically involved developing a dedicated environmental data management system, an approach that often lacks real-time data retrieval and sharing/interoperation capabilities. With the development of information technology, a Geospatial Service Web method has been proposed that can be employed for environmental data management. The purpose of this study is to determine a method to realize environmental data management under the Geospatial Service Web framework. To that end, a real-time GIS (Geographic Information System) data model and a Sensor Web service platform are proposed. The real-time GIS data model manages real-time data, while the Sensor Web service platform, built on Sensor Web technologies, is implemented to support the realization of that data model. Real-time environmental data, such as meteorological data, air quality data, soil moisture data, soil temperature data, and landslide data, are managed in the Sensor Web service platform. In addition, two use cases, real-time air quality monitoring and real-time soil moisture monitoring based on the real-time GIS data model in the Sensor Web service platform, are realized and demonstrated. The total processing times in the two experiments were 3.7 s and 9.2 s, respectively. The experimental results show that integrating the real-time GIS data model with the Sensor Web service platform is an effective way to manage environmental data under the Geospatial Service Web framework.
A multi-element cosmological model with a complex space-time topology
NASA Astrophysics Data System (ADS)
Kardashev, N. S.; Lipatova, L. N.; Novikov, I. D.; Shatskiy, A. A.
2015-02-01
Wormhole models with a complex topology having one entrance and two exits into the same space-time of another universe are considered, as well as models with two entrances from the same space-time and one exit to another universe. These models are used to build a model of a multi-sheeted universe (a multi-element model of the "Multiverse") with a complex topology. Spherical symmetry is assumed in all the models. A Reissner-Nordström black-hole model having no singularity beyond the horizon is constructed. The strength of the central singularity of the black hole is analyzed.
A new time scale based k-epsilon model for near wall turbulence
NASA Technical Reports Server (NTRS)
Yang, Z.; Shih, T. H.
1992-01-01
A k-epsilon model is proposed for wall-bounded turbulent flows. In this model, the eddy viscosity is characterized by a turbulent velocity scale and a turbulent time scale. The time scale is bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using this time scale, and no singularity exists at the wall. The damping function used in the eddy viscosity is chosen to be a function of R_y = k^(1/2) y / ν instead of y+. Hence, the model can be used for flows with separation. The model constants used are the same as in the high Reynolds number standard k-epsilon model, so the proposed model is also suitable for flows far from the wall. Turbulent channel flows at different Reynolds numbers and turbulent boundary layer flows with and without pressure gradient are calculated. Results show that the model predictions are in good agreement with direct numerical simulation and experimental data.
Investigation on the Practicality of Developing Reduced Thermal Models
NASA Technical Reports Server (NTRS)
Lombardi, Giancarlo; Yang, Kan
2015-01-01
Throughout the spacecraft design and development process, detailed instrument thermal models are created to simulate their on-orbit behavior and to ensure that they do not exceed any thermal limits. These detailed models, while generating highly accurate predictions, can sometimes lead to long simulation run times, especially when integrated with a spacecraft observatory model. Therefore, reduced models containing less detail are typically produced in tandem with the detailed models so that results may be more readily available, albeit less accurate. In the current study, both reduced and detailed instrument models are integrated with their associated spacecraft bus models to examine the impact of instrument model reduction on run time and accuracy. Preexisting instrument bus thermal model pairs from several projects were used to determine trends between detailed and reduced thermal models; namely, the Mirror Optical Bench (MOB) on the Gravity and Extreme Magnetism Small Explorer (GEMS) spacecraft, the Advanced Topography Laser Altimeter System (ATLAS) on the Ice, Cloud, and land Elevation Satellite 2 (ICESat-2), and the Neutral Mass Spectrometer (NMS) on the Lunar Atmosphere and Dust Environment Explorer (LADEE). Hot and cold cases were run for each model to capture behavior at both thermal extremes. It was found that, though decreasing the number of nodes from a detailed to a reduced model did reduce the run time, the savings were not large, nor was the relationship between the percentage of nodes removed and the time saved linear. Significant losses in accuracy, however, were observed with greater model reduction. While reduced models are useful in decreasing run time, there exists a threshold of reduction beyond which the loss in accuracy outweighs the benefit of the reduced run time.
A simple analytical model for dynamics of time-varying target leverage ratios
NASA Astrophysics Data System (ADS)
Lo, C. F.; Hui, C. H.
2012-03-01
In this paper we have formulated a simple theoretical model for the dynamics of the time-varying target leverage ratio of a firm under some assumptions based upon empirical observations. In our theoretical model the time evolution of the target leverage ratio of a firm can be derived self-consistently from a set of coupled Ito's stochastic differential equations governing the leverage ratios of an ensemble of firms by the nonlinear Fokker-Planck equation approach. The theoretically derived time paths of the target leverage ratio bear great resemblance to those used in the time-dependent stationary-leverage (TDSL) model [Hui et al., Int. Rev. Financ. Analy. 15, 220 (2006)]. Thus, our simple model is able to provide a theoretical foundation for the selected time paths of the target leverage ratio in the TDSL model. We also examine how the pace of the adjustment of a firm's target ratio, the volatility of the leverage ratio and the current leverage ratio affect the dynamics of the time-varying target leverage ratio. Hence, with the proposed dynamics of the time-dependent target leverage ratio, the TDSL model can be readily applied to generate the default probabilities of individual firms and to assess the default risk of the firms.
Can We Predict Patient Wait Time?
Pianykh, Oleg S; Rosenthal, Daniel I
2015-10-01
The importance of patient wait-time management and predictability can hardly be overestimated: for most hospitals, it is the patient queues that drive and define every bit of clinical workflow. The objective of this work was to study the predictability of patient wait time and identify its most influential predictors. To solve this problem, we developed a comprehensive list of 25 wait-related parameters, suggested in earlier work and observed in our own experiments. All parameters were chosen to be derivable from a typical Hospital Information System dataset. The parameters were fed into several time-predicting models, and the best parameter subsets, discovered through exhaustive model search, were applied to a large sample of actual patient wait data. We were able to discover the most efficient wait-time prediction factors and models, such as the line-size models introduced in this work, which proved to be both accurate and computationally efficient. Finally, the selected models were implemented in our patient waiting areas, displaying predicted wait times on monitors located at the front desks. The limitations of these models are also discussed. Optimal regression models based on wait-line sizes can provide accurate and efficient predictions for patient wait time. Copyright © 2015 American College of Radiology. Published by Elsevier Inc. All rights reserved.
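A toy version of a line-size regression of this kind is shown below; the queue sizes and coefficients are invented, not those from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
line_size = rng.integers(0, 15, size=500)               # patients ahead in queue
wait_min = 5 + 4.2 * line_size + rng.normal(0, 3, 500)  # synthetic wait times

# Ordinary least squares fit: wait ≈ b0 + b1 * line_size
X = np.column_stack([np.ones(len(line_size)), line_size])
b0, b1 = np.linalg.lstsq(X, wait_min, rcond=None)[0]
print(f"predicted wait for 6 people in line: {b0 + b1 * 6:.1f} min")
```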
Time series models on analysing mortality rates and acute childhood lymphoid leukaemia.
Kis, Maria
2005-01-01
In this paper we demonstrate the application of time series models in medical research. Hungarian mortality rates were analysed with autoregressive integrated moving average (ARIMA) models, and seasonal time series models were used to examine data on acute childhood lymphoid leukaemia. ARIMA modelling is demonstrated by two examples: analysis of the mortality rates of ischemic heart diseases and analysis of the mortality rates of cancer of the digestive system. Mathematical expressions are given for the results of the analysis, and the relationships between the time series of mortality rates were studied with ARIMA models. Confidence intervals for the autoregressive parameters were calculated by three methods: the standard normal approximation, estimation based on White's theory, and continuous-time estimation. Analysing the confidence intervals of the first-order autoregressive parameters, we conclude that the continuous-time estimation model yielded much narrower confidence intervals than the other estimations. We also present a new approach to analysing the occurrence of acute childhood lymphoid leukaemia, in which the time series is decomposed into components. The periodicity of acute childhood lymphoid leukaemia in Hungary was examined using the seasonal decomposition time series method. The cyclic trend of the dates of diagnosis revealed that a higher percentage of the peaks fell within the winter months than in the other seasons, demonstrating the seasonal occurrence of childhood leukaemia in Hungary.
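The seasonal decomposition technique mentioned for the leukaemia series is standard; a minimal statsmodels sketch on synthetic monthly counts (not the Hungarian registry data) is:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(3)
idx = pd.date_range("1990-01-01", periods=120, freq="MS")   # 10 years, monthly
counts = (10 + 3 * np.cos(2 * np.pi * idx.month / 12)       # built-in winter peak
          + rng.poisson(2, size=120))
series = pd.Series(counts, index=idx)

result = seasonal_decompose(series, model="additive", period=12)
print(result.seasonal.head(12))   # the recurring within-year pattern
```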
The Effect of Ongoing Exposure Dynamics in Dose Response Relationships
Pujol, Josep M.; Eisenberg, Joseph E.; Haas, Charles N.; Koopman, James S.
2009-01-01
Characterizing infectivity as a function of pathogen dose is integral to microbial risk assessment. Dose-response experiments usually administer doses to subjects at one time. Phenomenological models of the resulting data, such as the exponential and the Beta-Poisson models, ignore dose timing and assume independent risks from each pathogen. Real world exposure to pathogens, however, is a sequence of discrete events where concurrent or prior pathogen arrival affects the capacity of immune effectors to engage and kill newly arriving pathogens. We model immune effector and pathogen interactions during the period before infection becomes established in order to capture the dynamics generating dose timing effects. Model analysis reveals an inverse relationship between the time over which exposures accumulate and the risk of infection. Data from one time dose experiments will thus overestimate per pathogen infection risks of real world exposures. For instance, fitting our model to one time dosing data reveals a risk of 0.66 from 313 Cryptosporidium parvum pathogens. When the temporal exposure window is increased 100-fold using the same parameters fitted by our model to the one time dose data, the risk of infection is reduced to 0.09. Confirmation of this risk prediction requires data from experiments administering doses with different timings. Our model demonstrates that dose timing could markedly alter the risks generated by airborne versus fomite transmitted pathogens. PMID:19503605
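The quoted one-time-dose figures can be back-converted under the classical exponential (single-hit) model that the dynamic model generalizes; this worked example uses only the numbers given in the abstract.

```python
import math

# Exponential dose-response model: P(infection) = 1 - exp(-r * dose),
# assuming independent per-pathogen risks (the assumption the paper relaxes).
risk, dose = 0.66, 313          # one-time dose figures quoted above
r = -math.log(1 - risk) / dose  # implied per-pathogen infectivity
print(f"per-pathogen parameter r = {r:.5f}")          # ~0.00345
print(f"risk at a 50-pathogen dose: {1 - math.exp(-r * 50):.2f}")
```

The paper's point is that this r, fitted to bolus-dose data, overstates the risk when the same total dose arrives spread out in time.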
An algebraic method for constructing stable and consistent autoregressive filters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harlim, John, E-mail: jharlim@psu.edu; Department of Meteorology, the Pennsylvania State University, University Park, PA 16802; Hong, Hoon, E-mail: hong@ncsu.edu
2015-02-15
In this paper, we introduce an algebraic method to construct stable and consistent univariate autoregressive (AR) models of low order for filtering and predicting nonlinear turbulent signals with memory depth. By stable, we refer to the classical stability condition for the AR model. By consistent, we refer to the classical consistency constraints of Adams–Bashforth methods of order two. One attractive feature of this algebraic method is that the model parameters can be obtained without directly knowing any training data set, as opposed to many standard, regression-based parameterization methods. It takes only long-time average statistics as inputs. The proposed method provides a discretization time step interval which guarantees the existence of a stable and consistent AR model and simultaneously produces the parameters for the AR models. In our numerical examples with two chaotic time series with different characteristics of decaying time scales, we find that the proposed AR models produce significantly more accurate short-term predictive skill and comparable filtering skill relative to the linear regression-based AR models. These encouraging results are robust across wide ranges of discretization times, observation times, and observation noise variances. Finally, we also find that the proposed model produces an improved short-time prediction relative to the linear regression-based AR models in forecasting a data set that characterizes the variability of the Madden–Julian Oscillation, a dominant tropical atmospheric wave pattern.
NASA Technical Reports Server (NTRS)
Nowak, Michael A.; Wilms, Joern; Vaughan, Brian A.; Dove, James B.; Begelman, Mitchell C.
1999-01-01
We have recently shown that a 'sphere + disk' geometry Compton corona model provides a good description of Rossi X-ray Timing Explorer (RXTE) observations of the hard/low state of Cygnus X-1. Separately, we have analyzed the temporal data provided by RXTE. In this paper we consider the implications of this timing analysis for our best-fit 'sphere + disk' Comptonization models. We focus our attention on the observed Fourier frequency-dependent time delays between hard and soft photons. We consider whether the observed time delays are created in the disk and merely reprocessed by the corona; created by differences between the hard and soft photon diffusion times in coronae with extremely large radii; or due to 'propagation' of disturbances through the corona. We find that the time delays are most likely created directly within the corona; however, it is currently uncertain which specific model is the most likely explanation. Models that posit a large coronal radius [or equivalently, a large Advection Dominated Accretion Flow (ADAF) region] do not fully address all the details of the observed spectrum. The Compton corona models that do address the full spectrum do not contain dynamical information. We show, however, that simple phenomenological propagation models for the observed time delays in these latter models imply extremely slow characteristic propagation speeds within the coronal region.
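Fourier-frequency-dependent time lags of the kind analyzed here are conventionally estimated from the phase of the cross spectrum; a generic sketch on synthetic light curves (not RXTE data) follows, with the sign written for scipy's conj(X)*Y convention in csd.

```python
import numpy as np
from scipy.signal import csd

rng = np.random.default_rng(2)
fs = 100.0                                   # sampling rate (Hz)
soft = rng.standard_normal(65536)
hard = np.roll(soft, 5) + 0.5 * rng.standard_normal(65536)  # hard lags by 0.05 s

f, Pxy = csd(soft, hard, fs=fs, nperseg=4096)
lags = -np.angle(Pxy[1:]) / (2 * np.pi * f[1:])   # time delay at each frequency
print(lags[:5])                                    # ≈ +0.05 s at low frequencies
```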
Global horizontal irradiance clear sky models : implementation and analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Joshua S.; Hansen, Clifford W.; Reno, Matthew J.
2012-03-01
Clear sky models estimate the terrestrial solar radiation under a cloudless sky as a function of the solar elevation angle, site altitude, aerosol concentration, water vapor, and various atmospheric conditions. This report provides an overview of a number of global horizontal irradiance (GHI) clear sky models, from very simple to complex. Validation of clear-sky models requires comparison of model results to measured irradiance during clear-sky periods. To facilitate validation, we present a new algorithm for automatically identifying clear-sky periods in a time series of GHI measurements. We evaluate the performance of selected clear-sky models using measured data from 30 different sites, totaling about 300 site-years of data, and analyze the variation of the errors across time and location. In terms of error averaged over all locations and times, we found that complex models that correctly account for all the atmospheric parameters are slightly more accurate than other models, but, primarily at low elevations, comparable accuracy can be obtained from some simpler models. However, simpler models often exhibit errors that vary with time of day and season, whereas the errors for complex models vary less over time.
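As an example of the 'very simple' end of the model spectrum, the Haurwitz model needs only the solar zenith angle; the coefficients below are the commonly quoted ones and should be treated as illustrative here rather than as taken from this report.

```python
import numpy as np

def haurwitz_ghi(zenith_deg):
    """Haurwitz clear-sky GHI (W/m^2) from solar zenith angle in degrees.
    Coefficients as commonly quoted; an assumption, not from the text above."""
    cos_z = np.cos(np.radians(zenith_deg))
    ghi = 1098.0 * cos_z * np.exp(-0.057 / cos_z)
    return np.where(cos_z > 0, ghi, 0.0)  # zero when the sun is below horizon

print(haurwitz_ghi(np.array([0.0, 30.0, 60.0, 85.0])))
```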
NASA Astrophysics Data System (ADS)
Prahutama, Alan; Suparti; Wahyu Utami, Tiani
2018-03-01
Regression analysis models the relationship between response variables and predictor variables. The parametric approach to regression imposes strict assumptions on the model form, whereas nonparametric regression requires no such assumptions. Time series data are observations of a variable indexed by time, so if a time series is to be modeled by regression, the response and predictor variables must first be determined. The response variable is the value at time t (y_t), while the predictor variables are significant lags. In nonparametric regression modeling, one developing approach is the Fourier series approach. One of its advantages is the ability to handle data with trigonometric (periodic) patterns. Modeling with a Fourier series requires choosing the number of basis terms K, which can be determined with the Generalized Cross Validation (GCV) method. In modeling inflation for the transportation, communication, and financial services sectors, the Fourier series approach yields an optimal K of 120 parameters with an R-squared of 99%, whereas multiple linear regression yields an R-squared of 90%.
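A minimal version of Fourier-series regression with GCV selection of K might look as follows (synthetic data; the paper's inflation series and exact estimator are not reproduced).

```python
import numpy as np

def fourier_design(t, K):
    """Design matrix [1, t, cos(kt), sin(kt)] for k = 1..K."""
    cols = [np.ones_like(t), t]
    for k in range(1, K + 1):
        cols += [np.cos(k * t), np.sin(k * t)]
    return np.column_stack(cols)

def gcv(y, t, K):
    X = fourier_design(t, K)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    n, p = len(y), X.shape[1]
    return n * rss / (n - p) ** 2      # generalized cross-validation score

rng = np.random.default_rng(5)
t = np.linspace(0, 2 * np.pi, 200)
y = np.sin(3 * t) + 0.3 * rng.standard_normal(200)
best_K = min(range(1, 11), key=lambda K: gcv(y, t, K))
print("K chosen by GCV:", best_K)     # should recover the k = 3 harmonic
```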
Koopman Operator Framework for Time Series Modeling and Analysis
NASA Astrophysics Data System (ADS)
Surana, Amit
2018-01-01
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations or model forms based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be readily identified directly from data using techniques for computing Koopman spectral properties without requiring the explicit knowledge of the generative model. We also introduce different notions of distances on the space of such model forms which is essential for model comparison/clustering. We employ the space of Koopman model forms equipped with distance in conjunction with classical machine learning techniques to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification, and for time series forecasting/anomaly detection in power grid application.
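One concrete way to identify a linear Koopman-style model form directly from data is dynamic mode decomposition; the bare-bones sketch below is our illustration, not the authors' specific algorithm.

```python
import numpy as np

def dmd(X):
    """Least-squares linear operator A with x_{k+1} ≈ A x_k, plus its
    eigenvalues (approximate Koopman spectrum for these observables)."""
    X1, X2 = X[:, :-1], X[:, 1:]
    A = X2 @ np.linalg.pinv(X1)
    eigvals, modes = np.linalg.eig(A)
    return A, eigvals, modes

# Two-dimensional rotating system observed directly
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = np.empty((2, 200))
x[:, 0] = [1.0, 0.0]
for k in range(199):
    x[:, k + 1] = R @ x[:, k]
_, eigvals, _ = dmd(x)
print(eigvals)   # ≈ exp(±iθ), recovering the rotation frequency
```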
Verification of models for ballistic movement time and endpoint variability.
Lin, Ray F; Drury, Colin G
2013-01-01
A hand control movement is composed of several ballistic movements. The time required in performing a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured data of movement time and endpoint variability were then used to verify the models. This study was successful with Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann 1988) predicting more than 90.7% data variance for 84 individual measurements. A new theoretically developed ballistic movement variability model, proved to be better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of stopping-variable error and 88.3% of aiming-variable errors. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting end accuracy and movement time of ballistic movements that are desirable in rapid aiming tasks, such as keying in numbers on a smart phone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.
NASA Astrophysics Data System (ADS)
He, Y.-X.; Angus, D. A.; Blanchard, T. D.; Wang, G.-L.; Yuan, S.-Y.; Garcia, A.
2016-04-01
Extraction of fluids from subsurface reservoirs induces changes in pore pressure, leading not only to geomechanical changes, but also perturbations in seismic velocities and hence observable seismic attributes. Time-lapse seismic analysis can be used to estimate changes in subsurface hydromechanical properties and thus act as a monitoring tool for geological reservoirs. The ability to observe and quantify changes in fluid, stress and strain using seismic techniques has important implications for monitoring risk not only for petroleum applications but also for geological storage of CO2 and nuclear waste scenarios. In this paper, we integrate hydromechanical simulation results with rock physics models and full-waveform seismic modelling to assess time-lapse seismic attribute resolution for dynamic reservoir characterization and hydromechanical model calibration. The time-lapse seismic simulations use a dynamic elastic reservoir model based on a North Sea deep reservoir undergoing large pressure changes. The time-lapse seismic traveltime shifts and time strains calculated from the modelled and processed synthetic data sets (i.e. pre-stack and post-stack data) are in a reasonable agreement with the true earth models, indicating the feasibility of using 1-D strain rock physics transform and time-lapse seismic processing methodology. Estimated vertical traveltime shifts for the overburden and the majority of the reservoir are within ±1 ms of the true earth model values, indicating that the time-lapse technique is sufficiently accurate for predicting overburden velocity changes and hence geomechanical effects. Characterization of deeper structure below the overburden becomes less accurate, where more advanced time-lapse seismic processing and migration is needed to handle the complex geometry and strong lateral induced velocity changes. Nevertheless, both migrated full-offset pre-stack and near-offset post-stack data image the general features of both the overburden and reservoir units. More importantly, the results from this study indicate that integrated seismic and hydromechanical modelling can help constrain time-lapse uncertainty and hence reduce risk due to fluid extraction and injection.
Superdiffusion in a non-Markovian random walk model with a Gaussian memory profile
NASA Astrophysics Data System (ADS)
Borges, G. M.; Ferreira, A. S.; da Silva, M. A. A.; Cressoni, J. C.; Viswanathan, G. M.; Mariz, A. M.
2012-09-01
Most superdiffusive non-Markovian random walk models assume that correlations are maintained at all time scales, e.g., fractional Brownian motion, Lévy walks, the Elephant walk and Alzheimer walk models. In the latter two models the random walker can always "remember" the initial times near t = 0. Assuming jump size distributions with finite variance, the question naturally arises: is superdiffusion possible if the walker is unable to recall the initial times? We give a conclusive answer to this general question by studying a non-Markovian model in which the walker's memory of the past is weighted by a Gaussian centered at time t/2, at which time the walker had one half its present age, and with a standard deviation σt which grows linearly as the walker ages. For large widths we find that the model behaves similarly to the Elephant model, but for small widths this Gaussian memory profile model behaves like the Alzheimer walk model. We also report that the phenomenon of amnestically induced persistence, known to occur in the Alzheimer walk model, arises in the Gaussian memory profile model. We conclude that memory of the initial times is not a necessary condition for generating (log-periodic) superdiffusion. We show that the phenomenon of amnestically induced persistence extends to the case of a Gaussian memory profile.
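A simulation sketch of the memory rule as we read it: the recalled time is drawn from a Gaussian centered at t/2 with width proportional to t, and the recalled step is repeated with probability p, reversed otherwise. All parameter values are illustrative.

```python
import numpy as np

def gaussian_memory_walk(n_steps=10000, p=0.8, width=0.1, seed=0):
    rng = np.random.default_rng(seed)
    steps = np.empty(n_steps, dtype=int)
    steps[0] = 1
    for t in range(1, n_steps):
        # Recall a past time from a Gaussian centered at t/2, width ∝ t
        recalled = int(np.clip(rng.normal(t / 2, max(width * t, 1)), 0, t - 1))
        # Repeat the recalled step with probability p, reverse it otherwise
        steps[t] = steps[recalled] if rng.random() < p else -steps[recalled]
    return np.cumsum(steps)

x = gaussian_memory_walk()
print(x[-1])   # displacement; its scaling with t reveals (super)diffusion
```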
Sudell, Maria; Tudur Smith, Catrin; Gueyffier, François; Kolamunnage-Dona, Ruwanthi
2018-04-15
Joint modelling of longitudinal and time-to-event data is often preferred over separate longitudinal or time-to-event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta-analysis of joint model estimates from multiple studies. We propose a 2-stage method for meta-analysis of joint model estimates. These methods are applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta-analyses of separate longitudinal or time-to-event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Using the real dataset, similar results were obtained by using the separate and joint analyses. However, the simulation study indicated a benefit of use of joint rather than separate methods in a meta-analytic setting where association exists between the longitudinal and time-to-event outcomes. Where evidence of association between longitudinal and time-to-event outcomes exists, results from joint models over standalone analyses should be pooled in 2-stage meta-analyses. © 2017 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
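The second stage of such a 2-stage method is ordinary inverse-variance pooling of the per-study joint-model estimates; a generic sketch with invented numbers:

```python
import numpy as np

# Stage 2: inverse-variance (fixed-effect) pooling of per-study estimates,
# e.g. log hazard ratios for the longitudinal-survival association.
theta = np.array([0.12, 0.08, 0.15])   # per-study estimates (stage 1 output)
se = np.array([0.05, 0.04, 0.06])      # their standard errors

w = 1.0 / se**2
pooled = np.sum(w * theta) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"pooled estimate {pooled:.3f} (SE {pooled_se:.3f})")
```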
Time and outcome framing in intertemporal tradeoffs.
Scholten, Marc; Read, Daniel
2013-07-01
A robust anomaly in intertemporal choice is the delay-speedup asymmetry: Receipts are discounted more, and payments are discounted less, when delayed than when expedited over the same interval. We developed 2 versions of the tradeoff model (Scholten & Read, 2010) to address such situations, in which an outcome is expected at a given time but then its timing is changed. The outcome framing model generalizes the approach taken by the hyperbolic discounting model (Loewenstein & Prelec, 1992): Not obtaining a positive outcome when expected is a worse than expected state, to which people are over-responsive, or hypersensitive, and not incurring a negative outcome when expected is a better than expected state, to which people are under-responsive, or hyposensitive. The time framing model takes a new approach: Delaying a positive outcome or speeding up a negative one involves a loss of time to which people are hypersensitive, and speeding up a positive outcome or delaying a negative one involves a gain of time to which people are hyposensitive. We compare the models on their quantitative predictions of indifference data from matching and preference data from choice. The time framing model systematically outperforms the outcome framing model. PsycINFO Database Record (c) 2013 APA, all rights reserved.
A perturbative approach for enhancing the performance of time series forecasting.
de Mattos Neto, Paulo S G; Ferreira, Tiago A E; Lima, Aranildo R; Vasconcelos, Germano C; Cavalcanti, George D C
2017-04-01
This paper proposes a method to perform time series prediction based on perturbation theory. The approach is based on continuously adjusting an initial forecasting model so that it asymptotically approximates a desired time series model. First, a predictive model generates an initial forecast of a time series. Second, a residual time series is calculated as the difference between the original time series and the initial forecast. If that residual series is not white noise, it can be used to improve the accuracy of the initial model, and a new predictive model is fitted to the residual series. The whole process is repeated until convergence or until the residual series becomes white noise. The output of the method is then given by summing the outputs of all trained predictive models in a perturbative sense. To test the method, an experimental investigation was conducted on six real-world time series. A comparison was made with six other methods tested here and with ten other results found in the literature. Results show that not only is the performance of the initial model significantly improved, but the proposed method also outperforms the results previously published. Copyright © 2017 Elsevier Ltd. All rights reserved.
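The iterative loop described above maps onto a few lines of code. The sketch below uses simple AR base learners (the paper's choice of predictive model may differ), a fixed number of rounds in place of the white-noise stopping test, and ignores the small index offset from the shrinking residual series.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def perturbative_forecast(y, horizon=10, rounds=3, lags=5):
    """Fit a base model, then repeatedly model its residual series
    and sum all forecasts."""
    total = np.zeros(horizon)
    target = np.asarray(y, dtype=float)
    for _ in range(rounds):
        fit = AutoReg(target, lags=lags).fit()
        total += fit.predict(start=len(target), end=len(target) + horizon - 1)
        target = fit.resid          # next round models the residuals
    return total

rng = np.random.default_rng(11)
y = np.sin(np.arange(300) / 5.0) + 0.2 * rng.standard_normal(300)
print(perturbative_forecast(y))
```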
A Comparison and Evaluation of Real-Time Software Systems Modeling Languages
NASA Technical Reports Server (NTRS)
Evensen, Kenneth D.; Weiss, Kathryn Anne
2010-01-01
A model-driven approach to real-time software systems development enables the conceptualization of software, fostering a more thorough understanding of its often complex architecture and behavior while promoting the documentation and analysis of concerns common to real-time embedded systems such as scheduling, resource allocation, and performance. Several modeling languages have been developed to assist in the model-driven software engineering effort for real-time systems, and these languages are beginning to gain traction with practitioners throughout the aerospace industry. This paper presents a survey of several real-time software system modeling languages, namely the Architectural Analysis and Design Language (AADL), the Unified Modeling Language (UML), Systems Modeling Language (SysML), the Modeling and Analysis of Real-Time Embedded Systems (MARTE) UML profile, and the AADL for UML profile. Each language has its advantages and disadvantages, and in order to adequately describe a real-time software system's architecture, a complementary use of multiple languages is almost certainly necessary. This paper aims to explore these languages in the context of understanding the value each brings to the model-driven software engineering effort and to determine if it is feasible and practical to combine aspects of the various modeling languages to achieve more complete coverage in architectural descriptions. To this end, each language is evaluated with respect to a set of criteria such as scope, formalisms, and architectural coverage. An example is used to help illustrate the capabilities of the various languages.
Real-Time Global Nonlinear Aerodynamic Modeling for Learn-To-Fly
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2016-01-01
Flight testing and modeling techniques were developed to accurately identify global nonlinear aerodynamic models for aircraft in real time. The techniques were developed and demonstrated during flight testing of a remotely-piloted subscale propeller-driven fixed-wing aircraft using flight test maneuvers designed to simulate a Learn-To-Fly scenario. Prediction testing was used to evaluate the quality of the global models identified in real time. The real-time global nonlinear aerodynamic modeling algorithm will be integrated and further tested with learning adaptive control and guidance for NASA Learn-To-Fly concept flight demonstrations.
Modeling Information Accumulation in Psychological Tests Using Item Response Times
ERIC Educational Resources Information Center
Ranger, Jochen; Kuhn, Jörg-Tobias
2015-01-01
In this article, a latent trait model is proposed for the response times in psychological tests. The latent trait model is based on the linear transformation model and subsumes popular models from survival analysis, like the proportional hazards model and the proportional odds model. Core of the model is the assumption that an unspecified monotone…
Incorporating time-delays in S-System model for reverse engineering genetic networks.
Chowdhury, Ahsan Raja; Chetty, Madhu; Vinh, Nguyen Xuan
2013-06-18
In any gene regulatory network (GRN), the complex interactions occurring amongst transcription factors and target genes can be either instantaneous or time-delayed. However, many existing modeling approaches currently applied for inferring GRNs are unable to represent both these interactions simultaneously. As a result, all these approaches cannot detect important interactions of the other type. S-System model, a differential equation based approach which has been increasingly applied for modeling GRNs, also suffers from this limitation. In fact, all S-System based existing modeling approaches have been designed to capture only instantaneous interactions, and are unable to infer time-delayed interactions. In this paper, we propose a novel Time-Delayed S-System (TDSS) model which uses a set of delay differential equations to represent the system dynamics. The ability to incorporate time-delay parameters in the proposed S-System model enables simultaneous modeling of both instantaneous and time-delayed interactions. Furthermore, the delay parameters are not limited to just positive integer values (corresponding to time stamps in the data), but can also take fractional values. Moreover, we also propose a new criterion for model evaluation exploiting the sparse and scale-free nature of GRNs to effectively narrow down the search space, which not only reduces the computation time significantly but also improves model accuracy. The evaluation criterion systematically adapts the max-min in-degrees and also systematically balances the effect of network accuracy and complexity during optimization. The four well-known performance measures applied to the experimental studies on synthetic networks with various time-delayed regulations clearly demonstrate that the proposed method can capture both instantaneous and delayed interactions correctly with high precision. The experiments carried out on two well-known real-life networks, namely IRMA and SOS DNA repair network in Escherichia coli show a significant improvement compared with other state-of-the-art approaches for GRN modeling.
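A toy delay differential S-System of the kind described, integrated by Euler stepping, is sketched below; all parameter values, kinetic orders, and delays are invented for illustration and are not from the TDSS paper.

```python
import numpy as np

# Toy two-gene time-delayed S-System:
#   dx_i/dt = alpha_i * prod_j x_j(t - tau_ij)^g_ij - beta_i * prod_j x_j^h_ij
alpha, beta = np.array([2.0, 1.5]), np.array([1.0, 1.0])
g = np.array([[0.0, -0.8], [0.5, 0.0]])    # production kinetic orders
h = np.array([[1.0, 0.0], [0.0, 1.0]])     # degradation kinetic orders
tau = np.array([[0, 300], [200, 0]])       # production delays, in steps

dt, n = 0.01, 5000
x = np.ones((n, 2))
for t in range(1, n):
    for i in range(2):
        prod = alpha[i] * np.prod([x[max(t - 1 - tau[i, j], 0), j] ** g[i, j]
                                   for j in range(2)])
        deg = beta[i] * np.prod(x[t - 1] ** h[i])
        x[t, i] = x[t - 1, i] + dt * (prod - deg)
print(x[-1])    # expression levels after the transient
```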
Time dependence of breakdown in a global fiber-bundle model with continuous damage.
Moral, L; Moreno, Y; Gómez, J B; Pacheco, A F
2001-06-01
A time-dependent global fiber-bundle model of fracture with continuous damage is formulated in terms of a set of coupled nonlinear differential equations. A first integral of this set is analytically obtained. The time evolution of the system is studied by applying a discrete probabilistic method. Several results are discussed emphasizing their differences with the standard time-dependent model. The results obtained show that with this simple model a variety of experimental observations can be qualitatively reproduced.
Factoring out nondecision time in choice reaction time data: Theory and implications.
Verdonck, Stijn; Tuerlinckx, Francis
2016-03-01
Choice reaction time (RT) experiments are an invaluable tool in psychology and neuroscience. A common assumption is that the total choice response time is the sum of a decision and a nondecision part (time spent on perceptual and motor processes). While the decision part is typically modeled very carefully (commonly with diffusion models), a simple and ad hoc distribution (mostly uniform) is assumed for the nondecision component. Nevertheless, it has been shown that misspecification of the nondecision time can severely distort the decision model parameter estimates. In this article, we propose an alternative approach to the estimation of choice RT models that elegantly bypasses the specification of the nondecision time distribution by means of an unconventional convolution of data and decision model distributions (hence called the D*M approach). Once the decision model parameters have been estimated, it is possible to compute a nonparametric estimate of the nondecision time distribution. The technique is tested on simulated data and is shown to systematically remove traditional estimation bias related to misspecified nondecision time, even for a relatively small number of observations. The shape of the actual underlying nondecision time distribution can also be recovered. Next, the D*M approach is applied to a selection of existing diffusion model application articles. For all of these studies, substantial quantitative differences with the original analyses are found. For one study, these differences radically alter its final conclusions, underlining the importance of our approach. Additionally, we find that strongly right-skewed nondecision time distributions are not at all uncommon. (c) 2016 APA, all rights reserved.
Yang, Xilin; Kong, Alice Ps; Luk, Andrea Oy; Ozaki, Risa; Ko, Gary Tc; Ma, Ronald Cw; Chan, Juliana Cn; So, Wing Yee
2014-01-01
Pharmacoepidemiologic analysis can confirm whether drug efficacy in a randomized controlled trial (RCT) translates to effectiveness in real settings. We examined methods used to control for immortal time bias in an analysis of renin-angiotensin system (RAS) inhibitors as the reference cardioprotective drug. We analyzed data from 3928 patients with type 2 diabetes who were recruited into the Hong Kong Diabetes Registry between 1996 and 2005 and followed up to July 30, 2005. Different Cox models were used to obtain hazard ratios (HRs) for cardiovascular disease (CVD) associated with RAS inhibitors. These HRs were then compared to the HR of 0.92 reported in a recent meta-analysis of RCTs. During a median follow-up period of 5.45 years, 7.23% (n = 284) patients developed CVD and 38.7% (n = 1519) were started on RAS inhibitors, with 39.1% of immortal time among the users. In multivariable analysis, time-dependent drug-exposure Cox models and Cox models that moved immortal time from users to nonusers both severely inflated the HR, and time-fixed models that included immortal time deflated the HR. Use of time-fixed Cox models that excluded immortal time resulted in a HR of only 0.89 (95% CI, 0.68-1.17) for CVD associated with RAS inhibitors, which is closer to the values reported in RCTs. In pharmacoepidemiologic analysis, time-dependent drug exposure models and models that move immortal time from users to nonusers may introduce substantial bias in investigations of the effects of RAS inhibitors on CVD in type 2 diabetes.
Consensus time and conformity in the adaptive voter model
NASA Astrophysics Data System (ADS)
Rogers, Tim; Gross, Thilo
2013-09-01
The adaptive voter model is a paradigmatic model in the study of opinion formation. Here we propose an extension for this model, in which conflicts are resolved by obtaining another opinion, and analytically study the time required for consensus to emerge. Our results shed light on the rich phenomenology of both the original and extended adaptive voter models, including a dynamical phase transition in the scaling behavior of the mean time to consensus.
NASA Astrophysics Data System (ADS)
van der Heijden, Sven; Callau Poduje, Ana; Müller, Hannes; Shehu, Bora; Haberlandt, Uwe; Lorenz, Manuel; Wagner, Sven; Kunstmann, Harald; Müller, Thomas; Mosthaf, Tobias; Bárdossy, András
2015-04-01
For the design and operation of urban drainage systems with numerical simulation models, long, continuous precipitation time series with high temporal resolution are necessary. Suitable observed time series are rare. As a result, design concepts often rely on uncertain or unsuitable precipitation data, which renders them uneconomic or unsustainable. An expedient alternative to observed data is the use of long, synthetic rainfall time series as input for the simulation models. Within the project SYNOPSE, several different methods to generate synthetic precipitation data for urban drainage modelling are advanced, tested, and compared. The presented study compares four different precipitation modelling approaches with regard to their ability to reproduce rainfall and runoff characteristics: a parametric stochastic model (alternating renewal approach), a non-parametric stochastic model (resampling approach), a downscaling approach from a regional climate model, and a disaggregation approach based on daily precipitation measurements. All four models produce long precipitation time series with a temporal resolution of five minutes. The synthetic time series are first compared to observed rainfall reference time series. Comparison criteria include event-based statistics such as mean dry spell and wet spell duration, wet spell amount and intensity, long-term means of precipitation sum and number of events, and extreme value distributions for different durations. They are then compared with regard to simulated discharge characteristics using an urban hydrological model on a fictitious sewage network. First results show that all rainfall models are suitable in principle, but with different strengths and weaknesses regarding the rainfall and runoff characteristics considered.
NASA Astrophysics Data System (ADS)
Fen, Cao; XuHai, Yang; ZhiGang, Li; ChuGang, Feng
2016-08-01
The normal consecutive observing model in the Chinese Area Positioning System (CAPS) can only supply observations of one GEO satellite per day from one station, which cannot satisfy the project need of observing many GEO satellites in one day. In order to obtain observations of several GEO satellites in one day, as with GPS/GLONASS/Galileo/BeiDou, a time-sharing observing model for GEO satellites in CAPS is needed. The principle of the time-sharing observing model is illustrated, with subsequent Precise Orbit Determination (POD) experiments using simulated time-sharing observations in 2005 and real time-sharing observations in 2015. In the time-sharing simulation experiments before 2014, time-sharing observation of 6 GEO satellites every 2 h achieves nearly the same orbit precision as the consecutive observing model. In the POD experiments using the real time-sharing observations, the POD precision for ZX12# and Yatai7# is about 3.234 m and 2.570 m, respectively, which indicates that the time-sharing observing model is appropriate for the CBTR system and can realize observation of many GEO satellites in one day.
New analytic results for speciation times in neutral models.
Gernhard, Tanja
2008-05-01
In this paper, we investigate the standard Yule model and a recently studied model of speciation and extinction, the "critical branching process." We develop an analytic way, as opposed to the common simulation approach, of calculating the speciation times in a reconstructed phylogenetic tree. Simple expressions for the density and the moments of the speciation times are obtained. Methods for dating a speciation event become valuable when no time scale is available for the reconstructed phylogenetic tree. A missing time scale could be due to supertree methods, morphological data, or molecular data that violate the molecular clock. Our analytic approach is particularly useful for the model with extinction, since simulations of birth-death processes conditioned on obtaining n extant species today are quite delicate. Further, simulations are very time consuming for large n under both models.
Quantum time crystal by decoherence: Proposal with an incommensurate charge density wave ring
NASA Astrophysics Data System (ADS)
Nakatsugawa, K.; Fujii, T.; Tanda, S.
2017-09-01
We show that the time translation symmetry of a ring system with a macroscopic quantum ground state is broken by decoherence. In particular, we consider a ring-shaped incommensurate charge density wave (ICDW ring) threaded by a fluctuating magnetic flux: the Caldeira-Leggett model is used to model the fluctuating flux as a bath of harmonic oscillators. We show that the charge density expectation value of a quantized ICDW ring coupled to its environment oscillates periodically. The Hamiltonians considered in this model are time-independent, unlike the "Floquet time crystals" considered recently. Our model forms a metastable quantum time crystal with a finite length in space and in time.
State-space prediction model for chaotic time series
NASA Astrophysics Data System (ADS)
Alparslan, A. K.; Sayar, M.; Atilgan, A. R.
1998-08-01
A simple method for predicting the continuation of a scalar chaotic time series ahead in time is proposed. The false nearest neighbors technique, in connection with time-delayed embedding, is employed to reconstruct the state space. A local forecasting model based upon the time evolution of neighboring points in the reconstructed phase space is suggested. A moving root-mean-square error is utilized to monitor the error along the prediction horizon. The model is tested on the convection amplitude of the Lorenz model. The results indicate that, with approximately 100 cycles of training data, the prediction follows the actual continuation very closely for about six cycles. The proposed model, like other state-space forecasting models, captures the long-term behavior of the system due to the use of spatial neighbors in the state space.
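A compact version of such a nearest-neighbor state-space forecaster (time-delay embedding plus a one-step local prediction) is sketched below; the embedding dimension, delay, and test signal are arbitrary choices, not those of the paper.

```python
import numpy as np

def embed(x, dim=3, tau=1):
    """Time-delay embedding of a scalar series into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def predict_next(x, dim=3, tau=1):
    """Predict x[t+1] from the successor of the nearest embedded neighbor."""
    E = embed(x, dim, tau)
    query, history = E[-1], E[:-1]
    nearest = np.argmin(np.linalg.norm(history - query, axis=1))
    return x[nearest + (dim - 1) * tau + 1]   # successor of that neighbor

t = np.linspace(0, 60, 1500)
x = np.sin(t) + 0.5 * np.sin(2.2 * t)         # stand-in for a chaotic signal
print(predict_next(x), x[-1])
```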
Wetting and drying of soil in response to precipitation: Data analysis, modeling, and forecasting
Basak, Aniruddha; Kulkarni, Chinmay; Schmidt, Kevin M.; Mengshoel, Ole
2016-01-01
This paper investigates methods to analyze and forecast soil moisture time series. We extend an existing Antecedent Water Index (AWI) model, which expresses soil moisture as a function of time and rainfall. Unfortunately, the existing AWI model does not forecast effectively for time periods beyond a few hours. To overcome this limitation, we develop a novel AWI-based model. Our model accumulates rainfall over a time interval and can fit a diverse range of wetting and drying curves. In addition, parameters in our model reflect the hydrologic redistribution processes of gravity and suction. We validate our models using experimental soil moisture and rainfall time series data collected from steep-gradient post-wildfire sites in Southern California, where rapid landscape change was observed in response to small to moderate rain storms. We found that our novel model fits the data for three distinct soil textures occurring at different depths below the ground surface (5, 15, and 30 cm). Our model also successfully forecasts soil moisture trends, such as drying and wetting rates.
A comparison between MS-VECM and MS-VECMX on economic time series data
NASA Astrophysics Data System (ADS)
Phoong, Seuk-Wai; Ismail, Mohd Tahir; Sek, Siok-Kun
2014-07-01
Multivariate Markov switching models are able to provide useful information for the study of structural change, since a regime switching model can analyze time-varying data and capture the mean and variance of the dependence structure in the series. This paper investigates the effects of the oil price and the gold price on the stock market returns of Malaysia, Singapore, Thailand and Indonesia. Two forms of multivariate Markov switching models are used, namely the mean-adjusted heteroskedastic Markov Switching Vector Error Correction Model (MSMH-VECM) and the mean-adjusted heteroskedastic Markov Switching Vector Error Correction Model with an exogenous variable (MSMH-VECMX). These two models are used to capture the transition probabilities of the data, since real financial time series always exhibit nonlinear properties such as regime switching, cointegrating relations, and jumps or breaks over time. A comparison between the two models indicates that the MSMH-VECM model fits the time series data better than the MSMH-VECMX model. In addition, it was found that the oil price and the gold price affected stock market changes in the four selected countries.
A comparison of the stochastic and machine learning approaches in hydrologic time series forecasting
NASA Astrophysics Data System (ADS)
Kim, T.; Joo, K.; Seo, J.; Heo, J. H.
2016-12-01
Hydrologic time series forecasting is an essential task in water resources management, and it becomes more difficult due to the complexity of the runoff process. Traditional stochastic models such as the ARIMA family have been used as a standard approach for time series modeling and forecasting of hydrological variables. Due to the nonlinearity in hydrologic time series data, machine learning approaches have been studied for their advantage in discovering relevant features of nonlinear relations among variables. This study aims to compare the predictive ability of a traditional stochastic model and a machine learning approach. A seasonal ARIMA model was used as the traditional time series model, and a Random Forest model, a decision-tree ensemble method using a multiple-predictor approach, was applied as the machine learning approach. In the application, monthly inflow data from 1986 to 2015 for the Chungju dam in South Korea were used for modeling and forecasting. To evaluate the performance of the models, both one-step-ahead and multi-step-ahead forecasting were applied, and the root mean squared error and mean absolute error of the two models were compared.
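In outline, the comparison can be set up as below, with synthetic monthly data and placeholder orders/hyperparameters rather than those tuned for the Chungju record.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(9)
n = 360                                        # 30 years of monthly "inflow"
m = np.arange(n)
y = 50 + 30 * np.sin(2 * np.pi * m / 12) + rng.gamma(2.0, 5.0, n)
train, test = y[:-12], y[-12:]

# Stochastic approach: seasonal ARIMA
sarima = SARIMAX(train, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
pred_sarima = sarima.forecast(steps=12)

# Machine learning approach: Random Forest on 12 lagged values
L = 12
X = np.column_stack([y[i : n - L + i] for i in range(L)])[: len(train) - L]
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, train[L:])
pred_rf, window = [], list(train[-L:])
for _ in range(12):                            # recursive multi-step forecast
    yhat = rf.predict(np.array(window).reshape(1, -1))[0]
    pred_rf.append(yhat)
    window = window[1:] + [yhat]

print("MAE SARIMA:", mean_absolute_error(test, pred_sarima))
print("MAE RF    :", mean_absolute_error(test, pred_rf))
```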
A validation of ground ambulance pre-hospital times modeled using geographic information systems.
Patel, Alka B; Waters, Nigel M; Blanchard, Ian E; Doig, Christopher J; Ghali, William A
2012-10-03
Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data. The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records. There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were longer than the average previously reported by 7-8 minutes. Actual EMS pre-hospital times across our study area were significantly higher than the estimated times modeled using GIS and the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area. The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a non-US context. The preference for researchers should be to use actual EMS trip records from the proposed research study area. In the absence of EMS trip data researchers should determine which modeling assumptions more accurately reflect the EMS protocols across their study area.
Würthwein, Gudrun; Lanvers-Kaminsky, Claudia; Hempel, Georg; Gastine, Silke; Möricke, Anja; Schrappe, Martin; Karlsson, Mats O; Boos, Joachim
2017-12-01
The pharmacokinetics of the polyethylene glycol (PEG)-conjugated asparaginase Oncaspar® are characterized by an increase in elimination over time. The focus of our analysis is a better understanding of this time-dependency. In paediatric acute lymphoblastic leukemia therapy (AIEOP-BFM ALL 2009), two administrations of Oncaspar® (2500 U/m2 intravenously) in the induction phase (14-day interval) and one single administration in reinduction were followed by weekly monitoring of asparaginase activity. Non-linear mixed-effects modeling techniques (NONMEM) were used. Samples indicating immunological inactivation were excluded in order to describe the pharmacokinetics under standard conditions. Models with time-constant or time-varying clearance (CL), as well as transit compartment models with an increase in CL over a chain of compartments, were investigated. Models with time-constant elimination could not adequately describe 6107 asparaginase activities from 1342 patients. Implementing a time-varying CL improved the fit. Models with an increase of CL over time after dose (Emax and Weibull functions) were superior to models with an increase of CL over time after the first administration. However, a transit compartment model proved to be the best structural model. The increase in elimination of PEGylated asparaginase appears to be driven by physicochemical processes that are drug-related. The hydrolytic instability of the drug observed in vitro leads to the hypothesis that this increase in CL might be due to an in vivo hydrolysis of the unstable ester bond between PEG and the enzyme, combined with an increased elimination of the partly de-PEGylated enzyme. (Trial registered at www.clinicaltrials.gov, NCT0111744.)
Huang, Yangxin; Lu, Xiaosun; Chen, Jiaqing; Liang, Juan; Zangmeister, Miriam
2017-10-27
Longitudinal and time-to-event data are often observed together. Finite mixture models are currently used to analyze nonlinear heterogeneous longitudinal data, which, by relaxing the homogeneity restriction of nonlinear mixed-effects (NLME) models, can cluster individuals into one of the pre-specified classes with class membership probabilities. This clustering may have clinical significance and be associated with clinically important time-to-event data. This article develops a joint modeling approach combining a finite mixture of NLME models for longitudinal data with a Cox proportional hazards model for time-to-event data, linked by individual latent class indicators, under a Bayesian framework. The proposed joint models and method are applied to a real AIDS clinical trial data set, followed by simulation studies to assess the performance of the proposed joint model and a naive two-step model in which the finite mixture model and the Cox model are fitted separately.
Exploratory Study for Continuous-time Parameter Estimation of Ankle Dynamics
NASA Technical Reports Server (NTRS)
Kukreja, Sunil L.; Boyle, Richard D.
2014-01-01
Recently, a parallel pathway model to describe ankle dynamics was proposed. This model provides a relationship between ankle angle and net ankle torque as the sum of a linear and a nonlinear contribution. A technique to identify parameters of this model in discrete-time has been developed. However, these parameters are a nonlinear combination of the continuous-time physiology, making insight into the underlying physiology impossible. The stable and accurate estimation of continuous-time parameters is critical for accurate disease modeling, clinical diagnosis, robotic control strategies, development of optimal exercise protocols for long-term space exploration, sports medicine, etc. This paper explores the development of a system identification technique to estimate the continuous-time parameters of ankle dynamics. The effectiveness of this approach is assessed via simulation of a continuous-time model of ankle dynamics with typical parameters found in clinical studies. The results show that although this technique improves estimates, it does not provide robust estimates of continuous-time parameters of ankle dynamics. We therefore conclude that alternative modeling strategies and more advanced estimation techniques should be considered for future work.
Can a void mimic the Λ in ΛCDM?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sundell, Peter; Vilja, Iiro; Mörtsell, Edvard, E-mail: pgsund@utu.fi, E-mail: edvard@fysik.su.se, E-mail: iiro.vilja@utu.fi
2015-08-01
We investigate Lemaître-Tolman-Bondi (LTB) models, whose early time evolution and bang time are homogeneous and whose distance-redshift relation and local Hubble parameter are inherited from the ΛCDM model. We show that the obtained LTB models and the ΛCDM model predict different relative local expansion rates and that the Hubble functions of the models diverge increasingly with redshift. The LTB models show tension between low redshift baryon acoustic oscillation and supernova observations, and including Lyman-α forest or cosmic microwave background observations only accentuates the better fit of the ΛCDM model compared to the LTB model. The result indicates that additional degrees of freedom are needed to explain the observations, for example by renouncing spherical symmetry, homogeneous bang time, negligible effects of pressure, or the early time homogeneity assumption.
A k-epsilon modeling of near wall turbulence
NASA Technical Reports Server (NTRS)
Yang, Z.; Shih, T. H.
1991-01-01
A k-epsilon model is proposed for turbulent bounded flows. In this model, the turbulent velocity scale and turbulent time scale are used to define the eddy viscosity. The time scale is shown to be bounded from below by the Kolmogorov time scale. The dissipation equation is reformulated using the time scale, removing the need to introduce the pseudo-dissipation. A damping function is chosen such that the shear stress satisfies the near wall asymptotic behavior. The model constants used are the same as the model constants in the commonly used high turbulent Reynolds number k-epsilon model. Fully developed turbulent channel flows and turbulent boundary layer flows over a flat plate at various Reynolds numbers are used to validate the model. The model predictions were found to be in good agreement with the direct numerical simulation data.
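The abstract does not print the eddy-viscosity definition; the following is a hedged reconstruction from its description (a velocity and a time scale, with the time scale bounded below by the Kolmogorov scale). The symbols C_mu, C_T and f_mu are assumed notation, not taken from the paper.

```latex
\[
  \nu_t \;=\; C_{\mu}\, f_{\mu}\, k \, T_t ,
  \qquad
  T_t \;=\; \max\!\left( \frac{k}{\varepsilon},\;
                         C_T \sqrt{\frac{\nu}{\varepsilon}} \right)
\]
```

Here k is the turbulent kinetic energy, ε its dissipation rate, f_μ the near-wall damping function, and the second argument of the max is the Kolmogorov time scale that bounds T_t from below, as stated in the abstract.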
Anomalous Transport in Single-Well Push-Pull Tracer Tests
NASA Astrophysics Data System (ADS)
Chen, K.; Zhan, H.
2016-12-01
A Single-Well Push-Pull (SWPP) tracer test was conducted to estimate the hydraulic and transport properties of a fractured aquifer in the Newark basin. A common phenomenon observed for a set of SWPP tests with different incubation times is the heavy tailing of the breakthrough curve (BTC) in the pumping phase. A novel model with fractional-in-time and/or fractional-in-space derivatives was developed to interpret the anomalous transport behavior in SWPP tests. The fractional models, including fractional-in-time, fractional-in-space and fractional-in-time-and-space variants, were solved in radial coordinates using an implicit Euler method. A semi-analytical solution of the mobile-immobile (MIM) model was derived as well for comparison purposes. It is found that the fractional-in-space and fractional-in-time-and-space models match the experimental data well. The BTC of the MIM model drops more slowly than that of the fractional-in-space model at the beginning of pumping and drops much faster at late time. The best match of the fractional-in-space model with the experimental data demonstrates that non-local transport in space plays an important role in SWPP tests conducted in fractured aquifers.
Statistical Analysis of Notational AFL Data Using Continuous Time Markov Chains
Meyer, Denny; Forbes, Don; Clarke, Stephen R.
2006-01-01
Animal biologists commonly use continuous time Markov chain models to describe patterns of animal behaviour. In this paper we consider the use of these models for describing AFL football. In particular we test the assumptions for continuous time Markov chain models (CTMCs), with time, distance and speed values associated with each transition. Using a simple event categorisation it is found that a semi-Markov chain model is appropriate for this data. This validates the use of Markov chains for future studies in which the outcomes of AFL matches are simulated. Key points: a comparison of four AFL matches suggests similarity in terms of transition probabilities for events and the mean times, distances and speeds associated with each transition. The Markov assumption appears to be valid. However, the speed, time and distance distributions associated with each transition are not exponential, suggesting that a semi-Markov model can be used to model and simulate play. Team-identified events and directions associated with transitions are required to develop the model into a tool for the prediction of match outcomes. PMID:24357946
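A hedged sketch of the basic estimation step behind such a model: the maximum-likelihood CTMC transition rate from state i to j is the number of i-to-j transitions divided by the total time spent in i. The event labels and timestamps below are invented, not the paper's notational data.

```python
import numpy as np
from collections import defaultdict

# invented notational log: (event state, timestamp in seconds)
events = [("kick", 0.0), ("mark", 3.2), ("handball", 5.1), ("kick", 6.0),
          ("mark", 9.8), ("kick", 11.0), ("turnover", 14.5)]

holding = defaultdict(list)                      # holding times per state
counts = defaultdict(lambda: defaultdict(int))   # transition counts

for (s, t0), (s_next, t1) in zip(events, events[1:]):
    holding[s].append(t1 - t0)
    counts[s][s_next] += 1

# CTMC MLE: q(i -> j) = N(i -> j) / total time spent in state i
for s in holding:
    total_time = sum(holding[s])
    for s_next, n in counts[s].items():
        print(f"q({s} -> {s_next}) = {n / total_time:.3f} per second")
```

Testing whether the holding times are exponential (versus, say, gamma-distributed) is exactly the check that led the authors to prefer a semi-Markov formulation.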
A Q-Ising model application for linear-time image segmentation
NASA Astrophysics Data System (ADS)
Bentrem, Frank W.
2010-10-01
A computational method is presented which efficiently segments digital grayscale images by directly applying the Q-state Ising (or Potts) model. Since the Potts model was first proposed in 1952, physicists have studied lattice models to gain deep insights into magnetism and other disordered systems. For some time, researchers have realized that digital images may be modeled in much the same way as these physical systems (i.e., as a square lattice of numerical values). A major drawback in using Potts model methods for image segmentation is that, with conventional methods, it processes in exponential time. Advances have been made via certain approximations to reduce the segmentation process to power-law time. However, in many applications (such as for sonar imagery), real-time processing requires much greater efficiency. This article contains a description of an energy minimization technique that applies four Potts (Q-Ising) models directly to the image and processes in linear time. The result is analogous to partitioning the system into regions of four classes of magnetism. This direct Potts segmentation technique is demonstrated on photographic, medical, and acoustic images.
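The article's exact algorithm is not reproduced here; as a hedged stand-in, the sketch below minimizes a Potts energy (data term plus neighbour-disagreement penalty) with ICM-style sweeps, which shares the linear-per-sweep cost the abstract emphasizes. The class count q, coupling beta, and sweep count are illustrative.

```python
import numpy as np

def potts_segment(img, q=4, beta=2.0, n_sweeps=3):
    """Potts-model segmentation sketch via local energy minimization.
    Each sweep is linear in the number of pixels; parameters are assumptions."""
    # initial labels from intensity quantiles; class centers fixed thereafter
    edges = np.quantile(img, np.linspace(0, 1, q + 1)[1:-1])
    labels = np.digitize(img, edges)
    centers = np.array([img[labels == k].mean() for k in range(q)])
    H, W = img.shape
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                nbrs = [labels[x, y] for x, y in
                        ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= x < H and 0 <= y < W]
                # energy per class: data term + Potts penalty for disagreement
                e = [(img[i, j] - centers[k]) ** 2
                     + beta * sum(k != nb for nb in nbrs) for k in range(q)]
                labels[i, j] = int(np.argmin(e))
    return labels

# toy image: four intensity blocks plus noise
img = np.random.default_rng(1).normal(0, 0.1, (32, 32)) + \
      np.kron(np.arange(4.0).reshape(2, 2), np.ones((16, 16)))
print(np.unique(potts_segment(img)))
```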
Connor W. Barth; Susan E. Meyer; Julie Beckstead; Phil S. Allen
2015-01-01
Population-based threshold models using hydrothermal time (HTT) have been widely used to model seed germination. We used HTT to model conidial germination and mycelial growth for the seed pathogen Pyrenophora semeniperda in a novel approach to understanding its interactions with host seeds. Germination time courses and mycelial growth rates for P. semeniperda were...
NASA Astrophysics Data System (ADS)
Chang, Ailian; Sun, HongGuang; Zheng, Chunmiao; Lu, Bingqing; Lu, Chengpeng; Ma, Rui; Zhang, Yong
2018-07-01
Fractional-derivative models have been developed recently to interpret various hydrologic dynamics, such as dissolved contaminant transport in groundwater. However, they have not been applied to quantify other fluid dynamics, such as gas transport through complex geological media. This study reviewed previous gas transport experiments conducted in laboratory columns and real-world oil-gas reservoirs and found that gas dynamics exhibit typical sub-diffusive behavior characterized by heavy late-time tailing in the gas breakthrough curves (BTCs), which cannot be effectively captured by classical transport models. Numerical tests and field applications of the time fractional convection-diffusion equation (fCDE) have shown that the fCDE model can capture the observed gas BTCs including their apparent positive skewness. Sensitivity analysis further revealed that the three parameters used in the fCDE model, including the time index, the convection velocity, and the diffusion coefficient, play different roles in interpreting the delayed gas transport dynamics. In addition, the model comparison and analysis showed that the time fCDE model is efficient in application. Therefore, the time fractional-derivative models can be conveniently extended to quantify gas transport through natural geological media such as complex oil-gas reservoirs.
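The abstract does not print the governing equation; a common one-dimensional form of a time fractional convection-diffusion equation, consistent with the description above (time index γ, convection velocity v, diffusion coefficient D), is sketched below. The paper's exact formulation may differ.

```latex
\[
  \frac{\partial^{\gamma} C}{\partial t^{\gamma}}
  \;=\;
  -\,v\,\frac{\partial C}{\partial x}
  \;+\;
  D\,\frac{\partial^{2} C}{\partial x^{2}},
  \qquad 0 < \gamma \le 1 .
\]
```

For γ = 1 the classical convection-diffusion equation is recovered, while γ < 1 produces the delayed, heavy late-time tailing (sub-diffusion) observed in the gas breakthrough curves.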
Chi, Chih-Lin; Zeng, Wenjun; Oh, Wonsuk; Borson, Soo; Lenskaia, Tatiana; Shen, Xinpeng; Tonellato, Peter J
2017-12-01
Prediction of the onset and progression of cognitive decline and dementia is important both for understanding the underlying disease processes and for planning health care for populations at risk. Predictors identified in research studies are typically assessed at one point in time. In this manuscript, we argue that an accurate model for predicting cognitive status over relatively long periods requires inclusion of time-varying components that are sequentially assessed at multiple time points (e.g., in multiple follow-up visits). We developed a pilot model to test the feasibility of using either estimated or observed risk factors to predict cognitive status. We developed two models, the first using a sequential estimation of risk factors originally obtained from 8 years prior, then improved by optimization. This model can predict how cognition will change over relatively long time periods. The second model uses observed rather than estimated time-varying risk factors and, as expected, results in better prediction. This model can be applied when newly observed data are acquired in a follow-up visit. The performances of both models, evaluated in 10-fold cross-validation and in various patient subgroups, provide supporting evidence for these pilot models. Each model consists of multiple base prediction units (BPUs), which were trained using the same set of data. The difference in usage and function between the two models is the source of input data: either estimated or observed data. In the next step of model refinement, we plan to integrate the two types of data together to flexibly predict dementia status and changes over time, when some time-varying predictors are measured only once and others are measured repeatedly. Computationally, both data types provide upper and lower bounds for predictive performance. Copyright © 2017 Elsevier Inc. All rights reserved.
Ngwa, Julius S; Cabral, Howard J; Cheng, Debbie M; Pencina, Michael J; Gagnon, David R; LaValley, Michael P; Cupples, L Adrienne
2016-11-03
Typical survival studies follow individuals to an event and measure explanatory variables for that event, sometimes repeatedly over the course of follow-up. The Cox regression model has been used widely in analyses of time to diagnosis or death from disease. The associations between the survival outcome and time dependent measures may be biased unless they are modeled appropriately. In this paper we explore the Time Dependent Cox Regression Model (TDCM), which quantifies the effect of repeated measures of covariates in the analysis of time to event data. This model is commonly used in biomedical research but sometimes does not explicitly adjust for the times at which time dependent explanatory variables are measured. This approach can yield different estimates of association compared to a model that adjusts for these times. In order to address the question of how different these estimates are from a statistical perspective, we compare the TDCM to Pooled Logistic Regression (PLR) and Cross Sectional Pooling (CSP), considering models that adjust and do not adjust for time in PLR and CSP. In a series of simulations we found that time adjusted CSP provided identical results to the TDCM, while the PLR showed larger parameter estimates compared to the time adjusted CSP and the TDCM in scenarios with high event rates. We also observed upwardly biased estimates in the unadjusted CSP and unadjusted PLR methods. The time adjusted PLR had a positive bias in the time dependent Age effect, with reduced bias when the event rate is low. The PLR methods showed a negative bias in the Sex effect, a subject level covariate, when compared to the other methods. The Cox models yielded reliable estimates for the Sex effect in all scenarios considered. We conclude that survival analyses that explicitly account in the statistical model for the times at which time dependent covariates are measured provide more reliable estimates compared to unadjusted analyses. We present results from the Framingham Heart Study, in which lipid measurements and myocardial infarction events were collected over a period of 26 years.
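A hedged sketch of a time-dependent Cox fit in long (start, stop] format, which makes the measurement times of time-dependent covariates explicit, is shown below using the lifelines library. The toy data are invented and are not Framingham values.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# long format: one row per covariate-measurement interval per subject
df = pd.DataFrame({
    "id":    [1, 1, 2, 2, 3, 4, 4, 5, 6, 6],
    "start": [0, 5, 0, 5, 0, 0, 6, 0, 0, 4],
    "stop":  [5, 9, 5, 12, 7, 6, 10, 8, 4, 11],
    "lipid": [200, 240, 180, 185, 220, 210, 230, 190, 205, 225],  # time-dependent
    "sex":   [1, 1, 0, 0, 1, 0, 0, 1, 0, 0],                      # subject-level
    "event": [0, 1, 0, 0, 1, 0, 1, 0, 0, 0],  # event flag at end of interval
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```

Splitting follow-up at each measurement time, as in this data layout, is precisely the "adjusting for the times at which time dependent covariates are measured" that the paper finds yields more reliable estimates.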
Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions
NASA Astrophysics Data System (ADS)
Soltani, S. S.; Cvetkovic, V.; Destouni, G.
2017-12-01
The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion on two levels: morphological dispersion and macro-dispersion. The study gives new insights into shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as the "dynamic model", with corresponding KPA computations for three real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully-integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows a significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography, and therefore application and further development of the simple "kinematic pathway" approach is promising for their modeling.
Hybrid model for forecasting time series with trend, seasonal and calendar variation patterns
NASA Astrophysics Data System (ADS)
Suhartono; Rahayu, S. P.; Prastyo, D. D.; Wijayanti, D. G. P.; Juliyanto
2017-09-01
Most monthly time series data in economics and business in Indonesia and other Muslim countries not only contain trend and seasonal patterns, but are also affected by two types of calendar variation effects, i.e. the effects of the number of working or trading days, and holiday effects. The purpose of this research is to develop a hybrid model, a combination of several forecasting models, to predict time series that contain trend, seasonal and calendar variation patterns. This hybrid model is a combination of classical models (namely time series regression and the ARIMA model) and/or modern methods (an artificial intelligence method, i.e. Artificial Neural Networks). A simulation study was used to show that the proposed procedure for building the hybrid model works well for forecasting time series with trend, seasonal and calendar variation patterns. Furthermore, the proposed hybrid model is applied to forecasting real data, i.e. monthly data on the inflow and outflow of currency at Bank Indonesia. The results show that the hybrid model tends to provide more accurate forecasts than the individual forecasting models. Moreover, this result is in line with the findings of the M3 competition, i.e. that a hybrid model on average provides a more accurate forecast than an individual model.
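A hedged two-stage sketch of such a hybrid: a time series regression absorbs the trend, seasonal dummies and a moving-holiday dummy, and an ARIMA model is then fitted to the residuals. The data are synthetic stand-ins for the Bank Indonesia series, and the Artificial Neural Network component the paper also considers is omitted.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
n = 120
t = np.arange(n)
season = np.tile(np.eye(12), (10, 1))          # monthly seasonal dummies
holiday = np.zeros(n)
holiday[rng.choice(n, 10, replace=False)] = 1  # moving-holiday stand-in
y = 0.5 * t + season @ rng.normal(10, 2, 12) + 15 * holiday \
    + np.convolve(rng.normal(0, 1, n), [1.0, 0.6], "same")

# stage 1: regression for trend, seasonal and calendar structure
X = np.column_stack([t, season, holiday])      # seasonal dummies span the intercept
ols = sm.OLS(y, X).fit()

# stage 2: ARIMA on the regression residuals (order is illustrative)
resid_model = ARIMA(ols.resid, order=(1, 0, 0)).fit()

hybrid_fit = ols.fittedvalues + resid_model.fittedvalues
print("RMSE, regression only:", round(float(np.sqrt(np.mean(ols.resid ** 2))), 3))
print("RMSE, hybrid         :", round(float(np.sqrt(np.mean((y - hybrid_fit) ** 2))), 3))
```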
Chen, Feng; Chen, Suren; Ma, Xiaoxiang
2016-01-01
Traffic and environmental conditions (e.g., weather conditions), which frequently change with time, have a significant impact on crash occurrence. Traditional crash frequency models with large temporal scales and aggregated variables are not sufficient to capture the time-varying nature of driving environmental factors, causing significant loss of critical information in crash frequency modeling. This paper aims at developing crash frequency models with refined temporal scales for complex driving environments, with such an effort providing more detailed and accurate crash risk information which can allow for more effective and proactive traffic management and law enforcement intervention. Zero-inflated negative binomial (ZINB) models with site-specific random effects are developed with unbalanced panel data to analyze hourly crash frequency on highway segments. The real-time driving environment information, including traffic, weather and road surface condition data, sourced primarily from the Road Weather Information System, is incorporated into the models along with site-specific road characteristics. The estimation results of the unbalanced panel data ZINB models suggest there are a number of factors influencing crash frequency, including time-varying factors (e.g., visibility and hourly traffic volume) and site-varying factors (e.g., speed limit). The study confirms the unique significance of real-time weather, road surface condition and traffic data to crash frequency modeling. PMID:27322306
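A hedged sketch of the core model class on simulated data: statsmodels provides a zero-inflated negative binomial estimator. The predictors (visibility, hourly volume) are invented stand-ins for the Road Weather Information System data, and the paper's site-specific random effects are omitted for brevity.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(3)
n = 2000
vis = rng.uniform(0.1, 10, n)            # visibility (km), invented
vol = rng.uniform(100, 2000, n)          # hourly traffic volume, invented
mu = np.exp(-4 + 0.0008 * vol - 0.05 * vis)
crashes = rng.poisson(mu) * rng.binomial(1, 0.7, n)  # structural extra zeros

X = sm.add_constant(np.column_stack([vis, vol]))
model = ZeroInflatedNegativeBinomialP(crashes, X, exog_infl=sm.add_constant(vis))
res = model.fit(maxiter=500, disp=0)
print(res.params.round(4))
```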
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballard, Sanford; Hipp, James R.; Begnaud, Michael L.
2016-10-11
The task of monitoring the Earth for nuclear explosions relies heavily on seismic data to detect, locate, and characterize suspected nuclear tests. In this study, motivated by the need to locate suspected explosions as accurately and precisely as possible, we developed a tomographic model of the compressional wave slowness in the Earth's mantle with primary focus on the accuracy and precision of travel-time predictions for P and Pn ray paths through the model. Path-dependent travel-time prediction uncertainties are obtained by computing the full 3D model covariance matrix and then integrating slowness variance and covariance along ray paths from source to receiver. Path-dependent travel-time prediction uncertainties reflect the amount of seismic data that was used in tomography, with very low values for paths represented by abundant data in the tomographic data set and very high values for paths through portions of the model that were poorly sampled by the tomography data set. The pattern of travel-time prediction uncertainty is a direct result of the off-diagonal terms of the model covariance matrix and underscores the importance of incorporating the full model covariance matrix in the determination of travel-time prediction uncertainty. In addition, the computed pattern of uncertainty differs significantly from that of 1D distance-dependent travel-time uncertainties computed using traditional methods, which are only appropriate for use with travel times computed through 1D velocity models.
Modified dwell time optimization model and its applications in subaperture polishing.
Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen
2014-05-20
The optimization of dwell time is an important procedure in deterministic subaperture polishing. We present a modified optimization model of dwell time using an iterative, numerical method, assisted by extended surface forms and tool paths for suppressing the edge effect. Compared with discrete convolution and linear equation models, the proposed model has essential compatibility with arbitrary tool paths, multiple tool influence functions (TIFs) in one optimization, and asymmetric TIFs. The emulational fabrication of a Φ200 mm workpiece by the proposed model yields a smooth, continuous, and non-negative dwell time map with a root-mean-square (RMS) convergence rate of 99.6%, and the optimization costs much less time. With the proposed model, the influences of TIF size and path interval on convergence rate and polishing time are optimized, respectively, for typical low and middle spatial-frequency errors. Results show that (1) the TIF size is nonlinearly inversely proportional to convergence rate and polishing time, and a TIF size of ~1/7 of the workpiece size is preferred; and (2) the polishing time is less sensitive to path interval, but increasing the interval markedly reduces the convergence rate, so a path interval of ~1/8-1/10 of the TIF size is deemed appropriate. The proposed model was deployed on JR-1800 and MRF-180 machines. Figuring results for a Φ920 mm Zerodur paraboloid and a Φ100 mm Zerodur plane yield RMS errors of 0.016λ and 0.013λ (λ=632.8 nm), respectively, and thereby validate the feasibility of the proposed dwell time model for subaperture polishing.
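A hedged 1-D illustration of the underlying problem in matrix form: predicted removal equals a TIF matrix times a non-negative dwell time vector, solvable by non-negative least squares. This mirrors only the structure of the problem; the paper's iterative scheme, 2-D tool paths and edge-effect extensions are not reproduced, and the Gaussian TIF is an assumption.

```python
import numpy as np
from scipy.optimize import nnls

n = 200                                     # surface sample points, invented
x = np.linspace(-1, 1, n)
error = 0.3 * (1 + np.sin(3 * np.pi * x))   # non-negative surface error map

tif_width = n // 7                          # ~1/7 of the workpiece, per the paper
A = np.zeros((n, n))
for j in range(n):                          # column j: Gaussian TIF at dwell point j
    d = np.arange(n) - j
    A[:, j] = np.exp(-(d / (tif_width / 2.0)) ** 2)

dwell, rnorm = nnls(A, error)               # smooth, non-negative dwell time map
print("post-figuring residual RMS:", rnorm / np.sqrt(n))
```

The non-negativity constraint is what guarantees a physically realizable dwell time map, echoing the "non-negative dwell time map" requirement stated in the abstract.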
NASA Astrophysics Data System (ADS)
Syam, Nur Syamsi; Maeng, Seongjin; Kim, Myo Gwang; Lim, Soo Yeon; Lee, Sang Hoon
2018-05-01
A large dead time of a Geiger Mueller (GM) detector may cause a large count loss in radiation measurements and consequently may cause distortion of the Poisson statistic of radiation events into a new distribution. The new distribution will have different statistical parameters compared to the original distribution. Therefore, the variance, skewness, and excess kurtosis in association with the observed count rate of the time interval distribution for well-known nonparalyzable, paralyzable, and nonparalyzable-paralyzable hybrid dead time models of a Geiger Mueller detector were studied using Monte Carlo simulation (GMSIM). These parameters were then compared with the statistical parameters of a perfect detector to observe the change in the distribution. The results show that the behaviors of the statistical parameters for the three dead time models were different. The values of the skewness and the excess kurtosis of the nonparalyzable model are equal or very close to those of the perfect detector, which are ≅2 for skewness, and ≅6 for excess kurtosis, while the statistical parameters in the paralyzable and hybrid model obtain minimum values that occur around the maximum observed count rates. The different trends of the three models resulting from the GMSIM simulation can be used to distinguish the dead time behavior of a GM counter; i.e. whether the GM counter can be described best by using the nonparalyzable, paralyzable, or hybrid model. In a future study, these statistical parameters need to be analyzed further to determine the possibility of using them to determine a dead time for each model, particularly for paralyzable and hybrid models.
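A hedged Monte Carlo sketch in the spirit of GMSIM: Poisson arrivals are passed through nonparalyzable and paralyzable dead times, and the interval statistics of the recorded events are compared with the perfect-detector benchmarks the abstract cites (skewness ≅2, excess kurtosis ≅6 for exponential intervals). The rate, dead time tau and event count are illustrative.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(4)
rate, tau, n_events = 5000.0, 1e-4, 100_000
arrivals = np.cumsum(rng.exponential(1 / rate, n_events))

def nonparalyzable(t, tau):
    rec, last = [], -np.inf
    for ti in t:                 # only recorded events restart the dead time
        if ti - last >= tau:
            rec.append(ti)
            last = ti
    return np.array(rec)

def paralyzable(t, tau):
    gaps = np.diff(t)            # every arrival extends the dead period
    return t[1:][gaps >= tau]    # recorded iff preceding gap exceeds tau

for name, rec in [("perfect", arrivals),
                  ("nonparalyzable", nonparalyzable(arrivals, tau)),
                  ("paralyzable", paralyzable(arrivals, tau))]:
    iv = np.diff(rec)
    print(f"{name:15s} skewness={skew(iv):.2f}  excess kurtosis={kurtosis(iv):.2f}")
```

Because the nonparalyzable model only shifts the exponential interval distribution, its skewness and excess kurtosis stay near 2 and 6, consistent with the abstract's observation that this model matches the perfect detector.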
Avian seasonal productivity is often modeled as a time-limited stochastic process. Many mathematical formulations have been proposed, including individual based models, continuous-time differential equations, and discrete Markov models. All such models typically include paramete...
Runoff forecasting using a Takagi-Sugeno neuro-fuzzy model with online learning
NASA Astrophysics Data System (ADS)
Talei, Amin; Chua, Lloyd Hock Chye; Quek, Chai; Jansson, Per-Erik
2013-04-01
A study using a local learning Neuro-Fuzzy System (NFS) was undertaken for a rainfall-runoff modeling application. The local learning model was first tested on three different catchments: an outdoor experimental catchment measuring 25 m2 (Catchment 1), a small urban catchment 5.6 km2 in size (Catchment 2), and a large rural watershed with an area of 241.3 km2 (Catchment 3). The results obtained from the local learning model were comparable to or better than those obtained from physically based models, i.e. the Kinematic Wave Model (KWM), the Storm Water Management Model (SWMM), and the Hydrologiska Byråns Vattenbalansavdelning (HBV) model. The local learning algorithm also required a shorter training time than a global learning NFS model. The local learning model was next tested in real-time mode, where the model was continuously adapted when presented with current information in real time. The real-time implementation of the local learning model gave better results, without the need for retraining, when compared to a batch NFS model, which had to be retrained periodically in order to achieve similar results.
A STUDY ON TEMPORAL DISTRIBUTION OF FREIGHT TRANSPORTATION IN CONSIDERATION OF DAILY WORK-LIFE CYCLE
NASA Astrophysics Data System (ADS)
Kitaoka, Daiki; Hara, Hidetaka; Oeda, Yoshinao; Sumi, Tomonori
As advanced freight services are demanded, the time-related requirements for freight transportation become more and more significant. This study, focusing on the temporal distribution of freight transportation in response to travel time, developed a shipment departure time decision model for each item, aiming at quantitatively grasping social requirements in the time domain. The model takes account of the daily work cycles of both shippers and carriers, along with the travel time. The proposed model has a structure similar to that derived in previous studies taking account of the daily living cycles of individuals. The model properly reproduced the temporal distribution of shipment departure times, which changes depending on the length of the necessary lead time for each item.
Continuous-time discrete-space models for animal movement
Hanks, Ephraim M.; Hooten, Mevin B.; Alldredge, Mat W.
2015-01-01
The processes influencing animal movement and resource selection are complex and varied. Past efforts to model behavioral changes over time used Bayesian statistical models with variable parameter space, such as reversible-jump Markov chain Monte Carlo approaches, which are computationally demanding and inaccessible to many practitioners. We present a continuous-time discrete-space (CTDS) model of animal movement that can be fit using standard generalized linear modeling (GLM) methods. This CTDS approach allows for the joint modeling of location-based as well as directional drivers of movement. Changing behavior over time is modeled using a varying-coefficient framework which maintains the computational simplicity of a GLM approach, and variable selection is accomplished using a group lasso penalty. We apply our approach to a study of two mountain lions (Puma concolor) in Colorado, USA.
Sun, Bo; Sunkavalli, Kalyan; Ramamoorthi, Ravi; Belhumeur, Peter N; Nayar, Shree K
2007-01-01
The properties of virtually all real-world materials change with time, causing their bidirectional reflectance distribution functions (BRDFs) to be time varying. However, none of the existing BRDF models and databases take time variation into consideration; they represent the appearance of a material at a single time instance. In this paper, we address the acquisition, analysis, modeling, and rendering of a wide range of time-varying BRDFs (TVBRDFs). We have developed an acquisition system that is capable of sampling a material's BRDF at multiple time instances, with each time sample acquired within 36 sec. We have used this acquisition system to measure the BRDFs of a wide range of time-varying phenomena, which include the drying of various types of paints (watercolor, spray, and oil), the drying of wet rough surfaces (cement, plaster, and fabrics), the accumulation of dusts (household and joint compound) on surfaces, and the melting of materials (chocolate). Analytic BRDF functions are fit to these measurements and the model parameters' variations with time are analyzed. Each category exhibits interesting and sometimes nonintuitive parameter trends. These parameter trends are then used to develop analytic TVBRDF models. The analytic TVBRDF models enable us to apply effects such as paint drying and dust accumulation to arbitrary surfaces and novel materials.
TIME Impact - a new user-friendly tuberculosis (TB) model to inform TB policy decisions.
Houben, R M G J; Lalli, M; Sumner, T; Hamilton, M; Pedrazzoli, D; Bonsu, F; Hippner, P; Pillay, Y; Kimerling, M; Ahmedov, S; Pretorius, C; White, R G
2016-03-24
Tuberculosis (TB) is the leading cause of death from infectious disease worldwide, predominantly affecting low- and middle-income countries (LMICs), where resources are limited. As such, countries need to be able to choose the most efficient interventions for their respective settings. Mathematical models can be valuable tools to inform rational policy decisions and improve resource allocation, but are often unavailable or inaccessible for LMICs, particularly in TB. We developed TIME Impact, a user-friendly TB model that enables local capacity building and strengthens country-specific policy discussions to inform funding applications at the (sub-)national level (e.g. Ministry of Finance) or to international donors (e.g. the Global Fund to Fight AIDS, Tuberculosis and Malaria). TIME Impact is an epidemiological transmission model nested in TIME, a set of TB modelling tools available for free download within the widely-used Spectrum software. The TIME Impact model reflects key aspects of the natural history of TB, with additional structure for HIV/ART, drug resistance, treatment history and age. TIME Impact enables national TB programmes (NTPs) and other TB policymakers to better understand their own TB epidemic, plan their response, apply for funding and evaluate the implementation of the response. The explicit aim of TIME Impact's user-friendly interface is to enable training of local and international TB experts towards independent use. During application of TIME Impact, close involvement of the NTPs and other local partners also builds critical understanding of the modelling methods, assumptions and limitations inherent to modelling. This is essential to generate broad country-level ownership of the modelling data inputs and results. In turn, it stimulates discussions and a review of the current evidence and assumptions, strengthening the decision-making process in general. TIME Impact has been effectively applied in a variety of settings. In South Africa, it informed the first South African HIV and TB Investment Cases and successfully leveraged additional resources from the National Treasury at a time of austerity. In Ghana, a long-term TIME model-centred interaction with the NTP provided new insights into the local epidemiology and guided resource allocation decisions to improve impact.
Gupta, Nidhi; Christiansen, Caroline Stordal; Hanisch, Christiana; Bay, Hans; Burr, Hermann; Holtermann, Andreas
2017-01-01
Objectives: To investigate the differences between questionnaire-based and accelerometer-based sitting time, and to develop a model for improving the accuracy of questionnaire-based sitting time in predicting accelerometer-based sitting time. Methods: 183 workers in a cross-sectional study reported sitting time per day using a single question during the measurement period, and wore 2 Actigraph GT3X+ accelerometers on the thigh and trunk for 1–4 working days to determine their actual sitting time per day using the validated Acti4 software. Least squares regression models were fitted with questionnaire-based sitting time and other self-reported predictors to predict accelerometer-based sitting time. Results: Questionnaire-based and accelerometer-based average sitting times were ≈272 and ≈476 min/day, respectively. A low Pearson correlation (r=0.32), high mean bias (204.1 min) and wide limits of agreement (549.8 to −139.7 min) between questionnaire-based and accelerometer-based sitting time were found. The prediction model based on questionnaire-based sitting time explained 10% of the variance in accelerometer-based sitting time. Inclusion of 9 self-reported predictors in the model increased the explained variance to 41%, with 10% optimism using a resampling bootstrap validation. Based on a split validation analysis, the prediction model developed on ≈75% of the workers (n=132) reduced the mean and the SD of the difference between questionnaire-based and accelerometer-based sitting time by 64% and 42%, respectively, in the remaining 25% of the workers. Conclusions: This study indicates that questionnaire-based sitting time has low validity and that a prediction model can be one solution to materially improve the precision of questionnaire-based sitting time. PMID:28093433
Signatures of ecological processes in microbial community time series.
Faust, Karoline; Bauchinger, Franziska; Laroche, Béatrice; de Buyl, Sophie; Lahti, Leo; Washburne, Alex D; Gonze, Didier; Widder, Stefanie
2018-06-28
Growth rates, interactions between community members, stochasticity, and immigration are important drivers of microbial community dynamics. In sequencing data analysis, such as network construction and community model parameterization, we make implicit assumptions about the nature of these drivers and thereby restrict model outcome. Despite apparent risk of methodological bias, the validity of the assumptions is rarely tested, as comprehensive procedures are lacking. Here, we propose a classification scheme to determine the processes that gave rise to the observed time series and to enable better model selection. We implemented a three-step classification scheme in R that first determines whether dependence between successive time steps (temporal structure) is present in the time series and then assesses with a recently developed neutrality test whether interactions between species are required for the dynamics. If the first and second tests confirm the presence of temporal structure and interactions, then parameters for interaction models are estimated. To quantify the importance of temporal structure, we compute the noise-type profile of the community, which ranges from black in case of strong dependency to white in the absence of any dependency. We applied this scheme to simulated time series generated with the Dirichlet-multinomial (DM) distribution, Hubbell's neutral model, the generalized Lotka-Volterra model and its discrete variant (the Ricker model), and a self-organized instability model, as well as to human stool microbiota time series. The noise-type profiles for all but DM data clearly indicated distinctive structures. The neutrality test correctly classified all but DM and neutral time series as non-neutral. The procedure reliably identified time series for which interaction inference was suitable. Both tests were required, as we demonstrated that all structured time series, including those generated with the neutral model, achieved a moderate to high goodness of fit to the Ricker model. We present a fast and robust scheme to classify community structure and to assess the prevalence of interactions directly from microbial time series data. The procedure not only serves to determine ecological drivers of microbial dynamics, but also to guide selection of appropriate community models for prediction and follow-up analysis.
Doubly stochastic Poisson process models for precipitation at fine time-scales
NASA Astrophysics Data System (ADS)
Ramesh, Nadarajah I.; Onof, Christian; Xie, Dichao
2012-09-01
This paper considers a class of stochastic point process models, based on doubly stochastic Poisson processes, in the modelling of rainfall. We examine the application of this class of models, a neglected alternative to the widely-known Poisson cluster models, in the analysis of fine time-scale rainfall intensity. These models are mainly used to analyse tipping-bucket raingauge data from a single site but an extension to multiple sites is illustrated which reveals the potential of this class of models to study the temporal and spatial variability of precipitation at fine time-scales.
NASA AVOSS Fast-Time Models for Aircraft Wake Prediction: User's Guide (APA3.8 and TDP2.1)
NASA Technical Reports Server (NTRS)
Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew J.; Limon Duparcmeur, Fanny M.
2016-01-01
NASA's current distribution of fast-time wake vortex decay and transport models includes APA (Version 3.8) and TDP (Version 2.1). This User's Guide provides detailed information on the model inputs, file formats, and model outputs. A brief description of the Memphis 1995, Dallas/Fort Worth 1997, and the Denver 2003 wake vortex datasets is given along with the evaluation of models. A detailed bibliography is provided which includes publications on model development, wake field experiment descriptions, and applications of the fast-time wake vortex models.
NASA Astrophysics Data System (ADS)
Harman, C. J.
2015-12-01
Surface water hydrologic models are increasingly used to analyze the transport of solutes, such as nitrate, through the landscape. However, many of these models cannot adequately capture the effect of groundwater flow paths, which can have long travel times and accumulate legacy contaminants, releasing them to streams over decades. If these long lag times are not accounted for, the short-term efficacy of management activities to reduce nitrogen loads may be overestimated. Models that adopt a simple 'well-mixed' assumption, leading to an exponential transit time distribution at steady state, cannot adequately capture the broadly skewed nature of groundwater transit times in typical watersheds. Here I will demonstrate how StorAge Selection functions can be used to capture the long lag times of groundwater in a subwatershed-based hydrologic model framework typical of models like SWAT, HSPF, HBV, PRMS and others. These functions can be selected and calibrated to reproduce historical data where available, but can also be fitted to the results of a steady-state groundwater transport model like MODFLOW/MODPATH, allowing those results to directly inform the parameterization of an unsteady surface water model. The long tails of the transit time distribution predicted by the groundwater model can then be completely captured by the surface water model. Examples of this application in the Chesapeake Bay watersheds and elsewhere will be given.
Multi-Step Time Series Forecasting with an Ensemble of Varied Length Mixture Models.
Ouyang, Yicun; Yin, Hujun
2018-05-01
Many real-world problems require modeling and forecasting of time series, such as weather temperature, electricity demand, stock prices and foreign exchange (FX) rates. Often, the tasks involve predicting over a long-term period, e.g. several weeks or months. Most existing time series models are inherently one-step predictors, that is, they predict one time point ahead. Multi-step or long-term prediction is difficult and challenging due to the lack of information and uncertainty or error accumulation. The main existing approaches, iterative and independent, either use a one-step model recursively or treat the multi-step task as an independent model. They generally perform poorly in practical applications. In this paper, as an extension of the self-organizing mixture autoregressive (AR) model, the varied length mixture (VLM) models are proposed to model and forecast time series over multiple steps. The key idea is to preserve the dependencies between the time points within the prediction horizon. Training data are segmented to various lengths corresponding to various forecasting horizons, and the VLM models are trained in a self-organizing fashion on these segments to capture these dependencies in its component AR models of various predicting horizons. The VLM models form a probabilistic mixture of these varied length models. A combination of short and long VLM models and an ensemble of them are proposed to further enhance the prediction performance. The effectiveness of the proposed methods and their marked improvements over the existing methods are demonstrated through a number of experiments on synthetic data, real-world FX rates and weather temperatures.
Failure time analysis with unobserved heterogeneity: Earthquake duration time of Turkey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ata, Nihal, E-mail: nihalata@hacettepe.edu.tr; Kadilar, Gamze Özel, E-mail: gamzeozl@hacettepe.edu.tr
Failure time models assume that all units are subject to the same risks embodied in the hazard functions. In this paper, unobserved sources of heterogeneity that are not captured by covariates are included in the failure time models. Destructive earthquakes in Turkey since 1900 are used to illustrate the models, and the inter-event time between two consecutive earthquakes is defined as the failure time. The paper demonstrates how seismicity and tectonic/physical parameters can potentially influence the spatio-temporal variability of earthquakes, and presents several advantages compared to more traditional approaches.
Finite State Models of Manned Systems: Validation, Simplification, and Extension.
1979-11-01
... model a time set is needed. A time set is some set T together with a binary relation defined on T which linearly orders the set. If "model time" is discrete, so is T; continuous time is represented by a set corresponding to a subset of the non-negative real numbers. In the following discussion, time ... defined as sequences, over time, of input and output values. The notion of sequences or trajectories is formalized as: A^T = {x | x : T → A}, B^T = {y | y : T → B}.
2013-01-01
ARIMA Models, Temporal Clustering of Conflicts ... variance across a distribution. Autoregressive integrated moving average (ARIMA) models are used with time-series data sets and are designed to capture ...
NASA Astrophysics Data System (ADS)
Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.
2017-04-01
To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time differences, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of the input and output variables are used as training samples to construct the model, which can reduce the effects of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To avoid an excessively high updating frequency of the model, a confidence value is introduced, which is updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method reduces computation effectively and improves prediction accuracy by making use of process information, reflecting the process characteristics accurately.
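A hedged sketch of the scheme on simulated data: inputs and outputs are differenced, a PLS model is fitted on a moving window, and the model is refitted only when the prediction error exceeds a threshold (a crude stand-in for the paper's adaptive confidence value). The data, window size and threshold are invented; the CBA/PTA process data are not public.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
n, p, window, threshold = 500, 6, 100, 0.5
X = rng.normal(size=(n, p))
drift = 0.002 * np.arange(n)                   # slow time-varying behaviour
y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.8, -0.2]) + drift + rng.normal(0, 0.1, n)

# work on time differences to suppress slow nonlinearity and drift
dX, dy = np.diff(X, axis=0), np.diff(y)

model = PLSRegression(n_components=3).fit(dX[:window], dy[:window])
errors, updates = [], 0
for t in range(window, len(dy)):
    pred = float(model.predict(dX[t:t + 1]).ravel()[0])
    err = abs(pred - dy[t])
    errors.append(err)
    if err > threshold:                        # confidence check failed
        model.fit(dX[t - window:t], dy[t - window:t])  # moving-window refit
        updates += 1
print(f"mean abs error {np.mean(errors):.3f}, model updates: {updates}")
```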
The ‘hit’ phenomenon: a mathematical model of human dynamics interactions as a stochastic process
NASA Astrophysics Data System (ADS)
Ishii, Akira; Arakaki, Hisashi; Matsuda, Naoya; Umemura, Sanae; Urushidani, Tamiko; Yamagata, Naoya; Yoshida, Narihiko
2012-06-01
A mathematical model for the ‘hit’ phenomenon in entertainment within a society is presented as a stochastic process of human dynamics interactions. The model uses only the advertisement budget time distribution as an input, and word-of-mouth (WOM), represented by posts on social network systems, is used as data to make a comparison with the calculated results. The unit of time is days. The WOM distribution in time is found to be very close to the revenue distribution in time. Calculations for the Japanese motion picture market based on the mathematical model agree well with the actual revenue distribution in time.
Trend time-series modeling and forecasting with neural networks.
Qi, Min; Zhang, G Peter
2008-05-01
Despite its great importance, there has been no general consensus on how to model the trends in time-series data. Compared to traditional approaches, neural networks (NNs) have shown some promise in time-series forecasting. This paper investigates how to best model trend time series using NNs. Four different strategies (raw data, raw data with time index, detrending, and differencing) are used to model various trend patterns (linear, nonlinear, deterministic, stochastic, and breaking trend). We find that with NNs differencing often gives meritorious results regardless of the underlying data generating processes (DGPs). This finding is also confirmed by the real gross national product (GNP) series.
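A hedged sketch of the differencing strategy the paper finds effective: difference a trending series, train a small NN on lagged differences, and integrate the one-step forecast back to the level. The series, lag count and network size are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
n, lags = 300, 4
y = np.cumsum(rng.normal(0.5, 1.0, n))     # stochastic trend (random walk + drift)
dy = np.diff(y)                            # differencing removes the trend

X = np.column_stack([dy[i:len(dy) - lags + i] for i in range(lags)])
target = dy[lags:]
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                  random_state=0).fit(X[:-1], target[:-1])

d_hat = nn.predict(X[-1:])[0]              # one-step-ahead difference forecast
print(f"predicted next level {y[-2] + d_hat:.2f} vs actual {y[-1]:.2f}")
```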
Modeling of switching regulator power stages with and without zero-inductor-current dwell time
NASA Technical Reports Server (NTRS)
Lee, F. C. Y.; Yu, Y.
1979-01-01
State-space techniques are employed to derive accurate models for the three basic switching converter power stages: buck, boost, and buck/boost operating with and without zero-inductor-current dwell time. A generalized procedure is developed which treats the continuous-inductor-current mode without dwell time as a special case of the discontinuous-current mode when the dwell time vanishes. Abrupt changes of system behavior, including a reduction of the system order when the dwell time appears, are shown both analytically and experimentally. Merits resulting from the present modeling technique in comparison with existing modeling techniques are illustrated.
Michael C. Dietze; Rodrigo Vargas; Andrew D. Richardson; Paul C. Stoy; Alan G. Barr; Ryan S. Anderson; M. Altaf Arain; Ian T. Baker; T. Andrew Black; Jing M. Chen; Philippe Ciais; Lawrence B. Flanagan; Christopher M. Gough; Robert F. Grant; David Hollinger; R. Cesar Izaurralde; Christopher J. Kucharik; Peter Lafleur; Shugang Liu; Erandathie Lokupitiya; Yiqi Luo; J. William Munger; Changhui Peng; Benjamin Poulter; David T. Price; Daniel M. Ricciuto; William J. Riley; Alok Kumar Sahoo; Kevin Schaefer; Andrew E. Suyker; Hanqin Tian; Christina Tonitto; Hans Verbeeck; Shashi B. Verma; Weifeng Wang; Ensheng Weng
2011-01-01
Ecosystem models are important tools for diagnosing the carbon cycle and projecting its behavior across space and time. Despite the fact that ecosystems respond to drivers at multiple time scales, most assessments of model performance do not discriminate different time scales. Spectral methods, such as wavelet analyses, present an alternative approach that enables the...
Time Series Model Identification by Estimating Information.
1982-11-01
Better Modeling of Electrostatic Discharge in an Insulator
NASA Technical Reports Server (NTRS)
Pekov, Mihail
2010-01-01
An improved mathematical model has been developed for the time dependence of the buildup or decay of electric charge in a high-resistivity (nominally insulating) material. The model is intended primarily for use in extracting the DC electrical resistivity of such a material from voltage-versus-current measurements performed repeatedly on a sample of the material over a time comparable to the longest characteristic times (typically of the order of months) that govern the evolution of relevant properties of the material. This model is an alternative to a prior simplistic macroscopic model that yields results differing from the results of the time-dependent measurements by two to three orders of magnitude.
A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.
Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad
2012-01-01
The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of the human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable despite the considerably large deformations that occurred. There was good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be used effectively in real-time applications.
Modeling of Engine Parameters for Condition-Based Maintenance of the MTU Series 2000 Diesel Engine
2016-09-01
are suitable. To model the behavior of the engine, an autoregressive distributed lag (ARDL) time series model of engine speed and exhaust gas temperature is derived. The lag length for ARDL is determined by whitening of the residuals.
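A hedged sketch of such an ARDL fit is below, assuming statsmodels >= 0.13 (which provides statsmodels.tsa.ardl.ARDL); the engine data here are synthetic placeholders, not the MTU measurements.

```python
# Hypothetical ARDL sketch: exhaust gas temperature regressed on lagged engine speed.
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ARDL

rng = np.random.default_rng(1)
n = 500
speed = pd.Series(1500 + 100 * rng.standard_normal(n), name="speed")
temp = 0.2 * speed.shift(1).fillna(1500) + 100 + rng.standard_normal(n)

# 2 lags of the dependent variable, 2 lags of the exogenous regressor.
res = ARDL(temp, lags=2, exog=speed.to_frame(), order=2).fit()
print(res.summary())
# The paper picks the lag length by whitening residuals; a simple proxy is to
# increase the lags until a Ljung-Box test no longer rejects white residuals.
```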
On the discretization and control of an SEIR epidemic model with a periodic impulsive vaccination
NASA Astrophysics Data System (ADS)
Alonso-Quesada, S.; De la Sen, M.; Ibeas, A.
2017-01-01
This paper deals with the discretization and control of an SEIR epidemic model. Such a model describes the transmission of an infectious disease among a time-varying host population. The model assumes mortality from causes related to the disease. Our study proposes a discretization method including a free-design parameter to be adjusted for guaranteeing the positivity of the resulting discrete-time model. Such a method provides a discrete-time model close to the continuous-time one without the need for the sampling period to be as small as other commonly used discretization methods require. This fact makes possible the design of impulsive vaccination control strategies with less burden of measurements and related computations if one uses the proposed instead of other discretization methods. The proposed discretization method and the impulsive vaccination strategy designed on the resulting discretized model are the main novelties of the paper. The paper includes (i) the analysis of the positivity of the obtained discrete-time SEIR model, (ii) the study of stability of the disease-free equilibrium point of a normalized version of such a discrete-time model and (iii) the existence and the attractivity of a globally asymptotically stable disease-free periodic solution under a periodic impulsive vaccination. Concretely, the exposed and infectious subpopulations asymptotically converge to zero as time tends to infinity while the normalized subpopulations of susceptible and recovered by immunization individuals oscillate in the context of such a solution. Finally, a numerical example illustrates the theoretic results.
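The sketch below illustrates the general idea with a generic positivity-preserving discretization (implicit treatment of the loss terms keeps all subpopulations nonnegative for any step size) plus a periodic impulsive vaccination; it is not the paper's scheme, and all rates are hypothetical.

```python
# Illustrative (not the paper's) positivity-preserving discrete SEIR with
# periodic impulsive vaccination moving a fraction theta of S into R.
beta, sigma, gamma, mu, Lam = 0.5, 1/5, 1/7, 1/70, 1.0   # hypothetical rates
h, theta, T_vac = 0.1, 0.3, 30.0                          # step, vaccinated fraction, period

S, E, I, R, t = 69.0, 1.0, 0.0, 0.0, 0.0
for k in range(3000):
    N = S + E + I + R
    S = (S + h * Lam) / (1 + h * (beta * I / N + mu))     # loss terms in the denominator
    E = (E + h * beta * S * I / N) / (1 + h * (sigma + mu))
    I = (I + h * sigma * E) / (1 + h * (gamma + mu))
    R = (R + h * gamma * I) / (1 + h * mu)
    t += h
    if abs(t % T_vac) < h / 2:                            # impulsive vaccination instant
        S, R = (1 - theta) * S, R + theta * S             # RHS uses the pre-impulse S
print(S, E, I, R)
```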
Path Flow Estimation Using Time Varying Coefficient State Space Model
NASA Astrophysics Data System (ADS)
Jou, Yow-Jen; Lan, Chien-Lun
2009-08-01
The dynamic path flow information is very crucial in the field of transportation operation and management, i.e., dynamic traffic assignment, scheduling plans, and signal timing. Time-dependent path information, although important in many respects, is nearly impossible to obtain directly. Consequently, researchers have been seeking estimation methods for deriving valuable path flow information from less expensive traffic data, primarily link traffic counts from surveillance systems. This investigation considers a path flow estimation problem involving a time-varying coefficient state space model, the Gibbs sampler, and the Kalman filter. Numerical examples with part of a real network of the Taipei Mass Rapid Transit and real O-D matrices are presented to demonstrate the accuracy of the proposed model. Results of this study show that the time-varying coefficient state space model is very effective in the estimation of path flow compared to the time-invariant model.
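A minimal linear-Gaussian Kalman filter step for a random-walk time-varying coefficient model is sketched below; the link-path incidence matrix, dimensions, and noise levels are hypothetical placeholders, and the Gibbs-sampler layer of the paper is omitted.

```python
# Minimal Kalman step: link counts y_t = H x_t + v_t, path flows x_t = x_{t-1} + w_t.
import numpy as np

def kalman_step(x, P, y, H, Q, Rm):
    x_pred, P_pred = x, P + Q                       # predict (random-walk transition)
    S = H @ P_pred @ H.T + Rm                       # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)             # Kalman gain
    x_new = x_pred + K @ (y - H @ x_pred)           # update with observed link counts
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(3), np.eye(3)                       # three hypothetical paths
H = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]])    # link-path incidence (2 links)
x, P = kalman_step(x, P, np.array([120.0, 80.0]), H, 0.1 * np.eye(3), 5.0 * np.eye(2))
print(x)
```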
Tu, Rui; Zhang, Pengfei; Zhang, Rui; Liu, Jinhai; Lu, Xiaochun
2018-03-29
This study proposes two models for precise time transfer using the BeiDou Navigation Satellite System triple-frequency signals: an ionosphere-free (IF) combined precise point positioning (PPP) model with two dual-frequency combinations (IF-PPP1) and an ionosphere-free combined PPP model with a single triple-frequency combination (IF-PPP2). A dataset with a short baseline (with a common external time frequency) and a long baseline are used for performance assessments. The results show that the IF-PPP1 and IF-PPP2 models can both be used for precise time transfer using BeiDou Navigation Satellite System (BDS) triple-frequency signals, and the accuracy and stability of time transfer are the same in both cases, except for a constant systematic bias caused by the hardware delays of the different frequencies, which can be removed by parameter estimation and prediction with long-duration datasets or by a priori calibration.
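The building block of the dual-frequency variant is the standard ionosphere-free combination; a small sketch for BDS B1I/B3I pseudoranges follows, with placeholder range values.

```python
# Dual-frequency ionosphere-free (IF) pseudorange combination for BDS B1I/B3I.
f1, f3 = 1561.098e6, 1268.520e6          # BDS B1I and B3I carrier frequencies (Hz)
P1, P3 = 22_000_103.2, 22_000_105.9      # hypothetical pseudoranges (m)

P_IF = (f1**2 * P1 - f3**2 * P3) / (f1**2 - f3**2)
print(f"IF pseudorange: {P_IF:.3f} m")   # first-order ionospheric delay removed
```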
Monitoring and Modeling Performance of Communications in Computational Grids
NASA Technical Reports Server (NTRS)
Frumkin, Michael A.; Le, Thuy T.
2003-01-01
Computational grids may include many machines located at a number of sites. For efficient use of the grid we need the ability to estimate the time it takes to communicate data between the machines. For dynamic distributed grids it is unrealistic to know the exact parameters of the communication hardware and the current communication traffic, so we must rely on a model of network performance to estimate the message delivery time. Our approach to constructing such a model is based on observing message delivery times across various message sizes and time scales. We record these observations in a database and use them to build a model of the message delivery time. Our experiments show the presence of multiple bands in the logarithm of the message delivery times. These bands represent the multiple paths messages travel between the grid machines and are incorporated in our multiband model.
A Multilevel Multiset Time-Series Model for Describing Complex Developmental Processes
Ma, Xin; Shen, Jianping
2017-01-01
The authors sought to develop an analytical platform where multiple sets of time series can be examined simultaneously. This multivariate platform capable of testing interaction effects among multiple sets of time series can be very useful in empirical research. The authors demonstrated that the multilevel framework can readily accommodate this analytical capacity. Given their intention to use the multilevel multiset time-series model to pursue complicated research purposes, their resulting model is relatively simple to specify, to run, and to interpret. These advantages make the adoption of their model relatively effortless as long as researchers have the basic knowledge and skills in working with multilevel growth modeling. With multiple potential extensions of their model, the establishment of this analytical platform for analysis of multiple sets of time series can inspire researchers to pursue far more advanced research designs to address complex developmental processes in reality. PMID:29881094
Real-Time Simulation of the X-33 Aerospace Engine
NASA Technical Reports Server (NTRS)
Aguilar, Robert
1999-01-01
This paper discusses the development and performance of the X-33 Aerospike Engine Real-Time Model. This model was developed for the purposes of control law development, six degree-of-freedom trajectory analysis, vehicle system integration testing, and hardware-in-the-loop controller verification. The Real-Time Model uses a time-step marching solution of non-linear differential equations representing the physical processes involved in the operation of a liquid propellant rocket engine, albeit in a simplified form. These processes include heat transfer, fluid dynamics, combustion, and turbomachine performance. Two engine models are typically employed in order to accurately model maneuvering and the powerpack-out condition, where the power section of one engine is used to supply propellants to both engines if one engine malfunctions. The X-33 Real-Time Model has been compared to actual hot-fire test data and has been found to be in good agreement.
Clustering of financial time series
NASA Astrophysics Data System (ADS)
D'Urso, Pierpaolo; Cappelli, Carmela; Di Lallo, Dario; Massari, Riccardo
2013-05-01
This paper addresses the topic of classifying financial time series in a fuzzy framework, proposing two fuzzy clustering models, both based on GARCH models. In general, clustering of financial time series, owing to their peculiar features, requires the definition of suitable distance measures. To this end, the first fuzzy clustering model exploits the autoregressive representation of GARCH models and employs, in the framework of a partitioning-around-medoids algorithm, the classical autoregressive metric. The second fuzzy clustering model, also based on the partitioning-around-medoids algorithm, uses the Caiado distance, a Mahalanobis-like distance based on estimated GARCH parameters and covariances, which takes into account information about the volatility structure of the time series. To illustrate the merits of the proposed fuzzy approaches, an application to the problem of classifying 29 time series of Euro exchange rates against international currencies is presented and discussed, also comparing the fuzzy models with their crisp versions.
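A crisp simplification of this idea can be sketched in a few lines: fit a GARCH(1,1) to each return series (using the arch package) and cluster the estimated parameter vectors; here k-means stands in for the fuzzy partitioning-around-medoids step, and the series are synthetic stand-ins for exchange-rate returns.

```python
# Simplified (crisp) GARCH-parameter clustering sketch, assuming the `arch` package.
import numpy as np
from arch import arch_model
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
series = [rng.standard_normal(500) for _ in range(6)]   # stand-ins for return series

features = []
for r in series:
    res = arch_model(r, vol="Garch", p=1, q=1).fit(disp="off")
    features.append(res.params[["omega", "alpha[1]", "beta[1]"]].to_numpy())

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.array(features))
print(labels)
```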
Additive mixed effect model for recurrent gap time data.
Ding, Jieli; Sun, Liuquan
2017-04-01
Gap times between recurrent events are often of primary interest in medical and observational studies. The additive hazards model, focusing on risk differences rather than risk ratios, has been widely used in practice. However, the marginal additive hazards model does not take the dependence among gap times into account. In this paper, we propose an additive mixed effect model to analyze gap time data, and the proposed model includes a subject-specific random effect to account for the dependence among the gap times. Estimating equation approaches are developed for parameter estimation, and the asymptotic properties of the resulting estimators are established. In addition, some graphical and numerical procedures are presented for model checking. The finite sample behavior of the proposed methods is evaluated through simulation studies, and an application to a data set from a clinic study on chronic granulomatous disease is provided.
Dynamic Infinite Mixed-Membership Stochastic Blockmodel.
Fan, Xuhui; Cao, Longbing; Xu, Richard Yi Da
2015-09-01
Directional and pairwise measurements are often used to model interactions in a social network setting. The mixed-membership stochastic blockmodel (MMSB) was a seminal work in this area, and its ability has been extended. However, models such as MMSB face particular challenges in modeling dynamic networks, for example, with the unknown number of communities. Accordingly, this paper proposes a dynamic infinite mixed-membership stochastic blockmodel, a generalized framework that extends the existing work to potentially infinite communities inside a network in dynamic settings (i.e., networks are observed over time). Additional model parameters are introduced to reflect the degree of persistence among one's memberships at consecutive time stamps. Under this framework, two specific models, namely mixture time variant and mixture time invariant models, are proposed to depict two different time correlation structures. Two effective posterior sampling strategies and their results are presented, respectively, using synthetic and real-world data.
High-resolution time-frequency representation of EEG data using multi-scale wavelets
NASA Astrophysics Data System (ADS)
Li, Yang; Cui, Wei-Gang; Luo, Mei-Lin; Li, Ke; Wang, Lina
2017-09-01
An efficient time-varying autoregressive (TVAR) modelling scheme that expands the time-varying parameters onto multi-scale wavelet basis functions is presented for modelling nonstationary signals, with applications to time-frequency analysis (TFA) of electroencephalogram (EEG) signals. In the new parametric modelling framework, the time-dependent parameters of the TVAR model are locally represented using a novel multi-scale wavelet decomposition scheme, which can capture smooth trends while simultaneously tracking abrupt changes in the time-varying parameters. A forward orthogonal least squares (FOLS) algorithm aided by a mutual information criterion is then applied for sparse model term selection and parameter estimation. Two simulation examples illustrate that the proposed multi-scale wavelet basis functions outperform single-scale wavelet basis functions and the Kalman filter algorithm for many nonstationary processes. Furthermore, an application of the proposed method to a real EEG signal demonstrates that the new approach can provide highly time-dependent spectral resolution capability.
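The core mechanism, expanding a time-varying coefficient on basis functions and solving a single least-squares problem, can be sketched as below; for brevity this uses a TVAR(1) with a small polynomial/sinusoid basis rather than the paper's multi-scale wavelets and FOLS selection.

```python
# Simplified TVAR(1) sketch: a(t) expanded on a small basis, weights by OLS.
import numpy as np

rng = np.random.default_rng(3)
n = 1000
a_true = 0.5 + 0.4 * np.sin(2 * np.pi * np.arange(n) / n)   # slowly varying AR coefficient
y = np.zeros(n)
for t in range(1, n):
    y[t] = a_true[t] * y[t - 1] + rng.standard_normal()

tau = np.arange(1, n) / n
basis = np.column_stack([np.ones_like(tau), tau, tau**2,
                         np.sin(2 * np.pi * tau), np.cos(2 * np.pi * tau)])
X = basis * y[:-1, None]                 # regressors: B_j(t) * y_{t-1}
c, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
a_hat = basis @ c                        # reconstructed time-varying coefficient
print(np.max(np.abs(a_hat - a_true[1:])))
```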
Time varying default barrier as an agreement rules on bond contract
NASA Astrophysics Data System (ADS)
Maruddani, Di Asih I.; Safitri, Diah; Hoyyi, Abdul
2018-05-01
There are several default-time rules in the contract agreement of a bond. The classical default time is given by the Merton model. The most important characteristic of Merton's model is the restriction of default to the maturity of the debt, without taking into consideration the possibility of an early default. If the firm's value falls to a minimal level before the maturity of the debt, but the firm is able to recover and meet the debt's payment at maturity, default is avoided in Merton's approach. The Merton model has been extended by Hull & White [6] and Avellaneda & Zhu [1], who introduced a time-varying default barrier for modelling the distance-to-default process. This model uses a time-varying variable as the barrier. In this paper, we give a valuation of a bond with a time-varying default barrier agreement. We use straightforward integration to obtain the equity and liability equations. The theory is applied to an Indonesian corporate bond.
Implications of the earthquake cycle for inferring fault locking on the Cascadia megathrust
Pollitz, Fred; Evans, Eileen
2017-01-01
GPS velocity fields in the Western US have been interpreted with various physical models of the lithosphere-asthenosphere system: (1) time-independent block models; (2) time-dependent viscoelastic-cycle models, where deformation is driven by viscoelastic relaxation of the lower crust and upper mantle from past faulting events; (3) viscoelastic block models, a time-dependent variation of the block model. All three models are generally driven by a combination of loading on locked faults and (aseismic) fault creep. Here we construct viscoelastic block models and viscoelastic-cycle models for the Western US, focusing on the Pacific Northwest and the earthquake cycle on the Cascadia megathrust. In the viscoelastic block model, the western US is divided into blocks selected from an initial set of 137 microplates using the method of Total Variation Regularization, allowing potential trade-offs between faulting and megathrust coupling to be determined algorithmically from GPS observations. Fault geometry, slip rate, and locking rates (i.e. the locking fraction times the long-term slip rate) are estimated simultaneously within the TVR block model. For a range of mantle asthenosphere viscosity (4.4 × 10¹⁸ to 3.6 × 10²⁰ Pa s) we find that fault locking on the megathrust is concentrated in the uppermost 20 km in depth, and a locking rate contour line of 30 mm yr⁻¹ extends deepest beneath the Olympic Peninsula, characteristics similar to previous time-independent block model results. These results are corroborated by viscoelastic-cycle modelling. The average locking rate required to fit the GPS velocity field depends on mantle viscosity, being higher the lower the viscosity. Moreover, for viscosity ≲ 10²⁰ Pa s, the amount of inferred locking is higher than that obtained using a time-independent block model. This suggests that time-dependent models for a range of admissible viscosity structures could refine our knowledge of the locking distribution and its epistemic uncertainty.
NASA Astrophysics Data System (ADS)
Zhou, H. W.; Yi, H. Y.; Mishnaevsky, L.; Wang, R.; Duan, Z. Q.; Chen, Q.
2017-05-01
A modeling approach to the time-dependent properties of glass fiber reinforced polymer (GFRP) composites is of special interest for the quantitative description of long-term behavior. An electronic creep machine is employed to investigate the time-dependent deformation of four dog-bone-shaped GFRP specimens at various stress levels. A negative-exponent function based on structural changes is introduced to describe the damage evolution of material properties during the creep test. Accordingly, a new creep constitutive equation, referred to as the fractional derivative Maxwell model, is suggested to characterize the time-dependent behavior of GFRP composites by replacing the Newtonian dashpot with an Abel dashpot in the classical Maxwell model. The analytic solution of the fractional derivative Maxwell model is given and the relevant parameters are determined. The results estimated by the proposed fractional derivative Maxwell model are in good agreement with the experimental data. It is shown that the new creep constitutive model needs few parameters to represent various time-dependent behaviors.
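For orientation, the creep response of a generic fractional Maxwell element (a spring in series with an Abel dashpot of order alpha) takes the standard power-law form sketched below; the parameter values are illustrative assumptions, not the paper's fitted ones.

```python
# Creep of a fractional Maxwell element: eps(t) = sigma0*(1/E + t**a/(eta*Gamma(1+a))).
import numpy as np
from scipy.special import gamma

E, eta, alpha, sigma0 = 30e9, 1e12, 0.3, 50e6    # hypothetical GFRP-like values
t = np.linspace(1, 3600, 200)                    # time in seconds
eps = sigma0 * (1.0 / E + t**alpha / (eta * gamma(1 + alpha)))
print(eps[0], eps[-1])                           # slow power-law creep, few parameters
```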
RADON CONCENTRATION TIME SERIES MODELING AND APPLICATION DISCUSSION.
Stránský, V; Thinová, L
2017-11-01
In 2010, continual radon measurement was established at Mladeč Caves in the Czech Republic using the continuous radon monitor RADIM3A. In order to model the radon time series for the years 2010-15, the Box-Jenkins methodology, often used in econometrics, was applied. Because of the behavior of radon concentrations (RCs), a seasonal integrated autoregressive moving-average model with exogenous variables (SARIMAX) was chosen to model the measured time series. This model uses the seasonality of the time series, previously acquired values, and delayed atmospheric parameters to forecast RC. The developed model for the RC time series is called regARIMA(5,1,3). Model residuals could be retrospectively compared with seismic evidence of local or global earthquakes that occurred during the RC measurements. This technique enables us to assess whether continuously measured RC could serve as an earthquake precursor. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
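A hedged sketch of such a fit, in the spirit of the paper's regARIMA(5,1,3) with delayed atmospheric regressors, is shown below using statsmodels' SARIMAX; the radon and pressure series are synthetic placeholders.

```python
# Hypothetical SARIMAX sketch with a lagged atmospheric exogenous regressor.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(4)
n = 400
pressure = pd.Series(1000 + rng.standard_normal(n)).rolling(5, min_periods=1).mean()
radon = 200 - 3 * pressure.shift(2).bfill() + np.cumsum(rng.standard_normal(n))

res = SARIMAX(radon, exog=pressure.shift(2).bfill(), order=(5, 1, 3)).fit(disp=False)
print(res.aic)   # residuals would then be screened against seismic event lists
```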
Lambert, Amaury; Stadler, Tanja
2013-12-01
Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
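The CPP construction itself is strikingly simple to simulate: for n extant tips, the n-1 node depths of the reconstructed tree are i.i.d. draws from the speciation-time distribution truncated at the present. A minimal sketch, with a placeholder distribution, follows.

```python
# Minimal coalescent point process sketch; the speciation-time law is illustrative.
import numpy as np

rng = np.random.default_rng(5)
T, n = 10.0, 8                                 # present time, number of extant tips
depths = T * rng.beta(1.0, 3.0, size=n - 1)    # i.i.d. node depths (placeholder law)
# consecutive tips i and i+1 coalesce at depths[i]; the ranked shape is uniform (URT)
print(np.sort(depths))
```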
Application of troposphere model from NWP and GNSS data into real-time precise positioning
NASA Astrophysics Data System (ADS)
Wilgan, Karina; Hadas, Tomasz; Kazmierski, Kamil; Rohm, Witold; Bosy, Jaroslaw
2016-04-01
The tropospheric delay empirical models are usually functions of meteorological parameters (temperature, pressure and humidity). The application of standard atmosphere parameters or global models, such as the GPT (global pressure/temperature) model or the UNB3 (University of New Brunswick, version 3) model, may not be sufficient, especially for positioning in non-standard weather conditions. A possible solution is to use regional troposphere models based on real-time or near-real-time measurements. We implement a regional troposphere model into the PPP (Precise Point Positioning) software GNSS-WARP (Wroclaw Algorithms for Real-time Positioning) developed at Wroclaw University of Environmental and Life Sciences. The software is capable of processing static and kinematic multi-GNSS data in real-time and post-processing modes and takes advantage of final IGS (International GNSS Service) products as well as IGS RTS (Real-Time Service) products. A shortcoming of the PPP technique is the time required for the solution to converge. One of the reasons is the high correlation among the estimated parameters: troposphere delay, receiver clock offset and receiver height. To decorrelate these parameters efficiently, a significant change in satellite geometry is required. An alternative solution is to introduce an external high-quality regional troposphere delay model to constrain the troposphere estimates. The proposed model consists of zenith total delays (ZTD) and mapping functions calculated from meteorological parameters from the Numerical Weather Prediction model WRF (Weather Research and Forecasting) and ZTDs from ground-based GNSS stations using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zurich.
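As an example of the meteorology-driven term such models start from, the standard Saastamoinen zenith hydrostatic delay can be computed from surface pressure alone; this is a textbook formula, not the paper's collocation model, and the inputs are illustrative.

```python
# Standard Saastamoinen zenith hydrostatic delay (ZHD) from surface pressure.
import numpy as np

def zhd_saastamoinen(p_hpa, lat_rad, h_m):
    """ZHD in meters from pressure (hPa), latitude (rad), and height (m)."""
    return 0.0022768 * p_hpa / (1 - 0.00266 * np.cos(2 * lat_rad) - 2.8e-7 * h_m)

print(zhd_saastamoinen(1013.25, np.deg2rad(51.1), 150.0))  # roughly 2.3 m
```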
NASA Astrophysics Data System (ADS)
Li, Ji; Chen, Yangbo; Wang, Huanyu; Qin, Jianming; Li, Jie; Chiao, Sen
2017-03-01
Long lead time flood forecasting is very important for large watershed flood mitigation as it provides more time for flood warning and emergency responses. The latest numerical weather forecast models can provide 1-15-day quantitative precipitation forecasting products in grid format, and coupling such products with a distributed hydrological model can produce long lead time watershed flood forecasting products. This paper studied the feasibility of coupling the Liuxihe model with the Weather Research and Forecasting quantitative precipitation forecast (WRF QPF) for large watershed flood forecasting in southern China. The WRF QPF products have three lead times, 24, 48 and 72 h, with a grid resolution of 20 km × 20 km. The Liuxihe model is set up with freely downloaded terrain properties; the model parameters were previously optimized with rain gauge observed precipitation, and re-optimized with the WRF QPF. Results show that the WRF QPF is biased relative to rain gauge precipitation, and a post-processing method is proposed to correct the WRF QPF products, which improves the flood forecasting capability. With model parameter re-optimization, the model's performance also improves, suggesting that the model parameters should be optimized with the QPF rather than the rain gauge precipitation. With increasing lead time, the accuracy of the WRF QPF decreases, as does the flood forecasting capability. Flood forecasting products produced by coupling the Liuxihe model with the WRF QPF provide a good reference for large watershed flood warning thanks to their long lead time and rational results.
Time-varying parameter models for catchments with land use change: the importance of model structure
NASA Astrophysics Data System (ADS)
Pathiraja, Sahani; Anghileri, Daniela; Burlando, Paolo; Sharma, Ashish; Marshall, Lucy; Moradkhani, Hamid
2018-05-01
Rapid population and economic growth in Southeast Asia has been accompanied by extensive land use change with consequent impacts on catchment hydrology. Modeling methodologies capable of handling changing land use conditions are therefore becoming ever more important and are receiving increasing attention from hydrologists. A recently developed data-assimilation-based framework that allows model parameters to vary through time in response to signals of change in observations is considered for a medium-sized catchment (2880 km2) in northern Vietnam experiencing substantial but gradual land cover change. We investigate the efficacy of the method as well as the importance of the chosen model structure in ensuring the success of a time-varying parameter method. The method was used with two lumped daily conceptual models (HBV and HyMOD) that gave good-quality streamflow predictions during pre-change conditions. Although both time-varying parameter models gave improved streamflow predictions under changed conditions compared to the time-invariant parameter model, persistent biases for low flows were apparent in the HyMOD case. It was found that HyMOD was not suited to representing the modified baseflow conditions, resulting in extreme and unrealistic time-varying parameter estimates. This work shows that the chosen model can be critical for ensuring the time-varying parameter framework successfully models streamflow under changing land cover conditions. It can also be used to determine whether land cover changes (and not just meteorological factors) contribute to the observed hydrologic changes in retrospective studies where the lack of a paired control catchment precludes such an assessment.
Cross-Scale Modelling of Subduction from Minute to Million of Years Time Scale
NASA Astrophysics Data System (ADS)
Sobolev, S. V.; Muldashev, I. A.
2015-12-01
Subduction is an essentially multi-scale process with time scales spanning from the geological to the earthquake scale, with the seismic cycle in between. Modelling such a process constitutes one of the largest challenges in geodynamic modelling today. Here we present a cross-scale thermomechanical model capable of simulating the entire subduction process from rupture (1 min) to geological time (millions of years) that employs elasticity, mineral-physics-constrained non-linear transient viscous rheology and rate-and-state friction plasticity. The model generates spontaneous earthquake sequences. The adaptive time-step algorithm recognizes the moment of instability and drops the integration time step to its minimum value of 40 s during the earthquake. The time step is then gradually increased to its maximal value of 5 yr, following the decreasing displacement rates during postseismic relaxation. Efficient implementation of numerical techniques allows long-term simulations with total times of millions of years. This technique makes it possible to follow the deformation process in detail during an entire seismic cycle and over multiple seismic cycles. We observe various deformation patterns during the modelled seismic cycle that are consistent with surface GPS observations and demonstrate that, contrary to conventional ideas, postseismic deformation may be controlled by viscoelastic relaxation in the mantle wedge, starting within only a few hours after great (M>9) earthquakes. Interestingly, in our model the average slip velocity at the fault closely follows a hyperbolic decay law. In natural observations, such deformation is interpreted as afterslip, while in our model it is caused by viscoelastic relaxation of the mantle wedge, with viscosity varying strongly with time. We demonstrate that our results are consistent with the postseismic surface displacement after the Great Tohoku Earthquake in the day-to-year time range. We will also present results of the modelling of deformation of the upper plate during multiple earthquake cycles at time scales of hundreds of thousands to millions of years and discuss the effect of great earthquakes in changing the long-term stress field in the upper plate.
Zhang, Yong; Green, Christopher T.; Baeumer, Boris
2014-01-01
Time-nonlocal transport models can describe non-Fickian diffusion observed in geological media, but the physical meaning of parameters can be ambiguous, and most applications are limited to curve-fitting. This study explores methods for predicting the parameters of a temporally tempered Lévy motion (TTLM) model for transient sub-diffusion in mobile–immobile like alluvial settings represented by high-resolution hydrofacies models. The TTLM model is a concise multi-rate mass transfer (MRMT) model that describes a linear mass transfer process where the transfer kinetics and late-time transport behavior are controlled by properties of the host medium, especially the immobile domain. The intrinsic connection between the MRMT and TTLM models helps to estimate the main time-nonlocal parameters in the TTLM model (which are the time scale index, the capacity coefficient, and the truncation parameter) either semi-analytically or empirically from the measurable aquifer properties. Further applications show that the TTLM model captures the observed solute snapshots, the breakthrough curves, and the spatial moments of plumes up to the fourth order. Most importantly, the a priori estimation of the time-nonlocal parameters outside of any breakthrough fitting procedure provides a reliable “blind” prediction of the late-time dynamics of subdiffusion observed in a spectrum of alluvial settings. Predictability of the time-nonlocal parameters may be due to the fact that the late-time subdiffusion is not affected by the exact location of each immobile zone, but rather is controlled by the time spent in immobile blocks surrounding the pathway of solute particles. Results also show that the effective dispersion coefficient has to be fitted due to the scale effect of transport, and the mean velocity can differ from local measurements or volume averages. The link between medium heterogeneity and time-nonlocal parameters will help to improve model predictability for non-Fickian transport in alluvial settings.
Atomic temporal interval relations in branching time: calculation and application
NASA Astrophysics Data System (ADS)
Anger, Frank D.; Ladkin, Peter B.; Rodriguez, Rita V.
1991-03-01
A practical method of reasoning about intervals in a branching-time model which is dense, unbounded, future-branching, and without rejoining branches is presented. The discussion is based on heuristic constraint-propagation techniques using the relation algebra of binary temporal relations among the intervals over the branching-time model. This technique has been applied with success to models of intervals over linear time by Allen and others, and is of cubic-time complexity. To extend it to branching-time models, it is necessary to calculate compositions of the relations; thus, the table of compositions for the 'atomic' relations is computed, enabling the rapid determination of the composition of arbitrary relations, expressed as disjunctions or unions of the atomic relations.
NASA Astrophysics Data System (ADS)
Lismawati, Eka; Respatiwulan; Widyaningsih, Purnami
2017-06-01
The SIS epidemic model describes the pattern of disease spread with the characteristic that recovered individuals can be infected more than once. The numbers of susceptible and infected individuals at each time follow a discrete-time Markov process, which can be represented by a discrete-time Markov chain (DTMC) SIS model. The DTMC SIS epidemic model can be developed for two pathogens in two patches. The aims of this paper are to reconstruct and to apply the DTMC SIS epidemic model with two pathogens in two patches. The model is presented in terms of transition probabilities. Application of the model shows that the number of susceptible individuals decreases while the number of infected individuals increases for each pathogen in each patch.
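The transition-probability structure can be illustrated with a one-patch, one-pathogen cut of the chain, which is a deliberate simplification of the paper's two-pathogen, two-patch model; rates and the time step are illustrative (the step must keep the event probabilities summing below one).

```python
# One-patch DTMC SIS: i -> i+1 with prob beta*i*(N-i)/N*dt, i -> i-1 with gamma*i*dt.
import numpy as np

beta, gamma, N, dt = 0.3, 0.1, 100, 0.05
rng = np.random.default_rng(6)

i = 5                                       # initial number of infecteds
for _ in range(1000):
    p_up = beta * i * (N - i) / N * dt      # new infection (S -> I)
    p_down = gamma * i * dt                 # recovery back to susceptible (I -> S)
    u = rng.random()
    if u < p_up:
        i += 1
    elif u < p_up + p_down:
        i -= 1                              # otherwise the state is unchanged
print(i)
```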
A note on tilted Bianchi type VIh models: the type III bifurcation
NASA Astrophysics Data System (ADS)
Coley, A. A.; Hervik, S.
2008-10-01
In this note we complete the analysis of Hervik, van den Hoogen, Lim and Coley (2007 Class. Quantum Grav. 24 3859) of the late-time behaviour of tilted perfect fluid Bianchi type III models. We consider models with dust, and perfect fluids stiffer than dust, and elucidate the late-time behaviour by studying the centre manifold which dominates the behaviour of the model at late times. In the dust case, this centre manifold is three-dimensional and can be considered a double bifurcation as the two parameters (h and γ) of the type VIh model are varied. We thereby complete the analysis of the late-time behaviour of tilted ever-expanding Bianchi models of types I-VIII.
ARMA models for earthquake ground motions. Seismic safety margins research program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, M. K.; Kwiatkowski, J. W.; Nau, R. F.
1981-02-01
Four major California earthquake records were analyzed by use of a class of discrete linear time-domain processes commonly referred to as ARMA (Autoregressive/Moving-Average) models. It was possible to analyze these different earthquakes, identify the order of the appropriate ARMA model(s), estimate parameters, and test the residuals generated by these models. It was also possible to show the connections, similarities, and differences between the traditional continuous models (with parameter estimates based on spectral analyses) and the discrete models with parameters estimated by various maximum-likelihood techniques applied to digitized acceleration data in the time domain. The methodology proposed is suitable for simulating earthquake ground motions in the time domain, and appears to be easily adapted to serve as input for nonlinear discrete-time models of structural motions. 60 references, 19 figures, 9 tables.
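Simulating ground-motion-like series from an identified ARMA model is straightforward with statsmodels' ArmaProcess; the coefficients below are illustrative, not the values estimated from the California records.

```python
# Simulate a synthetic accelerogram-like series from an ARMA(2,1) model.
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

ar = np.array([1, -1.2, 0.5])      # AR polynomial: 1 - 1.2 L + 0.5 L^2
ma = np.array([1, 0.4])            # MA polynomial: 1 + 0.4 L
proc = ArmaProcess(ar, ma)
print(proc.isstationary)           # True for these coefficients
accel = proc.generate_sample(nsample=2000, scale=0.01)   # synthetic ground motion
```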
Miranian, A; Abdollahzade, M
2013-02-01
Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
NASA Astrophysics Data System (ADS)
Cash, M. D.; Singer, H. J.; Millward, G. H.; Balch, C. C.; Toth, G.; Welling, D. T.
2017-12-01
In October 2016, the first version of the Geospace model was transitioned into real-time operations at NOAA Space Weather Prediction Center (SWPC). The Geospace model is a part of the Space Weather Modeling Framework (SWMF) developed at the University of Michigan, and the model simulates the full time-dependent 3D Geospace environment (Earth's magnetosphere, ring current and ionosphere) and predicts global space weather parameters such as induced magnetic perturbations in space and on Earth's surface. The current version of the Geospace model uses three coupled components of SWMF: the BATS-R-US global magnetosphere model, the Rice Convection Model (RCM) of the inner magnetosphere, and the Ridley Ionosphere electrodynamics Model (RIM). In the operational mode, SWMF/Geospace runs continually in real-time as long as there is new solar wind data arriving from a satellite at L1, either DSCOVR or ACE. We present an analysis of the overall performance of the Geospace model during the first year of real-time operations. Evaluation metrics include Kp, Dst, as well as regional magnetometer stations. We will also present initial results from new products, such as the AE index, available with the recent upgrade to the Geospace model.
Yang, Xilin; Kong, Alice PS; Luk, Andrea OY; Ozaki, Risa; Ko, Gary TC; Ma, Ronald CW; Chan, Juliana CN; So, Wing Yee
2014-01-01
Background Pharmacoepidemiologic analysis can confirm whether drug efficacy in a randomized controlled trial (RCT) translates to effectiveness in real settings. We examined methods used to control for immortal time bias in an analysis of renin–angiotensin system (RAS) inhibitors as the reference cardioprotective drug. Methods We analyzed data from 3928 patients with type 2 diabetes who were recruited into the Hong Kong Diabetes Registry between 1996 and 2005 and followed up to July 30, 2005. Different Cox models were used to obtain hazard ratios (HRs) for cardiovascular disease (CVD) associated with RAS inhibitors. These HRs were then compared to the HR of 0.92 reported in a recent meta-analysis of RCTs. Results During a median follow-up period of 5.45 years, 7.23% (n = 284) patients developed CVD and 38.7% (n = 1519) were started on RAS inhibitors, with 39.1% of immortal time among the users. In multivariable analysis, time-dependent drug-exposure Cox models and Cox models that moved immortal time from users to nonusers both severely inflated the HR, and time-fixed models that included immortal time deflated the HR. Use of time-fixed Cox models that excluded immortal time resulted in a HR of only 0.89 (95% CI, 0.68–1.17) for CVD associated with RAS inhibitors, which is closer to the values reported in RCTs. Conclusions In pharmacoepidemiologic analysis, time-dependent drug exposure models and models that move immortal time from users to nonusers may introduce substantial bias in investigations of the effects of RAS inhibitors on CVD in type 2 diabetes. PMID:24747198
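The standard fix for immortal time, splitting each user's follow-up at treatment start so that pre-treatment person-time counts as unexposed, can be sketched with plain pandas; the column names and values below are hypothetical placeholders, and the resulting long-format table would feed a time-dependent Cox model.

```python
# Sketch: split follow-up at treatment start to remove immortal time.
import pandas as pd

df = pd.DataFrame({"id": [1, 2], "rx_start": [2.0, None],   # years to drug start
                   "follow_up": [6.0, 4.0], "event": [1, 0]})

rows = []
for _, r in df.iterrows():
    if pd.notna(r.rx_start):
        rows.append((r.id, 0.0, r.rx_start, 0, 0))                 # immortal, unexposed
        rows.append((r.id, r.rx_start, r.follow_up, 1, r.event))   # exposed thereafter
    else:
        rows.append((r.id, 0.0, r.follow_up, 0, r.event))
long = pd.DataFrame(rows, columns=["id", "start", "stop", "exposed", "event"])
print(long)   # ready for a time-dependent Cox model
```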
Rutter, Carolyn M; Knudsen, Amy B; Marsh, Tracey L; Doria-Rose, V Paul; Johnson, Eric; Pabiniak, Chester; Kuntz, Karen M; van Ballegooijen, Marjolein; Zauber, Ann G; Lansdorp-Vogelaar, Iris
2016-07-01
Microsimulation models synthesize evidence about disease processes and interventions, providing a method for predicting long-term benefits and harms of prevention, screening, and treatment strategies. Because models often require assumptions about unobservable processes, assessing a model's predictive accuracy is important. We validated 3 colorectal cancer (CRC) microsimulation models against outcomes from the United Kingdom Flexible Sigmoidoscopy Screening (UKFSS) Trial, a randomized controlled trial that examined the effectiveness of one-time flexible sigmoidoscopy screening to reduce CRC mortality. The models incorporate different assumptions about the time from adenoma initiation to development of preclinical and symptomatic CRC. Analyses compare model predictions to study estimates across a range of outcomes to provide insight into the accuracy of model assumptions. All 3 models accurately predicted the relative reduction in CRC mortality 10 years after screening (predicted hazard ratios, with 95% percentile intervals: 0.56 [0.44, 0.71], 0.63 [0.51, 0.75], 0.68 [0.53, 0.83]; estimated with 95% confidence interval: 0.56 [0.45, 0.69]). Two models with longer average preclinical duration accurately predicted the relative reduction in 10-year CRC incidence. Two models with longer mean sojourn time accurately predicted the number of screen-detected cancers. All 3 models predicted too many proximal adenomas among patients referred to colonoscopy. Model accuracy can only be established through external validation. Analyses such as these are therefore essential for any decision model. Results supported the assumptions that the average time from adenoma initiation to development of preclinical cancer is long (up to 25 years), and mean sojourn time is close to 4 years, suggesting the window for early detection and intervention by screening is relatively long. Variation in dwell time remains uncertain and could have important clinical and policy implications. © The Author(s) 2016.
Kargarian-Marvasti, Sadegh; Rimaz, Shahnaz; Abolghasemi, Jamileh; Heydari, Iraj
2017-01-01
The Cox proportional hazards model is the most common method for analyzing the effects of several variables on survival time. However, under certain circumstances, parametric models give more precise estimates for survival data than Cox. The purpose of this study was to investigate the comparative performance of Cox and parametric models in a survival analysis of factors affecting the event time of neuropathy in patients with type 2 diabetes. This study included 371 patients with type 2 diabetes without neuropathy who were registered at the Fereydunshahr diabetes clinic. Subjects were followed up for the development of neuropathy from 2006 to March 2016. To investigate the factors influencing the event time of neuropathy, significant variables in the univariate model (P < 0.20) were entered into the multivariate Cox and parametric models (P < 0.05). In addition, the Akaike information criterion (AIC) and the area under the ROC curve were used to evaluate the relative goodness of fit of the models and the efficiency of each procedure, respectively. Statistical computing was performed using R software version 3.2.3 (UNIX platforms, Windows and MacOS). Using Kaplan-Meier, the survival time to neuropathy was computed as 76.6 ± 5 months after the initial diagnosis of diabetes. After multivariate analysis of the Cox and parametric models, ethnicity, high-density lipoprotein and family history of diabetes were identified as predictors of the event time of neuropathy (P < 0.05). According to the AIC, the log-normal model, with the lowest AIC value, was the best-fitted model among the Cox and parametric models. According to the comparison of survival receiver operating characteristic curves, the log-normal model was considered the most efficient and best-fitted model.
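A toy version of such an AIC comparison between parametric survival laws is sketched below on uncensored event times (censoring, covariates, and the Cox comparison are deliberately omitted, so this only illustrates the selection criterion).

```python
# Toy AIC comparison of log-normal vs. Weibull fits to uncensored event times.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
t = stats.lognorm.rvs(s=0.8, scale=60, size=200, random_state=rng)  # months

for name, dist in [("log-normal", stats.lognorm), ("Weibull", stats.weibull_min)]:
    params = dist.fit(t, floc=0)                      # fix the location at zero
    ll = np.sum(dist.logpdf(t, *params))
    aic = 2 * (len(params) - 1) - 2 * ll              # floc is fixed, not estimated
    print(f"{name}: AIC = {aic:.1f}")                 # lower AIC = better fit
```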
Hartzell, S.; Guatteri, Mariagiovanna; Mai, P.M.; Liu, P.-C.; Fisk, M. R.
2005-01-01
In the evolution of methods for calculating synthetic time histories of ground motion for postulated earthquakes, kinematic source models have dominated to date because of their ease of application. Dynamic models, however, which incorporate a physical relationship between the important faulting parameters of stress drop, slip, rupture velocity, and rise time, are becoming more accessible. This article compares a class of kinematic models based on the summation of a fractal distribution of subevent sizes with a dynamic model based on the slip-weakening friction law. Kinematic modeling is done for the frequency band 0.2 to 10.0 Hz; dynamic models are calculated from 0.2 to 2.0 Hz. The strong motion data set for the 1994 Northridge earthquake is used to evaluate and compare the synthetic time histories. Source models are propagated to the far field by convolution with 1D and 3D theoretical Green's functions. In addition, the kinematic model is used to evaluate the importance of propagation path effects: velocity structure, scattering, and nonlinearity. At present, the kinematic model gives a better broadband fit to the Northridge ground motion than the simple slip-weakening dynamic model. In general, the dynamic model overpredicts rise times and produces insufficient shorter-period energy. Within the context of the slip-weakening model, the Northridge ground motion requires a short slip-weakening distance, on the order of 0.15 m or less. A more complex dynamic model including rate weakening, or one that allows shorter rise times near the hypocenter, may fit the data better.
Applying the multivariate time-rescaling theorem to neural population models
Gerhard, Felipe; Haslinger, Robert; Pipa, Gordon
2011-01-01
Statistical models of neural activity are integral to modern neuroscience. Recently, interest has grown in modeling the spiking activity of populations of simultaneously recorded neurons to study the effects of correlations and functional connectivity on neural information processing. However any statistical model must be validated by an appropriate goodness-of-fit test. Kolmogorov-Smirnov tests based upon the time-rescaling theorem have proven to be useful for evaluating point-process-based statistical models of single-neuron spike trains. Here we discuss the extension of the time-rescaling theorem to the multivariate (neural population) case. We show that even in the presence of strong correlations between spike trains, models which neglect couplings between neurons can be erroneously passed by the univariate time-rescaling test. We present the multivariate version of the time-rescaling theorem, and provide a practical step-by-step procedure for applying it towards testing the sufficiency of neural population models. Using several simple analytically tractable models and also more complex simulated and real data sets, we demonstrate that important features of the population activity can only be detected using the multivariate extension of the test. PMID:21395436
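For the univariate case the test reduces to a few lines: rescale inter-spike intervals by the integrated intensity and check uniformity with a KS test. The sketch below uses a homogeneous Poisson train where the fitted model equals the true rate, so the test should not reject.

```python
# Univariate time-rescaling goodness-of-fit sketch for a single spike train.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
rate = 5.0                                   # true (and here also modeled) rate, Hz
spikes = np.cumsum(rng.exponential(1 / rate, size=300))

Lambda = rate * spikes                       # integrated intensity at spike times
z = 1 - np.exp(-np.diff(Lambda))             # rescaled intervals, ~ Uniform(0,1)
print(stats.kstest(z, "uniform"))            # large p-value: model not rejected
```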
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ng, B
This survey gives an overview of popular generative models used in the modeling of stochastic temporal systems. In particular, this survey is organized into two parts. The first part discusses the discrete-time representations of dynamic Bayesian networks and dynamic relational probabilistic models, while the second part discusses the continuous-time representation of continuous-time Bayesian networks.
The Variance Reaction Time Model
ERIC Educational Resources Information Center
Sikstrom, Sverker
2004-01-01
The variance reaction time model (VRTM) is proposed to account for various recognition data on reaction time, the mirror effect, receiver-operating-characteristic (ROC) curves, etc. The model is based on simple and plausible assumptions within a neural network: VRTM is a two layer neural network where one layer represents items and one layer…
Student Modeling Based on Problem Solving Times
ERIC Educational Resources Information Center
Pelánek, Radek; Jarušek, Petr
2015-01-01
Student modeling in intelligent tutoring systems is mostly concerned with modeling correctness of students' answers. As interactive problem solving activities become increasingly common in educational systems, it is useful to focus also on timing information associated with problem solving. We argue that the focus on timing is natural for certain…
Modeling and optimum time performance for concurrent processing
NASA Technical Reports Server (NTRS)
Mielke, Roland R.; Stoughton, John W.; Som, Sukhamoy
1988-01-01
The development of a new graph theoretic model for describing the relation between a decomposed algorithm and its execution in a data flow environment is presented. Called ATAMM, the model consists of a set of Petri net marked graphs useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance time measures which determine computing speed and throughput capacity are defined, and the ATAMM model is used to develop lower bounds for these times. A concurrent processing operating strategy for achieving optimum time performance is presented and illustrated by example.
Method for Real-Time Model Based Structural Anomaly Detection
NASA Technical Reports Server (NTRS)
Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)
2015-01-01
A system and methods for real-time, model-based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during operation of the vehicle is received, and the real-time measurement is compared to expected operation data for that location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
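A minimal persistence-gated detector in the spirit of this claim is sketched below; the thresholds, noise model, and function names are hypothetical, not the patent's implementation.

```python
# Sketch: flag an anomaly only when model-error significance persists long enough.
import numpy as np

def detect(measured, expected, sigma, z_thresh=3.0, persist=5):
    err = (measured - expected) / sigma          # modeling error significance
    run = 0
    for k, z in enumerate(np.abs(err)):
        run = run + 1 if z > z_thresh else 0     # persistence counter
        if run >= persist:
            return k                             # index where the anomaly is declared
    return None

rng = np.random.default_rng(9)
x = rng.normal(0, 1, 200); x[120:] += 5          # structural shift at sample 120
print(detect(x, np.zeros_like(x), 1.0))
```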
Feature Extraction and Classification of Magnetic and EMI Data, Camp Beale, CA
2012-05-01
and non-specialists. However, as part of ESTCP 1004 we are presently working on transitioning our inversion algorithms to an API that will be... (Figure panels: decay amplitude vs. time (ms) for Cell 663 / Target 1965 and Cell 1104 / Target 2532, Model 1 (SOI), ISO IVS.)
Studies in astronomical time series analysis: Modeling random processes in the time domain
NASA Technical Reports Server (NTRS)
Scargle, J. D.
1979-01-01
Random process models formulated in the time domain are used to analyze astrophysical time series data produced by random processes. A moving average (MA) model represents the data as a sequence of pulses occurring randomly in time, with random amplitudes. An autoregressive (AR) model represents the correlations in the process in terms of a linear function of past values. The best AR model is determined from sampled data and transformed to an MA model for interpretation. The randomness of the pulse amplitudes is maximized by a FORTRAN algorithm which is numerically relatively stable. Results of test cases are given to study the effects of adding noise and of different distributions for the pulse amplitudes. A preliminary analysis of the optical light curve of the quasar 3C 273 is given.
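The AR-to-MA conversion described here is available directly in statsmodels; the sketch below fits an AR(2) to a synthetic series and prints the first few MA (pulse-amplitude) weights.

```python
# Fit an AR model and convert it to its MA representation for interpretation.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.arima_process import arma2ma

rng = np.random.default_rng(10)
y = np.zeros(1000)
for t in range(2, 1000):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + rng.standard_normal()

res = AutoReg(y, lags=2, trend="n").fit()
ar_poly = np.r_[1, -res.params]                   # 1 - phi1 L - phi2 L^2
psi = arma2ma(ar_poly, np.array([1.0]), lags=10)  # impulse (pulse-amplitude) weights
print(psi)
```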
Keshavarzi, Sareh; Ayatollahi, Seyyed Mohammad Taghi; Zare, Najaf; Pakfetrat, Maryam
2012-01-01
BACKGROUND. In many studies with longitudinal data, time-dependent covariates can only be measured intermittently (not at all observation times), which presents difficulties for standard statistical analyses. This situation is common in medical studies, and methods that deal with this challenge would be useful. METHODS. In this study, we fitted seemingly unrelated regression (SUR) based models with respect to each observation time in longitudinal data with intermittently observed time-dependent covariates, and compared these models with mixed-effect regression models (MRMs) under three classic imputation procedures. Simulation studies were performed to compare the finite-sample properties of the estimated coefficients under the different modeling choices. RESULTS. In general, the proposed models performed well in the presence of intermittently observed time-dependent covariates. However, when we considered only the observed values of the covariate without any imputation, the resulting biases were greater. The performance of the proposed SUR-based models was nearly similar to that of MRM under the classic imputation methods, with approximately equal amounts of bias and MSE. CONCLUSION. The simulation study suggests that the SUR-based models work as efficiently as MRM in the case of intermittently observed time-dependent covariates. Thus, they can be used as an alternative to MRM.
The Volume Field Model about Strong Interaction and Weak Interaction
NASA Astrophysics Data System (ADS)
Liu, Rongwu
2016-03-01
For a long time researchers have believed that strong interaction and weak interaction are realized by exchanging intermediate particles. This article proposes a new mechanism as follows: Volume field is a form of material existence in plane space, it takes volume-changing motion in the form of non-continuous motion, volume fields have strong interaction or weak interaction between them by overlapping their volume fields. Based on these concepts, this article further proposes a ``bag model'' of volume field for atomic nucleus, which includes three sub-models of the complex structure of fundamental body (such as quark), the atom-like structure of hadron, and the molecule-like structure of atomic nucleus. This article also proposes a plane space model and formulates a physics model of volume field in the plane space, as well as a model of space-time conversion. The model of space-time conversion suggests that: Point space-time and plane space-time convert each other by means of merging and rupture respectively, the essence of space-time conversion is the mutual transformations of matter and energy respectively; the process of collision of high energy hadrons, the formation of black hole, and the Big Bang of universe are three kinds of space-time conversions.
Solution algorithm of dwell time in slope-based figuring model
NASA Astrophysics Data System (ADS)
Li, Yong; Zhou, Lin
2017-10-01
The surface slope profile is commonly used to evaluate X-ray reflective optics for synchrotron radiation beamlines. Moreover, the measuring instruments for X-ray reflective optics usually return the surface slope profile rather than the surface height profile. To avoid conversion error, a slope-based figuring model is introduced for processing X-ray reflective optics, instead of the surface height-based model. However, the pulse iteration method, which quickly obtains the dwell-time solution of the traditional height-based figuring model, does not apply to the slope-based figuring model, because the slope removal function takes both positive and negative values and has a complex asymmetric structure. To overcome this problem, we established an optimal mathematical model for the dwell-time solution by introducing upper and lower limits on the dwell time and a time-gradient constraint. We then used a constrained least-squares algorithm to solve for the dwell time in the slope-based figuring model. To validate the proposed algorithm, simulations and experiments were conducted. A flat mirror with an effective aperture of 80 mm was polished on an ion beam machine. After three polishing iterations, the surface slope profile error of the workpiece converged from RMS 5.65 μrad to RMS 1.12 μrad.
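The core bounded least-squares step can be sketched with scipy's lsq_linear; the removal function, matrix sizes, and bounds below are illustrative, and the paper's additional time-gradient constraint is omitted for brevity.

```python
# Bounded least-squares dwell-time solve: min ||A d - s||^2, 0 <= d <= t_max.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(11)
n = 80
rf = np.gradient(np.exp(-(np.arange(-10, 11) ** 2) / 20.0))  # slope removal function (+/- lobes)
A = np.zeros((n, n))
for j in range(n):                          # convolution matrix of the removal function
    for k, v in enumerate(rf):
        i = j + k - 10
        if 0 <= i < n:
            A[i, j] = v
s = rng.normal(0, 1e-6, n)                  # measured slope error profile (rad)

sol = lsq_linear(A, s, bounds=(0.0, 5.0))   # dwell time constrained to [0, t_max] s
print(sol.x.min(), sol.x.max())
```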
NASA Technical Reports Server (NTRS)
Kral, Linda D.; Ladd, John A.; Mani, Mori
1995-01-01
The objective of this viewgraph presentation is to evaluate turbulence models for integrated aircraft components such as the forebody, wing, inlet, diffuser, nozzle, and afterbody. The one-equation models have replaced the algebraic models as the baseline turbulence models. The Spalart-Allmaras one-equation model consistently performs better than the Baldwin-Barth model, particularly in the log layer and in free shear layers. Also, the Spalart-Allmaras model is not grid-dependent like the Baldwin-Barth model. No general turbulence model exists for all engineering applications. The Spalart-Allmaras one-equation model and the Chien k-epsilon model are the preferred turbulence models. Although the two-equation models often better predict the flow field, they may take two to five times the CPU time. Future directions include further benchmarking of the Menter blended k-omega/k-epsilon model and algorithmic improvements to reduce the CPU time of the two-equation models.
DOT National Transportation Integrated Search
2008-04-01
The objective of this research is to develop alternative time-dependent travel demand models of hurricane evacuation travel and to compare the performance of these models with each other and with the state-of-the-practice models in current use. Speci...
Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Youngsoo; Carlberg, Kevin Thomas
Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.
Hanuschkin, Alexander; Kunkel, Susanne; Helias, Moritz; Morrison, Abigail; Diesmann, Markus
2010-01-01
Traditionally, event-driven simulations have been limited to the very restricted class of neuronal models for which the timing of future spikes can be expressed in closed form. Recently, the class of models that is amenable to event-driven simulation has been extended by the development of techniques to accurately calculate firing times for some integrate-and-fire neuron models that do not enable the prediction of future spikes in closed form. The motivation of this development is the general perception that time-driven simulations are imprecise. Here, we demonstrate that a globally time-driven scheme can calculate firing times that cannot be discriminated from those calculated by an event-driven implementation of the same model; moreover, the time-driven scheme incurs lower computational costs. The key insight is that time-driven methods are based on identifying a threshold crossing in the recent past, which can be implemented by a much simpler algorithm than the techniques for predicting future threshold crossings that are necessary for event-driven approaches. As run time is dominated by the cost of the operations performed at each incoming spike, which includes spike prediction in the case of event-driven simulation and retrospective detection in the case of time-driven simulation, the simple time-driven algorithm outperforms the event-driven approaches. Additionally, our method is generally applicable to all commonly used integrate-and-fire neuronal models; we show that a non-linear model employing a standard adaptive solver can reproduce a reference spike train with a high degree of precision. PMID:21031031
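A minimal sketch of the retrospective threshold-crossing idea, assuming a simple leaky integrate-and-fire neuron with illustrative constants (not the paper's models): the crossing is detected after the fact and the spike time refined by linear interpolation.

```python
import numpy as np

dt, tau, v_th, v_reset = 0.1, 10.0, 1.0, 0.0

def step(v, i_ext, t):
    """Advance one grid step; return (v_new, spike_time_or_None)."""
    v_new = v + dt * (-v / tau + i_ext)
    if v_new >= v_th:                    # crossing happened in (t, t + dt]
        frac = (v_th - v) / (v_new - v)  # interpolate the crossing in the past
        return v_reset, t + frac * dt
    return v_new, None

v, spikes = 0.0, []
for k in range(10000):
    v, t_spike = step(v, i_ext=0.15, t=k * dt)
    if t_spike is not None:
        spikes.append(t_spike)
```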
Temporal dynamics of catchment transit times from stable isotope data
NASA Astrophysics Data System (ADS)
Klaus, Julian; Chun, Kwok P.; McGuire, Kevin J.; McDonnell, Jeffrey J.
2015-06-01
Time variant catchment transit time distributions are fundamental descriptors of catchment function but are not yet fully understood, characterized, and modeled. Here we present a new approach for use with standard runoff and tracer data sets that is based on tracking of tracer and age information and time variant catchment mixing. Our new approach is able to deal with nonstationarity of flow paths and catchment mixing, and with an irregularly shaped transit time distribution. The approach extracts information on catchment mixing from the stable isotope time series instead of relying on prior assumptions about mixing or the shape of the transit time distribution. We first demonstrate proof of concept of the approach with artificial data; the Nash-Sutcliffe efficiencies in tracer and instantaneous transit times were >0.9. The model provides very accurate estimates of time variant transit times when the boundary conditions and fluxes are fully known. We then tested the model with real rainfall-runoff flow and isotope tracer time series from the H.J. Andrews Watershed 10 (WS10) in Oregon. Model efficiencies were 0.37 for the 18O modeling for a 2 year time series; the efficiency increased to 0.86 for the second year, underlining the need for long tracer time series with a long overlap of tracer input and output. The approach was able to determine the time variant transit time of WS10 with field data and showed how it follows the storage dynamics and related changes in flow paths, with wet high-flow periods showing clearly shorter transit times than dry low-flow periods.
Mediterranean space-time extremes of wind wave sea states
NASA Astrophysics Data System (ADS)
Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro; Marcello Falcieri, Francesco; Bonaldo, Davide; Bergamasco, Andrea; Benetazzo, Alvise
2014-05-01
Traditionally, wind wave sea states during storms have been observed, modeled, and predicted mostly in the time domain, i.e. at a fixed point. In fact, the standard statistical models used in ocean wave analysis rely on the implicit assumption of long-crested waves. Nevertheless, waves in storms are mainly short-crested. Hence, spatio-temporal features of the wave field are crucial to accurately model the sea state characteristics and to provide reliable predictions, particularly of wave extremes. Indeed, the experimental evidence provided by novel instrumentation, e.g. WASS (Wave Acquisition Stereo System), showed that the maximum sea surface elevation gathered in time over an area, i.e. the space-time extreme, is larger than the one measured in time at a point, i.e. the time extreme. Recently, stochastic models used to estimate maxima of multidimensional Gaussian random fields have been applied to ocean wave statistics. These models are based either on Piterbarg's theorem or on Adler and Taylor's Euler Characteristics approach. Besides a probability of exceedance of a certain threshold, they can provide the expected space-time extreme of a sea state, as long as space-time wave features (i.e. some parameters of the directional variance density spectrum) are known. These models have recently been validated against WASS observations from fixed and moving platforms. In this context, our focus was modeling and predicting extremes of wind waves during storms. To intensively gather space-time extremes data over the Mediterranean region, we used directional spectra provided by the numerical wave model SWAN (Simulating WAves Nearshore). We set up a 6x6 km2 resolution grid covering most of the Mediterranean Sea and forced it with COSMO-I7 high resolution (7x7 km2) hourly wind fields, within the 2007-2013 period. To obtain the space-time features, i.e. the spectral parameters, at each grid node over the 6 simulated years, we developed a modified version of the SWAN model, the SWAN Space-Time (SWAN-ST). SWAN-ST results were post-processed to obtain the expected space-time extremes over the model domain. To this end, we applied the stochastic model of Fedele, developed starting from Adler and Taylor's approach, which we found to be more accurate and versatile than Piterbarg's theorem. The results we obtained provide an alternative view of the Mediterranean extreme wave climate, which could represent the first step towards operational forecasting of space-time wave extremes, on the one hand, and the basis for a novel statistical standard wave model, on the other. These results may benefit marine designers, seafarers and other subjects operating at sea and exposed to the frequent and severe hazard represented by extreme wave conditions.
NASA AVOSS Fast-Time Wake Prediction Models: User's Guide
NASA Technical Reports Server (NTRS)
Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew
2014-01-01
The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset are also provided.
A new molecular evolution model for limited insertion independent of substitution.
Lèbre, Sophie; Michel, Christian J
2013-10-01
We recently introduced a new molecular evolution model called the IDIS model for Insertion Deletion Independent of Substitution [13,14]. In the IDIS model, the three independent processes of substitution, insertion and deletion of residues have constant rates. In order to control genome expansion during evolution, we generalize here the IDIS model by introducing an insertion rate which decreases as the sequence grows and tends to 0 for a maximum sequence length nmax. This new model, called LIIS for Limited Insertion Independent of Substitution, defines a matrix differential equation satisfied by a vector P(t) describing the sequence content in each residue at evolution time t. An analytical solution is obtained for any diagonalizable substitution matrix M. Thus, the LIIS model gives an expression of the sequence content vector P(t) in each residue at evolution time t as a function of the eigenvalues and the eigenvectors of matrix M, the residue insertion rate vector R, the total insertion rate r, the initial and maximum sequence lengths n0 and nmax, respectively, and the sequence content vector P(t0) at initial time t0. The derivation of the analytical solution is much more technical, compared to the IDIS model, as it involves Gauss hypergeometric functions. Several propositions of the LIIS model are derived: proof that the IDIS model is a particular case of the LIIS model when the maximum sequence length nmax tends to infinity, fixed point, time scale, time step and time inversion. Using a relation between the sequence length l and the evolution time t, an expression of the LIIS model as a function of the sequence length l=n(t) is obtained. Formulas for 'insertion only', i.e. when the substitution rates are all equal to 0, are derived at evolution time t and sequence length l. Analytical solutions of the LIIS model are explicitly derived, as a function of either evolution time t or sequence length l, for two classical substitution matrices: the 3-parameter symmetric substitution matrix [12] (LIIS-SYM3) and the HKY asymmetric substitution matrix [9] (LIIS-HKY). An evaluation of the LIIS model (precisely, LIIS-HKY) based on four statistical analyses of the GC content in complete genomes of four prokaryotic taxonomic groups, namely Chlamydiae, Crenarchaeota, Spirochaetes and Thermotogae, shows the expected improvement from the theory of the LIIS model compared to the IDIS model. Copyright © 2013 Elsevier Inc. All rights reserved.
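A heavily simplified numerical sketch of the qualitative behavior follows: a toy two-residue system with an insertion rate that vanishes at nmax. This is not the paper's equation system (which is solved analytically via Gauss hypergeometric functions); the matrix, rates, and lengths are made up.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy substitution matrix M (columns sum to zero, so composition is
# conserved) plus insertion toward composition R at a rate that shrinks
# as the sequence length n approaches n_max.
M = np.array([[-0.3, 0.1],
              [0.3, -0.1]])
R = np.array([0.6, 0.4])
r, n_max = 0.05, 1000.0

def rhs(t, y):
    P, n = y[:2], y[2]
    r_eff = r * (1.0 - n / n_max)      # insertion rate -> 0 at n_max
    dP = M @ P + r_eff * (R - P)       # substitution + dilution by insertion
    dn = r_eff * n                     # sequence length growth
    return np.concatenate([dP, [dn]])

sol = solve_ivp(rhs, (0.0, 400.0), [0.5, 0.5, 100.0])
P_final, n_final = sol.y[:2, -1], sol.y[2, -1]
```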
Ludescher, Josef; Bunde, Armin
2014-12-01
We consider representative financial records (stocks and indices) on time scales between one minute and one day, as well as historical monthly data sets, and show that the distribution P_Q(r) of the interoccurrence times r between losses below a negative threshold -Q, for fixed mean interoccurrence times R_Q in multiples of the corresponding time resolutions, can be described on all time scales by the same q-exponential, P_Q(r) ∝ [1 + (q-1)βr]^(-1/(q-1)). We propose that the asset- and time-scale-independent analytic form of P_Q(r) can be regarded as an additional stylized fact of the financial markets and represents a nontrivial test for market models. We analyze the distribution P_Q(r) as well as the autocorrelation C_Q(s) of the interoccurrence times for three market models: (i) multiplicative random cascades, (ii) multifractal random walks, and (iii) the generalized autoregressive conditional heteroskedasticity [GARCH(1,1)] model. We find that only one of the considered models, the multifractal random walk model, approximately reproduces the q-exponential form of P_Q(r) and the power-law decay of C_Q(s).
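A small sketch of the measurement-and-fit procedure on synthetic returns; the threshold, binning, and initial guesses are assumptions, not the paper's settings:

```python
import numpy as np
from scipy.optimize import curve_fit

# Measure interoccurrence times r of losses below -Q and fit a
# q-exponential to their histogram.
rng = np.random.default_rng(2)
returns = rng.standard_normal(100_000)   # stand-in return series
Q = 2.0
idx = np.flatnonzero(returns < -Q)
r = np.diff(idx)                         # interoccurrence times

def q_exp(r, a, q, beta):                # requires q > 1
    return a * (1.0 + (q - 1.0) * beta * r) ** (-1.0 / (q - 1.0))

hist, edges = np.histogram(r, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
popt, _ = curve_fit(q_exp, centers, hist, p0=(hist.max(), 1.3, 0.05),
                    bounds=([0.0, 1.001, 0.0], [np.inf, 3.0, 10.0]))
```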
Reduced rank models for travel time estimation of low order mode pulses.
Chandrayadula, Tarun K; Wage, Kathleen E; Worcester, Peter F; Dzieciuch, Matthew A; Mercer, James A; Andrew, Rex K; Howe, Bruce M
2013-10-01
Mode travel time estimation in the presence of internal waves (IWs) is a challenging problem. IWs perturb the sound speed, which results in travel time wander and mode scattering. A standard approach to travel time estimation is to pulse compress the broadband signal, pick the peak of the compressed time series, and average the peak time over multiple receptions to reduce variance. The peak-picking approach implicitly assumes there is a single strong arrival and does not perform well when there are multiple arrivals due to scattering. This article presents a statistical model for the scattered mode arrivals and uses the model to design improved travel time estimators. The model is based on an Empirical Orthogonal Function (EOF) analysis of the mode time series. Range-dependent simulations and data from the Long-range Ocean Acoustic Propagation Experiment (LOAPEX) indicate that the modes are represented by a small number of EOFs. The reduced-rank EOF model is used to construct a travel time estimator based on the Matched Subspace Detector (MSD). Analysis of simulation and experimental data shows that the MSDs are more robust to IW scattering than peak picking. The simulation analysis also highlights how IWs affect the mode excitation by the source.
NASA Astrophysics Data System (ADS)
Talaghat, Mohammad Reza; Jokar, Seyyed Mohammad
2018-03-01
The induction time is the interval used to detect initial hydrate formation, counted from the moment the stirrer is turned on until the first detection of hydrate formation. The main objective of the present work is to predict and measure the induction time of methane hydrate formation in the presence or absence of tetrahydrofuran (THF) as a promoter in a flow loop system. A laboratory flow mini-loop apparatus was set up to measure the induction time of methane hydrate formation. The induction time is predicted using the developed Kashchiev and Firoozabadi model and a modified Natarajan model for a flow loop system. Furthermore, the effects of the volumetric flow rate of the fluid on the induction time were investigated. The results of the models were compared with experimental data. They show that the induction time of hydrate formation in the presence of THF is very short at high pressure and high volumetric flow rate of the fluid, and decreases with increasing pressure and liquid volumetric flow rate. It is also shown that the modified Natarajan model is more accurate than the Kashchiev and Firoozabadi model in predicting the induction time.
Time series modeling by a regression approach based on a latent process.
Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice
2009-01-01
Time series are used in many domains, including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows smooth or abrupt switching among different polynomial regression models. The model parameters are estimated by the maximum likelihood method, performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed, comparing it with two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach was applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
NASA Astrophysics Data System (ADS)
Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang
2017-10-01
Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics, and study climate change. However, several sensors with wide spatial coverage and high observation frequency are designed with a large field of view (FOV), which causes variations in the sun-target-sensor geometry in time-series reflectance data. In this study, on the basis of the semi-empirical kernel-driven BRDF model, a new semi-empirical model was proposed to normalize the sun-target-sensor geometry of remote sensing images. To evaluate the proposed model, bidirectional reflectances under different canopy growth conditions simulated by the Discrete Anisotropic Radiative Transfer (DART) model were used. The semi-empirical model was first fitted using all simulated bidirectional reflectances. Experimental results showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated values. Then, MODIS time-series reflectance data were normalized to a common sun-target-sensor geometry by the proposed model. The experimental results showed that the proposed model yielded good fits between the observed and estimated values. The noise-like fluctuations in the time-series reflectance data were also reduced after the sun-target-sensor normalization process.
Inflow forecasting model construction with stochastic time series for coordinated dam operation
NASA Astrophysics Data System (ADS)
Kim, T.; Jung, Y.; Kim, H.; Heo, J. H.
2014-12-01
Dam inflow forecasting is one of the most important tasks in dam operation for effective water resources management and control. In general, dam inflow forecasting with a stochastic time series model is applicable when the data are stationary, because most stochastic processes are based on stationarity. However, recent hydrological data no longer satisfy stationarity because of climate change. Therefore, a stochastic time series model that can consider seasonality and trend in the data series, the SARIMAX (Seasonal AutoRegressive Integrated Moving Average with eXternal variable) model, was constructed in this study. The SARIMAX model can improve on standard stochastic time series models by considering nonstationary components and an external variable such as precipitation. For application, models were constructed for four coordinated dams on the Han River in South Korea with monthly time series data. As a result, the models for each dam showed similar performance, and it would be possible to use the model for coordinated dam operation. Acknowledgement: This research was supported by a grant 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-NH-12-57] from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
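A minimal sketch of fitting such a model with the statsmodels SARIMAX implementation; the data are synthetic and the model orders are illustrative, not those selected in the study:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Monthly inflow with an annual cycle, modeled with precipitation as
# the exogenous regressor.
rng = np.random.default_rng(3)
n = 240
precip = rng.gamma(2.0, 50.0, n)
inflow = (0.4 * precip + 30 * np.sin(2 * np.pi * np.arange(n) / 12)
          + rng.normal(0, 5, n))

model = SARIMAX(inflow, exog=precip,
                order=(1, 0, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=12, exog=precip[-12:])  # next 12 months
```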
Seismic wavefield modeling based on time-domain symplectic and Fourier finite-difference method
NASA Astrophysics Data System (ADS)
Fang, Gang; Ba, Jing; Liu, Xin-xin; Zhu, Kun; Liu, Guo-Chang
2017-06-01
Seismic wavefield modeling is important for improving seismic data processing and interpretation. Wavefield calculations can become unstable when forward modeling of seismic waves uses large time steps over long times. Based on the Hamiltonian expression of the acoustic wave equation, we propose a structure-preserving method for seismic wavefield modeling that applies the symplectic finite-difference method on time grids and the Fourier finite-difference method on space grids to solve the acoustic wave equation. The proposed method, called the symplectic Fourier finite-difference (symplectic FFD) method, offers high computational accuracy and improved computational stability. Using the acoustic approximation, we extend the method to anisotropic media. We discuss the calculations in the symplectic FFD method for seismic wavefield modeling of isotropic and anisotropic media, and use the BP salt model and BP TTI model to test the proposed method. The numerical examples suggest that the proposed method can be used in seismic modeling of strongly variable velocities, offering high computational accuracy and low numerical dispersion. The symplectic FFD method suppresses the residual qSV wave in seismic modeling of anisotropic media and maintains the stability of wavefield propagation for large time steps.
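A 1-D toy sketch of the two ingredients, a symplectic Stormer-Verlet time step and a Fourier (pseudospectral) spatial derivative; the constants are illustrative and this is not the paper's full FFD scheme:

```python
import numpy as np

n, dx, dt, c = 256, 10.0, 1e-3, 3000.0
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)      # angular wavenumbers

def laplacian(u):
    """Fourier evaluation of d^2u/dx^2."""
    return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))

x = np.arange(n) * dx
u = np.exp(-((x - n * dx / 2) ** 2) / (2 * 200.0 ** 2))  # initial pulse
v = np.zeros(n)                                          # du/dt

for _ in range(2000):            # Stormer-Verlet: a symplectic integrator
    v += 0.5 * dt * c ** 2 * laplacian(u)
    u += dt * v
    v += 0.5 * dt * c ** 2 * laplacian(u)
```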
Igarashi, Jun; Shouno, Osamu; Fukai, Tomoki; Tsujino, Hiroshi
2011-11-01
Real-time simulation of a biologically realistic spiking neural network is necessary for evaluation of its capacity to interact with real environments. However, the real-time simulation of such a neural network is difficult due to its high computational costs that arise from two factors: (1) vast network size and (2) the complicated dynamics of biologically realistic neurons. In order to address these problems, mainly the latter, we chose to use general purpose computing on graphics processing units (GPGPUs) for simulation of such a neural network, taking advantage of the powerful computational capability of a graphics processing unit (GPU). As a target for real-time simulation, we used a model of the basal ganglia that has been developed according to electrophysiological and anatomical knowledge. The model consists of heterogeneous populations of 370 spiking model neurons, including computationally heavy conductance-based models, connected by 11,002 synapses. Simulation of the model has not yet been performed in real-time using a general computing server. By parallelization of the model on the NVIDIA Geforce GTX 280 GPU in data-parallel and task-parallel fashion, faster-than-real-time simulation was robustly realized with only one-third of the GPU's total computational resources. Furthermore, we used the GPU's full computational resources to perform faster-than-real-time simulation of three instances of the basal ganglia model; these instances consisted of 1100 neurons and 33,006 synapses and were synchronized at each calculation step. Finally, we developed software for simultaneous visualization of faster-than-real-time simulation output. These results suggest the potential power of GPGPU techniques in real-time simulation of realistic neural networks. Copyright © 2011 Elsevier Ltd. All rights reserved.
Fast Algorithms for Mining Co-evolving Time Series
2011-09-01
Keogh et al., 2001, 2004] and (b) forecasting, like an autoregressive integrated moving average model (ARIMA) and related methods [Box et al., 1994]... computing hardware? We develop models to mine time series with missing values, to extract compact representations from time sequences, to segment the... sequences, and to do forecasting. For large scale data, we propose algorithms for learning time series models, in particular, including Linear Dynamical
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
2017-10-31
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here a novel approach is to model input power noise with time-correlated stochastic fluctuations, and integrate them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
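A minimal sketch of one way to realize time-correlated input power noise, using an Ornstein-Uhlenbeck process propagated through an ensemble; the parameters are assumptions, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(4)
dt, tau, sigma, p_mean = 0.1, 5.0, 0.02, 1.0   # step, correlation time, noise
n_steps, n_ens = 500, 50

p = np.full(n_ens, p_mean)                 # ensemble of input powers
a = np.exp(-dt / tau)                      # exact OU discretization factor
s = sigma * np.sqrt(1.0 - a ** 2)

history = np.empty((n_steps, n_ens))
for t in range(n_steps):
    p = p_mean + a * (p - p_mean) + s * rng.standard_normal(n_ens)
    history[t] = p                         # would feed the network forecast
```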
Queueing analysis of a canonical model of real-time multiprocessors
NASA Technical Reports Server (NTRS)
Krishna, C. M.; Shin, K. G.
1983-01-01
A logical classification of multiprocessor structures from the point of view of control applications is presented, along with a computation of the response time distribution for a canonical model of a real-time multiprocessor. The multiprocessor is approximated by a blocking model. Two separate models are derived: one from the system's point of view, and the other from the point of view of an incoming task.
Markov Chain Models for Stochastic Behavior in Resonance Overlap Regions
NASA Astrophysics Data System (ADS)
McCarthy, Morgan; Quillen, Alice
2018-01-01
We aim to predict lifetimes of particles in chaotic zones where resonances overlap. A continuous-time Markov chain model is constructed using mean motion resonance libration timescales to estimate transition times between resonances. The model is applied to diffusion in the co-rotation region of a planet. For particles begun at low eccentricity, the model is effective for early diffusion, but not at later times when particles experience close encounters with the planet.
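A small illustrative continuous-time Markov chain in the spirit described, with made-up transition rates standing in for inverse libration timescales:

```python
import numpy as np

rng = np.random.default_rng(5)
rates = np.array([[0.0, 0.02, 0.0],
                  [0.01, 0.0, 0.03],
                  [0.0, 0.02, 0.0]])   # rates[i, j]: state i -> j per unit time

def lifetime(state=0, absorbing=2, t_max=1e5):
    """Time until the chain reaches the absorbing state (e.g. removal)."""
    t = 0.0
    while state != absorbing and t < t_max:
        out = rates[state]
        total = out.sum()
        t += rng.exponential(1.0 / total)             # waiting time in state
        state = rng.choice(len(out), p=out / total)   # next resonance
    return t

times = [lifetime() for _ in range(1000)]             # lifetime distribution
```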
Improved Regional Seismic Event Locations Using 3-D Velocity Models
1999-12-15
regional velocity model to estimate event hypocenters. Travel times for the regional phases are calculated using a sophisticated eikonal finite... can greatly improve estimates of event locations. Our algorithm calculates travel times using a finite difference approximation of the eikonal... such as IASP91 or J-B. 3-D velocity models require more sophisticated travel time modeling routines; thus, we use a 3-D eikonal equation solver
Tidal dissipation in a viscoelastic planet
NASA Technical Reports Server (NTRS)
Ross, M.; Schubert, G.
1986-01-01
Tidal dissipation is examined using Maxwell, standard linear solid (SLS), and Kelvin-Voigt models, and viscosity parameters are derived from the models that yield the amount of dissipation previously calculated for a moon model with Q = 100 in a hypothetical orbit closer to the earth. The relevance of these models is then assessed for simulating planetary tidal responses. Viscosities of 10^14 and 10^18 Pa s for the Kelvin-Voigt and Maxwell rheologies, respectively, are needed to match the dissipation rate calculated using the Q approach with a quality factor of 100. The SLS model requires a short-time viscosity of 3 x 10^17 Pa s to match the Q = 100 dissipation rate, independent of the model's relaxation strength. Since Q = 100 is considered a representative value for the interiors of terrestrial planets, it is proposed that the derived viscosities should characterize planetary materials. However, it is shown that neither the Kelvin-Voigt nor the SLS model simulates the behavior of real planetary materials on long time scales. The Maxwell model, by contrast, behaves realistically on both long and short time scales. The inferred Maxwell viscosity, corresponding to the time scale of days, is several times smaller than the longer time scale (greater than or equal to 10^14 years) viscosity of the earth's mantle.
Fujii, Keisuke; Shinya, Masahiro; Yamashita, Daichi; Kouzaki, Motoki; Oda, Shingo
2014-01-01
We previously estimated the timing when ball game defenders detect relevant information through visual input for reacting to an attacker's running direction after a cutting manoeuvre, called cue timing. The purpose of this study was to investigate what specific information is relevant for defenders, and how defenders process this information to decide on their opponents' running direction. In this study, we hypothesised that defenders extract information regarding the position and velocity of the attackers' centre of mass (CoM) and the contact foot. We used a model which simulates the future trajectory of the opponent's CoM based upon an inverted pendulum movement. The hypothesis was tested by comparing observed defender's cue timing, model-estimated cue timing using the inverted pendulum model (IPM cue timing) and cue timing using only the current CoM position (CoM cue timing). The IPM cue timing was defined as the time when the simulated pendulum falls leftward or rightward given the initial values for position and velocity of the CoM and the contact foot at the time. The model-estimated IPM cue timing and the empirically observed defender's cue timing were comparable in median value and were significantly correlated, whereas the CoM cue timing was significantly more delayed than the IPM and the defender's cue timings. Based on these results, we discuss the possibility that defenders may be able to anticipate the future direction of an attacker by forwardly simulating inverted pendulum movement.
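A minimal sketch of the inverted pendulum simulation idea, assuming a linearized pendulum and illustrative constants (not the study's parameters): given the CoM offset and velocity relative to the contact foot, it reports when and to which side the pendulum falls.

```python
import numpy as np

g, l = 9.81, 1.0
omega = np.sqrt(g / l)

def ipm_cue(x0, v0, threshold=0.3, dt=0.001, t_max=2.0):
    """Return (fall_time, direction) for initial CoM offset x0, velocity v0."""
    x, v, t = x0, v0, 0.0
    while abs(x) < threshold and t < t_max:   # linearized fall: x'' = (g/l) x
        v += dt * omega ** 2 * x
        x += dt * v
        t += dt
    return t, np.sign(x)

t_fall, direction = ipm_cue(x0=0.05, v0=-0.2)
```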
Pasma, Jantsje H.; Assländer, Lorenz; van Kordelaar, Joost; de Kam, Digna; Mergner, Thomas; Schouten, Alfred C.
2018-01-01
The Independent Channel (IC) model is a commonly used linear balance control model in the frequency domain to analyze human balance control using system identification and parameter estimation. The IC model is a rudimentary and noise-free description of balance behavior in the frequency domain, where a stable model representation is not guaranteed. In this study, we conducted firstly time-domain simulations with added noise, and secondly robot experiments by implementing the IC model in a real-world robot (PostuRob II) to test the validity and stability of the model in the time domain and for real world situations. Balance behavior of seven healthy participants was measured during upright stance by applying pseudorandom continuous support surface rotations. System identification and parameter estimation were used to describe the balance behavior with the IC model in the frequency domain. The IC model with the estimated parameters from human experiments was implemented in Simulink for computer simulations including noise in the time domain and robot experiments using the humanoid robot PostuRob II. Again, system identification and parameter estimation were used to describe the simulated balance behavior. Time series, Frequency Response Functions, and estimated parameters from human experiments, computer simulations, and robot experiments were compared with each other. The computer simulations showed similar balance behavior and estimated control parameters compared to the human experiments, in the time and frequency domain. Also, the IC model was able to control the humanoid robot by keeping it upright, but showed small differences compared to the human experiments in the time and frequency domain, especially at high frequencies. We conclude that the IC model, a descriptive model in the frequency domain, can imitate human balance behavior also in the time domain, both in computer simulations with added noise and real world situations with a humanoid robot. This provides further evidence that the IC model is a valid description of human balance control. PMID:29615886
Draft Forecasts from Real-Time Runs of Physics-Based Models - A Road to the Future
NASA Technical Reports Server (NTRS)
Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha
2008-01-01
The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products, which may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step to make real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools from model output.
Road safety forecasts in five European countries using structural time series models.
Antoniou, Constantinos; Papadimitriou, Eleonora; Yannis, George
2014-01-01
Modeling road safety development is a complex task and needs to consider both the quantifiable impact of specific parameters as well as the underlying trends that cannot always be measured or observed. The objective of this research is to apply structural time series models for obtaining reliable medium- to long-term forecasts of road traffic fatality risk using data from 5 countries with different characteristics from all over Europe (Cyprus, Greece, Hungary, Norway, and Switzerland). Two structural time series models are considered: (1) the local linear trend model and (2) the latent risk time series model. Furthermore, a structured decision tree for the selection of the applicable model for each situation (developed within the Road Safety Data, Collection, Transfer and Analysis [DaCoTA] research project, cofunded by the European Commission) is outlined. First, the fatality and exposure data that are used for the development of the models are presented and explored. Then, the modeling process is presented, including the model selection process, introduction of intervention variables, and development of mobility scenarios. The forecasts using the developed models appear to be realistic and within acceptable confidence intervals. The proposed methodology proved to be very efficient for handling different cases of data availability and quality, providing an appropriate alternative from the family of structural time series models in each country. A concluding section providing perspectives and directions for future research is presented.
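A minimal sketch of a local linear trend model fitted with statsmodels on synthetic fatality-risk data (the latent risk model would additionally include an exposure component; this only shows the simpler of the two models):

```python
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(6)
years = 30
trend = np.linspace(8.0, 5.0, years)        # declining fatality risk
y = trend + rng.normal(0, 0.3, years)       # stand-in annual observations

model = UnobservedComponents(y, level='local linear trend')
fit = model.fit(disp=False)
forecast = fit.forecast(steps=10)            # medium-term forecast
```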
Improved Short-Term Clock Prediction Method for Real-Time Positioning.
Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan
2017-06-06
The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that must be predicted over a short time to compensate for communication delays or data gaps. Unlike orbit corrections, clock corrections are difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in a sliding window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) using observations of different lengths. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in the application of real-time PPP. It is also found that there is a positive correlation between the prediction accuracy and the short-term stability of on-board clocks. Compared with the accuracy of the traditional linear model, the accuracy of static PPP using the new model's 2-h prediction clock in the N, E, and U directions is improved by about 50%. Furthermore, the static PPP accuracy with 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time stream, the accuracy of the kinematic PPP solution using the 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
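A rough sketch of the trend-plus-periodic-terms idea on synthetic clock offsets; the sampling rate, window, and number of FFT terms kept are assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0, 6 * 3600, 30.0)            # 6 h of offsets at 30 s sampling
clock = (1e-9 * t + 2e-8 * np.sin(2 * np.pi * t / 3600)
         + rng.normal(0, 2e-9, t.size))

coef = np.polyfit(t, clock, 1)               # linear trend
resid = clock - np.polyval(coef, t)

spec = np.fft.rfft(resid)                    # dominant periodic residuals
freqs = np.fft.rfftfreq(t.size, d=30.0)
top = np.argsort(np.abs(spec))[-2:]          # keep two strongest terms

t_pred = t[-1] + np.arange(1, 361) * 30.0    # extrapolate the next 3 h
pred = np.polyval(coef, t_pred)
for i in top:
    amp, phase = 2 * np.abs(spec[i]) / t.size, np.angle(spec[i])
    pred += amp * np.cos(2 * np.pi * freqs[i] * t_pred + phase)
```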
Predicting mortality over different time horizons: which data elements are needed?
Goldstein, Benjamin A; Pencina, Michael J; Montez-Rath, Maria E; Winkelmayer, Wolfgang C
2017-01-01
Electronic health records (EHRs) are a resource for "big data" analytics, containing a variety of data elements. We investigate how different categories of information contribute to prediction of mortality over different time horizons among patients undergoing hemodialysis treatment. We derived prediction models for mortality over 7 time horizons, using EHR data on older patients from a national chain of dialysis clinics linked with administrative data, with LASSO (least absolute shrinkage and selection operator) regression. We assessed how different categories of information relate to risk assessment and compared discrete-time models to time-to-event models. The best predictors used all the available data (c-statistic ranged from 0.72 to 0.76), with stronger models in the near term. While different variable groups showed different utility, exclusion of any particular group did not lead to a meaningfully different risk assessment. Discrete-time models performed better than time-to-event models. Different variable groups were predictive over different time horizons, with vital signs most predictive for near-term mortality and demographics and comorbidities more important for long-term mortality. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
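A minimal sketch of an L1-penalized (LASSO-style) logistic model for one discrete horizon on synthetic data; scikit-learn is used here for illustration, not the study's exact pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Variable groups (vitals, comorbidities, ...) would occupy blocks of
# columns in X; only a few columns carry true signal in this toy setup.
rng = np.random.default_rng(8)
X = rng.standard_normal((2000, 40))
beta = np.zeros(40)
beta[:5] = 0.8
p = 1 / (1 + np.exp(-(X @ beta - 1.0)))
y = rng.binomial(1, p)                       # death within the horizon

clf = LogisticRegression(penalty='l1', solver='liblinear', C=0.1)
clf.fit(X, y)
n_selected = np.count_nonzero(clf.coef_)     # variables kept by the penalty
```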
Molenaar, Dylan; de Boeck, Paul
2018-06-01
In item response theory modeling of responses and response times, it is commonly assumed that the item responses have the same characteristics across the response times. However, heterogeneity might arise in the data if subjects resort to different response processes when solving the test items. These differences may be within-subject effects, that is, a subject might use a certain process on some of the items and a different process with different item characteristics on the other items. If the probability of using one process over the other process depends on the subject's response time, within-subject heterogeneity of the item characteristics across the response times arises. In this paper, the method of response mixture modeling is presented to account for such heterogeneity. Contrary to traditional mixture modeling where the full response vectors are classified, response mixture modeling involves classification of the individual elements in the response vector. In a simulation study, the response mixture model is shown to be viable in terms of parameter recovery. In addition, the response mixture model is applied to a real dataset to illustrate its use in investigating within-subject heterogeneity in the item characteristics across response times.
Mesoscale Simulation Data for Initializing Fast-Time Wake Transport and Decay Models
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.; Vanvalkenburg, Randal L.; Pruis, Mathew J.; LimonDuparcmeur, Fanny M.
2012-01-01
The fast-time wake transport and decay models require vertical profiles of crosswinds, potential temperature and the eddy dissipation rate as initial conditions. These inputs are normally obtained from various field sensors. In case of data-denied scenarios or operational use, these initial conditions can be provided by mesoscale model simulations. In this study, the vertical profiles of potential temperature from a mesoscale model were used as initial conditions for the fast-time wake models. The mesoscale model simulations were compared against available observations and the wake model predictions were compared with the Lidar measurements from three wake vortex field experiments.
Catalytic ignition model in a monolithic reactor with in-depth reaction
NASA Technical Reports Server (NTRS)
Tien, Ta-Ching; Tien, James S.
1990-01-01
Two transient models have been developed to study the catalytic ignition in a monolithic catalytic reactor. The special feature in these models is the inclusion of thermal and species structures in the porous catalytic layer. There are many time scales involved in the catalytic ignition problem, and these two models are developed with different time scales. In the full transient model, the equations are non-dimensionalized by the shortest time scale (mass diffusion across the catalytic layer). It is therefore accurate but is computationally costly. In the energy-integral model, only the slowest process (solid heat-up) is taken as nonsteady. It is approximate but computationally efficient. In the computations performed, the catalyst is platinum and the reactants are rich mixtures of hydrogen and oxygen. One-step global chemical reaction rates are used for both gas-phase homogeneous reaction and catalytic heterogeneous reaction. The computed results reveal the transient ignition processes in detail, including the structure variation with time in the reactive catalytic layer. An ignition map using reactor length and catalyst loading is constructed. The comparison of computed results between the two transient models verifies the applicability of the energy-integral model when the time is greater than the second largest time scale of the system. It also suggests that a proper combined use of the two models can catch all the transient phenomena while minimizing the computational cost.
NASA Astrophysics Data System (ADS)
Drew, G. H.; Smith, R.; Gerard, V.; Burge, C.; Lowe, M.; Kinnersley, R.; Sneath, R.; Longhurst, P. J.
Odour emissions are episodic, characterised by periods of high emission rates interspersed with periods of low emissions. It is frequently the short-term, high-concentration peaks that result in annoyance in the surrounding population. Dispersion modelling is accepted as a useful tool for odour impact assessment, and two approaches can be adopted. The first approach, modelling the hourly average concentration, can underestimate total odour concentration peaks, resulting in annoyance and complaints. The second modelling approach involves the use of short averaging times. This study assesses the appropriateness of using different averaging times to model the dispersion of odour from a landfill site. We also examine perception of odour in the community in conjunction with the modelled odour dispersal, using community monitors to record incidents of odour. The results show that with the shorter averaging times, the modelled pattern of dispersal reflects the pattern of observed odour incidents recorded in the community monitoring database, with the modelled odour dispersing further in a north-easterly direction. Therefore, the current regulatory method of dispersion modelling, using hourly averaging times, is less successful at capturing peak concentrations and does not capture the pattern of odour emission indicated by the community monitoring database. The use of short averaging times is therefore of greater value in predicting the likely nuisance impact of an odour source and in framing appropriate regulatory controls.
NASA Astrophysics Data System (ADS)
Passarelli, Luigi; Sanso, Bruno; Laura, Sandri; Marzocchi, Warner
2010-05-01
One of the main goals in volcanology is to forecast volcanic eruptions. A sound forecast should be made before the onset of a volcanic eruption, using the data available at that time, with the aim of mitigating the volcanic risk associated with the event. In other words, models implemented for forecasting purposes have to provide "forward" forecasts and should avoid merely "retrospective" fitting of the available data. In this perspective, the main idea of the present model is to forecast the next volcanic eruption after the end of the last one, using only the data available at that time. We focus our attention on volcanoes with an open conduit regime and high eruption frequency. We assume a generalization of the classical time predictable model to describe the eruptive behavior of open conduit volcanoes, and we use a Bayesian hierarchical model to make probabilistic forecasts. We apply the model to Kilauea volcano eruptive data and Mt. Etna volcano flank eruption data. The aims of this model are: 1) to test whether or not the Kilauea and Mt. Etna volcanoes follow time predictable behavior; 2) to discuss the volcanological implications of the inferred time predictable model parameters; 3) to compare the forecast capabilities of this model with other models in the literature. The results obtained using the MCMC sampling algorithm show that both volcanoes follow time predictable behavior. The numerical values of the inferred time predictable model parameters suggest that the amount of erupted volume could change the dynamics of the magma chamber refilling process during the repose period. The probability gain of this model compared with other models in the literature is appreciably greater than zero, meaning that our model produces better forecasts than previous models and could be used in a probabilistic volcanic hazard assessment scheme. In this perspective, the eruption probabilities given by our model for Mt. Etna flank eruptions are published on an internet website and are updated after any change in the eruptive activity.
LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koller, Josep; Reeves, Geoffrey D; Friedel, Reiner H W
2008-01-01
Space weather modeling, forecasts, and predictions, especially for the radiation belts in the inner magnetosphere, require detailed information about the Earth's magnetic field. Results depend on the magnetic field model and the L* (pron. L-star) values which are used to describe particle drift shells. Space weather models require integrating particle motions along trajectories that encircle the Earth. Numerical integration typically takes on the order of 10^5 calls to a magnetic field model, which makes the L* calculations very slow, particularly when using a dynamic, more accurate magnetic field model. Researchers currently tend to pick simplistic models over more accurate ones, risking large inaccuracies and even wrong conclusions. For example, magnetic field models affect the calculation of electron phase space density by applying adiabatic invariants including the drift shell value L*. We present here a new method using a surrogate model based on a neural network technique to replace the time-consuming L* calculations made with modern magnetic field models. The advantage of surrogate models (or meta-models) is that they can compute the same output in a fraction of the time while adding only a marginal error. Our drift shell model LANL* (Los Alamos National Lab L-star) is based on L* calculations using the TSK03 model. The surrogate model has currently been tested and validated only for geosynchronous regions, but the method is generally applicable to any satellite orbit. Computations with the new model are several million times faster compared to the standard integration method while adding less than 1% error. Currently, real-time applications for forecasting and even nowcasting inner magnetospheric space weather are limited partly due to the long computing time of accurate L* values. Without them, real-time applications are limited in accuracy. Reanalysis applications of past conditions in the inner magnetosphere are used to understand physical processes and their effects. Without sufficiently accurate L* values, the interpretation of reanalysis results becomes difficult and uncertain. However, with a method that can calculate accurate L* values orders of magnitude faster, analyzing whole solar cycles worth of data suddenly becomes feasible.
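A toy sketch of the surrogate idea: a small neural network regressor mapping made-up magnetospheric inputs to L*. The features, data, and network size are placeholders, not the LANL* training set or architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(9)
X = rng.standard_normal((5000, 6))    # stand-ins for e.g. Kp, Dst, position
L_star = (6.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1]
          + 0.1 * rng.standard_normal(5000))   # synthetic target values

surrogate = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000)
surrogate.fit(X[:4000], L_star[:4000])
err = np.abs(surrogate.predict(X[4000:]) - L_star[4000:]).mean()
```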
Joint modelling of repeated measurement and time-to-event data: an introductory tutorial.
Asar, Özgür; Ritchie, James; Kalra, Philip A; Diggle, Peter J
2015-02-01
The term 'joint modelling' is used in the statistical literature to refer to methods for simultaneously analysing longitudinal measurement outcomes, also called repeated measurement data, and time-to-event outcomes, also called survival data. A typical example from nephrology is a study in which the data from each participant consist of repeated estimated glomerular filtration rate (eGFR) measurements and time to initiation of renal replacement therapy (RRT). Joint models typically combine linear mixed effects models for repeated measurements and Cox models for censored survival outcomes. Our aim in this paper is to present an introductory tutorial on joint modelling methods, with a case study in nephrology. We describe the development of the joint modelling framework and compare the results with those obtained by the more widely used approaches of conducting separate analyses of the repeated measurements and survival times based on a linear mixed effects model and a Cox model, respectively. Our case study concerns a data set from the Chronic Renal Insufficiency Standards Implementation Study (CRISIS). We also provide details of our open-source software implementation to allow others to replicate and/or modify our analysis. The results for the conventional linear mixed effects model and the longitudinal component of the joint models were found to be similar. However, there were considerable differences between the results for the Cox model with time-varying covariate and the time-to-event component of the joint model. For example, the relationship between kidney function as measured by eGFR and the hazard for initiation of RRT was significantly underestimated by the Cox model that treats eGFR as a time-varying covariate, because the Cox model does not take measurement error in eGFR into account. Joint models should be preferred for simultaneous analyses of repeated measurement and survival data, especially when the former is measured with error and the association between the underlying error-free measurement process and the hazard for survival is of scientific interest. © The Author 2015; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
NASA Astrophysics Data System (ADS)
Liu, Z.; Rajib, M. A.; Jafarzadegan, K.; Merwade, V.
2015-12-01
Application of land surface/hydrologic models within an operational flood forecasting system can provide the probable time of occurrence and magnitude of streamflow at specific locations along a stream. Creating a time-varying spatial extent of flood inundation and depth requires the use of a hydraulic or hydrodynamic model. Models differ in how they represent river geometry and surface roughness, which can lead to different outputs depending on the particular model being used. The result from a single hydraulic model provides just one possible realization of the flood extent, without capturing the uncertainty associated with the input or the model parameters. The objective of this study is to compare multiple hydraulic models toward generating ensemble flood inundation extents. Specifically, the relative performances of four hydraulic models, AutoRoute, HEC-RAS, HEC-RAS 2D, and LISFLOOD, are evaluated under different geophysical conditions in several locations across the United States. By using streamflow output from the same hydrologic model (SWAT in this case), hydraulic simulations are conducted for three configurations: (i) hindcasting mode, using past observed weather data at a daily time scale, in which models are calibrated against USGS streamflow observations; (ii) validation mode, using near real-time weather data at a sub-daily time scale; and (iii) design mode, with extreme streamflow data having specific return periods. Model-generated inundation maps for observed flood events from both hindcasting and validation modes are compared with remotely sensed images, whereas the design mode outcomes are compared with corresponding FEMA-generated flood hazard maps. The comparisons presented here will give insights on the probable model-specific nature of biases and their relative advantages/disadvantages as components of an operational flood forecasting system.
Jomah, N D; Ojo, J F; Odigie, E A; Olugasa, B O
2014-12-01
The post-civil war records of dog bite injuries (DBI) and rabies-like illness (RLI) among humans in Liberia are a vital epidemiological resource for developing a predictive model to guide the allocation of resources towards human rabies control. Although DBI and RLI are high, they are largely under-reported. The objective of this study was to develop a time model of the case-pattern and apply it to derive predictors of the time-trend point distribution of DBI-RLI cases. Retrospective data covering 6 years of countrywide DBI distribution among humans were converted to quarterly series using a transformation technique of the Minimizing Squared First Difference statistic. The generated dataset was used to train a time-trend model of the DBI-RLI syndrome in Liberia. An additive deterministic time-trend model was selected owing to its performance compared with a multiplicative model of trend and seasonal movement. Parameter predictors were estimated by the least squares method to predict DBI cases over a prospective 4-year period covering 2014-2017. The two-stage predictive model of DBI case-pattern between 2014 and 2017 was characterised by a uniform upward trend within Liberia's coastal and hinterland counties over the forecast period. This paper describes a translational application of the time-trend distribution pattern of the 2008-2013 DBI epidemics reported in Liberia, on which a predictive model was developed. A computationally feasible two-stage time-trend permutation approach is proposed to estimate the time-trend parameters and conduct predictive inference on DBI-RLI in Liberia.
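As an illustration of the estimation step, the sketch below fits an additive trend-plus-quarterly-seasonal model by ordinary least squares and extrapolates 16 quarters (the 2014-2017 horizon); the quarterly counts are fabricated placeholders, not the Liberian data.

```python
# Minimal additive trend + seasonal model fitted by least squares.
import numpy as np

y = np.array([30, 42, 38, 35, 34, 47, 44, 40, 39, 52, 50, 46,
              45, 58, 55, 52, 50, 64, 61, 57, 56, 70, 66, 63], float)  # 6 years, quarterly
t = np.arange(len(y))
season = np.eye(4)[t % 4]                                  # quarterly dummies
X = np.column_stack([np.ones_like(t), t, season[:, 1:]])   # level, trend, 3 seasonal contrasts
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the next 16 quarters (a prospective 4-year horizon, as in the study)
tf = np.arange(len(y), len(y) + 16)
Xf = np.column_stack([np.ones_like(tf), tf, np.eye(4)[tf % 4][:, 1:]])
print(Xf @ beta)
```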
Using Time Series Analysis to Predict Cardiac Arrest in a PICU.
Kennedy, Curtis E; Aoki, Noriaki; Mariscalco, Michele; Turley, James P
2015-11-01
To build and test cardiac arrest prediction models in a PICU, using time series analysis as input, and to measure changes in prediction accuracy attributable to different classes of time series data. Retrospective cohort study. Thirty-one bed academic PICU that provides care for medical and general surgical (not congenital heart surgery) patients. Patients experiencing a cardiac arrest in the PICU and requiring external cardiac massage for at least 2 minutes. None. One hundred three cases of cardiac arrest and 109 control cases were used to prepare a baseline dataset that consisted of 1,025 variables in four data classes: multivariate, raw time series, clinical calculations, and time series trend analysis. We trained 20 arrest prediction models using a matrix of five feature sets (combinations of data classes) with four modeling algorithms: linear regression, decision tree, neural network, and support vector machine. The reference model (multivariate data with regression algorithm) had an accuracy of 78% and 87% area under the receiver operating characteristic curve. The best model (multivariate + trend analysis data with support vector machine algorithm) had an accuracy of 94% and 98% area under the receiver operating characteristic curve. Cardiac arrest predictions based on a traditional model built with multivariate data and a regression algorithm misclassified cases 3.7 times more frequently than predictions that included time series trend analysis and built with a support vector machine algorithm. Although the final model lacks the specificity necessary for clinical application, we have demonstrated how information from time series data can be used to increase the accuracy of clinical prediction models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koven, C. D.; Chambers, J. Q.; Georgiou, K.
2015-09-07
To better understand sources of uncertainty in projections of terrestrial carbon cycle feedbacks, we present an approach to separate the controls on modeled carbon changes. We separate carbon changes into four categories using a linearized, equilibrium approach: those arising from changed inputs (productivity-driven changes), and outputs (turnover-driven changes), of both the live and dead carbon pools. Using Coupled Model Intercomparison Project Phase 5 (CMIP5) simulations for five models, we find that changes to the live pools are primarily explained by productivity-driven changes, with only one model showing large compensating changes to live carbon turnover times. For dead carbon pools, the situation is more complex as all models predict a large reduction in turnover times in response to increases in productivity. This response arises from the common representation of a broad spectrum of decomposition turnover times via a multi-pool approach, in which flux-weighted turnover times are faster than mass-weighted turnover times. This leads to a shift in the distribution of carbon among dead pools in response to changes in inputs, and therefore a transient but long-lived reduction in turnover times. Since this behavior, a reduction in inferred turnover times resulting from an increase in inputs, is superficially similar to priming processes, but occurring without the mechanisms responsible for priming, we call the phenomenon "false priming", and show that it masks much of the intrinsic changes to dead carbon turnover times as a result of changing climate. These patterns hold across the fully coupled, biogeochemically coupled, and radiatively coupled 1% yr−1 increasing CO2 experiments. We disaggregate inter-model uncertainty in the globally integrated equilibrium carbon responses to initial turnover times, initial productivity, fractional changes in turnover, and fractional changes in productivity. For both the live and dead carbon pools, inter-model spread in carbon changes arising from initial conditions is dominated by model disagreement on turnover times, whereas inter-model spread in carbon changes from fractional changes to these terms is dominated by model disagreement on changes to productivity in response to both warming and CO2 fertilization. However, the lack of changing turnover time control on carbon responses, for both live and dead carbon pools, in response to the imposed forcings may arise from a common lack of process representation behind changing turnover times (e.g., allocation and mortality for live carbon; permafrost, microbial dynamics, and mineral stabilization for dead carbon), rather than a true estimate of the importance of these processes.
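Two pieces of the argument above lend themselves to a short numerical sketch (invented numbers, not CMIP5 output): the linearized separation of a stock change into input-driven and turnover-driven terms, and why flux-weighted turnover is faster than mass-weighted turnover in a multi-pool dead-carbon model.

```python
import numpy as np

# (1) With equilibrium stock C = tau * NPP, dC ~ tau0*dNPP + NPP0*dtau
npp0, tau0 = 60.0, 25.0
npp1, tau1 = 72.0, 22.0
dC_input = tau0 * (npp1 - npp0)     # productivity-driven change
dC_turn = npp0 * (tau1 - tau0)      # turnover-driven change
# the linearization omits the cross term dNPP*dtau (here -36)
print(npp1 * tau1 - npp0 * tau0, dC_input + dC_turn)   # 84 vs 120

# (2) Two dead pools spanning a broad turnover spectrum
taus = np.array([2.0, 200.0])       # fast and slow turnover times (years)
frac_in = np.array([0.8, 0.2])      # fraction of inputs routed to each pool
stocks = frac_in * taus             # equilibrium stocks per unit input
print(np.sum(frac_in * taus))                   # flux-weighted: 41.6 yr
print(np.sum(stocks * taus) / np.sum(stocks))   # mass-weighted: ~192 yr
```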
Stochastic modeling of hourly rainfall times series in Campania (Italy)
NASA Astrophysics Data System (ADS)
Giorgio, M.; Greco, R.
2009-04-01
Occurrence of flowslides and floods in small catchments is difficult to predict, since it is affected by a number of variables, such as mechanical and hydraulic soil properties, slope morphology, vegetation coverage, and rainfall spatial and temporal variability. Consequently, landslide risk assessment procedures and early warning systems still rely on simple empirical models based on correlation between recorded rainfall data and observed landslides and/or river discharges. Effectiveness of such systems could be improved by reliable quantitative rainfall prediction, which allows larger lead times to be gained. Analysis of on-site recorded rainfall height time series represents the most effective approach for a reliable prediction of the local temporal evolution of rainfall. Hydrological time series analysis is a widely studied field in hydrology, often carried out by means of autoregressive models, such as AR, ARMA, ARX, ARMAX (e.g. Salas [1992]). Such models gave the best results when applied to the analysis of autocorrelated hydrological time series, like river flow or level time series. Conversely, they are not able to model the behaviour of intermittent time series, like point rainfall height series usually are, especially when recorded with short sampling time intervals. More useful for this issue are the so-called DRIP (Disaggregated Rectangular Intensity Pulse) and NSRP (Neyman-Scott Rectangular Pulse) models [Heneker et al., 2001; Cowpertwait et al., 2002], usually adopted to generate synthetic point rainfall series. In this paper, the DRIP model approach is adopted, in which the sequence of rain storms and dry intervals constituting the structure of the rainfall time series is modeled as an alternating renewal process. The final aim of the study is to provide a useful tool to implement an early warning system for hydrogeological risk management. Model calibration has been carried out with hourly rainfall height data provided by the rain gauges of the Campania Region civil protection agency meteorological warning network. ACKNOWLEDGEMENTS The research was co-financed by the Italian Ministry of University, by means of the PRIN 2006 program, within the research project entitled ‘Definition of critical rainfall thresholds for destructive landslides for civil protection purposes'. REFERENCES Cowpertwait, P.S.P., Kilsby, C.G. and O'Connell, P.E., 2002. A space-time Neyman-Scott model of rainfall: Empirical analysis of extremes, Water Resources Research, 38(8):1-14. Salas, J.D., 1992. Analysis and modeling of hydrological time series, in D.R. Maidment, ed., Handbook of Hydrology, McGraw-Hill, New York. Heneker, T.M., Lambert, M.F. and Kuczera, G., 2001. A point rainfall model for risk-based design, Journal of Hydrology, 247(1-2):54-71.
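The alternating-renewal backbone of such a model can be sketched in a few lines; the sketch below generates a synthetic point rainfall series from alternating dry spells and rectangular storm pulses. All distributions and parameters are placeholders, not the calibrated Campania values.

```python
# Toy alternating-renewal rainfall generator (DRIP-like structure only).
import numpy as np

rng = np.random.default_rng(0)

def synthetic_series(hours=24 * 365):
    rain = np.zeros(hours)
    t = 0
    while t < hours:
        dry = rng.exponential(60.0)               # mean dry spell: 60 h (assumed)
        wet = max(1, int(rng.exponential(6.0)))   # mean storm duration: 6 h (assumed)
        depth = rng.gamma(2.0, 2.0)               # storm intensity, mm/h (assumed)
        start = int(t + dry)
        rain[start:start + wet] = depth           # rectangular intensity pulse
        t = start + wet
    return rain[:hours]

series = synthetic_series()
print(series.sum(), (series > 0).mean())   # annual total (mm) and wet-hour fraction
```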
A validation of ground ambulance pre-hospital times modeled using geographic information systems
2012-01-01
Background Evaluating geographic access to health services often requires determining the patient travel time to a specified service. For urgent care, many research studies have modeled patient pre-hospital time by ground emergency medical services (EMS) using geographic information systems (GIS). The purpose of this study was to determine if the modeling assumptions proposed through prior United States (US) studies are valid in a non-US context, and to use the resulting information to provide revised recommendations for modeling travel time using GIS in the absence of actual EMS trip data. Methods The study sample contained all emergency adult patient trips within the Calgary area for 2006. Each record included four components of pre-hospital time (activation, response, on-scene and transport interval). The actual activation and on-scene intervals were compared with those used in published models. The transport interval was calculated within GIS using the Network Analyst extension of Esri ArcGIS 10.0 and the response interval was derived using previously established methods. These GIS derived transport and response intervals were compared with the actual times using descriptive methods. We used the information acquired through the analysis of the EMS trip data to create an updated model that could be used to estimate travel time in the absence of actual EMS trip records. Results There were 29,765 complete EMS records for scene locations inside the city and 529 outside. The actual median on-scene intervals were longer than the average previously reported by 7–8 minutes. Actual EMS pre-hospital times across our study area were significantly higher than the estimated times modeled using GIS and the original travel time assumptions. Our updated model, although still underestimating the total pre-hospital time, more accurately represents the true pre-hospital time in our study area. Conclusions The widespread use of generalized EMS pre-hospital time assumptions based on US data may not be appropriate in a non-US context. The preference for researchers should be to use actual EMS trip records from the proposed research study area. In the absence of EMS trip data researchers should determine which modeling assumptions more accurately reflect the EMS protocols across their study area. PMID:23033894
O'Mahony, James F; Newall, Anthony T; van Rosmalen, Joost
2015-12-01
Time is an important aspect of health economic evaluation, as the timing and duration of clinical events, healthcare interventions and their consequences all affect estimated costs and effects. These issues should be reflected in the design of health economic models. This article considers three important aspects of time in modelling: (1) which cohorts to simulate and how far into the future to extend the analysis; (2) the simulation of time, including the difference between discrete-time and continuous-time models, cycle lengths, and converting rates and probabilities; and (3) discounting future costs and effects to their present values. We provide a methodological overview of these issues and make recommendations to help inform both the conduct of cost-effectiveness analyses and the interpretation of their results. For choosing which cohorts to simulate and how many, we suggest analysts carefully assess potential reasons for variation in cost effectiveness between cohorts and the feasibility of subgroup-specific recommendations. For the simulation of time, we recommend using short cycles or continuous-time models to avoid biases and the need for half-cycle corrections, and provide advice on the correct conversion of transition probabilities in state transition models. Finally, for discounting, analysts should not only follow current guidance and report how discounting was conducted, especially in the case of differential discounting, but also seek to develop an understanding of its rationale. Our overall recommendations are that analysts explicitly state and justify their modelling choices regarding time and consider how alternative choices may impact on results.
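The conversions and discounting discussed above are standard results; a minimal worked sketch (parameter values invented) shows the correct rate-to-probability conversion and present-value discounting.

```python
import numpy as np

def rate_to_prob(rate_per_year, cycle_years):
    # correct conversion: p = 1 - exp(-r * t); note p != r * t in general
    return 1.0 - np.exp(-rate_per_year * cycle_years)

print(rate_to_prob(0.20, 1.0))     # 0.181..., not 0.20
print(rate_to_prob(0.20, 1/12))    # monthly cycle: ~0.0165

def present_value(amounts, annual_discount_rate=0.03):
    t = np.arange(len(amounts))    # cost incurred at the start of year t
    return float(np.sum(amounts / (1 + annual_discount_rate) ** t))

print(present_value(np.array([1000.0] * 10)))   # ~8786 rather than 10000
```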
Analysis hierarchical model for discrete event systems
NASA Astrophysics Data System (ADS)
Ciortea, E. M.
2015-11-01
This paper presents a hierarchical model of robotic systems based on discrete event networks. In the hierarchical approach, the Petri net is analysed from the highest conceptual level down to the lowest level of local control, and extended Petri nets are used for the modelling and control of complex robotic systems. The system is structured, controlled and analysed using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented on computers and analysed using specialized programs. Implementing the hierarchical model of discrete event systems as a real-time operating system on a computer network connected via a serial bus is possible, with each computer dedicated to the local Petri model of one subsystem of the global robotic system. Because Petri models map readily onto general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets, and discrete event systems are a pragmatic tool for modelling industrial systems. To highlight the auxiliary times, the Petri model of the transport stream is divided into hierarchical levels whose sections are analysed successively. Simulating the proposed robotic system using timed Petri nets offers the opportunity to observe the timing of the robotic system: from transport and transmission times measured on the spot, graphics are obtained showing the average time for the transport activity for individual parameter sets of finished products.
NASA Astrophysics Data System (ADS)
Kandel, D. D.; Western, A. W.; Grayson, R. B.
2004-12-01
Mismatches in scale between the fundamental processes, the model and supporting data are a major limitation in hydrologic modelling. Surface runoff generation via infiltration excess and the process of soil erosion are fundamentally short time-scale phenomena and their average behaviour is mostly determined by the short time-scale peak intensities of rainfall. Ideally, these processes should be simulated using time-steps of the order of minutes to appropriately resolve the effect of rainfall intensity variations. However, sub-daily data support is often inadequate and the processes are usually simulated by calibrating daily (or even coarser) time-step models. Generally process descriptions are not modified but rather effective parameter values are used to account for the effect of temporal lumping, assuming that the effect of the scale mismatch can be counterbalanced by tuning the parameter values at the model time-step of interest. Often this results in parameter values that are difficult to interpret physically. A similar approach is often taken spatially. This is problematic as these processes generally operate or interact non-linearly. This indicates a need for better techniques to simulate sub-daily processes using daily time-step models while still using widely available daily information. A new method applicable to many rainfall-runoff-erosion models is presented. The method is based on temporal scaling using statistical distributions of rainfall intensity to represent sub-daily intensity variations in a daily time-step model. This allows the effect of short time-scale nonlinear processes to be captured while modelling at a daily time-step, which is often attractive due to the wide availability of daily forcing data. The approach relies on characterising the rainfall intensity variation within a day using a cumulative distribution function (cdf). This cdf is then modified by various linear and nonlinear processes typically represented in hydrological and erosion models. The statistical description of sub-daily variability is thus propagated through the model, allowing the effects of variability to be captured in the simulations. This results in cdfs of various fluxes, the integration of which over a day gives respective daily totals. Using 42-plot-years of surface runoff and soil erosion data from field studies in different environments from Australia and Nepal, simulation results from this cdf approach are compared with the sub-hourly (2-minute for Nepal and 6-minute for Australia) and daily models having similar process descriptions. Significant improvements in the simulation of surface runoff and erosion are achieved, compared with a daily model that uses average daily rainfall intensities. The cdf model compares well with a sub-hourly time-step model. This suggests that the approach captures the important effects of sub-daily variability while utilizing commonly available daily information. It is also found that the model parameters are more robustly defined using the cdf approach compared with the effective values obtained at the daily scale. This suggests that the cdf approach may offer improved model transferability spatially (to other areas) and temporally (to other periods).
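The core of the cdf idea admits a compact numerical illustration: for an infiltration-excess rule q(i) = max(i - Ks, 0), integrating q over the within-day intensity distribution can differ sharply from applying q to the daily mean intensity. An exponential intensity pdf and the parameter values are assumed purely for illustration.

```python
import numpy as np

mean_i, Ks = 2.0, 3.0                    # mm/h mean intensity; infiltration capacity
i = np.linspace(0.0, 60.0, 6001)
di = i[1] - i[0]
pdf = np.exp(-i / mean_i) / mean_i       # assumed within-day intensity pdf

runoff_cdf = np.sum(np.maximum(i - Ks, 0.0) * pdf) * di * 24.0   # mm/day
runoff_mean = max(mean_i - Ks, 0.0) * 24.0                       # mm/day
print(runoff_cdf, runoff_mean)   # ~10.7 vs 0.0: the daily mean misses all runoff
```

Because the mean intensity sits below the infiltration capacity, the averaged model predicts zero runoff, while the distribution-based calculation captures the runoff generated by short high-intensity bursts, which is exactly the nonlinearity the abstract describes.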
Study on residual discharge time of lead-acid battery based on fitting method
NASA Astrophysics Data System (ADS)
Liu, Bing; Yu, Wangwang; Jin, Yueqiang; Wang, Shuying
2017-05-01
This paper uses a fitting method to analyze the data of Problem C of the 2016 mathematical modeling contest; a residual discharge time model of a lead-acid battery under constant-current discharge at 20A, 30A, …, 100A is obtained, and a discharge time model for discharge under an arbitrary constant current is presented. The mean relative error of the model is calculated to be about 3%, which shows that the model has high accuracy. This model can provide a basis for optimizing the adaptation of the power system to the electric motor vehicle.
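A hedged sketch of such a fit follows, assuming a Peukert-type law t = a / I^b, one common choice for discharge time versus current (the paper's exact functional form is not reproduced here), with fabricated (current, time) pairs.

```python
import numpy as np
from scipy.optimize import curve_fit

I = np.array([20, 30, 40, 50, 60, 70, 80, 90, 100], float)        # amperes
t = np.array([573, 354, 251, 192, 155, 129, 110, 95, 84], float)  # minutes (invented)

model = lambda I, a, b: a / I ** b
(a, b), _ = curve_fit(model, I, t, p0=(1e4, 1.2))
mre = np.mean(np.abs(model(I, a, b) - t) / t)   # mean relative error
print(a, b, mre)                                # cf. the ~3% reported above
```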
Nozaki, Sachiko; Yamaguchi, Masayuki; Lefèvre, Gilbert
2016-07-01
Rivastigmine is an inhibitor of acetylcholinesterases and butyrylcholinesterases for symptomatic treatment of Alzheimer disease and is available as oral and transdermal patch formulations. A dermal absorption pharmacokinetic (PK) model was developed to simulate the plasma concentration-time profile of rivastigmine to answer questions relative to the efficacy and safety risks after misuse of the patch (e.g., application longer than 24 h, multiple patches applied at the same time, and so forth). The model comprised two compartments, combining a mechanistic dermal absorption model with a basic 1-compartment model. The initial values for the model were determined based on the physicochemical characteristics of rivastigmine and PK parameters after intravenous administration. The model was fitted to the clinical PK profiles after single application of rivastigmine patch to obtain model parameters. The final model was validated by confirming that the simulated concentration-time curves and PK parameters (Cmax and area under the drug plasma concentration-time curve) conformed to the observed values, and was then used to simulate the PK profiles of rivastigmine. This work demonstrated that the mechanistic dermal PK model fitted the clinical data well and was able to simulate the PK profile after patch misuse. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
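A minimal two-compartment sketch in the spirit of the model described above: a skin depot loaded by the patch feeds plasma by first-order absorption, with first-order elimination. All rate constants and the dose are invented for illustration; they are not the rivastigmine estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

ka, ke = 0.15, 0.60           # 1/h absorption and elimination (assumed)
dose, wear_h = 18.0, 24.0     # mg nominally released over the wear period (assumed)

def rhs(t, y):
    depot, plasma = y
    inflow = dose / wear_h if t <= wear_h else 0.0   # patch on for wear_h hours
    return [inflow - ka * depot, ka * depot - ke * plasma]

sol = solve_ivp(rhs, (0.0, 48.0), [0.0, 0.0], max_step=0.1)
i = sol.y[1].argmax()
print(sol.t[i], sol.y[1].max())   # Tmax (h) and Cmax (amount units)
# Simulating misuse, e.g. leaving the patch on for 48 h, only needs wear_h changed.
```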
Individual-based modelling of population growth and diffusion in discrete time.
Tkachenko, Natalie; Weissmann, John D; Petersen, Wesley P; Lake, George; Zollikofer, Christoph P E; Callegari, Simone
2017-01-01
Individual-based models (IBMs) of human populations capture spatio-temporal dynamics using rules that govern the birth, behavior, and death of individuals. We explore a stochastic IBM of logistic growth-diffusion with constant time steps and independent, simultaneous actions of birth, death, and movement that approaches the Fisher-Kolmogorov model in the continuum limit. This model is well-suited to parallelization on high-performance computers. We explore its emergent properties with analytical approximations and numerical simulations in parameter ranges relevant to human population dynamics and ecology, and reproduce continuous-time results in the limit of small transition probabilities. Our model prediction indicates that the population density and dispersal speed are affected by fluctuations in the number of individuals. The discrete-time model displays novel properties owing to the binomial character of the fluctuations: in certain regimes of the growth model, a decrease in time step size drives the system away from the continuum limit. These effects are especially important at local population sizes of <50 individuals, which largely correspond to group sizes of hunter-gatherers. As an application scenario, we model the late Pleistocene dispersal of Homo sapiens into the Americas, and discuss the agreement of model-based estimates of first-arrival dates with archaeological dates in dependence of IBM model parameter settings.
Stochastic Ground Water Flow Simulation with a Fracture Zone Continuum Model
Langevin, C.D.
2003-01-01
A method is presented for incorporating the hydraulic effects of vertical fracture zones into two-dimensional cell-based continuum models of ground water flow and particle tracking. High hydraulic conductivity features are used in the model to represent fracture zones. For fracture zones that are not coincident with model rows or columns, an adjustment is required for the hydraulic conductivity value entered into the model cells to compensate for the longer flowpath through the model grid. A similar adjustment is also required for simulated travel times through model cells. A travel time error of less than 8% can occur for particles moving through fractures with certain orientations. The fracture zone continuum model uses stochastically generated fracture zone networks and Monte Carlo analysis to quantify uncertainties with simulated advective travel times. An approach is also presented for converting an equivalent continuum model into a fracture zone continuum model by establishing the contribution of matrix block transmissivity to the bulk transmissivity of the aquifer. The methods are used for a case study in west-central Florida to quantify advective travel times from a potential wetland rehydration site to a municipal supply wellfield. Uncertainties in advective travel times are assumed to result from the presence of vertical fracture zones, commonly observed on aerial photographs as photolineaments.
NASA Astrophysics Data System (ADS)
Ceballos-Núñez, Verónika; Richardson, Andrew; Sierra, Carlos
2017-04-01
The global carbon cycle is strongly controlled by the source/sink strength of vegetation as well as the capacity of terrestrial ecosystems to retain this carbon. However, it is uncertain how some vegetation dynamics such as the allocation of carbon to different ecosystem compartments should be represented in models. The assumptions behind model structures may result in highly divergent model predictions. Here, we assess model performance by calculating the age of the carbon in the system and in each compartment, and the overall transit time of C in the system. We used these diagnostics to assess the influence of three different carbon allocation schemes on the rates of C cycling in vegetation. First, we used published measurements of ecosystem C compartments from the Harvard Forest Environmental Measurement Site to find the best set of parameters for the different model structures. Second, we calculated C stocks, respiration fluxes, radiocarbon values, ages, and transit times. We found a good fit of the three model structures to the available data, but the time series of C in foliage and wood need to be complemented with other ecosystem compartments in order to reduce the high parameter collinearity that we observed and reduce model equifinality. Differences in model structures had a small impact on predicting ecosystem C compartments, but overall they resulted in very different predictions of age and transit time distributions. In particular, the inclusion of a storage compartment had an important impact on predicting system ages and transit times. In the case of the models with 1 or 2 storage compartments, the age of carbon in the system and in each of the compartments was distributed more towards younger ages than in the model that had no storage; the mean system age of these two models with storage was 80 years younger than in the model without storage. As expected from these age distributions, the mean transit time for the two models with storage compartments was 50 years faster than for the model without storage. These results suggest that ages and transit times, which can be indirectly measured using isotope tracers, serve as important diagnostics of model structure and could largely help to reduce uncertainties in model predictions. Furthermore, by considering age and transit times of C in vegetation compartments as distributions rather than only their mean values, we obtain additional insights on the temporal dynamics of carbon use, storage, and allocation to plant parts, which depend not only on the rate at which this C is transferred in and out of the compartments, but also on the stochastic nature of the process itself.
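For a linear compartmental model dx/dt = u + Bx, these diagnostics have closed forms (standard results from compartmental-systems theory, not the paper's code): steady-state stocks x* = -B⁻¹u, mean transit time = total stock / total input, and mean pool ages from (-B)⁻¹x*. The three-pool numbers below are invented.

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])        # external C input (only to foliage, say)
B = np.array([[-1.0,  0.0,  0.0],    # columns: foliage, wood, storage
              [ 0.5, -0.1,  0.0],    # half of the foliage outflow goes to wood
              [ 0.2,  0.0, -0.05]])  # a fifth goes to storage; the rest is respired

x_star = np.linalg.solve(-B, u)                    # steady-state stocks
transit_time = x_star.sum() / u.sum()              # mean transit time (years)
w = np.linalg.solve(-B, x_star)
mean_ages = w / x_star                             # mean age of C in each pool
system_age = w.sum() / x_star.sum()                # mean age of all C in the system
print(x_star, transit_time, mean_ages, system_age)
```

With these numbers the system age (14 yr) exceeds the transit time (10 yr), illustrating how slowly cycling storage and wood pools hold old carbon that contributes little to the output flux.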
Time limited field of regard search
NASA Astrophysics Data System (ADS)
Flug, Eric; Maurer, Tana; Nguyen, Oanh-Tho
2005-05-01
Recent work by the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has led to the Time-Limited Search (TLS) model, which has given new formulations for the field of view (FOV) search times. The next step in the evaluation of the overall search model (ACQUIRE) is to apply these parameters to the field of regard (FOR) model. Human perception experiments were conducted using synthetic imagery developed at NVESD. The experiments were competitive player-on-player search tests with the intention of imposing realistic time constraints on the observers. FOR detection probabilities, search times, and false alarm data are analyzed and compared to predictions using both the TLS model and ACQUIRE.
A Joint Modeling Approach for Reaction Time and Accuracy in Psycholinguistic Experiments
ERIC Educational Resources Information Center
Loeys, T.; Rosseel, Y.; Baten, K.
2011-01-01
In the psycholinguistic literature, reaction times and accuracy can be analyzed separately using mixed (logistic) effects models with crossed random effects for item and subject. Given the potential correlation between these two outcomes, a joint model for the reaction time and accuracy may provide further insight. In this paper, a Bayesian…
Small Sample Properties of Bayesian Multivariate Autoregressive Time Series Models
ERIC Educational Resources Information Center
Price, Larry R.
2012-01-01
The aim of this study was to compare the small sample (N = 1, 3, 5, 10, 15) performance of a Bayesian multivariate vector autoregressive (BVAR-SEM) time series model relative to frequentist power and parameter estimation bias. A multivariate autoregressive model was developed based on correlated autoregressive time series vectors of varying…
Path integral for equities: Dynamic correlation and empirical analysis
NASA Astrophysics Data System (ADS)
Baaquie, Belal E.; Cao, Yang; Lau, Ada; Tang, Pan
2012-02-01
This paper develops a model to describe the unequal time correlation between the rates of return of different stocks. A non-trivial fourth order derivative Lagrangian is defined to provide an unequal time propagator, which can be fitted to the market data. A calibration algorithm is designed to find the empirical parameters for this model, and different de-noising methods are used to capture the signals concealed in the rates of return. The detailed results of this Gaussian model show that different stocks can have strong correlations and that the empirical unequal time correlator can be described by the model's propagator. This preliminary study provides a novel model for the correlator of different instruments at different times.
Yu, Sungduk; Pritchard, Michael S.
2015-12-17
The effect of global climate model (GCM) time step—which also controls how frequently global and embedded cloud resolving scales are coupled—is examined in the Superparameterized Community Atmosphere Model ver 3.0. Systematic bias reductions of time-mean shortwave cloud forcing (~10 W/m2) and longwave cloud forcing (~5 W/m2) occur as scale coupling frequency increases, but with systematically increasing rainfall variance and extremes throughout the tropics. An overarching change in the vertical structure of deep tropical convection, favoring more bottom-heavy deep convection as the global model time step is reduced, may help orchestrate these responses. The weak temperature gradient approximation is more faithfully satisfied when a high scale coupling frequency (a short global model time step) is used. These findings are distinct from the global model time step sensitivities of conventionally parameterized GCMs and have implications for understanding emergent behaviors of multiscale deep convective organization in superparameterized GCMs. Lastly, the results may also be useful for helping to tune them.
Method of calculating tsunami travel times in the Andaman Sea region.
Kietpawpan, Monte; Visuthismajarn, Parichart; Tanavud, Charlchai; Robson, Mark G
2008-07-01
A new model to calculate tsunami travel times in the Andaman Sea region has been developed. The model specifically provides more accurate travel time estimates for tsunamis propagating to Patong Beach on the west coast of Phuket, Thailand. More generally, the model provides better understanding of the influence of the accuracy and resolution of bathymetry data on the accuracy of travel time calculations. The dynamic model is based on solitary wave theory, and a lookup function is used to perform bilinear interpolation of bathymetry along the ray trajectory. The model was calibrated and verified using data from an echosounder record, tsunami photographs, satellite altimetry records, and eyewitness accounts of the tsunami on 26 December 2004. Time differences for 12 representative targets in the Andaman Sea and the Indian Ocean regions were calculated. The model demonstrated satisfactory time differences (<2 min/h), despite the use of low resolution bathymetry (ETOPO2v2). To improve accuracy, the dynamics of wave elevation and a velocity correction term must be considered, particularly for calculations in the nearshore region.
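The travel-time kernel implied above can be sketched compactly: long-wave speed c = sqrt(g·h) integrated along a ray, with bilinear interpolation of gridded bathymetry. The 2x2 depth cell and the straight diagonal ray below are placeholders standing in for ETOPO2v2 data and the computed ray trajectory.

```python
import numpy as np

g = 9.81
depth = np.array([[3000.0, 3500.0],    # water depth (m) at the cell corners
                  [2500.0, 4000.0]])

def bilinear(fx, fy):
    # fractional position (fx, fy) in [0, 1] within the cell
    top = depth[0, 0] * (1 - fx) + depth[0, 1] * fx
    bot = depth[1, 0] * (1 - fx) + depth[1, 1] * fx
    return top * (1 - fy) + bot * fy

n, length_m = 2000, 311_000.0          # diagonal of a ~2-degree cell (assumed)
f = np.linspace(0.0, 1.0, n)
h = np.array([bilinear(v, v) for v in f])
travel_s = np.sum((length_m / n) / np.sqrt(g * h))   # sum ds / c(s)
print(travel_s / 60.0, "minutes")      # ~29 min across this single cell
```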
Identifying demand for health resources using waiting times information.
Blundell, R; Windmeijer, F
2000-09-01
In this paper the differences in average waiting times are utilized to identify the determinants of demand for health services. The equilibrium waiting time framework is used, but the full equilibrium assumption is relaxed by selecting areas with low waiting times and by estimating a (semi-)parametric selection model. Determinants of supply are used as instruments for the endogeneity of waiting times. A model for the demand for acute services at the ward level in the UK is estimated. The model estimates, and their implications for health service allocations in the UK, are contrasted against more standard allocation models. The present results show that it is critically important to account for rationing by waiting times when identifying needs from care utilization data. Copyright 2000 John Wiley & Sons, Ltd.
On modeling of integrated communication and control systems
NASA Technical Reports Server (NTRS)
Liou, Luen-Woei; Ray, Asok
1990-01-01
The mathematical modeling scheme proposed by Ray and Halevi (1988) for integrated communication and control systems is considered analytically, with an emphasis on the effect of introducing varying and distributed time delays to account for asynchronous time-division multiplexing in the communication part of the system. Ray and Halevi applied a state-transition concept to transform the original continuous-time model into a discrete-time model; the same approach was used by Kalman and Bertram (1959) to model various types of sampled data systems which are not subject to induced delays. The relationship between the two modeling schemes is explored, and it is shown that, although the Kalman-Bertram method has the advantage of a unified approach, it becomes inconvenient when varying delays appear in the control loop.
Niroomandi, S; Alfaro, I; Cueto, E; Chinesta, F
2012-01-01
Model reduction techniques have been shown to constitute a valuable tool for real-time simulation in surgical environments and other fields. However, some limitations imposed by real-time constraints have not yet been overcome. One such limitation is the severe time constraint (a resolution frequency of 500 Hz) that precludes the use of Newton-like schemes for solving non-linear models such as those usually employed for modeling biological tissues. In this work we present a technique able to deal with geometrically non-linear models, based on model reduction techniques together with an efficient non-linear solver. The performance of the technique is illustrated with several examples. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
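One standard ingredient of such reduction is a proper orthogonal decomposition (POD) basis extracted from simulation snapshots; the sketch below uses random placeholder snapshots, not the authors' tissue models, and shows why the reduced systems become small enough for 500 Hz update rates.

```python
import numpy as np

rng = np.random.default_rng(0)
# fabricated snapshot matrix: 3000 dofs x 200 snapshots with low effective rank
snapshots = rng.normal(size=(3000, 40)) @ rng.normal(size=(40, 200))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.9999)) + 1   # modes capturing 99.99% of the energy
basis = U[:, :k]                               # 3000-dof field -> k generalized dofs
print(k)

# A reduced solve then works with k x k systems, e.g. for a stiffness matrix K:
# K_r = basis.T @ K @ basis, cheap enough for hard real-time rates.
```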
NASA Technical Reports Server (NTRS)
Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff
2016-01-01
The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines.
NASA Astrophysics Data System (ADS)
Wright, Ashley J.; Walker, Jeffrey P.; Pauwels, Valentijn R. N.
2017-08-01
Floods are devastating natural hazards. To provide accurate, precise, and timely flood forecasts, there is a need to understand the uncertainties associated with an entire rainfall time series, even when rainfall was not observed. The estimation of an entire rainfall time series and model parameter distributions from streamflow observations in complex dynamic catchments adds skill to current areal rainfall estimation methods, allows for the uncertainty of entire rainfall input time series to be considered when estimating model parameters, and provides the ability to improve rainfall estimates from poorly gauged catchments. Current methods to estimate entire rainfall time series from streamflow records are unable to adequately invert complex nonlinear hydrologic systems. This study aims to explore the use of wavelets in the estimation of rainfall time series from streamflow records. Using the Discrete Wavelet Transform (DWT) to reduce rainfall dimensionality for the catchment of Warwick, Queensland, Australia, it is shown that model parameter distributions and an entire rainfall time series can be estimated. Including rainfall in the estimation process improves streamflow simulations by a factor of up to 1.78. This is achieved while estimating an entire rainfall time series, inclusive of days when none was observed. It is shown that the choice of wavelet can have a considerable impact on the robustness of the inversion. Combining the use of a likelihood function that considers rainfall and streamflow errors with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
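The dimensionality-reduction step can be sketched with PyWavelets: decompose a rainfall series, keep only the coarse approximation coefficients, and reconstruct. The series below is synthetic, and the wavelet and decomposition level are tuning choices, as the abstract notes.

```python
import numpy as np
import pywt

rng = np.random.default_rng(2)
rain = np.maximum(0.0, rng.normal(0.0, 1.0, 512)) * rng.gamma(2.0, 3.0, 512)

coeffs = pywt.wavedec(rain, "db4", level=4)
n_kept = len(coeffs[0])            # only the coarse approximation coefficients
for detail in coeffs[1:]:
    detail[:] = 0.0                # search space shrinks from 512 to ~40 values
recon = pywt.waverec(coeffs, "db4")
print(n_kept, np.linalg.norm(rain - recon[:512]) / np.linalg.norm(rain))
```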
NASA Astrophysics Data System (ADS)
Tenkès, Lucille-Marie; Hollerbach, Rainer; Kim, Eun-jin
2017-12-01
A probabilistic description is essential for understanding growth processes in non-stationary states. In this paper, we compute time-dependent probability density functions (PDFs) in order to investigate stochastic logistic and Gompertz models, which are two of the most popular growth models. We consider different types of short-correlated multiplicative and additive noise sources and compare the time-dependent PDFs in the two models, elucidating the effects of the additive and multiplicative noises on the form of PDFs. We demonstrate an interesting transition from a unimodal to a bimodal PDF as the multiplicative noise increases for a fixed value of the additive noise. A much weaker (leaky) attractor in the Gompertz model leads to a significant (singular) growth of the population of a very small size. We point out the limitation of using stationary PDFs, mean value and variance in understanding statistical properties of the growth in non-stationary states, highlighting the importance of time-dependent PDFs. We further compare these two models from the perspective of information change that occurs during the growth process. Specifically, we define an infinitesimal distance at any time by comparing two PDFs at times infinitesimally apart and sum these distances in time. The total distance along the trajectory quantifies the total number of different states that the system undergoes in time, and is called the information length. We show that the time-evolution of the two models become more similar when measured in units of the information length and point out the merit of using the information length in unifying and understanding the dynamic evolution of different growth processes.
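A numerical sketch of the information-length diagnostic defined above, for a stochastic logistic model dx = x(1-x) dt + sigma·x·dW with invented parameters: estimate the time-dependent PDF from an ensemble and accumulate L = Σ sqrt(∫ dx (∂p/∂t)² / p) · dt. Note that histogram noise from a finite ensemble biases L upward, so this is qualitative only.

```python
import numpy as np

rng = np.random.default_rng(3)
n, dt, steps, sigma = 50_000, 2e-3, 2000, 0.3
x = np.full(n, 0.05)                       # small initial population
bins = np.linspace(0.0, 1.6, 161)
dx_bin = bins[1] - bins[0]

def pdf(sample):
    h, _ = np.histogram(sample, bins=bins, density=True)
    return np.maximum(h, 1e-12)            # floor avoids division by zero

L, p_old = 0.0, pdf(x)
for _ in range(steps):
    # Euler-Maruyama step of the stochastic logistic equation
    x += x * (1.0 - x) * dt + sigma * x * np.sqrt(dt) * rng.normal(size=n)
    x = np.clip(x, 1e-9, bins[-1] - 1e-9)
    p_new = pdf(x)
    e_rate = np.sum(((p_new - p_old) / dt) ** 2 / p_new) * dx_bin
    L += np.sqrt(e_rate) * dt
    p_old = p_new
print(L)   # total information length of the growth trajectory
```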
Analyzing gene expression time-courses based on multi-resolution shape mixture model.
Li, Ying; He, Ye; Zhang, Yu
2016-11-01
Biological processes are dynamic molecular processes unfolding over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biological processes and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which explores patterns of change over time in gene expression at different resolutions. Our proposed multi-resolution shape mixture model algorithm is a probabilistic framework that offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm using yeast time-course gene expression profiles, compared with several popular clustering methods for gene expression profiles. The gene groups identified by the different methods were evaluated by enrichment analysis of biological pathways and of known protein-protein interactions from experimental evidence. The gene groups identified by our proposed algorithm show stronger biological significance. A novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed. Our proposed model provides new horizons and an alternative tool for visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.
Modeling and simulation of M/M/c queuing pharmacy system with adjustable parameters
NASA Astrophysics Data System (ADS)
Rashida, A. R.; Fadzli, Mohammad; Ibrahim, Safwati; Goh, Siti Rohana
2016-02-01
This paper applies discrete event simulation (DES), a computer-based modelling approach that imitates a real system, to a pharmacy unit. M/M/c queuing theory is used to model and analyse the characteristics of the queuing system at the pharmacy unit of Hospital Tuanku Fauziah, Kangar in Perlis, Malaysia. The input of this model is based on statistical data collected over 20 working days in June 2014. Currently, patient waiting time at the pharmacy unit is more than 15 minutes. The actual operation of the pharmacy unit is a mixed queuing server with an M/M/2 queuing model, where the pharmacists are the servers. The DES approach and the ProModel simulation software are used to simulate the queuing model and to propose improvements to the queuing system of this pharmacy. Waiting times for each server are analysed; Counters 3 and 4 have the highest waiting times, at 16.98 and 16.73 minutes respectively. Three scenarios (M/M/3, M/M/4, and M/M/5) are simulated, and the waiting times of the actual and experimental queuing models are compared. The simulation results show that adding a server (pharmacist) reduces patient waiting time appreciably: average patient waiting time is reduced by almost 50% when one pharmacist is added to the counter. However, it is not necessary to fully utilize all counters: even though M/M/4 and M/M/5 produce further reductions in patient waiting time, they are ineffective since Counter 5 is rarely used.
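As an analytic companion to such a simulation, the Erlang C formula gives the expected M/M/c waiting time in queue, Wq = P(wait) / (c·mu - lambda). The arrival and service rates below are illustrative stand-ins for the pharmacy data.

```python
from math import factorial

def erlang_c_wq(c, lam, mu):
    a = lam / mu                                   # offered load (Erlangs)
    rho = a / c                                    # must be < 1 for stability
    tail = a**c / (factorial(c) * (1.0 - rho))
    p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
    return p_wait / (c * mu - lam)                 # expected wait in queue

lam, mu = 0.9, 0.5        # patients/min arriving; service rate per pharmacist (assumed)
for c in (2, 3, 4, 5):
    print(c, round(erlang_c_wq(c, lam, mu), 2), "min")
# adding a third server cuts the queueing delay sharply, while the fourth
# and fifth yield rapidly diminishing returns, mirroring the pattern above
```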
Dong, Zhanshan; Danilevskaya, Olga; Abadie, Tabare; Messina, Carlos; Coles, Nathan; Cooper, Mark
2012-01-01
The transition from the vegetative to reproductive development is a critical event in the plant life cycle. The accurate prediction of flowering time in elite germplasm is important for decisions in maize breeding programs and best agronomic practices. The understanding of the genetic control of flowering time in maize has significantly advanced in the past decade. Through comparative genomics, mutant analysis, genetic analysis and QTL cloning, and transgenic approaches, more than 30 flowering time candidate genes in maize have been revealed and the relationships among these genes have been partially uncovered. Based on the knowledge of the flowering time candidate genes, a conceptual gene regulatory network model for the genetic control of flowering time in maize is proposed. To demonstrate the potential of the proposed gene regulatory network model, a first attempt was made to develop a dynamic gene network model to predict flowering time of maize genotypes varying for specific genes. The dynamic gene network model is composed of four genes and was built on the basis of gene expression dynamics of the two late flowering id1 and dlf1 mutants, the early flowering landrace Gaspe Flint and the temperate inbred B73. The model was evaluated against the phenotypic data of the id1 dlf1 double mutant and the ZMM4 overexpressed transgenic lines. The model provides a working example that leverages knowledge from model organisms for the utilization of maize genomic information to predict a whole plant trait phenotype, flowering time, of maize genotypes.
An online spatio-temporal prediction model for dengue fever epidemic in Kaohsiung,Taiwan
NASA Astrophysics Data System (ADS)
Cheng, Ming-Hung; Yu, Hwa-Lung; Angulo, Jose; Christakos, George
2013-04-01
Dengue Fever (DF) is one of the most serious vector-borne infectious diseases in tropical and subtropical areas. DF epidemics occur in Taiwan annually, especially during the summer and fall seasons. Kaohsiung city has been one of the major DF hotspots for decades. The emergence and re-emergence of DF epidemics are complex and can be influenced by various factors, including the space-time dynamics of human and vector populations and virus serotypes, as well as the associated uncertainties. This study integrates a stochastic space-time "Susceptible-Infected-Recovered" model under a Bayesian maximum entropy framework (BME-SIR) to perform real-time prediction of disease diffusion across space-time. The proposed model is applied to spatiotemporal prediction of the DF epidemic in Kaohsiung city during 2002, when a historically high series of DF cases was recorded. The online prediction by the BME-SIR model updates the parameters of the SIR model and the infected cases across districts over time. Results show that the proposed model is robust to the initial guesses of the unknown model parameters, i.e. transmission and recovery rates, which can depend upon the virus serotypes and various human interventions. This study shows that spatial diffusion can be well characterized by the BME-SIR model, especially in the districts surrounding the disease outbreak locations. The prediction performance at DF hotspots, i.e. Cianjhen and Sanmin, can be degraded by the implementation of various disease control strategies during the epidemics. The proposed online disease prediction BME-SIR model can provide government agencies with a valuable reference for timely identification, control, and efficient prevention of DF spread across space-time.
Ramos-Goñi, Juan Manuel; Rivero-Arias, Oliver; Errea, María; Stolk, Elly A; Herdman, Michael; Cabasés, Juan Manuel
2013-07-01
To evaluate two different methods to obtain a dead (0)--full health (1) scale for EQ-5D-5L valuation studies when using discrete choice (DC) modeling. The study was carried out among 400 respondents from Barcelona who were representative of the Spanish population in terms of age, sex, and level of education. The DC design included 50 pairs of health states in five blocks. Participants were forced to choose between two EQ-5D-5L states (A and B). Two extra questions concerned whether A and B were considered worse than dead. Each participant performed ten choice exercises. In addition, values were collected using lead-time trade-off (lead-time TTO), for which 100 states in ten blocks were selected. Each participant performed five lead-time TTO exercises. Two anchoring approaches were then evaluated: DC models offering the health state 'dead' as one of the choices--for which all participants' responses were used (DCdead)--and a model that included only the responses of participants who chose at least one state as worse than dead (WTD) (DCWTD). The study also estimated DC models rescaled with lead-time TTO data and a lead-time TTO linear model. The DCdead and DCWTD models produced relatively similar results, although the coefficients in the DCdead model were slightly lower. The DC model rescaled with lead-time TTO data produced higher utility decrements. Lead-time TTO produced the highest utility decrements. The incorporation of the state 'dead' in the DC models produces results in concordance with DC models that do not include 'dead'.
Hydra effects in discrete-time models of stable communities.
Cortez, Michael H
2016-12-21
A species exhibits a hydra effect when, counter-intuitively, increased mortality of the species causes an increase in its abundance. Hydra effects have been studied in many continuous time (differential equation) multispecies models, but only rarely have hydra effects been observed in or studied with discrete time (difference equation) multispecies models. In addition most discrete time theory focuses on single-species models. Thus, it is unclear what unifying characteristics determine when hydra effects arise in discrete time models. Here, using discrete time multispecies models (where total abundance is the single variable describing each population), I show that a species exhibits a hydra effect in a stable system only when fixing that species' density at its equilibrium density destabilizes the system. This general characteristic is referred to as subsystem instability. I apply this result to two-species models and identify specific mechanisms that cause hydra effects in stable communities, e.g., in host--parasitoid models, host Allee effects and saturating parasitoid functional responses can cause parasitoid hydra effects. I discuss how the general characteristic can be used to identify mechanisms causing hydra effects in communities with three or more species. I also show that the condition for hydra effects at stable equilibria implies the system is reactive (i.e., density perturbations can grow before ultimately declining). This study extends previous work on conditions for hydra effects in single-species models by identifying necessary conditions for stable systems and sufficient conditions for cyclic systems. In total, these results show that hydra effects can arise in many more communities than previously appreciated and that hydra effects were present, but unrecognized, in previously studied discrete time models. Copyright © 2016 Elsevier Ltd. All rights reserved.
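A toy single-species illustration of a discrete-time hydra effect (the paper's multispecies subsystem-instability criterion is not reproduced here): in a Ricker model with mortality applied before reproduction, higher mortality raises the time-averaged post-reproduction abundance. Parameters are invented.

```python
import numpy as np

def mean_abundance(survival, r=2.8, steps=20000, burn=2000):
    x, acc = 0.5, 0.0
    for t in range(steps):
        x = survival * x * np.exp(r * (1.0 - survival * x))  # kill, then reproduce
        if t >= burn:
            acc += x
    return acc / (steps - burn)

for s in (1.0, 0.8, 0.6):   # lower survival = higher mortality
    print(s, round(mean_abundance(s), 3))
# analytically the time average is (1 + ln(s)/r)/s, which rises as survival s
# falls below 1 (for s > exp(1-r)), so mean abundance increases with mortality
```

The census timing matters here: measured before mortality rather than after reproduction, the same system shows no hydra effect, which echoes the paper's point that whether a hydra effect appears depends on how and where mortality enters the model.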
Wang, Junhua; Sun, Shuaiyi; Fang, Shouen; Fu, Ting; Stipancic, Joshua
2017-02-01
This paper aims both to identify the factors affecting driver drowsiness and to develop a real-time drowsy driving probability model based on virtual Location-Based Services (LBS) data obtained using a driving simulator. A driving simulation experiment was designed and conducted using 32 participant drivers. Collected data included the continuous driving time before detection of drowsiness and virtual LBS data related to temperature, time of day, lane width, average travel speed, driving time in heavy traffic, and driving time on different roadway types. Demographic information, such as nap habit, age, gender, and driving experience, was also collected through questionnaires distributed to the participants. An Accelerated Failure Time (AFT) model was developed to estimate the driving time before detection of drowsiness. The results of the AFT model showed driving time before drowsiness was longer during the day than at night, and was longer at lower temperatures. Additionally, drivers who identified as having a nap habit were more vulnerable to drowsiness. Generally, higher average travel speeds were correlated to a higher risk of drowsy driving, as were longer periods of low-speed driving in traffic jam conditions. Considering different road types, drivers felt drowsy more quickly on freeways compared to other facilities. The proposed model provides a better understanding of how driver drowsiness is influenced by different environmental and demographic factors. The model can be used to provide real-time data for the LBS-based drowsy driving warning system, improving on past methods based only on a fixed driving time. Copyright © 2016 Elsevier Ltd. All rights reserved.
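A hedged sketch of an AFT fit of the kind described above, using lifelines' WeibullAFTFitter on fabricated records; the columns mirror a subset of the study's covariates, the coefficient signs are chosen to match the reported directions, and all values are invented.

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({
    "night": rng.integers(0, 2, n),           # 1 = night drive
    "speed": rng.normal(90, 15, n),           # average travel speed, km/h
    "nap_habit": rng.integers(0, 2, n),
})
# shorter time to drowsiness at night, at higher speed, and with a nap habit
scale = np.exp(4.5 - 0.4 * df.night - 0.004 * df.speed - 0.3 * df.nap_habit)
df["drive_min"] = scale * rng.weibull(2.0, n)  # minutes before drowsiness
df["observed"] = 1                             # all drives ended in detected drowsiness

aft = WeibullAFTFitter().fit(df, duration_col="drive_min", event_col="observed")
aft.print_summary()    # exp(coef) < 1 means a shorter time to drowsiness
```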
NASA Astrophysics Data System (ADS)
Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K.
2015-08-01
Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and benefits gained through hydropower generation. Improving hourly reservoir inflow forecasts over a 24 h lead time is considered within the day-ahead (Elspot) market of the Nordic exchange market. A complementary modelling framework presents an approach for improving real-time forecasting without needing to modify the pre-existing forecasting model, by instead formulating an independent additive or complementary model that captures the structure the existing operational model may be missing. We present here the application of this principle for issuing improved hourly inflow forecasts into hydropower reservoirs over extended lead times, with the parameter estimation procedure reformulated to deal with bias, persistence and heteroscedasticity. The procedure presented comprises an error model added on top of an unalterable constant parameter conceptual model. This procedure is applied in the 207 km² Krinsvatn catchment in central Norway. The structure of the error model is established based on attributes of the residual time series from the conceptual model. Besides improving forecast skills of operational models, the approach estimates the uncertainty in the complementary model structure and produces probabilistic inflow forecasts that contain suitable information for reducing uncertainty in the decision-making processes in hydropower systems operation. Deterministic and probabilistic evaluations revealed an overall significant improvement in forecast accuracy for lead times up to 17 h. Evaluation of the percentage of observations bracketed in the forecasted 95 % confidence interval indicated that the degree of success in containing 95 % of the observations varies across seasons and hydrologic years.
Real-time individualization of the unified model of performance.
Liu, Jianbo; Ramakrishnan, Sridhar; Laxminarayan, Srinivas; Balkin, Thomas J; Reifman, Jaques
2017-12-01
Existing mathematical models for predicting neurobehavioural performance are not suited for mobile computing platforms because they cannot adapt model parameters automatically in real time to reflect individual differences in the effects of sleep loss. We used an extended Kalman filter to develop a computationally efficient algorithm that continually adapts the parameters of the recently developed Unified Model of Performance (UMP) to an individual. The algorithm accomplishes this in real time as new performance data for the individual become available. We assessed the algorithm's performance by simulating real-time model individualization for 18 subjects subjected to 64 h of total sleep deprivation (TSD) and 7 days of chronic sleep restriction (CSR) with 3 h of time in bed per night, using psychomotor vigilance task (PVT) data collected every 2 h during wakefulness. This UMP individualization process produced parameter estimates that progressively approached the solution produced by a post-hoc fitting of model parameters using all data. The minimum number of PVT measurements needed to individualize the model parameters depended upon the type of sleep-loss challenge, with ~30 required for TSD and ~70 for CSR. However, model individualization depended upon the overall duration of data collection, yielding increasingly accurate model parameters with greater number of days. Interestingly, reducing the PVT sampling frequency by a factor of two did not notably hamper model individualization. The proposed algorithm facilitates real-time learning of an individual's trait-like responses to sleep loss and enables the development of individualized performance prediction models for use in a mobile computing platform. © 2017 European Sleep Research Society.
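The parameter-adaptation step can be pictured with a generic extended Kalman filter that treats the model parameters as the state; the toy "performance model" h and all numbers below are illustrative stand-ins, not the UMP itself.

    import numpy as np

    def numerical_jacobian(h, theta, eps=1e-6):
        """Finite-difference Jacobian of the scalar measurement model h."""
        J = np.zeros_like(theta)
        for i in range(theta.size):
            d = np.zeros_like(theta); d[i] = eps
            J[i] = (h(theta + d) - h(theta - d)) / (2 * eps)
        return J

    def ekf_update(theta, P, y, h, Q, R):
        """One EKF step: parameters follow a random walk, y is a new PVT score."""
        P = P + Q                          # predict step for the parameter state
        H = numerical_jacobian(h, theta)   # linearize measurement model at theta
        S = H @ P @ H + R                  # innovation variance (scalar y)
        K = P @ H / S                      # Kalman gain
        theta = theta + K * (y - h(theta))
        P = P - np.outer(K, H @ P)
        return theta, P

    # Toy performance model: predicted lapses as a function of hours awake,
    # with individual parameters theta = (baseline, sensitivity).
    hours_awake = 10.0
    h = lambda th: th[0] + th[1] * hours_awake**1.5 / 50.0
    theta, P = np.array([2.0, 1.0]), np.eye(2)
    theta, P = ekf_update(theta, P, y=6.0, h=h, Q=np.eye(2) * 1e-3, R=1.0)
    print(theta)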
Lovejoy, S; de Lima, M I P
2015-07-01
Over the range of time scales from about 10 days to 30-100 years, in addition to the familiar weather and climate regimes, there is an intermediate "macroweather" regime characterized by negative temporal fluctuation exponents: implying that fluctuations tend to cancel each other out so that averages tend to converge. We show theoretically and numerically that macroweather precipitation can be modeled by a stochastic weather-climate model (the Climate Extended Fractionally Integrated Flux model, CEFIF) first proposed for macroweather temperatures, and we show numerically that a four parameter space-time CEFIF model can approximately reproduce eight or so empirical space-time exponents. In spite of this success, CEFIF is theoretically and numerically difficult to manage. We therefore propose a simplified stochastic model in which the temporal behavior is modeled as a fractional Gaussian noise but the spatial behavior as a multifractal (climate) cascade: a spatial extension of the recently introduced ScaLIng Macroweather Model, SLIMM. Both the CEFIF and this spatial SLIMM model have a property often implicitly assumed by climatologists that climate statistics can be "homogenized" by normalizing them with the standard deviation of the anomalies. Physically, it means that the spatial macroweather variability corresponds to different climate zones that multiplicatively modulate the local, temporal statistics. This simplified macroweather model provides a framework for macroweather forecasting that exploits the system's long range memory and spatial correlations; for it, the forecasting problem has been solved. We test this factorization property and the model with the help of three centennial, global scale precipitation products that we analyze jointly in space and in time.
Code for Calculating Regional Seismic Travel Time
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ballard, Sanford; Hipp, James; Barker, Glenn
The RSTT software computes predictions of the travel time of seismic energy traveling from a source to a receiver through 2.5D models of the seismic velocity distribution within the Earth. The two primary applications for the RSTT library are tomographic inversion studies and seismic event location calculations. In tomographic inversion studies, a seismologist begins with a number of source-receiver travel time observations and an initial starting model of the velocity distribution within the Earth. A forward travel time calculator, such as the RSTT library, is used to compute predictions of each observed travel time and all of the residuals (observed minus predicted travel time) are calculated. The Earth model is then modified in some systematic way with the goal of minimizing the residuals. The Earth model obtained in this way is assumed to be a better model than the starting model if it has lower residuals. The other major application for the RSTT library is seismic event location. Given an Earth model, an initial estimate of the location of a seismic event, and some number of observations of seismic travel time thought to have originated from that event, location codes systematically modify the estimate of the location of the event with the goal of minimizing the difference between the observed and predicted travel times. The second application, seismic event location, is routinely implemented by the military as part of its effort to monitor the Earth for nuclear tests conducted by foreign countries.
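The tomographic-inversion loop described above (predict, form residuals, update the model) can be illustrated with a toy linear travel-time problem; the ray geometry, slowness values and noise below are invented for illustration and have nothing to do with the actual RSTT model.

    import numpy as np

    # Toy tomography: travel time t = G @ s, where G[i, j] is the length of
    # ray i in cell j and s is the vector of cell slownesses.
    rng = np.random.default_rng(0)
    G = rng.uniform(0, 10, size=(30, 5))           # 30 rays, 5 cells (km)
    s_true = np.array([0.20, 0.25, 0.22, 0.30, 0.18])
    t_obs = G @ s_true + rng.normal(0, 0.05, 30)   # observed times with noise

    s = np.full(5, 0.22)                           # starting model
    for _ in range(5):
        residuals = t_obs - G @ s                  # observed minus predicted
        ds, *_ = np.linalg.lstsq(G, residuals, rcond=None)
        s = s + 0.5 * ds                           # damped model update
    print("recovered slowness:", s.round(3))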
Numerical modeling of space-time wave extremes using WAVEWATCH III
NASA Astrophysics Data System (ADS)
Barbariol, Francesco; Alves, Jose-Henrique G. M.; Benetazzo, Alvise; Bergamasco, Filippo; Bertotti, Luciana; Carniel, Sandro; Cavaleri, Luigi; Y. Chao, Yung; Chawla, Arun; Ricchi, Antonio; Sclavo, Mauro; Tolman, Hendrik
2017-04-01
A novel implementation of parameters for estimating space-time wave extremes within the spectral wave model WAVEWATCH III (WW3) is presented. The new output parameters, available in WW3 version 5.16, rely on the theoretical model of Fedele (J Phys Oceanogr 42(9):1601-1615, 2012) extended by Benetazzo et al. (J Phys Oceanogr 45(9):2261-2275, 2015) to estimate the maximum second-order nonlinear crest height over a given space-time region. In order to assess the wave height associated with the maximum crest height and the maximum wave height (generally different in a broad-band stormy sea state), the linear quasi-determinism theory of Boccotti (2000) is considered. The new WW3 implementation is tested by simulating sea states and space-time extremes over the Mediterranean Sea (forced by the wind fields produced by the COSMO-ME atmospheric model). Model simulations are compared to space-time wave maxima observed on March 10th, 2014, in the northern Adriatic Sea (Italy), by a stereo camera system installed on-board the "Acqua Alta" oceanographic tower. Results show that modeled space-time extremes are in general agreement with observations. Differences are mostly ascribed to the accuracy of the wind forcing and, to a lesser extent, to the approximations introduced in the space-time extremes parameterizations. Model estimates are expected to be even more accurate over areas larger than the mean wavelength (for instance, the model grid size).
A Probabilistic, Dynamic, and Attribute-wise Model of Intertemporal Choice
Dai, Junyi; Busemeyer, Jerome R.
2014-01-01
Most theoretical and empirical research on intertemporal choice assumes a deterministic and static perspective, leading to the widely adopted delay discounting models. As a form of preferential choice, however, intertemporal choice may be generated by a stochastic process that requires some deliberation time to reach a decision. We conducted three experiments to investigate how choice and decision time varied as a function of manipulations designed to examine the delay duration effect, the common difference effect, and the magnitude effect in intertemporal choice. The results, especially those associated with the delay duration effect, challenged the traditional deterministic and static view and called for alternative approaches. Consequently, various static or dynamic stochastic choice models were explored and fit to the choice data, including alternative-wise models derived from the traditional exponential or hyperbolic discount function and attribute-wise models built upon comparisons of direct or relative differences in money and delay. Furthermore, for the first time, dynamic diffusion models, such as those based on decision field theory, were also fit to the choice and response time data simultaneously. The results revealed that the attribute-wise diffusion model with direct differences, power transformations of objective value and time, and varied diffusion parameter performed the best and could account for all three intertemporal effects. In addition, the empirical relationship between choice proportions and response times was consistent with the prediction of diffusion models and thus favored a stochastic choice process for intertemporal choice that requires some deliberation time to make a decision. PMID:24635188
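A minimal simulation sketch of an attribute-wise diffusion process of the kind compared above, assuming drift proportional to the direct difference of power-transformed money and delay attributes; all parameter values are illustrative, not fitted values from the paper.

    import numpy as np

    def simulate_choice(m_ss, d_ss, m_ll, d_ll, n=2000, dt=0.01,
                        alpha=0.8, beta=0.6, w=1.0, sigma=1.0, thresh=1.5):
        """Evidence accumulates with drift = w*(money difference) - (delay
        difference), both power-transformed; +thresh = choose larger-later,
        -thresh = choose smaller-sooner. Returns P(LL) and mean RT."""
        drift = w * (m_ll**alpha - m_ss**alpha) - (d_ll**beta - d_ss**beta)
        choices, rts = [], []
        for _ in range(n):
            x, t = 0.0, 0.0
            while abs(x) < thresh:
                x += drift * dt + sigma * np.sqrt(dt) * np.random.normal()
                t += dt
            choices.append(x > 0)
            rts.append(t)
        return np.mean(choices), np.mean(rts)

    p_ll, mean_rt = simulate_choice(m_ss=50, d_ss=0, m_ll=80, d_ll=30)
    print(f"P(larger-later) = {p_ll:.2f}, mean RT = {mean_rt:.2f} s")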
An online spatiotemporal prediction model for dengue fever epidemic in Kaohsiung (Taiwan).
Yu, Hwa-Lung; Angulo, José M; Cheng, Ming-Hung; Wu, Jiaping; Christakos, George
2014-05-01
The emergence and re-emergence of disease epidemics is a complex question that may be influenced by diverse factors, including the space-time dynamics of human populations, environmental conditions, and associated uncertainties. This study proposes a stochastic framework to integrate space-time dynamics in the form of a Susceptible-Infected-Recovered (SIR) model, together with uncertain disease observations, into a Bayesian maximum entropy (BME) framework. The resulting model (BME-SIR) can be used to predict space-time disease spread. Specifically, it was applied to obtain a space-time prediction of the dengue fever (DF) epidemic that took place in Kaohsiung City (Taiwan) during 2002. In implementing the model, the SIR parameters were continually updated and information on new cases of infection was incorporated. The results obtained show that the proposed model is robust to user-specified initial values of unknown model parameters, that is, transmission and recovery rates. In general, this model provides a good characterization of the spatial diffusion of the DF epidemic, especially in the city districts proximal to the location of the outbreak. Prediction performance may be affected by various factors, such as virus serotypes and human intervention, which can change the space-time dynamics of disease diffusion. The proposed BME-SIR disease prediction model can provide government agencies with a valuable reference for the timely identification, control, and prevention of DF spread in space and time. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
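The S-I-R core with sequential parameter updating can be sketched as below; the BME framework handles the updating probabilistically over space and time, so treat this deterministic single-location "nudge" of the transmission rate, and all numbers, as a loose simplification for illustration only.

    def sir_step(S, I, R, beta, gamma, N):
        """One discrete-time SIR update."""
        new_inf = beta * S * I / N
        new_rec = gamma * I
        return S - new_inf, I + new_inf - new_rec, R + new_rec

    N, S, I, R = 1_500_000, 1_499_990.0, 10.0, 0.0
    beta, gamma = 0.45, 0.20
    for week, obs_cases in enumerate([12, 30, 85, 160, 240]):
        S, I, R = sir_step(S, I, R, beta, gamma, N)
        # Crude sequential update: pull beta toward the value implied by the
        # newly observed case count as new infection data arrive.
        beta_implied = obs_cases * N / (S * I) if S * I > 0 else beta
        beta = 0.7 * beta + 0.3 * beta_implied
        print(f"week {week}: I = {I:.0f}, updated beta = {beta:.3f}")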
Seismic hazard assessment over time: Modelling earthquakes in Taiwan
NASA Astrophysics Data System (ADS)
Chan, Chung-Han; Wang, Yu; Wang, Yu-Ju; Lee, Ya-Ting
2017-04-01
To assess the seismic hazard with temporal change in Taiwan, we develop a new approach, combining both the Brownian Passage Time (BPT) model and the Coulomb stress change, and implement the seismogenic source parameters by the Taiwan Earthquake Model (TEM). The BPT model was adopted to describe the rupture recurrence intervals of the specific fault sources, together with the time elapsed since the last fault-rupture to derive their long-term rupture probability. We also evaluate the short-term seismicity rate change based on the static Coulomb stress interaction between seismogenic sources. By considering above time-dependent factors, our new combined model suggests an increased long-term seismic hazard in the vicinity of active faults along the western Coastal Plain and the Longitudinal Valley, where active faults have short recurrence intervals and long elapsed time since their last ruptures, and/or short-term elevated hazard levels right after the occurrence of large earthquakes due to the stress triggering effect. The stress enhanced by the February 6th, 2016, Meinong ML 6.6 earthquake also significantly increased rupture probabilities of several neighbouring seismogenic sources in Southwestern Taiwan and raised hazard level in the near future. Our approach draws on the advantage of incorporating long- and short-term models, to provide time-dependent earthquake probability constraints. Our time-dependent model considers more detailed information than any other published models. It thus offers decision-makers and public officials an adequate basis for rapid evaluations of and response to future emergency scenarios such as victim relocation and sheltering.
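Because the Brownian Passage Time distribution is an inverse Gaussian, the time-dependent (conditional) rupture probability used in this kind of model can be sketched with scipy; the mapping of mean recurrence and aperiodicity onto scipy's parameters is stated in the comments, and the example numbers are illustrative, not TEM values.

    from scipy.stats import invgauss

    def bpt_conditional_prob(mu, alpha, t_elapsed, dt):
        """P(rupture within dt | no rupture for t_elapsed) under a BPT model
        with mean recurrence mu and aperiodicity alpha. BPT(mu, alpha) equals
        scipy's invgauss with shape parameter alpha**2 and scale mu/alpha**2,
        which gives mean mu and coefficient of variation alpha."""
        dist = invgauss(alpha**2, scale=mu / alpha**2)
        return (dist.cdf(t_elapsed + dt) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)

    # Example: mean recurrence 250 yr, aperiodicity 0.5, 180 yr elapsed since
    # the last rupture; probability of rupture in the next 30 yr.
    print(f"{bpt_conditional_prob(250.0, 0.5, 180.0, 30.0):.3f}")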
Bramness, Jørgen G; Walby, Fredrik A; Morken, Gunnar; Røislien, Jo
2015-08-01
Seasonal variation in the number of suicides has long been acknowledged. It has been suggested that this seasonality has declined in recent years, but studies have generally used statistical methods incapable of confirming this. We examined all suicides occurring in Norway during 1969-2007 (more than 20,000 suicides in total) to establish whether seasonality decreased over time. Fitting of additive Fourier Poisson time-series regression models allowed for formal testing of a possible linear decrease in seasonality, or a reduction at a specific point in time, while adjusting for a possible smooth nonlinear long-term change without having to categorize time into discrete yearly units. The models were compared using Akaike's Information Criterion and analysis of variance. A model with a seasonal pattern was significantly superior to a model without one. There was a reduction in seasonality during the period. Both the model assuming a linear decrease in seasonality and the model assuming a change at a specific point in time were superior to a model assuming constant seasonality, thus confirming by formal statistical testing that the magnitude of the seasonality in suicides has diminished. The additive Fourier Poisson time-series regression model would also be useful for studying other temporal phenomena with seasonal components. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
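A Fourier Poisson regression with a linearly time-varying seasonal amplitude can be sketched with statsmodels on synthetic counts; the model form (harmonics plus harmonic-by-time interactions) follows the abstract's description, while the data and coefficients below are simulated.

    import numpy as np
    import statsmodels.api as sm

    # Synthetic monthly counts over 1969-2007 with decaying seasonality.
    months = np.arange(12 * 39)
    t = months / 12.0                            # time in years
    omega = 2 * np.pi * (months % 12) / 12.0
    y = np.random.poisson(40 + 6 * np.cos(omega) * np.exp(-0.03 * t))

    X = sm.add_constant(np.column_stack([
        t,                                       # long-term trend (linear here)
        np.sin(omega), np.cos(omega),            # annual harmonic
        t * np.sin(omega), t * np.cos(omega),    # linearly changing seasonality
    ]))
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    print(fit.summary())     # significance of the interaction terms tests decline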
Use of models to map potential capture of surface water
Leake, Stanley A.
2006-01-01
The effects of ground-water withdrawals on surface-water resources and riparian vegetation have become important considerations in water-availability studies. Ground water withdrawn by a well initially comes from storage around the well, but with time can eventually increase inflow to the aquifer and (or) decrease natural outflow from the aquifer. This increased inflow and decreased outflow is referred to as “capture.” For a given time, capture can be expressed as a fraction of withdrawal rate that is accounted for as increased rates of inflow and decreased rates of outflow. The time frames over which capture might occur at different locations commonly are not well understood by resource managers. A ground-water model, however, can be used to map potential capture for areas and times of interest. The maps can help managers visualize the possible timing of capture over large regions. The first step in the procedure to map potential capture is to run a ground-water model in steady-state mode without withdrawals to establish baseline total flow rates at all sources and sinks. The next step is to select a time frame and appropriate withdrawal rate for computing capture. For regional aquifers, time frames of decades to centuries may be appropriate. The model is then run repeatedly in transient mode, each run with one well in a different model cell in an area of interest. Differences in inflow and outflow rates from the baseline conditions for each model run are computed and saved. The differences in individual components are summed and divided by the withdrawal rate to obtain a single capture fraction for each cell. Values are contoured to depict capture fractions for the time of interest. Considerations in carrying out the analysis include use of realistic physical boundaries in the model, understanding the degree of linearity of the model, selection of an appropriate time frame and withdrawal rate, and minimizing error in the global mass balance of the model.
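The mapping procedure above can be paraphrased as a loop over model cells; run_baseline and run_transient below are hypothetical stubs standing in for actual groundwater-model runs (not a real modelling API), and all numbers are invented.

    import numpy as np

    def run_baseline():
        """Stub: steady-state run with no withdrawal -> (inflow, outflow)."""
        return 100.0, 100.0

    def run_transient(cell, Q, t_eval):
        """Stub: transient run with one well of rate Q in the given cell."""
        frac = np.exp(-cell / 10.0)      # nearer cells capture more by t_eval
        return 100.0 + 0.6 * Q * frac, 100.0 - 0.4 * Q * frac

    def capture_fraction_map(cells, Q, t_eval):
        base_in, base_out = run_baseline()
        capture = {}
        for cell in cells:
            inflow, outflow = run_transient(cell, Q, t_eval)
            # Capture = induced inflow plus reduced outflow, as a fraction of Q.
            capture[cell] = ((inflow - base_in) + (base_out - outflow)) / Q
        return capture                    # contour these for the chosen time

    print(capture_fraction_map(cells=range(5), Q=10.0, t_eval=50.0))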
Modeling Rabbit Responses to Single and Multiple Aerosol ...
Survival models are developed here to predict response and time-to-response for mortality in rabbits following exposures to single or multiple aerosol doses of Bacillus anthracis spores. Hazard function models were developed for a multiple dose dataset to predict the probability of death through specifying dose-response functions and the time between exposure and the time-to-death (TTD). Among the models developed, the best-fitting survival model (baseline model) has an exponential dose-response model with a Weibull TTD distribution. Alternative models assessed employ different underlying dose-response functions and use the assumption that, in a multiple dose scenario, earlier doses affect the hazard functions of each subsequent dose. In addition, published mechanistic models are analyzed and compared with models developed in this paper. None of the alternative models that were assessed provided a statistically significant improvement in fit over the baseline model. The general approach utilizes simple empirical data analysis to develop parsimonious models with limited reliance on mechanistic assumptions. The baseline model predicts TTDs consistent with reported results from three independent high-dose rabbit datasets. More accurate survival models depend upon future development of dose-response datasets specifically designed to assess potential multiple dose effects on response and time-to-response. The process used in this paper to dev
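The baseline model's two pieces can be sketched directly: an exponential dose-response for whether a rabbit dies and a Weibull distribution for when; every parameter value below is invented for illustration, not a fitted value from the paper.

    import numpy as np

    def p_death(dose, k=2.5e-6):
        """Exponential dose-response: probability of death given inhaled dose."""
        return 1.0 - np.exp(-k * dose)

    def sample_ttd(n, shape=1.8, scale=4.0, rng=np.random.default_rng(1)):
        """Weibull time-to-death (days) for the animals that respond."""
        return scale * rng.weibull(shape, size=n)

    dose = 1e6                                   # spores (illustrative)
    n_rabbits = 1000
    deaths = np.random.default_rng(0).random(n_rabbits) < p_death(dose)
    ttds = sample_ttd(deaths.sum())
    print(f"P(death) = {p_death(dose):.2f}, median TTD = {np.median(ttds):.1f} d")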
Distributed Time Synchronization Algorithms and Opinion Dynamics
NASA Astrophysics Data System (ADS)
Manita, Anatoly; Manita, Larisa
2018-01-01
We propose new deterministic and stochastic models for synchronization of clocks in nodes of distributed networks. An external accurate time server is used to ensure convergence of the node clocks to the exact time. These systems have much in common with mathematical models of opinion formation in multiagent systems. There is a direct analogy between the time server/node clocks pair in asynchronous networks and the leader/follower pair in the context of social network models.
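A minimal simulation of the server/followers idea, assuming all-to-all neighbour coupling and a randomly chosen subset of nodes that can query the exact time server; the gains, offsets and network size are illustrative, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n, server_time = 20, 0.0
    clocks = rng.normal(0.0, 5.0, n)           # initial clock offsets (seconds)
    sees_server = rng.random(n) < 0.2          # only some nodes reach the server

    for step in range(200):
        mean_neighbour = clocks.mean()         # opinion-dynamics style coupling
        clocks += 0.1 * (mean_neighbour - clocks)
        # The 'leader' term: nodes with server access are pulled to exact time,
        # and drag the rest of the network with them through the coupling.
        clocks[sees_server] += 0.2 * (server_time - clocks[sees_server])

    print("max offset after synchronization:", np.abs(clocks).max())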
Aircraft model prototypes which have specified handling-quality time histories
NASA Technical Reports Server (NTRS)
Johnson, S. H.
1976-01-01
Several techniques for obtaining linear constant-coefficient airplane models from specified handling-quality time histories are discussed. One technique, the pseudodata method, solves the basic problem, yields specified eigenvalues, and accommodates state-variable transfer-function zero suppression. The method is fully illustrated for a fourth-order stability-axis small-motion model with three lateral handling-quality time histories specified. The FORTRAN program which obtains and verifies the model is included and fully documented.
A simple two-stage model predicts response time distributions.
Carpenter, R H S; Reddi, B A J; Anderson, A J
2009-08-15
The neural mechanisms underlying reaction times have previously been modelled in two distinct ways. When stimuli are hard to detect, response time tends to follow a random-walk model that integrates noisy sensory signals. But studies investigating the influence of higher-level factors such as prior probability and response urgency typically use highly detectable targets, and response times then usually correspond to a linear rise-to-threshold mechanism. Here we show that a model incorporating both types of element in series - a detector integrating noisy afferent signals, followed by a linear rise-to-threshold performing decision - successfully predicts not only mean response times but, much more stringently, the observed distribution of these times and the rate of decision errors over a wide range of stimulus detectability. By reconciling what previously may have seemed to be conflicting theories, we are now closer to having a complete description of reaction time and the decision processes that underlie it.
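The serial two-stage mechanism can be simulated directly: a noisy random-walk detection stage followed by a LATER-style linear rise-to-threshold whose rate varies across trials; thresholds and rates below are illustrative, not the authors' fitted values.

    import numpy as np

    def two_stage_rt(n=5000, snr=0.3, detect_thresh=30.0,
                     mu_rate=1.0, sd_rate=0.25, decide_thresh=1.0,
                     dt=0.01, rng=np.random.default_rng(2)):
        """Stage 1: integrate noisy sensory samples to a detection threshold.
        Stage 2: linear rise to a decision threshold at a rate drawn fresh on
        each trial (normal across trials). Returns response times in seconds."""
        rts = []
        for _ in range(n):
            x, t = 0.0, 0.0
            while x < detect_thresh:               # detection stage
                x += snr + rng.normal()            # noisy evidence per step
                t += dt
            rate = rng.normal(mu_rate, sd_rate)    # decision stage (LATER)
            if rate > 0:                           # negative rates: no response
                rts.append(t + decide_thresh / rate)
        return np.array(rts)

    rts = two_stage_rt()
    print(f"median RT = {np.median(rts) * 1000:.0f} ms over {rts.size} trials")

Lowering snr slows the detection stage and skews the RT distribution, while changing mu_rate mimics prior-probability or urgency effects on the decision stage, which is the division of labour the model proposes.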
Reliability model of a monopropellant auxiliary propulsion system
NASA Technical Reports Server (NTRS)
Greenberg, J. S.
1971-01-01
A mathematical model and associated computer code has been developed which computes the reliability of a monopropellant blowdown hydrazine spacecraft auxiliary propulsion system as a function of time. The propulsion system is used to adjust or modify the spacecraft orbit over an extended period of time. The multiple orbit corrections are the multiple objectives which the auxiliary propulsion system is designed to achieve. Thus the reliability model computes the probability of successfully accomplishing each of the desired orbit corrections. To accomplish this, the reliability model interfaces with a computer code that models the performance of a blowdown (unregulated) monopropellant auxiliary propulsion system. The computer code acts as a performance model and as such gives an accurate time history of the system operating parameters. The basic timing and status information is passed on to and utilized by the reliability model which establishes the probability of successfully accomplishing the orbit corrections.
An Illustration of Generalised ARMA (GARMA) Time Series Modeling of Forest Area in Malaysia
NASA Astrophysics Data System (ADS)
Pillai, Thulasyammal Ramiah; Shitan, Mahendran
Forestry is the art and science of managing forests, tree plantations, and related natural resources. The main goal of forestry is to create and implement systems that allow forests to continue a sustainable provision of environmental supplies and services. Forest area is land under natural or planted stands of trees, whether productive or not. The forest area of Malaysia has been observed over the years and it can be modeled using time series models. A new class of GARMA models has been introduced in the time series literature to reveal some hidden features in time series data. For these models to be used widely in practice, we illustrate the fitting of the GARMA(1, 1; 1, δ) model to the annual forest area data of Malaysia, observed from 1987 to 2008. The estimation of the model was done using the Hannan-Rissanen algorithm, Whittle's estimation and maximum likelihood estimation.
Application of a time-magnitude prediction model for earthquakes
NASA Astrophysics Data System (ADS)
An, Weiping; Jin, Xueshen; Yang, Jialiang; Dong, Peng; Zhao, Jun; Zhang, He
2007-06-01
In this paper, we discuss the physical meaning of the magnitude-time model parameters for earthquake prediction. The gestation process for strong earthquakes in all eleven seismic zones in China can be described by the magnitude-time prediction model using the computed values of the model parameters. The average model parameter values for China are: b = 0.383, c = 0.154, d = 0.035, B = 0.844, C = -0.209, and D = 0.188. The robustness of the model parameters is estimated from the variation in the minimum magnitude of the transformed data, the spatial extent, and the temporal period. Analysis of the spatial and temporal suitability of the model indicates that the computation unit size should be at least 4° × 4° for seismic zones in North China, at least 3° × 3° in Southwest and Northwest China, and the time period should be as long as possible.
Modelling accelerated degradation data using Wiener diffusion with a time scale transformation.
Whitmore, G A; Schenkelberg, F
1997-01-01
Engineering degradation tests allow industry to assess the potential life span of long-life products that do not fail readily under accelerated conditions in life tests. A general statistical model is presented here for performance degradation of an item of equipment. The degradation process in the model is taken to be a Wiener diffusion process with a time scale transformation. The model incorporates Arrhenius extrapolation for high stress testing. The lifetime of an item is defined as the time until performance deteriorates to a specified failure threshold. The model can be used to predict the lifetime of an item or the extent of degradation of an item at a specified future time. Inference methods for the model parameters, based on accelerated degradation test data, are presented. The model and inference methods are illustrated with a case application involving self-regulating heating cables. The paper also discusses a number of practical issues encountered in applications.
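A simulation sketch of the degradation model, assuming a power-law time-scale transformation tau(t) = t**c (the Arrhenius acceleration term mentioned above is omitted); all parameters are illustrative, not values from the heating-cable case study.

    import numpy as np

    def simulate_degradation(n_paths=2000, horizon=10_000.0, n_steps=2000,
                             delta=0.8, sigma=2.0, c=0.6, threshold=40.0,
                             rng=np.random.default_rng(3)):
        """Wiener degradation with transformed time: X(t) = delta*tau(t)
        + sigma*B(tau(t)), tau(t) = t**c. Lifetime = first crossing of the
        failure threshold. Returns the simulated lifetimes."""
        t = np.linspace(0.0, horizon, n_steps + 1)
        tau = t**c
        dtau = np.diff(tau)
        lifetimes = []
        for _ in range(n_paths):
            x = delta * tau[1:] + sigma * np.cumsum(rng.normal(0, np.sqrt(dtau)))
            hit = np.argmax(x >= threshold)
            if x[hit] >= threshold:                 # ignore paths that never fail
                lifetimes.append(t[1 + hit])
        return np.array(lifetimes)

    life = simulate_degradation()
    print(f"median predicted lifetime: {np.median(life):.0f} h")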
Modeling and Simulation of Bus Dispatching Policy for Timed Transfers on Signalized Networks
NASA Astrophysics Data System (ADS)
Cho, Hsun-Jung; Lin, Guey-Shii
2007-12-01
The major work of this study is to formulate the system cost functions and to integrate the bus dispatching policy with signal control. The integrated model mainly includes the flow dispersion model for links, signal control model for nodes, and dispatching control model for transfer terminals. All such models are inter-related for transfer operations in one-center transit network. The integrated model that combines dispatching policies with flexible signal control modes can be applied to assess the effectiveness of transfer operations. It is found that, if bus arrival information is reliable, an early dispatching decision made at the mean bus arrival times is preferable. The costs for coordinated operations with slack times are relatively low at the optimal common headway when applying adaptive route control. Based on such findings, a threshold function of bus headway for justifying an adaptive signal route control under various time values of auto drivers is developed.
Analysis of an inventory model for both linearly decreasing demand and holding cost
NASA Astrophysics Data System (ADS)
Malik, A. K.; Singh, Parth Raj; Tomar, Ajay; Kumar, Satish; Yadav, S. K.
2016-03-01
This study proposes the analysis of an inventory model for linearly decreasing demand and holding cost for non-instantaneous deteriorating items. The inventory model focuses on commodities having linearly decreasing demand without shortages. The holding cost doesn't remain uniform with time due to any form of variation in the time value of money. Here we consider that the holding cost decreases with respect to time. The optimal time interval for the total profit and the optimal order quantity are determined. The developed inventory model is pointed up through a numerical example. It also includes the sensitivity analysis.
The Behavioral Economics of Choice and Interval Timing
ERIC Educational Resources Information Center
Jozefowiez, J.; Staddon, J. E. R.; Cerutti, D. T.
2009-01-01
The authors propose a simple behavioral economic model (BEM) describing how reinforcement and interval timing interact. The model assumes a Weber-law-compliant logarithmic representation of time. Associated with each represented time value are the payoffs that have been obtained for each possible response. At a given real time, the response with…
Menke, Jan
2009-11-01
This study analyses the relation between image quality and contrast kinetics in bolus-timed carotid magnetic resonance angiography (MRA) and interprets the findings by Fourier-based numerical modelling. One hundred patients prone to carotid stenosis were studied using contrast-enhanced carotid MRA with bolus timing. The carotid MRAs were timed to start relatively early without accounting for the injection time of the contrast medium. For interpretation different starting times were modelled, utilising the spectral information of the test bolus series. In the test bolus series the arterial time-to-peak showed a large 95% confidence interval of 12-27 s, indicating the need for individual MRA timing. All bolus-timed MRAs were of good diagnostic quality. The mean (+/-SD) arterial contrast-to-noise ratio was 53.0 (+/-12.8) and thus high, and 95% of the MRAs showed a slight venous contamination of 11.8% or less (median 5.6%). According to the Fourier-based modelling the central k-space may be acquired about 2 s before the arterial contrast peak. This results in carotid MRAs with sufficiently high arterial enhancement and little venous contamination. In conclusion, in bolus-timed carotid MRA a relatively short timing provides good arterial contrast with little venous contamination, which can be explained by Fourier-based numerical modelling of the contrast kinetics.
Continuous-Time Finance and the Waiting Time Distribution: Multiple Characteristic Times
NASA Astrophysics Data System (ADS)
Fa, Kwok Sau
2012-09-01
In this paper, we model the tick-by-tick dynamics of markets by using the continuous-time random walk (CTRW) model. We employ a sum of products of power law and stretched exponential functions for the waiting time probability distribution function; this function can fit well the waiting time distribution for BUND futures traded at LIFFE in 1997.
14 CFR 91.1505 - Repairs assessment for pressurized fuselages.
Code of Federal Regulations, 2011 CFR
2011-01-01
... A300 (excluding the -600 series), the flight cycle implementation time is: (i) Model B2: 36,000 flights... operate an Airbus Model A300 (excluding the -600 series), British Aerospace Model BAC 1-11, Boeing Model... Lockheed Model L-1011 airplane beyond applicable flight cycle implementation time specified below, or May...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tom Elicson; Bentley Harwood; Jim Bouchard
Over a 12 month period, a fire PRA was developed for a DOE facility using the NUREG/CR-6850 EPRI/NRC fire PRA methodology. The fire PRA modeling included calculation of fire severity factors (SFs) and fire non-suppression probabilities (PNS) for each safe shutdown (SSD) component considered in the fire PRA model. The SFs were developed by performing detailed fire modeling through a combination of CFAST fire zone model calculations and Latin Hypercube Sampling (LHS). Component damage times and automatic fire suppression system actuation times calculated in the CFAST LHS analyses were then input to a time-dependent model of fire non-suppression probability. The fire non-suppression probability model is based on the modeling approach outlined in NUREG/CR-6850 and is supplemented with plant specific data. This paper presents the methodology used in the DOE facility fire PRA for modeling fire-induced SSD component failures and includes discussions of modeling techniques for: • Development of time-dependent fire heat release rate profiles (required as input to CFAST), • Calculation of fire severity factors based on CFAST detailed fire modeling, and • Calculation of fire non-suppression probabilities.
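The sampling-and-suppression machinery can be sketched with scipy's Latin Hypercube sampler and an exponential non-suppression model of the NUREG/CR-6850 type, P_ns = exp(-lambda * t); the damage-time formula below is a stub standing in for a CFAST run, and every range and rate is invented for illustration.

    import numpy as np
    from scipy.stats import qmc

    # LHS over uncertain fire inputs: peak heat release rate and detection delay.
    sampler = qmc.LatinHypercube(d=2, seed=4)
    u = sampler.random(n=1000)
    hrr = 500.0 + u[:, 0] * 1500.0           # peak HRR, kW (illustrative range)
    t_detect = 1.0 + u[:, 1] * 9.0           # detection delay, min

    t_damage = 40.0 * (1000.0 / hrr)**0.5    # stub in place of CFAST damage time
    t_avail = np.clip(t_damage - t_detect, 0.0, None)   # time to suppress
    lam = 0.1                                # suppression rate (1/min), plant data
    p_ns = np.exp(-lam * t_avail)            # exponential non-suppression model
    print(f"mean non-suppression probability: {p_ns.mean():.3f}")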
Deriving Tools from Real-time Runs: A New CCMC Support for SEC and AFWA
NASA Technical Reports Server (NTRS)
Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha
2008-01-01
The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides researchers with the use of space science models, even if they are not model owners themselves. The second focus of CCMC activities is on validation and verification of space weather models, and on the transition of appropriate models to space weather forecast centers. As part of the latter activity, the CCMC develops real-time simulation systems that stress models through routine execution. A by-product of these real-time calculations is the ability to derive model products, which may be useful for space weather operators. After consultations with NOAA/SEC and with AFWA, CCMC has developed a set of tools as a first step to make real-time model output useful to forecast centers. In this presentation, we will discuss the motivation for this activity, the actions taken so far, and options for future tools from model output.
The Priority Inversion Problem and Real-Time Symbolic Model Checking
1993-04-23
Priority inversion can make real time systems unpredictable in subtle ways. This makes it more difficult to implement and debug such systems. Our work discusses this problem and presents one possible solution. The solution is formalized and verified using temporal logic model checking techniques. In order to perform the verification, the BDD-based symbolic model checking algorithm given in previous works was extended to handle real-time properties using the bounded until operator. We believe that this algorithm, which is based on discrete time, is able to handle many real-time properties
Experience of Time Passage: Phenomenology, Psychophysics, and Biophysical Modelling
NASA Astrophysics Data System (ADS)
Wackermann, Jiří
2005-10-01
The experience of time's passing appears, from the 1st person perspective, to be a primordial subjective experience, seemingly inaccessible to the 3rd person accounts of time perception (psychophysics, cognitive psychology). In our analysis of the 'dual klepsydra' model of reproduction of temporal durations, time passage occurs as a cognitive construct, based upon a more elementary ('proto-cognitive') function of the psychophysical organism. This conclusion contradicts the common concepts of 'subjective' or 'psychological' time as readings of an 'internal clock'. Our study shows how phenomenological, experimental and modelling approaches can be fruitfully combined.
A model for discriminating reinforcers in time and space.
Cowie, Sarah; Davison, Michael; Elliffe, Douglas
2016-06-01
Both the response-reinforcer and stimulus-reinforcer relation are important in discrimination learning; differential responding requires a minimum of two discriminably-different stimuli and two discriminably-different associated contingencies of reinforcement. When elapsed time is a discriminative stimulus for the likely availability of a reinforcer, choice over time may be modeled by an extension of the Davison and Nevin (1999) model that assumes that local choice strictly matches the effective local reinforcer ratio. The effective local reinforcer ratio may differ from the obtained local reinforcer ratio for two reasons: Because the animal inaccurately estimates times associated with obtained reinforcers, and thus incorrectly discriminates the stimulus-reinforcer relation across time; and because of error in discriminating the response-reinforcer relation. In choice-based timing tasks, the two responses are usually highly discriminable, and so the larger contributor to differences between the effective and obtained reinforcer ratio is error in discriminating the stimulus-reinforcer relation. Such error may be modeled either by redistributing the numbers of reinforcers obtained at each time across surrounding times, or by redistributing the ratio of reinforcers obtained at each time in the same way. We assessed the extent to which these two approaches to modeling discrimination of the stimulus-reinforcer relation could account for choice in a range of temporal-discrimination procedures. The version of the model that redistributed numbers of reinforcers accounted for more variance in the data. Further, this version provides an explanation for shifts in the point of subjective equality that occur as a result of changes in the local reinforcer rate. The inclusion of a parameter reflecting error in discriminating the response-reinforcer relation enhanced the ability of each version of the model to describe data. The ability of this class of model to account for a range of data suggests that timing, like other conditional discriminations, is choice under the joint discriminative control of elapsed time and differential reinforcement. Understanding the role of differential reinforcement is therefore critical to understanding control by elapsed time. Copyright © 2016 Elsevier B.V. All rights reserved.
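The "redistribute the numbers of reinforcers" version of the model can be sketched as Gaussian smearing of reinforcer counts across time bins followed by strict local matching; the bin counts and kernel width below are illustrative, not data from the paper.

    import numpy as np

    def predicted_choice(r_left, r_right, sigma=1.5):
        """Smear the *numbers* of reinforcers obtained in each time bin across
        neighbouring bins (imperfect discrimination of the stimulus-reinforcer
        relation), then let local choice strictly match the effective local
        reinforcer ratio. Returns P(left) at each time bin."""
        bins = np.arange(len(r_left))
        kernel = np.exp(-0.5 * ((bins[:, None] - bins[None, :]) / sigma) ** 2)
        kernel /= kernel.sum(axis=0)            # each bin's mass is conserved
        eff_left = kernel @ r_left
        eff_right = kernel @ r_right
        return eff_left / (eff_left + eff_right)

    # Toy arrangement: the left response pays off early, the right pays off late.
    left = np.array([0, 8, 2, 0, 0, 0, 0, 0], float)
    right = np.array([0, 0, 0, 0, 1, 6, 3, 0], float)
    print(predicted_choice(left, right).round(2))

Widening sigma flattens the predicted choice function, which is how this class of model produces shifts in the point of subjective equality when local reinforcer rates change.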
Nasejje, Justine B; Mwambi, Henry; Dheda, Keertan; Lesosky, Maia
2017-07-28
Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model in analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for the best covariate to split on from that of the best split point search for the selected covariate. In this study, we compare the random survival forest model to the conditional inference forest (CIF) model using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under five years of age in Uganda and it consists of categorical covariates with most of them having more than two levels (many split-points). The second dataset is based on the survival of patients with extremely drug resistant tuberculosis (XDR TB), which consists of mainly categorical covariates with two levels (few split-points). The study findings indicate that the conditional inference forest model is superior to random survival forest models in analysing time-to-event data that consists of covariates with many split-points, based on the values of the bootstrap cross-validated estimates for integrated Brier scores. However, conditional inference forests perform comparably to random survival forest models in analysing time-to-event data consisting of covariates with fewer split-points. Although survival forests are promising methods in analysing time-to-event data, it is important to identify the best forest model for analysis based on the nature of covariates of the dataset in question.
Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.
2009-01-01
In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
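The two-step regression decision described above can be sketched with statsmodels on synthetic data: fit the simple log-log model first, then test whether adding log streamflow is statistically significant and reduces uncertainty; variable names and coefficients below are invented.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    turb = rng.lognormal(3.0, 0.8, 120)           # turbidity, FNU
    flow = rng.lognormal(4.0, 0.6, 120)           # streamflow, ft^3/s
    ssc = np.exp(0.2 + 0.9 * np.log(turb) + 0.2 * np.log(flow)
                 + rng.normal(0, 0.2, 120))       # synthetic SSC, mg/L

    # Step 1: simple linear regression of log(SSC) on log(turbidity).
    X1 = sm.add_constant(np.log(turb))
    simple = sm.OLS(np.log(ssc), X1).fit()

    # Step 2: multiple regression adding log(streamflow); keep it only if the
    # added term is significant and the model uncertainty improves.
    X2 = sm.add_constant(np.column_stack([np.log(turb), np.log(flow)]))
    multiple = sm.OLS(np.log(ssc), X2).fit()
    print(simple.rsquared, multiple.rsquared, multiple.pvalues[-1])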
NASA Technical Reports Server (NTRS)
Rosenstein, H.; Mcveigh, M. A.; Mollenkof, P. A.
1973-01-01
A mathematical model for a real time simulation of a tilt rotor aircraft was developed. The mathematical model is used for evaluating aircraft performance and handling qualities. The model is based on an eleven degree of freedom total force representation. The rotor is treated as a point source of forces and moments with appropriate response time lags and actuator dynamics. The aerodynamics of the wing, tail, rotors, landing gear, and fuselage are included.
The time series approach to short term load forecasting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagan, M.T.; Behr, S.M.
The application of time series analysis methods to load forecasting is reviewed. It is shown that Box and Jenkins time series models, in particular, are well suited to this application. The logical and organized procedures for model development using the autocorrelation function make these models particularly attractive. One of the drawbacks of these models is the inability to accurately represent the nonlinear relationship between load and temperature. A simple procedure for overcoming this difficulty is introduced, and several Box and Jenkins models are compared with a forecasting procedure currently used by a utility company.
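One simple way to graft the load-temperature nonlinearity onto a Box and Jenkins model is to pass piecewise-linear temperature terms as exogenous regressors to an ARMA model, sketched here with statsmodels' SARIMAX on synthetic hourly data; this illustrates the general idea, not the specific procedure of the paper.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(6)
    n = 21 * 24                                    # three weeks of hourly data
    temp = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 24) + rng.normal(0, 2, n)
    # Heating/cooling degree terms approximate the V-shaped (nonlinear)
    # load-temperature relationship that a pure ARMA model cannot capture.
    hdd = np.clip(18 - temp, 0, None)              # heating demand proxy
    cdd = np.clip(temp - 22, 0, None)              # cooling demand proxy
    load = 500 + 8 * hdd + 12 * cdd + rng.normal(0, 10, n)

    exog = pd.DataFrame({'hdd': hdd, 'cdd': cdd})
    model = SARIMAX(load, exog=exog, order=(1, 0, 1),
                    seasonal_order=(1, 0, 1, 24))  # daily seasonality
    result = model.fit(disp=False)
    print(result.params)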
Murray, A; Montgomery, J E; Chang, H; Rogers, W H; Inui, T; Safran, D G
2001-07-01
To examine the differences in physician satisfaction associated with open- versus closed-model practice settings and to evaluate changes in physician satisfaction between 1986 and 1997. Open-model practices refer to those in which physicians accept patients from multiple health plans and insurers (i.e., do not have an exclusive arrangement with any single health plan). Closed-model practices refer to those wherein physicians have an exclusive relationship with a single health plan (i.e., staff- or group-model HMO). Two cross-sectional surveys of physicians; one conducted in 1986 (Medical Outcomes Study) and one conducted in 1997 (Study of Primary Care Performance in Massachusetts). Primary care practices in Massachusetts. General internists and family practitioners in Massachusetts. Seven measures of physician satisfaction, including satisfaction with quality of care, the potential to achieve professional goals, time spent with individual patients, total earnings from practice, degree of personal autonomy, leisure time, and incentives for high quality. Physicians in open- versus closed-model practices differed significantly in several aspects of their professional satisfaction. In 1997, open-model physicians were less satisfied than closed-model physicians with their total earnings, leisure time, and incentives for high quality. Open-model physicians reported significantly more difficulty with authorization procedures and reported more denials for care. Overall, physicians in 1997 were less satisfied in every aspect of their professional life than 1986 physicians. Differences were significant in three areas: time spent with individual patients, autonomy, and leisure time (P ≤ .05). Among open-model physicians, satisfaction with autonomy and time with individual patients were significantly lower in 1997 than 1986 (P ≤ .01). Among closed-model physicians, satisfaction with total earnings and with potential to achieve professional goals were significantly lower in 1997 than in 1986 (P ≤ .01). This study finds that the state of physician satisfaction in Massachusetts is extremely low, with the majority of physicians dissatisfied with the amount of time they have with individual patients, their leisure time, and their incentives for high quality. Satisfaction with most areas of practice declined significantly between 1986 and 1997. Open-model physicians were less satisfied than closed-model physicians in most aspects of practice.
NASA Astrophysics Data System (ADS)
Begnaud, M. L.; Anderson, D. N.; Phillips, W. S.; Myers, S. C.; Ballard, S.
2016-12-01
The Regional Seismic Travel Time (RSTT) tomography model has been developed to improve travel time predictions for regional phases (Pn, Sn, Pg, Lg) in order to increase seismic location accuracy, especially for explosion monitoring. The RSTT model is specifically designed to exploit regional phases for location, especially when combined with teleseismic arrivals. The latest RSTT model (version 201404um) has been released (http://www.sandia.gov/rstt). Travel time uncertainty estimates for RSTT are determined using one-dimensional (1D), distance-dependent error models that have the benefit of being very fast to use in standard location algorithms, but do not account for path-dependent variations in error or for structural inadequacy of the RSTT model (e.g., model error). Although global in extent, the RSTT tomography model is only defined in areas where data exist. A simple 1D error model does not accurately model areas where RSTT has not been calibrated. We are developing and validating a new error model for RSTT phase arrivals by mathematically deriving this multivariate model directly from a unified model of RSTT embedded into a statistical random effects model that captures distance, path and model error effects. An initial method developed is a two-dimensional path-distributed method using residuals. The goals for any RSTT uncertainty method are for it to be both readily useful for the standard RSTT user and to improve travel time uncertainty estimates for location. We have successfully tested the new error model for Pn phases and will demonstrate the method and validation of the error model for Sn, Pg, and Lg phases.
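A distance-dependent 1D error model of the kind being replaced can be built by simply binning travel-time residuals by epicentral distance and taking the standard deviation per bin; the residuals below are synthetic, and the bin width is an arbitrary choice.

    import numpy as np

    def distance_dependent_sigma(distances_deg, residuals_s, bin_width=2.0):
        """Standard deviation of travel-time residuals within distance bins."""
        edges = np.arange(0, distances_deg.max() + bin_width, bin_width)
        idx = np.digitize(distances_deg, edges)
        centers, sigmas = [], []
        for k in range(1, len(edges)):
            r = residuals_s[idx == k]
            if r.size > 10:                   # require enough picks per bin
                centers.append(0.5 * (edges[k - 1] + edges[k]))
                sigmas.append(r.std(ddof=1))
        return np.array(centers), np.array(sigmas)

    rng = np.random.default_rng(7)
    d = rng.uniform(0, 15, 5000)              # epicentral distance, degrees
    res = rng.normal(0, 0.8 + 0.1 * d)        # synthetic Pn residuals, seconds
    print(distance_dependent_sigma(d, res))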
NASA Astrophysics Data System (ADS)
Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao
2018-04-01
In this paper, a statistical forecast model using the time-scale decomposition method is established to perform seasonal prediction of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). This method decomposes the rainfall over the MLYRV into three time-scale components, namely, the interannual component with a period of less than 8 years, the interdecadal component with a period of 8 to 30 years, and the interdecadal component with a period longer than 30 years. Then, predictors are selected for the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR, respectively. The results show that this forecast model can capture the interannual and interdecadal variation of FPR. The hindcast of FPR during the 14 years from 2001 to 2014 shows that the FPR can be predicted successfully in 11 out of the 14 years. This forecast model performs better than the model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
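The decompose-then-regress scheme can be sketched with running means as crude low-pass filters; the filter choice, periods, and synthetic rainfall below are illustrative simplifications of the paper's method.

    import numpy as np

    def decompose(series, short=8, long=30):
        """Split an annual series into <short yr, short-long yr, and >long yr
        components using running means as simple low-pass filters."""
        def runmean(x, w):
            pad = np.pad(x, w // 2, mode='edge')
            return np.convolve(pad, np.ones(w) / w, 'valid')[:len(x)]
        slow = runmean(series, long)
        mid = runmean(series, short) - slow
        fast = series - runmean(series, short)
        return fast, mid, slow

    rng = np.random.default_rng(8)
    years = np.arange(1951, 2015)
    rainfall = (500 + 30 * np.sin(2 * np.pi * years / 3.5)
                + 20 * np.sin(2 * np.pi * years / 16)
                + 0.5 * (years - years[0]) + rng.normal(0, 10, years.size))
    fast, mid, slow = decompose(rainfall)
    print(np.std(fast), np.std(mid), np.std(slow))
    # Each component would then get its own multiple linear regression on
    # selected predictors; the final forecast is the sum of the three parts.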
A bootstrap based space-time surveillance model with an application to crime occurrences
NASA Astrophysics Data System (ADS)
Kim, Youngho; O'Kelly, Morton
2008-06-01
This study proposes a bootstrap-based space-time surveillance model. Designed to find emerging hotspots in near-real time, the bootstrap based model is characterized by its use of past occurrence information and bootstrap permutations. Many existing space-time surveillance methods, using population at risk data to generate expected values, have resulting hotspots bounded by administrative area units and are of limited use for near-real time applications because of the population data needed. However, this study generates expected values for local hotspots from past occurrences rather than population at risk. Also, bootstrap permutations of previous occurrences are used for significant tests. Consequently, the bootstrap-based model, without the requirement of population at risk data, (1) is free from administrative area restriction, (2) enables more frequent surveillance for continuously updated registry database, and (3) is readily applicable to criminology and epidemiology surveillance. The bootstrap-based model performs better for space-time surveillance than the space-time scan statistic. This is shown by means of simulations and an application to residential crime occurrences in Columbus, OH, year 2000.
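The core idea, expected values from a window's own history plus a resampling-based significance test, can be sketched in a few lines; the counts are invented, and the actual model permutes past space-time occurrences rather than bootstrapping a single window, so treat this as a loose simplification.

    import numpy as np

    def hotspot_pvalue(history, current, n_boot=999,
                       rng=np.random.default_rng(9)):
        """Compare the current count in a local space-time window against
        bootstrap resamples of that window's own past occurrence counts."""
        boot = rng.choice(history, size=n_boot, replace=True)
        return (1 + np.sum(boot >= current)) / (n_boot + 1)

    # Weekly residential-crime counts for one local window (toy numbers).
    history = np.array([3, 5, 2, 4, 6, 3, 4, 5, 2, 3])
    print(f"p-value for an observed count of 11: {hotspot_pvalue(history, 11):.3f}")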
NASA Astrophysics Data System (ADS)
Cheng-Wu, Li; Hong-Lai, Xue; Cheng, Guan; Wen-biao, Liu
2018-04-01
Statistical analysis shows that in the coal matrix, the diffusion coefficient for methane is time-varying, and its integral satisfies the formula μt^κ/(1 + βt^κ). Therefore, a so-called dynamic diffusion coefficient model (DDC model) is developed. To verify the suitability and accuracy of the DDC model, a series of gas diffusion experiments were conducted using coal particles of different sizes. The results show that the experimental data can be accurately described by the DDC and bidisperse models, but the fit to the DDC model is slightly better. For all coal samples, as time increases, the effective diffusion coefficient first shows a sudden drop, followed by a gradual decrease before stabilizing at longer times. The effective diffusion coefficient has a negative relationship with the size of the coal particle. Finally, the relationship between the constants of the DDC model and the effective diffusion coefficient is discussed. The constant α (= μ/R²) denotes the effective coefficient at the initial time, and the constants κ and β control the attenuation characteristic of the effective diffusion coefficient.
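Assuming F(t) = μt^κ/(1 + βt^κ) is the integral of D(t) as the abstract states, it can be substituted for D·t in the classical unipore series solution for fractional uptake; both this substitution and every parameter value below are assumptions made for illustration, not the paper's fitted values.

    import numpy as np

    def uptake_fraction(t, mu, kappa, beta, R, n_terms=200):
        """Unipore uptake with a time-varying diffusion coefficient:
        Mt/Minf = 1 - (6/pi^2) * sum_n n^-2 exp(-n^2 pi^2 F(t) / R^2),
        with F(t) = mu*t**kappa / (1 + beta*t**kappa) replacing D*t."""
        F = mu * t**kappa / (1.0 + beta * t**kappa)
        n = np.arange(1, n_terms + 1)[:, None]
        series = np.sum(np.exp(-(n**2) * np.pi**2 * F / R**2) / n**2, axis=0)
        return 1.0 - (6.0 / np.pi**2) * series

    t = np.linspace(1.0, 3600.0, 50)                 # seconds
    print(uptake_fraction(t, mu=1e-8, kappa=0.8, beta=0.01, R=1e-3)[:5])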
Weiss, Michael
2017-06-01
Appropriate model selection is important in fitting oral concentration-time data due to the complex character of the absorption process. When IV reference data are available, the problem is the selection of an empirical input function (absorption model). In the present examples a weighted sum of inverse Gaussian density functions (IG) was found most useful. It is shown that alternative models (gamma and Weibull density) are only valid if the input function is log-concave. Furthermore, it is demonstrated for the first time that the sum of IGs model can also be applied to fit oral data directly (without IV data). In the present examples, a weighted sum of two or three IGs was sufficient. From the parameters of this function, the model-independent measures AUC and mean residence time can be calculated. It turned out that a good fit of the data in the terminal phase is essential to avoid biased parameter estimates. The time course of the fractional elimination rate and the concept of log-concavity have proved to be useful tools in model selection.
Marchioro, C A; Krechemer, F S; de Moraes, C P; Foerster, L A
2015-12-01
The diamondback moth, Plutella xylostella (L.), is a cosmopolitan pest of brassicaceous crops occurring in regions with highly distinct climate conditions. Several studies have investigated the relationship between temperature and P. xylostella development rate, providing degree-day models for populations from different geographical regions. However, there are no data available to date to demonstrate the suitability of such models to make reliable projections on the development time for this species in field conditions. In the present study, 19 models available in the literature were tested regarding their ability to accurately predict the development time of two cohorts of P. xylostella under field conditions. Only 11 out of the 19 models tested accurately predicted the development time for the first cohort of P. xylostella, but only seven for the second cohort. Five models correctly predicted the development time for both cohorts evaluated. Our data demonstrate that the accuracy of the models available for P. xylostella varies widely and therefore should be used with caution for pest management purposes.
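A degree-day model of the kind tested above reduces to accumulating daily heat units above a base temperature until a thermal constant is reached; the base temperature and thermal constant below are illustrative placeholders, not values taken from any of the 19 published models.

    import numpy as np

    def degree_days(t_min, t_max, t_base=7.0):
        """Daily degree-day accumulation using the simple average method."""
        return np.clip((t_min + t_max) / 2.0 - t_base, 0.0, None)

    def development_day(t_min, t_max, K=283.0, t_base=7.0):
        """First day on which cumulative degree-days reach the thermal
        constant K (egg-to-adult requirement; K is illustrative)."""
        dd = np.cumsum(degree_days(t_min, t_max, t_base))
        return int(np.argmax(dd >= K)) + 1 if dd[-1] >= K else None

    rng = np.random.default_rng(10)
    t_min = 12 + rng.normal(0, 2, 60)        # 60 days of field temperatures
    t_max = 24 + rng.normal(0, 2, 60)
    print("predicted development time:", development_day(t_min, t_max), "days")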
NASA Astrophysics Data System (ADS)
Brigatti, E.; Vieira, M. V.; Kajin, M.; Almeida, P. J. A. L.; de Menezes, M. A.; Cerqueira, R.
2016-02-01
We study the population size time series of a Neotropical small mammal with the intent of detecting and modelling population regulation processes generated by density-dependent factors and their possible delayed effects. The application of analysis tools based on principles of statistical generality is nowadays common practice for describing these phenomena, but, in general, such tools are more capable of producing a clear diagnosis than of yielding valuable models. For this reason, in our approach, we detect the principal temporal structures on the basis of different correlation measures, and from these results we build an ad-hoc minimalist autoregressive model that incorporates the main drivers of the dynamics. Surprisingly, our model is capable of reproducing very well the time patterns of the empirical series and, for the first time, clearly outlines the importance of the time of attaining sexual maturity as a central temporal scale for the dynamics of this species. In fact, an important advantage of this analysis scheme is that all the model parameters are directly biologically interpretable and potentially measurable, allowing a consistency check between model outputs and independent measurements.
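An autoregressive model with a delayed density-dependent term, the delay standing in for the time to sexual maturity, can be fitted by ordinary least squares; the series below is simulated with known coefficients as a self-check, and the lag and values are illustrative.

    import numpy as np

    def fit_delayed_ar(log_n, lag=2):
        """Fit log N[t+1] = a + b*log N[t] + c*log N[t-lag] + noise by least
        squares; lag is the maturation delay in census steps."""
        y = log_n[lag + 1:]
        X = np.column_stack([np.ones(y.size), log_n[lag:-1], log_n[:-(lag + 1)]])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coef                               # a, b (direct), c (delayed)

    rng = np.random.default_rng(11)
    log_n = [2.0, 2.1, 2.05]
    for t in range(3, 120):                       # simulate delayed feedback
        log_n.append(0.6 + 0.9 * log_n[-1] - 0.15 * log_n[-3]
                     + rng.normal(0, 0.1))
    print(fit_delayed_ar(np.array(log_n), lag=2).round(2))  # ~ (0.6, 0.9, -0.15)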
Gupta, Nidhi; Christiansen, Caroline Stordal; Hanisch, Christiana; Bay, Hans; Burr, Hermann; Holtermann, Andreas
2017-01-16
To investigate the differences between questionnaire-based and accelerometer-based sitting time, and to develop a model for improving the accuracy of questionnaire-based sitting time in predicting accelerometer-based sitting time. 183 workers in a cross-sectional study reported sitting time per day using a single question during the measurement period, and wore 2 Actigraph GT3X+ accelerometers on the thigh and trunk for 1-4 working days to determine their actual sitting time per day using the validated Acti4 software. Least squares regression models were fitted with questionnaire-based sitting time and other self-reported predictors to predict accelerometer-based sitting time. Questionnaire-based and accelerometer-based average sitting times were ≈272 and ≈476 min/day, respectively. A low Pearson correlation (r=0.32), high mean bias (204.1 min) and wide limits of agreement (549.8 to -139.7 min) between questionnaire-based and accelerometer-based sitting time were found. The prediction model based on questionnaire-based sitting time explained 10% of the variance in accelerometer-based sitting time. Inclusion of 9 self-reported predictors in the model increased the explained variance to 41%, with 10% optimism using a resampling bootstrap validation. Based on a split validation analysis, the prediction model developed on ≈75% of the workers (n=132) reduced the mean and the SD of the difference between questionnaire-based and accelerometer-based sitting time by 64% and 42%, respectively, in the remaining 25% of the workers. This study indicates that questionnaire-based sitting time has low validity and that a prediction model can be one solution to materially improve the precision of questionnaire-based sitting time. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
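The modelling idea (least squares regression of accelerometer-based on questionnaire-based sitting time plus extra self-reported predictors) can be sketched as below; the predictors, coefficients, and data are synthetic stand-ins, not the study's variables.

```python
# Sketch of the prediction-model idea with synthetic data: regressing
# accelerometer-based sitting time on questionnaire-based sitting time alone,
# then with additional self-reported predictors, and comparing explained variance.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 183
questionnaire = rng.normal(272, 90, n)                    # self-reported min/day
extra = rng.normal(0, 1, (n, 3))                          # e.g., BMI, age, job type (made up)
accelerometer = (0.4 * questionnaire + 300
                 + extra @ np.array([15.0, -10.0, 20.0])
                 + rng.normal(0, 60, n))                  # synthetic "true" min/day

X_full = np.column_stack([questionnaire, extra])
X_single = questionnaire.reshape(-1, 1)
r2_single = LinearRegression().fit(X_single, accelerometer).score(X_single, accelerometer)
r2_full = LinearRegression().fit(X_full, accelerometer).score(X_full, accelerometer)
print(f"R^2 questionnaire only: {r2_single:.2f}, with extra predictors: {r2_full:.2f}")
```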
Psychophysics of time perception and intertemporal choice models
NASA Astrophysics Data System (ADS)
Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.
2008-03-01
Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting, general hyperbolic discounting (exponential discounting with logarithmic time perception following the Weber-Fechner law; equivalently, a q-exponential discount model based on Tsallis' statistics), simple hyperbolic discounting, and Stevens' power law-exponential discounting (exponential discounting with Stevens' power-law time perception). In order to examine the fit of the models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small-sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processing underlying temporal discounting and time perception are discussed.
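A sketch of the model-comparison procedure follows: competing discount functions are fitted to indifference points and ranked by AICc. The delays and subjective values are hypothetical, and the general hyperbola is written in its common one-parameter-exponent form.

```python
# Sketch: fit competing discount functions to indifference-point data and
# compare them with AICc. All data values below are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def exponential(D, k):          return np.exp(-k * D)
def simple_hyperbolic(D, k):    return 1.0 / (1.0 + k * D)
def general_hyperbola(D, k, s): return 1.0 / (1.0 + k * D)**s   # q-exponential form

delays = np.array([1, 7, 30, 90, 180, 365, 1825], dtype=float)  # days
value = np.array([0.95, 0.90, 0.75, 0.60, 0.50, 0.40, 0.20])    # normalized indifference points

def aicc(y, yhat, k_params):
    n = len(y)
    rss = np.sum((y - yhat)**2)
    aic = n * np.log(rss / n) + 2 * k_params
    return aic + 2 * k_params * (k_params + 1) / (n - k_params - 1)

for name, f, p0 in [("exponential", exponential, [0.01]),
                    ("simple hyperbolic", simple_hyperbolic, [0.01]),
                    ("general hyperbola", general_hyperbola, [0.01, 1.0])]:
    popt, _ = curve_fit(f, delays, value, p0=p0, maxfev=10000)
    print(name, "AICc =", round(aicc(value, f(delays, *popt), len(popt)), 2))
```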
NASA Astrophysics Data System (ADS)
Pandey, Saurabh; Majhi, Somanath; Ghorai, Prasenjit
2017-07-01
In this paper, the conventional relay feedback test is modified for the modelling and identification of a class of real-time dynamical systems in terms of linear transfer function models with time delay. An ideal relay and the unknown system are connected through a negative feedback loop to produce a sustained oscillatory output around a non-zero setpoint. The obtained limit cycle information is then substituted into the derived mathematical equations for accurate identification of unknown plants in terms of overdamped, underdamped, and critically damped second-order plus dead time and stable first-order plus dead time transfer function models. Typical examples from the literature are included to validate the proposed identification scheme through computer simulations. Comparisons between the estimated model and the true system are drawn using the integral absolute error criterion and frequency response plots. Finally, the simulated output responses are verified experimentally on a real-time liquid level control system using a Yokogawa Distributed Control System CENTUM CS3000 setup.
Zhang, Pengfei; Zhang, Rui; Liu, Jinhai; Lu, Xiaochun
2018-01-01
This study proposes two models for precise time transfer using the BeiDou Navigation Satellite System triple-frequency signals: ionosphere-free (IF) combined precise point positioning (PPP) model with two dual-frequency combinations (IF-PPP1) and ionosphere-free combined PPP model with a single triple-frequency combination (IF-PPP2). A dataset with a short baseline (with a common external time frequency) and a long baseline are used for performance assessments. The results show that IF-PPP1 and IF-PPP2 models can both be used for precise time transfer using BeiDou Navigation Satellite System (BDS) triple-frequency signals, and the accuracy and stability of time transfer is the same in both cases, except for a constant system bias caused by the hardware delay of different frequencies, which can be removed by the parameter estimation and prediction with long time datasets or by a priori calibration. PMID:29596330
Radiation and polarization signatures of the 3D multizone time-dependent hadronic blazar model
Zhang, Haocheng; Diltz, Chris; Bottcher, Markus
2016-09-23
We present a newly developed time-dependent three-dimensional multizone hadronic blazar emission model. By coupling a Fokker–Planck-based lepto-hadronic particle evolution code, 3DHad, with a polarization-dependent radiation transfer code, 3DPol, we are able to study the time-dependent radiation and polarization signatures of a hadronic blazar model for the first time. Our current code is limited to parameter regimes in which the hadronic γ-ray output is dominated by proton synchrotron emission, neglecting pion production. Our results demonstrate that the time-dependent flux and polarization signatures are generally dominated by the relation between the synchrotron cooling and the light-crossing timescale, which is largely independent of the exact model parameters. We find that unlike the low-energy polarization signatures, which can vary rapidly in time, the high-energy polarization signatures appear stable. Lastly, future high-energy polarimeters may be able to distinguish such signatures from the lower and more rapidly variable polarization signatures expected in leptonic models.
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.; Leban, F.; Mashiko, S.
1986-01-01
Single-channel pilot manual control output in closed-tracking tasks is modeled in terms of linear discrete transfer functions which are parsimonious and guaranteed stable. The transfer functions are found by applying a modified superposition time series generation technique. A Levinson-Durbin algorithm is used to determine the filter which prewhitens the input, and a projective (least squares) fit of pulse response estimates is used to guarantee identified model stability. Results from two case studies are compared to previous findings, where the source data are relatively short records, approximately 25 seconds long. Time delay effects and pilot seasonalities are discussed and analyzed. It is concluded that single-channel time series controller modeling is feasible on short records, and that it is important for the analyst to determine a criterion for best time-domain fit which allows association of model parameter values, such as pure time delay, with actual physical and physiological constraints. The purpose of the modeling is thus paramount.
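The Levinson-Durbin recursion mentioned above is a standard algorithm; a compact sketch follows, with a toy autocorrelation sequence in place of real tracking data.

```python
# Sketch of the Levinson-Durbin recursion used to design a prewhitening filter:
# it solves the Yule-Walker equations for AR coefficients given the
# autocorrelation sequence of the input signal.
import numpy as np

def levinson_durbin(r, order):
    """r: autocorrelation sequence r[0..order]; returns AR coefficients and error power."""
    a = np.zeros(order + 1)
    e = r[0]                                            # prediction error power
    for m in range(1, order + 1):
        k = (r[m] - np.dot(a[1:m], r[m-1:0:-1])) / e    # reflection coefficient
        a_new = a.copy()
        a_new[m] = k
        a_new[1:m] = a[1:m] - k * a[m-1:0:-1]
        a = a_new
        e *= (1.0 - k**2)
    return a[1:], e

# Toy example: AR(2) coefficients from a made-up autocorrelation sequence.
r = np.array([1.0, 0.5, 0.1])
coeffs, err = levinson_durbin(r, 2)
print("AR coefficients:", coeffs, "residual power:", err)
```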
NASA Astrophysics Data System (ADS)
Cannata, Massimiliano; Neumann, Jakob; Cardoso, Mirko; Rossetto, Rudy; Foglia, Laura; Borsi, Iacopo
2017-04-01
In situ time-series are an important aspect of environmental modelling, especially with the advancement of numerical simulation techniques and increasing model complexity. In order to make use of the growing volume of data available through the requirements of the EU Water Framework Directive, the FREEWAT GIS environment incorporates the newly developed Observation Analysis Tool (OAT) for time-series analysis. The tool is used to import time-series data into QGIS from local CSV files, online sensors using the istSOS service, or MODFLOW model result files, and it enables visualisation, pre-processing of data for model development, and post-processing of model results. OAT can be used as a pre-processor for calibration observations, creating calibration targets directly from sensor time-series. The tool consists of an expandable Python library of processing methods and an interface integrated into the QGIS FREEWAT plug-in, which includes a large number of modelling capabilities, data management tools and calibration capacity.
Space-for-Time Substitution Works in Everglades Ecological Forecasting Models
Banet, Amanda I.; Trexler, Joel C.
2013-01-01
Space-for-time substitution is often used in predictive models because long-term time-series data are not available. Critics of this method suggest factors other than the target driver may affect ecosystem response and could vary spatially, producing misleading results. Monitoring data from the Florida Everglades were used to test whether spatial data can be substituted for temporal data in forecasting models. Spatial models that predicted bluefin killifish (Lucania goodei) population response to a drying event performed comparably and sometimes better than temporal models. Models worked best when results were not extrapolated beyond the range of variation encompassed by the original dataset. These results were compared to other studies to determine whether ecosystem features influence whether space-for-time substitution is feasible. Taken in the context of other studies, these results suggest space-for-time substitution may work best in ecosystems with low beta-diversity, high connectivity between sites, and small lag in organismal response to the driver variable. PMID:24278368
Ellouze, M; Pichaud, M; Bonaiti, C; Coroller, L; Couvert, O; Thuault, D; Vaillant, R
2008-11-30
Time temperature integrators or indicators (TTIs) are effective tools that make continuous monitoring of the time-temperature history of chilled products possible throughout the cold chain. Their correct setting is of critical importance to ensure food quality. The objective of this study was to develop a model to facilitate accurate settings of the CRYOLOG biological TTI, TRACEO. Experimental designs were used to investigate and model the effects of temperature, TTI inoculum size, pH, and water activity on its response time. The modelling process went through several steps addressing growth, acidification and inhibition phenomena under dynamic conditions. The model showed satisfactory results, and validations under industrial conditions gave clear evidence that such a model is a valuable tool, not only to predict accurate response times for TRACEO, but also to propose precise settings for manufacturing the appropriate TTI to trace a particular food according to a given time-temperature scenario.
Efficient Learning of Continuous-Time Hidden Markov Models for Disease Progression
Liu, Yu-Ying; Li, Shuang; Li, Fuxin; Song, Le; Rehg, James M.
2016-01-01
The Continuous-Time Hidden Markov Model (CT-HMM) is an attractive approach to modeling disease progression due to its ability to describe noisy observations arriving irregularly in time. However, the lack of an efficient parameter learning algorithm for CT-HMM restricts its use to very small models or requires unrealistic constraints on the state transitions. In this paper, we present the first complete characterization of efficient EM-based learning methods for CT-HMM models. We demonstrate that the learning problem consists of two challenges: the estimation of posterior state probabilities and the computation of end-state conditioned statistics. We solve the first challenge by reformulating the estimation problem in terms of an equivalent discrete time-inhomogeneous hidden Markov model. The second challenge is addressed by adapting three approaches from the continuous time Markov chain literature to the CT-HMM domain. We demonstrate the use of CT-HMMs with more than 100 states to visualize and predict disease progression using a glaucoma dataset and an Alzheimer’s disease dataset. PMID:27019571
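The reformulation step described above (replacing the continuous-time chain between irregular observations with an equivalent discrete-time inhomogeneous HMM) rests on the matrix exponential of the rate matrix; a minimal sketch follows, with an illustrative 3-state rate matrix rather than one from the paper's datasets.

```python
# Sketch of the key step: for a CT-HMM with rate matrix Q observed at irregular
# times t_i, the transition matrix between consecutive observations is
# expm(Q * dt), yielding a discrete time-inhomogeneous HMM. Q is illustrative.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-0.10,  0.08, 0.02],
              [ 0.00, -0.05, 0.05],
              [ 0.00,  0.00, 0.00]])   # absorbing final state (e.g., late-stage disease)

obs_times = [0.0, 1.5, 4.0, 4.7]       # irregular visit times (years, hypothetical)
for t0, t1 in zip(obs_times, obs_times[1:]):
    P = expm(Q * (t1 - t0))            # interval-specific transition matrix
    print(f"P({t0} -> {t1}):\n", np.round(P, 3))
```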
"Just-in-time" clinical information.
Chueh, H; Barnett, G O
1997-06-01
The just-in-time (JIT) model originated in the manufacturing industry as a way to manage parts inventories so that specific components could be made available at the appropriate times (that is, "just in time"). This JIT model can be applied to the management of clinical information inventories, so that clinicians can have more immediate access to the most current and relevant information at the time they most need it--when making clinical care decisions. The authors discuss traditional modes of managing clinical information, and then describe how a new JIT model may be developed and implemented. They describe three modes of clinician-information interactions that a JIT model might employ, the scope of information that may be made available in a JIT model (global information or local, case-specific information), and the challenges posed by the implementation of such an information-access model. Finally, they discuss how JIT information access may change how physicians practice medicine, various ways JIT information may be delivered, and concerns about the trustworthiness of electronically published and accessed information resources.
A real-time ionospheric model based on GNSS Precise Point Positioning
NASA Astrophysics Data System (ADS)
Tu, Rui; Zhang, Hongping; Ge, Maorong; Huang, Guanwen
2013-09-01
This paper proposes a method for real-time monitoring and modeling of the ionospheric Total Electron Content (TEC) by Precise Point Positioning (PPP). Firstly, the ionospheric TEC and the receiver's Differential Code Biases (DCB) are estimated with undifferenced raw observations in real time; then the ionospheric TEC model is established based on the Single Layer Model (SLM) assumption and the recovered ionospheric TEC. In this study, phase observations with high precision are used directly instead of phase-smoothed code observations. In addition, the DCB estimation is separated from the establishment of the ionospheric model, which limits the impact of the SLM assumption. The ionospheric model is established at every epoch for real-time application. The method is validated with three different GNSS networks on a local, regional, and global basis. The results show that the method is feasible and effective; the real-time ionosphere and DCB results are very consistent with the IGS final products, with biases of 1-2 TECU and 0.4 ns, respectively.
Nonequilibrium dynamics of the O( N ) model on dS3 and AdS crunches
NASA Astrophysics Data System (ADS)
Kumar, S. Prem; Vaganov, Vladislav
2018-03-01
We study the nonperturbative quantum evolution of the interacting O(N) vector model at large N, formulated on a spatial two-sphere, with time-dependent couplings which diverge at finite time. This model, the so-called "E-frame" theory, is related via a conformal transformation to the interacting O(N) model in three-dimensional global de Sitter spacetime with time-independent couplings. We show that with a purely quartic, relevant deformation the quantum evolution of the E-frame model is regular even when the classical theory is rendered singular at the end of time by the diverging coupling. Time evolution drives the E-frame theory to the large-N Wilson-Fisher fixed point when the classical coupling diverges. We study the quantum evolution numerically for a variety of initial conditions and demonstrate the finiteness of the energy at the classical "end of time". With an additional (time-dependent) mass deformation, quantum backreaction lowers the mass, with a putative smooth time evolution only possible in the limit of infinite quartic coupling. We discuss the relevance of these results for the resolution of crunch singularities in AdS geometries dual to E-frame theories with a classical gravity dual.
Van Aller, Carla; Lara, Jose; Stephan, Blossom C M; Donini, Lorenzo Maria; Heymsfield, Steven; Katzmarzyk, Peter T; Wells, Jonathan C K; Prado, Carla M; Siervo, Mario
2018-02-15
There is no consensus on the definition of sarcopenic obesity (SO), resulting in inconsistent associations of SO with mortality risk. We aim to evaluate the association of dual energy x-ray absorptiometry (DXA) SO models with mortality risk in a US adult population (≥50 years). The study population consisted of 3577 participants aged 50 years and older from the 1999-2004 National Health and Nutrition Examination Survey with mortality follow-up data through December 31, 2011. Differences in survival time between people with and without SO were evaluated under three DXA body composition models (Model 1: body composition phenotype model; Model 2: Truncal Fat Mass (TrFM)/Appendicular Skeletal Muscle Mass (ASM) ratio model; Model 3: Fat Mass (FM)/Fat Free Mass (FFM) ratio model). The differences between the models were assessed by the accelerated failure time model and expressed as time ratios (TR). Participants aged 50-70 years with SO had a significantly decreased survival time according to the body composition phenotype model (TR: 0.92; 95% CI: 0.87-0.97) and the TrFM/ASM ratio model (TR: 0.88; 95% CI: 0.81-0.95). The FM/FFM ratio model did not detect significant differences in survival time. Participants with SO aged 70 years and older did not have a significantly decreased survival time according to any of the three models. A SO phenotype increases mortality risk in people aged 50-70 years, but not in people aged 70 years and older. The application of the body composition phenotype and TrFM/ASM ratio models may represent useful diagnostic approaches to improve the prediction of disease and mortality risk. Copyright © 2018 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
A Two-Time Scale Decentralized Model Predictive Controller Based on Input and Output Model
Niu, Jian; Zhao, Jun; Xu, Zuhua; Qian, Jixin
2009-01-01
A decentralized model predictive controller applicable to systems which exhibit different dynamic characteristics in different channels is presented in this paper. These systems can be regarded as combinations of a fast model and a slow model, the response speeds of which are on two time scales. Because most practical models used for control are obtained in the form of a transfer function matrix from plant tests, a singular perturbation method was first used to separate the original transfer function matrix into two models on the two time scales. A decentralized model predictive controller was then designed based on the two models derived from the original system, and the stability of the control method was proved. Simulations showed that the method was effective. PMID:19834542
NASA Technical Reports Server (NTRS)
Lee, S. S.; Nwadike, E. V.; Sinha, S. E.
1982-01-01
The theory of a three-dimensional (3-D) mathematical thermal discharge model and a related one-dimensional (1-D) model are described. Model verification at two sites and a separate user's manual for each model are included. The 3-D model has two forms: free surface and rigid lid. The former allows a free air/water interface and is suited to cases where surface wave heights are significant compared to the mean water depth, such as estuaries and coastal regions. The latter is suited to cases where surface wave heights are small compared to the depth, because surface elevation has been removed as a parameter. These models allow computation of time-dependent velocity and temperature fields for given initial conditions and time-varying boundary conditions. The free surface model also provides surface height variations with time.
Indoor Residence Times of Semivolatile Organic Compounds: Model Estimation and Field Evaluation
Indoor residence times of semivolatile organic compounds (SVOCs) are a major and mostly unavailable input for residential exposure assessment. We calculated residence times for a suite of SVOCs using a fugacity model applied to residential environments. Residence times depend on...
Positive Outcomes Increase over Time with the Implementation of a Semiflipped Teaching Model
ERIC Educational Resources Information Center
Gorres-Martens, Brittany K.; Segovia, Angela R.; Pfefer, Mark T.
2016-01-01
The flipped teaching model can engage students in the learning process and improve learning outcomes. The purpose of the present study was to assess the outcomes of a semiflipped teaching model over time. Neurophysiology students spent the majority of class time listening to traditional didactic lectures, but they also listened to 5 online…
A Semiparametric Model for Jointly Analyzing Response Times and Accuracy in Computerized Testing
ERIC Educational Resources Information Center
Wang, Chun; Fan, Zhewen; Chang, Hua-Hua; Douglas, Jeffrey A.
2013-01-01
The item response times (RTs) collected from computerized testing represent an underutilized type of information about items and examinees. In addition to knowing the examinees' responses to each item, we can investigate the amount of time examinees spend on each item. Current models for RTs mainly focus on parametric models, which have the…
Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy
ERIC Educational Resources Information Center
Maris, Gunter; van der Maas, Han
2012-01-01
Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…
Considerations of the Use of 3-D Geophysical Models to Predict Test Ban Monitoring Observables
2007-09-01
...predict first P arrival times. Since this is a 3-D model, the travel times are predicted with a 3-D finite-difference code solving the eikonal equations. ... for the eikonal wave equation should provide more accurate predictions of travel time from 3-D models. These techniques and others are being...
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine whether the system is currently stable and, if not, the time before loss of control. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable-length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
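The two-stage structure (classify stability, then predict time-to-failure within the failure class) can be sketched as below. Ordinary scikit-learn Gaussian-process models stand in for the Treed Gaussian Processes of the paper, and all data are synthetic.

```python
# Sketch of classify-then-regress on synthetic flight data: a classifier
# predicts stable vs. unstable, and a per-class regressor predicts
# time-to-failure for inputs classified as unstable.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier, GaussianProcessRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 4))                              # controller parameters (made up)
failure = (X[:, 0] + 0.5 * X[:, 1] > 0.3).astype(int)         # synthetic stability label
ttf = 10 - 5 * X[:, 0] + rng.normal(0, 0.5, 200)              # synthetic time-to-failure

clf = GaussianProcessClassifier().fit(X, failure)
reg = GaussianProcessRegressor(alpha=0.25).fit(X[failure == 1], ttf[failure == 1])

x_new = rng.uniform(-1, 1, (1, 4))
if clf.predict(x_new)[0] == 1:
    print("predicted unstable; time-to-failure ~", reg.predict(x_new)[0])
else:
    print("predicted stable flight")
```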
Time-dependent efficacy of longitudinal biomarker for clinical endpoint.
Kolamunnage-Dona, Ruwanthi; Williamson, Paula R
2018-06-01
Joint modelling of longitudinal biomarker and event-time processes has gained its popularity in recent years as they yield more accurate and precise estimates. Considering this modelling framework, a new methodology for evaluating the time-dependent efficacy of a longitudinal biomarker for clinical endpoint is proposed in this article. In particular, the proposed model assesses how well longitudinally repeated measurements of a biomarker over various time periods (0,t) distinguish between individuals who developed the disease by time t and individuals who remain disease-free beyond time t. The receiver operating characteristic curve is used to provide the corresponding efficacy summaries at various t based on the association between longitudinal biomarker trajectory and risk of clinical endpoint prior to each time point. The model also allows detecting the time period over which a biomarker should be monitored for its best discriminatory value. The proposed approach is evaluated through simulation and illustrated on the motivating dataset from a prospective observational study of biomarkers to diagnose the onset of sepsis.
NASA Astrophysics Data System (ADS)
Pohle, Ina; Niebisch, Michael; Müller, Hannes; Schümberg, Sabine; Zha, Tingting; Maurer, Thomas; Hinz, Christoph
2018-07-01
To simulate the impacts of within-storm rainfall variability on fast hydrological processes, long precipitation time series with high temporal resolution are required. Due to the limited availability of observed data, such time series are typically obtained from stochastic models. However, most existing rainfall models are limited in their ability to conserve the rainfall event statistics which are relevant for hydrological processes. Poisson rectangular pulse models are widely applied to generate long time series of alternating precipitation event durations and mean intensities as well as interstorm period durations. Multiplicative microcanonical random cascade (MRC) models are used to disaggregate precipitation time series from coarse to fine temporal resolution. To overcome the inconsistencies between the temporal structure of the Poisson rectangular pulse model and the MRC model, we developed a new coupling approach by introducing two modifications to the MRC model. These modifications comprise (a) a modified cascade model ("constrained cascade") which preserves the event durations generated by the Poisson rectangular pulse model by constraining the first and last interval of a precipitation event to contain precipitation and (b) continuous sigmoid functions of the multiplicative weights to consider the scale dependency in the disaggregation of precipitation events of different durations. The constrained cascade model was evaluated in its ability to disaggregate observed precipitation events in comparison to existing MRC models. For that, we used a 20-year record of hourly precipitation at six stations across Germany. The constrained cascade model showed pronouncedly better agreement with the observed data in terms of both the temporal pattern of the precipitation time series (e.g. the dry and wet spell durations and autocorrelations) and event characteristics (e.g. intra-event intermittency and intensity fluctuation within events). The constrained cascade model also slightly outperformed the other MRC models with respect to the intensity-frequency relationship. To assess the performance of the coupled Poisson rectangular pulse and constrained cascade model, precipitation events were stochastically generated by the Poisson rectangular pulse model and then disaggregated by the constrained cascade model. We found that the coupled model performs satisfactorily in terms of the temporal pattern of the precipitation time series, event characteristics and the intensity-frequency relationship.
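For orientation, a minimal microcanonical random cascade sketch follows. It shows only the basic mass-conserving split; the "constrained" wet-edge rule and the sigmoid scale-dependent weights described above are deliberately omitted, and all parameter values are illustrative.

```python
# Minimal microcanonical random cascade: each interval's rainfall volume is
# split between its two halves with a random weight, conserving mass exactly.
import numpy as np

def mrc_disaggregate(volumes, levels, p_one_sided=0.3, rng=None):
    """Disaggregate each volume into 2**levels subintervals."""
    rng = rng or np.random.default_rng(0)
    series = np.asarray(volumes, dtype=float)
    for _ in range(levels):
        out = []
        for v in series:
            if v == 0:
                out += [0.0, 0.0]
            elif rng.random() < p_one_sided:                 # all volume to one half
                out += [v, 0.0] if rng.random() < 0.5 else [0.0, v]
            else:                                            # split with random weight
                w = rng.uniform(0.1, 0.9)
                out += [w * v, (1 - w) * v]
        series = np.array(out)
    return series

daily = [12.0, 0.0, 3.5]                                     # mm per interval (hypothetical)
fine = mrc_disaggregate(daily, levels=3)                     # 8 subintervals each
print(fine, "mass conserved:", np.isclose(fine.sum(), sum(daily)))
```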
Hendricks, Susan; DeMeester, Deborah; Stephenson, Evelyn; Welch, Janet
2016-05-01
Understanding the strengths and challenges of various clinical models is important for nursing education. Three long-standing clinical models (preceptored, hybrid, and traditional) were compared on several outcome measures related to satisfaction, learning opportunities, and student outcomes. Students, faculty, and preceptors participated in this study. Although no differences were noted in satisfaction or standardized examination scores, students in the preceptored clinical model were able to practice more psychomotor skills. Although participants in the preceptored model reported spending more time communicating with staff nurses than did those in the other models, students in the traditional model spent more time with faculty. No differences were noted among groups in student clinical observation time. All clinical learning models were focused on how clinical time was structured, without an emphasis on how faculty and preceptors work with students to develop nursing clinical reasoning skills. Identifying methodology to impact thinking in the clinical environment is a key next step. [J Nurs Educ. 2016;55(5):271-277.]. Copyright 2016, SLACK Incorporated.
Real-time inversions for finite fault slip models and rupture geometry based on high-rate GPS data
Minson, Sarah E.; Murray, Jessica R.; Langbein, John O.; Gomberg, Joan S.
2015-01-01
We present an inversion strategy capable of using real-time high-rate GPS data to simultaneously solve for a distributed slip model and fault geometry in real time as a rupture unfolds. We employ Bayesian inference to find the optimal fault geometry and the distribution of possible slip models for that geometry using a simple analytical solution. By adopting an analytical Bayesian approach, we can solve this complex inversion problem (including calculating the uncertainties on our results) in real time. Furthermore, since the joint inversion for distributed slip and fault geometry can be computed in real time, the time required to obtain a source model of the earthquake does not depend on the computational cost. Instead, the time required is controlled by the duration of the rupture and the time required for information to propagate from the source to the receivers. We apply our modeling approach, called Bayesian Evidence-based Fault Orientation and Real-time Earthquake Slip, to the 2011 Tohoku-oki earthquake, 2003 Tokachi-oki earthquake, and a simulated Hayward fault earthquake. In all three cases, the inversion recovers the magnitude, spatial distribution of slip, and fault geometry in real time. Since our inversion relies on static offsets estimated from real-time high-rate GPS data, we also present performance tests of various approaches to estimating quasi-static offsets in real time. We find that the raw high-rate time series are the best data to use for determining the moment magnitude of the event, but slightly smoothing the raw time series helps stabilize the inversion for fault geometry.
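The closed-form Gaussian update that makes such real-time evaluation feasible can be sketched as follows for a linear forward model with Gaussian prior and data errors; the Green's functions, data, and covariances below are tiny synthetic stand-ins, not the paper's setup.

```python
# Sketch: analytical Bayesian posterior for linear slip inversion d = G m + noise.
# With Gaussian prior and errors, the posterior mean and covariance are closed form.
import numpy as np

G = np.array([[1.0, 0.2],
              [0.3, 1.1],
              [0.5, 0.5]])                 # Green's functions (3 GPS offsets, 2 slip patches)
d = np.array([0.9, 1.3, 0.8])              # observed static offsets (m), hypothetical
Cd_inv = np.eye(3) / 0.05**2               # data precision (5 cm errors)
Cm_inv = np.eye(2) / 1.0**2                # prior precision on slip

post_cov = np.linalg.inv(G.T @ Cd_inv @ G + Cm_inv)
post_mean = post_cov @ G.T @ Cd_inv @ d
print("posterior mean slip:", post_mean)
print("posterior std:", np.sqrt(np.diag(post_cov)))
```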
Modeling species invasions in Ecopath with Ecosim: an evaluation using Laurentian Great Lakes models
Langseth, Brian J.; Rogers, Mark; Zhang, Hongyan
2012-01-01
Invasive species affect the structure and processes of ecosystems they invade. Invasive species have been particularly relevant to the Laurentian Great Lakes, where they have played a part in both historical and recent changes to Great Lakes food webs and the fisheries supported therein. There is increased interest in understanding the effects of ecosystem changes on fisheries within the Great Lakes, and ecosystem models provide an essential tool from which this understanding can take place. A commonly used model for exploring fisheries management questions within an ecosystem context is the Ecopath with Ecosim (EwE) modeling software. Incorporating invasive species into EwE models is a challenging process, and descriptions and comparisons of methods for modeling species invasions are lacking. We compared four methods for incorporating invasive species into EwE models for both Lake Huron and Lake Michigan based on the ability of each to reproduce patterns in observed data time series. The methods differed in whether invasive species biomass was forced in the model, the initial level of invasive species biomass at the beginning of time dynamic simulations, and the approach to cause invasive species biomass to increase at the time of invasion. The overall process of species invasion could be reproduced by all methods, but fits to observed time series varied among the methods and models considered. We recommend forcing invasive species biomass when model objectives are to understand ecosystem impacts in the past and when time series of invasive species biomass are available. Among methods where invasive species time series were not forced, mediating the strength of predator–prey interactions performed best for the Lake Huron model, but worse for the Lake Michigan model. Starting invasive species biomass at high values and then artificially removing biomass until the time of invasion performed well for both models, but was more complex than starting invasive species biomass at low values. In general, for understanding the effect of invasive species on future fisheries management actions, we recommend initiating invasive species biomass at low levels based on the greater simplicity and realism of the method compared to others.
NASA Astrophysics Data System (ADS)
Wilusz, D. C.; Maxwell, R. M.; Buda, A. R.; Ball, W. P.; Harman, C. J.
2016-12-01
The catchment transit-time distribution (TTD) is the time-varying, probabilistic distribution of water travel times through a watershed. The TTD is increasingly recognized as a useful descriptor of a catchment's flow and transport processes. However, TTDs are temporally complex and cannot be observed directly at watershed scale. Estimates of TTDs depend on available environmental tracers (such as stable water isotopes) and an assumed model whose parameters can be inverted from tracer data. All tracers have limitations though, such as (typically) short periods of observation or non-conservative behavior. As a result, models that faithfully simulate tracer observations may nonetheless yield TTD estimates with significant errors at certain times and water ages, conditioned on the tracer data available and the model structure. Recent advances have shown that time-varying catchment TTDs can be parsimoniously modeled by the lumped parameter rank StorAge Selection (rSAS) model, in which an rSAS function relates the distribution of water ages in outflows to the composition of age-ranked water in storage. Like other TTD models, rSAS is calibrated and evaluated against environmental tracer data, and the relative influence of tracer-dependent and model-dependent error on its TTD estimates is poorly understood. The purpose of this study is to benchmark the ability of different rSAS formulations to simulate TTDs in a complex, synthetic watershed where the lumped model can be calibrated and directly compared to a virtually "true" TTD. This experimental design allows for isolation of model-dependent error from tracer-dependent error. The integrated hydrologic model ParFlow with SLIM-FAST particle tracking code is used to simulate the watershed and its true TTD. To add field intelligence, the ParFlow model is populated with over forty years of hydrometric and physiographic data from the WE-38 subwatershed of the USDA's Mahantango Creek experimental catchment in PA, USA. The results are intended to give practical insight into tradeoffs between rSAS model structure and skill, and define a new performance benchmark to which other transit time models can be compared.
NASA Astrophysics Data System (ADS)
Kavetski, Dmitri; Clark, Martyn P.
2010-10-01
Despite the widespread use of conceptual hydrological models in environmental research and operations, they remain frequently implemented using numerically unreliable methods. This paper considers the impact of the time stepping scheme on model analysis (sensitivity analysis, parameter optimization, and Markov chain Monte Carlo-based uncertainty estimation) and prediction. It builds on the companion paper (Clark and Kavetski, 2010), which focused on numerical accuracy, fidelity, and computational efficiency. Empirical and theoretical analysis of eight distinct time stepping schemes for six different hydrological models in 13 diverse basins demonstrates several critical conclusions. (1) Unreliable time stepping schemes, in particular, fixed-step explicit methods, suffer from troublesome numerical artifacts that severely deform the objective function of the model. These deformations are not rare isolated instances but can arise in any model structure, in any catchment, and under common hydroclimatic conditions. (2) Sensitivity analysis can be severely contaminated by numerical errors, often to the extent that it becomes dominated by the sensitivity of truncation errors rather than the model equations. (3) Robust time stepping schemes generally produce "better behaved" objective functions, free of spurious local optima, and with sufficient numerical continuity to permit parameter optimization using efficient quasi Newton methods. When implemented within a multistart framework, modern Newton-type optimizers are robust even when started far from the optima and provide valuable diagnostic insights not directly available from evolutionary global optimizers. (4) Unreliable time stepping schemes lead to inconsistent and biased inferences of the model parameters and internal states. (5) Even when interactions between hydrological parameters and numerical errors provide "the right result for the wrong reason" and the calibrated model performance appears adequate, unreliable time stepping schemes make the model unnecessarily fragile in predictive mode, undermining validation assessments and operational use. Erroneous or misleading conclusions of model analysis and prediction arising from numerical artifacts in hydrological models are intolerable, especially given that robust numerics are accepted as mainstream in other areas of science and engineering. We hope that the vivid empirical findings will encourage the conceptual hydrological community to close its Pandora's box of numerical problems, paving the way for more meaningful model application and interpretation.
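A toy illustration of the central point about fixed-step explicit schemes follows: on a linear reservoir, explicit Euler oscillates when the step is large relative to the storage time constant, while implicit Euler stays well behaved. The parameter values are illustrative only.

```python
# Toy example: fixed-step explicit vs. implicit Euler on the linear reservoir
# dS/dt = P - k*S. With k*dt > 1 the explicit scheme oscillates around the
# steady state P/k, an artifact of the numerics rather than the model equations.
P, k, S0, dt, n = 2.0, 1.5, 10.0, 1.0, 10     # note k*dt = 1.5

S_exp, S_imp = [S0], [S0]
for _ in range(n):
    S_exp.append(S_exp[-1] + dt * (P - k * S_exp[-1]))    # explicit Euler step
    S_imp.append((S_imp[-1] + dt * P) / (1.0 + dt * k))   # implicit Euler step

print("explicit:", [round(s, 2) for s in S_exp])   # oscillates about P/k = 1.33
print("implicit:", [round(s, 2) for s in S_imp])   # converges monotonically
```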
A model of interval timing by neural integration.
Simen, Patrick; Balci, Fuat; de Souza, Laura; Cohen, Jonathan D; Holmes, Philip
2011-06-22
We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
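A minimal simulation of the model's core mechanism follows: a noisy accumulator rising linearly on average to a threshold, whose first-passage times give right-skewed, roughly scale-invariant response time distributions. Parameter values are illustrative, not fitted to any dataset.

```python
# Sketch: drift-to-threshold timing. A noisy firing-rate signal integrates a
# constant drift (set so the mean first-passage time equals the target interval T);
# response times are the first passage times across the threshold.
import numpy as np

def simulate_rts(T=2.0, threshold=1.0, noise=0.08, dt=0.001, n_trials=1000, seed=0):
    rng = np.random.default_rng(seed)
    drift = threshold / T                      # mean rise reaches threshold at t = T
    rts = []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while x < threshold:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
    return np.array(rts)

rts = simulate_rts()
print(f"mean RT = {rts.mean():.3f}s, CV = {rts.std() / rts.mean():.3f}")
# The coefficient of variation stays roughly constant as T changes (scalar property).
```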
Modeling time-to-event (survival) data using classification tree analysis.
Linden, Ariel; Yarnold, Paul R
2017-12-01
Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (ie, easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.
Fatigue-life distributions for reaction time data.
Tejo, Mauricio; Niklitschek-Soto, Sebastián; Marmolejo-Ramos, Fernando
2018-06-01
The family of fatigue-life distributions is introduced as an alternative model of reaction time data. This family includes the shifted Wald distribution and a shifted version of the Birnbaum-Saunders distribution. Although the former has been proposed as a way to model reaction time data, the latter has not. Hence, we provide theoretical, mathematical and practical arguments in support of the shifted Birnbaum-Saunders as a suitable model of simple reaction times and associated cognitive mechanisms.
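As a practical note, SciPy ships the Birnbaum-Saunders distribution under the name `fatiguelife`, so a shifted fit can use its `loc` parameter as the shift. The sketch below uses synthetic reaction times; the shift value is a hypothetical non-decision time.

```python
# Sketch: fit a shifted Birnbaum-Saunders distribution to reaction times using
# scipy.stats.fatiguelife, with `loc` playing the role of the shift.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
shift = 0.180                                   # hypothetical non-decision time (s)
rt = shift + stats.fatiguelife.rvs(c=0.4, scale=0.25, size=500, random_state=rng)

c_hat, loc_hat, scale_hat = stats.fatiguelife.fit(rt)
print(f"shape={c_hat:.3f}, shift(loc)={loc_hat:.3f}, scale={scale_hat:.3f}")
```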
A new short-term forecasting model for the total electron content storm time disturbances
NASA Astrophysics Data System (ADS)
Tsagouri, Ioanna; Koutroumbas, Konstantinos; Elias, Panagiotis
2018-06-01
This paper aims to introduce a new model for the short-term forecast of the vertical Total Electron Content (vTEC). The basic idea of the proposed model lies in the concept of the Solar Wind driven autoregressive model for Ionospheric short-term Forecast (SWIF). In its original version, the model is operationally implemented in the DIAS system (
Koseki, Shige; Nonaka, Junko
2012-09-01
The objective of this study was to develop a probabilistic model to predict the end of lag time (λ) during the growth of Bacillus cereus vegetative cells as a function of temperature, pH, and salt concentration using logistic regression. The developed λ model was subsequently combined with a logistic differential equation to simulate bacterial numbers over time. To develop a novel model for λ, we determined whether bacterial growth had begun, i.e., whether λ had ended, at each time point during the growth kinetics. The growth of B. cereus was evaluated by optical density (OD) measurements in culture media for various pHs (5.5 ∼ 7.0) and salt concentrations (0.5 ∼ 2.0%) at static temperatures (10 ∼ 20°C). The probability of the end of λ was modeled using dichotomous judgments obtained at each OD measurement point concerning whether a significant increase had been observed. The probability of the end of λ was described as a function of time, temperature, pH, and salt concentration and showed a high goodness of fit. The λ model was validated with independent data sets of B. cereus growth in culture media and foods, indicating acceptable performance. Furthermore, the λ model, in combination with a logistic differential equation, enabled a simulation of the population of B. cereus in various foods over time at static and/or fluctuating temperatures with high accuracy. Thus, this newly developed modeling procedure enables the description of λ using observable environmental parameters without any conceptual assumptions and the simulation of bacterial numbers over time with the use of a logistic differential equation.
The fossilized birth–death process for coherent calibration of divergence-time estimates
Heath, Tracy A.; Huelsenbeck, John P.; Stadler, Tanja
2014-01-01
Time-calibrated species phylogenies are critical for addressing a wide range of questions in evolutionary biology, such as those that elucidate historical biogeography or uncover patterns of coevolution and diversification. Because molecular sequence data are not informative on absolute time, external data—most commonly, fossil age estimates—are required to calibrate estimates of species divergence dates. For Bayesian divergence time methods, the common practice for calibration using fossil information involves placing arbitrarily chosen parametric distributions on internal nodes, often disregarding most of the information in the fossil record. We introduce the “fossilized birth–death” (FBD) process—a model for calibrating divergence time estimates in a Bayesian framework, explicitly acknowledging that extant species and fossils are part of the same macroevolutionary process. Under this model, absolute node age estimates are calibrated by a single diversification model and arbitrary calibration densities are not necessary. Moreover, the FBD model allows for inclusion of all available fossils. We performed analyses of simulated data and show that node age estimation under the FBD model results in robust and accurate estimates of species divergence times with realistic measures of statistical uncertainty, overcoming major limitations of standard divergence time estimation methods. We used this model to estimate the speciation times for a dataset composed of all living bears, indicating that the genus Ursus diversified in the Late Miocene to Middle Pliocene. PMID:25009181
Use of space-time models to investigate the stability of patterns of disease.
Abellan, Juan Jose; Richardson, Sylvia; Best, Nicky
2008-08-01
The use of Bayesian hierarchical spatial models has become widespread in disease mapping and ecologic studies of health-environment associations. In this type of study, the data are typically aggregated over an extensive time period, thus neglecting the time dimension. The output of purely spatial disease mapping studies is therefore the average spatial pattern of risk over the period analyzed, but the results do not inform about, for example, whether a high average risk was sustained over time or changed over time. We investigated how including the time dimension in disease-mapping models strengthens the epidemiologic interpretation of the overall pattern of risk. We discuss a class of Bayesian hierarchical models that simultaneously characterize and estimate the stable spatial and temporal patterns as well as departures from these stable components. We show how useful rules for classifying areas as stable can be constructed based on the posterior distribution of the space-time interactions. We carry out a simulation study to investigate the sensitivity and specificity of the decision rules we propose, and we illustrate our approach in a case study of congenital anomalies in England. Our results confirm that extending hierarchical disease-mapping models to models that simultaneously consider space and time leads to a number of benefits in terms of interpretation and potential for detection of localized excesses.
NASA Astrophysics Data System (ADS)
Paouris, Evangelos; Mavromichalaki, Helen
2017-12-01
In a previous work (Paouris and Mavromichalaki in Solar Phys. 292, 30, 2017), we presented a total of 266 interplanetary coronal mass ejections (ICMEs) with as much information as possible. From this analysis, we developed a new empirical model for estimating the acceleration of these events in the interplanetary medium. In this work, we present a new approach to the effective acceleration model (EAM) for predicting the arrival time of the shock that precedes a CME, using data from a total of 214 ICMEs. For the first time, the projection effects of the linear speed of CMEs are taken into account in this empirical model, which significantly improves the prediction of the arrival time of the shock. In particular, the mean value of the time difference between the observed time of the shock and the predicted time was equal to +3.03 hours with a mean absolute error (MAE) of 18.58 hours and a root mean squared error (RMSE) of 22.47 hours. After the improvement of this model, the mean value of the time difference decreased to -0.28 hours with an MAE of 17.65 hours and an RMSE of 21.55 hours. This improved version was applied to a set of three recent Earth-directed CMEs reported in May, June, and July of 2017, and we compare our results with the values predicted by other related models.
Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr
2014-12-15
In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists of numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with the finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model features moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constant) are taken into account by interpolations with respect to the velocity of the control rods. Parallelism across time is achieved by an adequate application of the parareal in time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-position control rod model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
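A minimal sketch of the parareal predictor-corrector iteration follows, on a scalar decay equation rather than the neutron diffusion problem; the coarse and fine propagators here are simple implicit Euler steps chosen for illustration.

```python
# Minimal parareal sketch on dy/dt = -a*y: coarse propagator G (one implicit
# Euler step per time slice) and fine propagator F (many substeps) combined via
# the predictor-corrector update  U[n+1] = G(U_new[n]) + F(U_old[n]) - G(U_old[n]).
import numpy as np

a, T, N = 2.0, 2.0, 8             # decay rate, horizon, number of time slices
dT = T / N

def G(u):                          # coarse: one implicit Euler step over a slice
    return u / (1.0 + a * dT)

def F(u, substeps=100):            # fine: many implicit Euler substeps
    dt = dT / substeps
    for _ in range(substeps):
        u = u / (1.0 + a * dt)
    return u

U = [1.0]                          # initial coarse sweep (predictor)
for n in range(N):
    U.append(G(U[-1]))

for k in range(4):                 # parareal iterations (corrector sweeps)
    U_new = [1.0]
    for n in range(N):
        U_new.append(G(U_new[-1]) + F(U[n]) - G(U[n]))
    U = U_new

print("parareal:", round(U[-1], 6), " exact:", round(np.exp(-a * T), 6))
```

Note that each corrector sweep's fine propagations F(U[n]) are independent across slices, which is where the parallelism across time comes from.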
Space-time correlations of fluctuating velocities in turbulent shear flows
NASA Astrophysics Data System (ADS)
Zhao, Xin; He, Guo-Wei
2009-04-01
Space-time correlations, or Eulerian two-point two-time correlations, of fluctuating velocities are analytically and numerically investigated in turbulent shear flows. An elliptic model for the space-time correlations in the inertial range is developed from similarity assumptions on the isocorrelation contours: they share a uniform preference direction and a constant aspect ratio. The similarity assumptions are justified using the Kolmogorov similarity hypotheses and verified using direct numerical simulation (DNS) of turbulent channel flows. The model relates the space-time correlations to the space correlations via the convection and sweeping characteristic velocities. The analytical expressions for the convection and sweeping velocities are derived from the Navier-Stokes equations for homogeneous turbulent shear flows, where the convection velocity is represented by the mean velocity and the sweeping velocity is the sum of the random sweeping velocity and the shear-induced velocity. This suggests that unlike Taylor's model, where the convection velocity dominates, and Kraichnan and Tennekes' model, where the random sweeping velocity dominates, the decorrelation time scales of the space-time correlations in turbulent shear flows are determined by the convection velocity, the random sweeping velocity, and the shear-induced velocity. This model predicts a universal form of the space-time correlations with the two characteristic velocities. The DNS of turbulent channel flows supports the prediction: the correlation functions exhibit a fairly good collapse when plotted against the normalized space and time separations defined by the elliptic model.
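A compact restatement of the elliptic scaling, consistent with the description above (U denoting the convection velocity and V the effective sweeping velocity), is:

```latex
% Elliptic model: the space-time correlation collapses onto the equal-time
% space correlation along elliptic isocontours set by U and V.
\[
  R(r,\tau) \;=\; R(r_E,\,0),
  \qquad
  r_E \;=\; \sqrt{(r - U\tau)^2 + (V\tau)^2}.
\]
```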
Light-weight Parallel Python Tools for Earth System Modeling Workflows
NASA Astrophysics Data System (ADS)
Mickelson, S. A.; Paul, K.; Xu, H.; Dennis, J.; Brown, D. I.
2015-12-01
With the growth in computing power over the last 30 years, earth system modeling codes have become increasingly data-intensive. As an example, the data required for the next Intergovernmental Panel on Climate Change (IPCC) Assessment Report (AR6) are expected to increase by more than 10x, to roughly 25 PB per climate model. Faced with this daunting challenge, developers of the Community Earth System Model (CESM) have chosen to change the format of their data for long-term storage from time-slice to time-series, in order to reduce the download bandwidth needed for later analysis and post-processing by climate scientists. Hence, efficient tools are required to (1) perform the transformation of the data from time-slice to time-series format and to (2) compute climatology statistics, needed for many diagnostic computations, on the resulting time-series data. To address the first of these two challenges, we have developed a parallel Python tool for converting time-slice model output to time-series format. To address the second of these challenges, we have developed a parallel Python tool to perform fast time-averaging of time-series data. These tools are designed to be light-weight, easy to install, with very few dependencies, and can be inserted into the Earth system modeling workflow with negligible disruption. In this work, we present the motivation, approach, and testing results of these two light-weight parallel Python tools, as well as our plans for future research and development.
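As a rough illustration of the time-slice to time-series transformation (not the authors' tool, whose implementation is not shown here), the following serial sketch uses xarray; the file pattern and variable names are hypothetical.

```python
# Hypothetical sketch: read a run's time-slice history files, then write one
# NetCDF file per variable spanning the whole simulation (time-series format).
import xarray as xr

ds = xr.open_mfdataset("case.cam.h0.*.nc", combine="by_coords")   # all time slices
for var in ["TS", "PRECT"]:                                       # chosen variables
    ds[var].to_netcdf(f"case.cam.h0.{var}.timeseries.nc")         # one series per variable
```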
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which more adequately represents real systems than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fits than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fits as the multiparameter finite-state mixing-cell models. It has been shown that in the case of a constant tracer input, prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems, a serious mistake may arise from neglecting the different bicarbonate contents of the particular water components.
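The lumped-parameter idea is a convolution of the tracer input history with a transit-time distribution g(t); a sketch using the exponential-piston flow form follows. The g(t) expression is the standard exponential-piston flow density (no outflow younger than the piston-flow delay, exponential beyond it); the tracer input, tau, and eta values are hypothetical.

```python
# Sketch: tracer output as the convolution of the input history with an
# exponential-piston flow (EPM) transit-time distribution g(t).
# tau = turnover time; eta = EPM ratio (eta = 1 recovers the exponential model).
import numpy as np

def g_epm(t, tau, eta):
    t = np.asarray(t, dtype=float)
    t0 = tau * (1.0 - 1.0 / eta)                     # piston-flow delay
    return np.where(t >= t0, (eta / tau) * np.exp(-eta * t / tau + eta - 1.0), 0.0)

dt = 0.1                                             # years
t = np.arange(0, 100, dt)
c_in = np.where((t > 10) & (t < 12), 100.0, 0.0)     # hypothetical tracer pulse input

ttd = g_epm(t, tau=20.0, eta=1.5)
c_out = np.convolve(c_in, ttd)[:t.size] * dt         # output concentration history
print("peak output at t =", t[np.argmax(c_out)], "years")
```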
A probabilistic method for constructing wave time-series at inshore locations using model scenarios
Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.
2014-01-01
Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
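The essence of the scenario look-up can be illustrated as follows. This is our sketch of the general idea rather than the authors' code, with hypothetical array names; it omits the probabilistic treatment, and in practice the variables would need normalization and direction would need circular handling.

    import numpy as np

    def construct_timeseries(offshore_obs, offshore_scenarios, inshore_archive):
        """Map each offshore observation (height, period, direction) to the
        nearest archived scenario and return the stored inshore output.

        offshore_scenarios: (n_scenarios, 3) offshore climatology bins
        inshore_archive:    (n_scenarios, n_points, 3) precomputed model output
        """
        series = []
        for obs in offshore_obs:
            dist = np.linalg.norm(offshore_scenarios - obs, axis=1)
            series.append(inshore_archive[np.argmin(dist)])
        return np.stack(series)  # (n_times, n_points, 3)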
Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition
NASA Astrophysics Data System (ADS)
Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.
2005-12-01
Traditionally, forecasting and characterization of hydrologic systems have been performed using many techniques. Stochastic linear methods such as AR and ARIMA, and nonlinear methods such as tools based on statistical learning theory, have been used extensively. The difficulty common to all of these methods is determining sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series, combining the wavelet transform with a nonlinear model. The present model draws on the merits of both the wavelet transform and nonlinear time series models: the wavelet transform is used to decompose a nonlinear hydrologic process into a set of mono-component signals, which are then simulated by the nonlinear model. The hybrid methodology is formulated so as to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand, and prediction results are promising when compared to traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results indicate that the wavelet-based time series model can simulate and forecast hydrologic variables reasonably well. This will ultimately serve the purpose of integrated water resources planning and management.
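A minimal sketch of the decompose-model-recombine idea follows. It assumes the PyWavelets and statsmodels packages, and a simple AR model stands in for the paper's nonlinear component model:

    import numpy as np
    import pywt
    from statsmodels.tsa.ar_model import AutoReg

    def wavelet_components(y, wavelet="db4", level=3):
        """Split y into additive sub-signals (one approximation + details),
        each the same length as y, whose sum reconstructs y."""
        coeffs = pywt.wavedec(y, wavelet, level=level)
        comps = []
        for i in range(len(coeffs)):
            sel = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            comps.append(pywt.waverec(sel, wavelet)[: len(y)])
        return comps

    def hybrid_forecast(y, steps=12, lags=8):
        """Forecast each mono-component signal separately and sum the forecasts."""
        total = np.zeros(steps)
        for comp in wavelet_components(y):
            fit = AutoReg(comp, lags=lags).fit()
            total += fit.forecast(steps=steps)
        return total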
NASA Astrophysics Data System (ADS)
Erickson, M.; Olaguer, J.; Wijesinghe, A.; Colvin, J.; Neish, B.; Williams, J.
2014-12-01
It is becoming increasingly important to understand the emissions and health effects of industrial facilities. Many areas have limited or no sustained monitoring capabilities, making it difficult to quantify the major pollution sources affecting human health, especially in fence-line communities. Developments in real-time monitoring and micro-scale modeling offer unique ways to tackle these complex issues. This presentation will demonstrate the capability of coupling real-time observations with micro-scale modeling to provide real-time information and near-real-time source attribution. The Houston Advanced Research Center constructed the Mobile Acquisition of Real-time Concentrations (MARC) laboratory. MARC consists of a Ford E-350 passenger van outfitted with a Proton Transfer Reaction Mass Spectrometer (PTR-MS) and meteorological equipment, allowing fast measurement of various VOCs important to air quality. The data recorded in the van are uploaded to an off-site database and broadcast to a website in real time, so that off-site personnel can monitor MARC's observations and give the operators immediate input on how best to achieve project objectives. The information stored in the database can also be used to provide near-real-time source attribution. An inverse model has been used to ascertain the amount, location, and timing of emissions based on MARC measurements in the vicinity of industrial sites. The inverse model is based on a 3D micro-scale Eulerian forward and adjoint air quality model known as the HARC model. The HARC model uses output from the Quick Urban and Industrial Complex (QUIC) wind model and requires a 3D digital model of the monitored facility based on lidar or industrial permit data. MARC is one of the instrument platforms deployed during the 2014 Benzene and other Toxics Exposure Study (BEE-TEX) in Houston, TX. The main goal of the study is to quantify and explain the origin of ambient exposure to hazardous air pollutants in an industrial fence-line community near the Houston Ship Channel. Preliminary results derived from analysis of MARC observations during the BEE-TEX experiment will be presented.
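The source-attribution step can be caricatured as a linear inverse problem. The sketch below is our simplification, not the HARC model itself; it assumes the source-receptor sensitivity matrix K has already been computed (in practice by the adjoint runs):

    import numpy as np
    from scipy.optimize import nnls

    def attribute_sources(K, c):
        """Estimate nonnegative emission rates q minimizing ||K q - c||.

        K: (n_obs, n_sources) modeled concentration at each observation per
           unit emission from each candidate source (from adjoint runs)
        c: (n_obs,) concentrations measured by the mobile laboratory
        """
        q, residual = nnls(K, c)
        return q, residual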
Kumaraswamy autoregressive moving average models for double bounded environmental data
NASA Astrophysics Data System (ADS)
Bayer, Fábio Mariano; Bayer, Débora Missio; Pumi, Guilherme
2017-12-01
In this paper we introduce the Kumaraswamy autoregressive moving average models (KARMA), a dynamic class of models for time series taking values in the doubly bounded interval (a,b) and following the Kumaraswamy distribution. The Kumaraswamy family of distributions is widely applied in many areas, especially hydrology and related fields; classical examples are time series representing rates and proportions observed over time. In the proposed KARMA model, the median is modeled by a dynamic structure containing autoregressive and moving average terms, time-varying regressors, unknown parameters and a link function. We introduce the new class of models and discuss conditional maximum likelihood estimation, hypothesis testing inference, diagnostic analysis and forecasting. In particular, we provide closed-form expressions for the conditional score vector and the conditional Fisher information matrix. An application to real environmental data is presented and discussed.
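In ARMA-type generalized model notation, the dynamic structure for the conditional median takes a form along the following lines (our paraphrase of the model class, not a verbatim quotation of the paper's equations):

    g(\mu_t) = \alpha + x_t^{\top}\beta + \sum_{i=1}^{p} \varphi_i \left[ g(y_{t-i}) - x_{t-i}^{\top}\beta \right] + \sum_{j=1}^{q} \theta_j r_{t-j},

where \mu_t is the conditional median of y_t \in (a, b), g is the link function mapping (a, b) onto the real line, x_t contains the time-varying regressors, r_t is a moving-average error term, and the parameters (\alpha, \beta, \varphi, \theta) are estimated by conditional maximum likelihood under the Kumaraswamy law.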
Alteration of Box-Jenkins methodology by implementing genetic algorithm method
NASA Astrophysics Data System (ADS)
Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad
2015-02-01
A time series is a set of values observed sequentially through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking, and using integrated autoregressive moving average time series models for forecasting. The Box-Jenkins method is appropriate for time series of medium to long length (at least 50 observations). When modeling such series, the difficulty lies in choosing the correct model order at the identification stage and in finding the right parameter estimates. This paper presents the development of a genetic algorithm heuristic for solving the identification and estimation problems in the Box-Jenkins methodology. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecasts generated by the proposed model outperformed those of the single traditional Box-Jenkins model.
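The flavor of the approach can be conveyed by a toy genetic algorithm that searches over ARIMA orders with AIC as the fitness criterion. This is our illustration (assuming the statsmodels package), not the authors' implementation:

    import random
    import numpy as np
    from statsmodels.tsa.arima.model import ARIMA

    def fitness(order, y):
        """Lower AIC = fitter individual; failed fits are penalized."""
        try:
            return ARIMA(y, order=order).fit().aic
        except Exception:
            return np.inf

    def ga_arima(y, pop_size=10, generations=15, max_order=3):
        """Evolve (p, d, q) orders by selection, crossover and mutation."""
        pop = [tuple(random.randint(0, max_order) for _ in range(3))
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda o: fitness(o, y))          # selection by AIC
            parents = pop[: pop_size // 2]
            children = []
            while len(parents) + len(children) < pop_size:
                a, b = random.sample(parents, 2)
                child = tuple(random.choice(g) for g in zip(a, b))  # crossover
                if random.random() < 0.2:                  # point mutation
                    i = random.randrange(3)
                    child = child[:i] + (random.randint(0, max_order),) + child[i + 1:]
                children.append(child)
            pop = parents + children
        return min(pop, key=lambda o: fitness(o, y))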
Real-time numerical forecast of global epidemic spreading: case study of 2009 A/H1N1pdm.
Tizzoni, Michele; Bajardi, Paolo; Poletto, Chiara; Ramasco, José J; Balcan, Duygu; Gonçalves, Bruno; Perra, Nicola; Colizza, Vittoria; Vespignani, Alessandro
2012-12-13
Mathematical and computational models for infectious diseases are increasingly used to support public-health decisions; however, their reliability is currently under debate. Real-time forecasts of epidemic spread using data-driven models have been hindered by the technical challenges posed by parameter estimation and validation. Data gathered for the 2009 H1N1 influenza crisis represent an unprecedented opportunity to validate real-time model predictions and define the main success criteria for different approaches. We used the Global Epidemic and Mobility Model to generate stochastic simulations of epidemic spread worldwide, yielding (among other measures) the incidence and seeding events at a daily resolution for 3,362 subpopulations in 220 countries. Using a Monte Carlo maximum likelihood analysis, the model provided an estimate of the seasonal transmission potential during the early phase of the H1N1 pandemic and generated ensemble forecasts for the activity peaks in the northern hemisphere in the fall/winter wave. These results were validated against real-life surveillance data collected in 48 countries, and their robustness was assessed by focusing on 1) the peak timing of the pandemic; 2) the level of spatial resolution allowed by the model; and 3) the clinical attack rate and the effectiveness of the vaccine. In addition, we studied the effect of data incompleteness on prediction reliability. Real-time predictions of the peak timing are found to be in good agreement with the empirical data, showing strong robustness to data that may not be accessible in real time (such as pre-exposure immunity and adherence to vaccination campaigns) but that affect the predictions for the attack rates. The timing and spatial unfolding of the pandemic are critically sensitive to the level of mobility data integrated into the model. Our results show that large-scale models can be used to provide valuable real-time forecasts of influenza spreading, but they require high-performance computing. The quality of the forecast depends on the level of data integration, stressing the need for high-quality data in population-based models and for progressive updates of validated empirical knowledge to inform these models.
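Schematically, the Monte Carlo likelihood step scores each candidate transmission potential by how well its stochastic simulations reproduce observed arrival times. The sketch below is a strong simplification of the published analysis, with hypothetical data structures:

    import numpy as np

    def log_likelihood(observed, simulated, window=3):
        """observed:  {country: observed arrival day}
        simulated: {country: array of arrival days over many stochastic runs
                    at one candidate transmission potential}"""
        ll = 0.0
        for country, day in observed.items():
            sims = np.asarray(simulated[country])
            # smoothed empirical probability of arriving within +/- window days
            p = (np.sum(np.abs(sims - day) <= window) + 1) / (len(sims) + 1)
            ll += np.log(p)
        return ll

    # best = max(candidate_r0s, key=lambda r0: log_likelihood(obs, sims[r0]))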
Twelve years of succession on sandy substrates in a post-mining landscape: a Markov chain analysis.
Baasch, Annett; Tischew, Sabine; Bruelheide, Helge
2010-06-01
Knowledge of succession rates and pathways is crucial for devising restoration strategies for highly disturbed ecosystems such as surface-mined land. As these processes have often been described only in qualitative terms, we used Markov models to quantify transitions between successional stages. However, Markov models are often considered unattractive because of their model assumptions (e.g., stationarity in space and time) and the high expenditure of time required to estimate successional transitions in the field. Here we present a solution for converting multivariate ecological time series into transition matrices and demonstrate the applicability of this approach for a data set that resulted from monitoring the succession of sandy dry grassland in a post-mining landscape. We analyzed five transition matrices: four one-step matrices referring to specific periods of transition (1995-1998, 1998-2001, 2001-2004, 2004-2007) and one matrix for the whole study period (stationary model, 1995-2007). Finally, the stationary model was extended to a partly time-variable model. Applying the stationary and time-variable models, we started a prediction well outside our calibration period, beginning with 100% bare soil in 1974 as the known start of the succession, and generated the coverage of 12 predefined vegetation types in three-year intervals. Transitions among vegetation types changed significantly in space and over time. While the probability of colonization was almost constant over time, the replacement rate tended to increase, indicating that the speed of succession accelerated with time or that fluctuations became stronger. The predictions of both models agreed surprisingly well with the vegetation data observed more than two decades later. This shows that our dry grassland succession in a post-mining landscape can be adequately described by comparably simple types of Markov models, although some model assumptions were not fulfilled and within-plot transitions were not observed with point exactness. The major achievement of the proposed way of converting vegetation time series into transition matrices is the estimation of the probabilities of events, a strength not provided by other statistical methods frequently used in vegetation science.
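The conversion and projection steps can be sketched as follows. This is an illustrative reduction: it assumes plot-wise vegetation-type codes, ignores the paper's time-variable refinements, and rows with no observed transitions would need special handling.

    import numpy as np

    def estimate_transition_matrix(states, n_types):
        """states: (n_plots, n_times) integer vegetation-type codes recorded at
        successive monitoring dates. Returns row-stochastic P with P[i, j] the
        probability of moving from type i to type j in one interval."""
        counts = np.zeros((n_types, n_types))
        for plot in states:
            for a, b in zip(plot[:-1], plot[1:]):
                counts[a, b] += 1
        return counts / counts.sum(axis=1, keepdims=True)

    def project(cover, P, n_steps):
        """Iterate a vector of cover fractions forward, one interval per step."""
        traj = [np.asarray(cover, dtype=float)]
        for _ in range(n_steps):
            traj.append(traj[-1] @ P)
        return np.array(traj)

    # e.g., start from 100% bare soil (type 0) and project 11 three-year steps:
    # project(np.eye(12)[0], P, n_steps=11)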
[Rapid 3-Dimensional Models of Cerebral Aneurysm for Emergency Surgical Clipping].
Konno, Takehiko; Mashiko, Toshihiro; Oguma, Hirofumi; Kaneko, Naoki; Otani, Keisuke; Watanabe, Eiju
2016-08-01
We developed a method for manufacturing solid models of cerebral aneurysms, with a shorter printing time than conventional methods, using a compact 3D printer with acrylonitrile-butadiene-styrene (ABS) resin, and investigated the application and utility of this printing system in emergency clipping surgery. A total of 16 patients diagnosed with acute subarachnoid hemorrhage resulting from cerebral aneurysm rupture were enrolled in the present study. Emergency clipping was performed on the day of hospitalization. Digital Imaging and Communications in Medicine (DICOM) data obtained from computed tomography angiography (CTA) scans were edited and converted to stereolithography (STL) file format, and 3D models of the cerebral aneurysm were then produced with the 3D printer. The mean time from hospitalization to the commencement of surgery was 242 min, whereas the mean time required for manufacturing the 3D model was 67 min. The average cost of each 3D model was 194 Japanese yen. The time required for manufacturing the 3D models decreased to approximately 1 hour as experience with producing them accumulated. Almost all neurosurgeons included in this study reported favorable impressions of the use of the 3D models in clipping. Although 3D printing is often considered to involve high costs and long manufacturing times, the method used in the present study requires less time and lower costs than conventional methods for manufacturing 3D cerebral aneurysm models, making it suitable for use in emergency clipping.
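The volume-to-STL step of such a pipeline can be sketched as below. This is a generic illustration (assuming the scikit-image and numpy-stl packages), not the authors' workflow; reading and editing the DICOM series (e.g., with pydicom) is omitted.

    import numpy as np
    from skimage import measure
    from stl import mesh  # numpy-stl package

    def volume_to_stl(volume, threshold, out_path, spacing=(1.0, 1.0, 1.0)):
        """Extract an isosurface from a CTA volume (e.g., contrast-filled
        vessels above an intensity threshold) and write it as an STL file."""
        verts, faces, _, _ = measure.marching_cubes(volume, level=threshold,
                                                    spacing=spacing)
        surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
        for i, f in enumerate(faces):
            surface.vectors[i] = verts[f]
        surface.save(out_path)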