Sample records for sequential indicator simulation

  1. Parallelization of sequential Gaussian, indicator and direct simulation algorithms

    NASA Astrophysics Data System (ADS)

    Nunes, Ruben; Almeida, José A.

    2010-08-01

    Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. Compared with other computational applications in the geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of parallel versions of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). The source used was GSLIB, but the entire code was extensively modified to accommodate the parallelization approach and was rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm can perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read, although it is only fully compatible with Microsoft Visual C and must be adapted for other systems/compilers.
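
    To make the shared core of these algorithms concrete, the sketch below implements the basic sequential (Gaussian) simulation loop: visit grid nodes along a random path, solve a simple kriging system from nearby conditioning values, draw from the resulting local Gaussian distribution, and add the simulated value to the conditioning set. This is a minimal Python illustration, not the GSLIB/C implementation described in the record; the exponential covariance, neighbourhood size and all names are assumptions.

      import numpy as np

      def cov(h, sill=1.0, a=10.0):
          # Illustrative isotropic exponential covariance with practical range a.
          return sill * np.exp(-3.0 * h / a)

      def sgs(grid_xy, data_xy, data_z, nmax=8, seed=0):
          rng = np.random.default_rng(seed)
          known_xy, known_z = [tuple(p) for p in data_xy], list(data_z)
          out = np.empty(len(grid_xy))
          for i in rng.permutation(len(grid_xy)):        # random simulation path
              x = np.asarray(grid_xy[i], dtype=float)
              pts, z = np.asarray(known_xy), np.asarray(known_z)
              near = np.argsort(np.linalg.norm(pts - x, axis=1))[:nmax]
              P, v = pts[near], z[near]
              K = cov(np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2))
              k = cov(np.linalg.norm(P - x, axis=1))
              w = np.linalg.solve(K + 1e-10 * np.eye(len(near)), k)
              mean, var = w @ v, max(cov(0.0) - w @ k, 0.0)   # simple kriging, zero mean
              out[i] = rng.normal(mean, np.sqrt(var))
              known_xy.append(tuple(x)); known_z.append(out[i])  # condition later nodes
          return out

      # Example: simulate a 20 x 20 grid conditioned on two data points.
      grid = [(i, j) for i in range(20) for j in range(20)]
      realization = sgs(grid, data_xy=[(2, 3), (15, 12)], data_z=[1.2, -0.5])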

  2. A path-level exact parallelization strategy for sequential simulation

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the sequential indicator simulation (SIS) method draws simulated values for K categories (categorical case) or classes defined by K different thresholds (continuous case). Similarly, the sequential Gaussian simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelizing the SIS and SGS methods is presented. A first stage re-arranges the simulation path, followed by a second stage of parallel simulation of non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains with many categories and large kriging neighbourhoods, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.
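
    A hedged sketch of the path-level idea follows: nodes far enough apart that their search neighbourhoods cannot interact are grouped into batches that may be simulated concurrently. The greedy pairwise-distance rule below is an illustrative stand-in for the paper's exact conflict test, and reproducing identical realizations additionally requires that each node see the same conditioning set as in the sequential run.

      import numpy as np

      def batch_path(path_xy, radius):
          # Group path nodes into batches whose members are pairwise farther
          # apart than twice the kriging search radius (a conservative rule).
          batches, coords = [], []
          for i, p in enumerate(path_xy):
              p = np.asarray(p, dtype=float)
              for b, c in zip(batches, coords):
                  if all(np.linalg.norm(p - q) > 2.0 * radius for q in c):
                      b.append(i); c.append(p)
                      break
              else:  # conflicts with every existing batch: open a new one
                  batches.append([i]); coords.append([p])
          return batches  # each inner list can be simulated in parallel

      print(batch_path([(0, 0), (1, 1), (30, 30), (2, 0)], radius=5.0))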

  3. Three-dimensional mapping of equiprobable hydrostratigraphic units at the Frenchman Flat Corrective Action Unit, Nevada Test Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirley, C.; Pohlmann, K.; Andricevic, R.

    1996-09-01

    Geological and geophysical data are used with the sequential indicator simulation algorithm of Gomez-Hernandez and Srivastava to produce multiple, equiprobable, three-dimensional maps of informal hydrostratigraphic units at the Frenchman Flat Corrective Action Unit, Nevada Test Site. The upper 50 percent of the Tertiary volcanic lithostratigraphic column comprises the study volume. Semivariograms are modeled from indicator-transformed geophysical tool signals. Each equiprobable study volume is subdivided into discrete classes using the ISIM3D implementation of the sequential indicator simulation algorithm. Hydraulic conductivity is assigned within each class using the sequential Gaussian simulation method of Deutsch and Journel. The resulting maps show the contiguity of high and low hydraulic conductivity regions.

  4. Uncertainty assessment of PM2.5 contamination mapping using spatiotemporal sequential indicator simulations and multi-temporal monitoring data.

    PubMed

    Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang

    2016-04-12

    Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Using the STSIS realizations, we assessed various types of mapping uncertainty, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS and SIS techniques indicates that better performance was obtained with the STSIS method.
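
    As a minimal illustration of how such simulation output turns into uncertainty maps, the sketch below computes the per-cell probability that PM2.5 exceeds a threshold from a stack of equiprobable realizations, together with the indicator transform on which SIS-type methods are built. Array shapes, the synthetic fields and the threshold are assumptions, not values from the study.

      import numpy as np

      def indicator_transform(z, thresholds):
          # I_k(x) = 1 if z(x) <= z_k; one indicator channel per threshold.
          z = np.asarray(z)
          return np.stack([(z <= t).astype(float) for t in thresholds])

      def exceedance_probability(realizations, threshold):
          # realizations: (n_real, ny, nx); fraction of realizations above threshold.
          return (np.asarray(realizations) > threshold).mean(axis=0)

      sims = np.random.default_rng(0).gamma(2.0, 40.0, size=(100, 50, 50))  # fake PM2.5 fields
      risk_map = exceedance_probability(sims, threshold=75.0)  # e.g., a 75 ug/m3 standard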

  5. Uncertainty assessment of PM2.5 contamination mapping using spatiotemporal sequential indicator simulations and multi-temporal monitoring data

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang

    2016-04-01

    Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Using the STSIS realizations, we assessed various types of mapping uncertainty, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS and SIS techniques indicates that better performance was obtained with the STSIS method.

  6. The use of sequential indicator simulation to characterize geostatistical uncertainty; Yucca Mountain Site Characterization Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, K.M.

    1992-10-01

    Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.

  7. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    NASA Technical Reports Server (NTRS)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  8. Orphan therapies: making best use of postmarket data.

    PubMed

    Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling

    2014-08-01

    Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.

  9. Posterior error probability in the Mu-II Sequential Ranging System

    NASA Technical Reports Server (NTRS)

    Coyle, C. W.

    1981-01-01

    An expression is derived for the posterior error probability in the Mu-II Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and falsely indicated errors on 0.2% of the acquisitions.

  10. [Using sequential indicator simulation method to define risk areas of soil heavy metals in farmland].

    PubMed

    Yang, Hao; Song, Ying Qiang; Hu, Yue Ming; Chen, Fei Xiang; Zhang, Rui

    2018-05-01

    Because of their toxicity and accumulation, heavy metals in soil have serious impacts on food safety, the ecological environment and human health. Efficiently identifying risk areas of heavy metals in farmland soil is therefore of great significance for environmental protection, pollution warning and farmland risk control. We collected 204 samples and analyzed the contents of seven heavy metals (Cu, Zn, Pb, Cd, Cr, As, Hg) in Zengcheng District of Guangzhou, China. To overcome the problems of the data, including abnormal values, skewed distributions and the smoothing effect of traditional kriging methods, we used the sequential indicator simulation method (SISIM) to map the spatial distribution of heavy metals, and combined it with the Hakanson index method to identify potential ecological risk areas of heavy metals in farmland. The results showed that: (1) With similar spatial prediction accuracy for soil heavy metals, the SISIM reproduced local detail better than ordinary kriging in small-scale areas. Compared to indicator kriging, the SISIM had a lower error rate (4.9%-17.1%) in the uncertainty evaluation of heavy-metal risk identification. The SISIM had less smoothing effect and was more applicable to the spatial uncertainty assessment of soil heavy metals and risk identification. (2) There was no pollution in Zengcheng's farmland. Moderate potential ecological risk was found in the southern part of the study area due to enterprise production, human activities, and river sediments. This study combined sequential indicator simulation with the Hakanson risk index method and effectively overcame the outlier information loss and smoothing effect of traditional kriging methods. It provides a new way to identify soil heavy metal risk areas of farmland under uneven sampling.
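
    For reference, a minimal sketch of the Hakanson index combined here with the simulation output: each metal's contamination factor (measured concentration over background) is weighted by a toxic-response factor and summed into the potential ecological risk index RI. The toxic-response factors are the commonly cited Hakanson (1980) values; the concentrations and backgrounds below are placeholders, not data from this study.

      # Commonly cited Hakanson toxic-response factors.
      TR = {"Hg": 40, "Cd": 30, "As": 10, "Pb": 5, "Cu": 5, "Cr": 2, "Zn": 1}

      def hakanson_ri(conc, background):
          # Er_i = Tr_i * (C_i / C_ref_i); RI = sum of Er_i over the metals present.
          er = {m: TR[m] * conc[m] / background[m] for m in conc}
          return sum(er.values()), er

      ri, er = hakanson_ri(
          conc={"Hg": 0.12, "Cd": 0.25, "Pb": 45.0},        # mg/kg, placeholder values
          background={"Hg": 0.05, "Cd": 0.10, "Pb": 30.0},  # placeholder backgrounds
      )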

  11. A novel method for the sequential removal and separation of multiple heavy metals from wastewater.

    PubMed

    Fang, Li; Li, Liang; Qu, Zan; Xu, Haomiao; Xu, Jianfang; Yan, Naiqiang

    2018-01-15

    A novel method was developed and applied for the treatment of simulated wastewater containing multiple heavy metals. A sorbent of ZnS nanocrystals (NCs) was synthesized and showed extraordinary performance for the removal of Hg2+, Cu2+, Pb2+ and Cd2+. The removal efficiencies of Hg2+, Cu2+, Pb2+ and Cd2+ were 99.9%, 99.9%, 90.8% and 66.3%, respectively. Meanwhile, it was determined that the solubility product (Ksp) of heavy metal sulfides was closely related to the adsorption selectivity of various heavy metals on the sorbent. The removal efficiency of Hg2+ was higher than that of Cd2+, while the Ksp of HgS was lower than that of CdS. This indicated that preferential adsorption of heavy metals occurred when the Ksp of the heavy metal sulfide was lower. In addition, the differences in the Ksp of heavy metal sulfides allowed for the exchange of heavy metals, indicating the potential application for the sequential removal and separation of heavy metals from wastewater. According to the cumulative adsorption experimental results, multiple heavy metals were sequentially adsorbed and separated from the simulated wastewater in the order of the Ksp of their sulfides. This method holds the promise of sequentially removing and separating multiple heavy metals from wastewater. Copyright © 2017 Elsevier B.V. All rights reserved.
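
    The selectivity rule reported here, lower sulfide Ksp means stronger preference, can be stated in a few lines. The sketch below ranks the four metals by approximate textbook Ksp values for their sulfides; the exact figures vary between compilations and are assumptions, but the ordering matches the reported removal efficiencies.

      # Approximate solubility products of the metal sulfides (textbook values).
      KSP_SULFIDE = {"Hg2+": 2e-53, "Cu2+": 6e-36, "Pb2+": 3e-28, "Cd2+": 8e-27}

      # Lower Ksp -> more stable sulfide -> adsorbed/exchanged preferentially.
      removal_order = sorted(KSP_SULFIDE, key=KSP_SULFIDE.get)
      print(removal_order)  # ['Hg2+', 'Cu2+', 'Pb2+', 'Cd2+']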

  12. Gstat: a program for geostatistical modelling, prediction and simulation

    NASA Astrophysics Data System (ADS)

    Pebesma, Edzer J.; Wesseling, Cees G.

    1998-01-01

    Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.

  13. Computer simulation of a space SAR using a range-sequential processor for soil moisture mapping

    NASA Technical Reports Server (NTRS)

    Fujita, M.; Ulaby, F. (Principal Investigator)

    1982-01-01

    The ability of a spaceborne synthetic aperture radar (SAR) to detect soil moisture was evaluated by means of a computer simulation technique. The computer simulation package includes coherent processing of the SAR data using a range-sequential processor, which can be set up through hardware implementations, thereby reducing the amount of telemetry involved. With such a processing approach, it is possible to monitor the earth's surface on a continuous basis, since data storage requirements can be easily met through the use of currently available technology. The development of the simulation package is described, followed by an examination of the application of the technique to actual environments. The results indicate that in estimating soil moisture content with a four-look processor, the difference between the assumed and estimated values of soil moisture is within ±20% of field capacity for 62% of the pixels for agricultural terrain and for 53% of the pixels for hilly terrain. The estimation accuracy for soil moisture may be improved by reducing the effect of fading through non-coherent averaging.

  14. Using sequential self-calibration method to identify conductivity distribution: Conditioning on tracer test data

    USGS Publications Warehouse

    Hu, B.X.; He, C.

    2008-01-01

    An iterative inverse method, the sequential self-calibration method, is developed for mapping spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer. The simulation is used as the forward modeling step. In this study, the hydraulic conductivity is assumed to be a deterministic or random variable. Within the framework of the streamline-based simulator, the efficient semi-analytical method is used to calculate sensitivity coefficients of the solute concentration with respect to the hydraulic conductivity variation. The calculated sensitivities account for spatial correlations between the solute concentration and parameters. The performance of the inverse method is assessed by two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The study results indicate that the developed iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer given appropriate observation wells in these synthetic cases. © International Association for Mathematical Geology 2008.

  15. Suppressing correlations in massively parallel simulations of lattice models

    NASA Astrophysics Data System (ADS)

    Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle

    2017-11-01

    For lattice Monte Carlo simulations parallelization is crucial to make studies of large systems and long simulation time feasible, while sequential simulations remain the gold standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one which delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlation in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30× over a parallel CPU implementation on a single socket and at least 180× with respect to the sequential reference.

  16. CFD simulation of hemodynamics in sequential and individual coronary bypass grafts based on multislice CT scan datasets.

    PubMed

    Hajati, Omid; Zarrabi, Khalil; Karimi, Reza; Hajati, Azadeh

    2012-01-01

    There is still controversy over the differences in the patency rates of the sequential and individual coronary artery bypass grafting (CABG) techniques. The purpose of this paper was to non-invasively evaluate hemodynamic parameters using complete 3D computational fluid dynamics (CFD) simulations of the sequential and the individual methods based on the patient-specific data extracted from computed tomography (CT) angiography. For CFD analysis, the geometric model of coronary arteries was reconstructed using an ECG-gated 64-detector row CT. Modeling the sequential and individual bypass grafting, this study simulates the flow from the aorta to the occluded posterior descending artery (PDA) and the posterior left ventricle (PLV) vessel with six coronary branches based on the physiologically measured inlet flow as the boundary condition. The maximum calculated wall shear stress (WSS) in the sequential and the individual models were estimated to be 35.1 N/m² and 36.5 N/m², respectively. Compared to the individual bypass method, the sequential graft has shown a higher velocity at the proximal segment and lower spatial wall shear stress gradient (SWSSG) due to the flow splitting caused by the side-to-side anastomosis. Simulated results combined with its surgical benefits including the requirement of shorter vein length and fewer anastomoses advocate the sequential method as a more favorable CABG method.

  17. Numerical simulation of double‐diffusive finger convection

    USGS Publications Warehouse

    Hughes, Joseph D.; Sanford, Ward E.; Vacher, H. Leonard

    2005-01-01

    A hybrid finite element, integrated finite difference numerical model is developed for the simulation of double‐diffusive and multicomponent flow in two and three dimensions. The model is based on a multidimensional, density‐dependent, saturated‐unsaturated transport model (SUTRA), which uses one governing equation for fluid flow and another for solute transport. The solute‐transport equation is applied sequentially to each simulated species. Density coupling of the flow and solute‐transport equations is accounted for and handled using a sequential implicit Picard iterative scheme. High‐resolution data from a double‐diffusive Hele‐Shaw experiment, initially in a density‐stable configuration, is used to verify the numerical model. The temporal and spatial evolution of simulated double‐diffusive convection is in good agreement with experimental results. Numerical results are very sensitive to discretization and correspond closest to experimental results when element sizes adequately define the spatial resolution of observed fingering. Numerical results also indicate that differences in the molecular diffusivity of sodium chloride and the dye used to visualize experimental sodium chloride concentrations are significant and cause inaccurate mapping of sodium chloride concentrations by the dye, especially at late times. As a result of reduced diffusion, simulated dye fingers are better defined than simulated sodium chloride fingers and exhibit more vertical mass transfer.

  18. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
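
    A minimal simulation of this kind of validation check is sketched below: cluster-level prevalences are drawn from a beta distribution to induce intracluster correlation (rho = 1/(a+b+1) for a beta-binomial model), clusters are sampled, and an LQAS decision rule is applied to the pooled count. The decision threshold and all parameter values are illustrative assumptions, not the study's.

      import numpy as np

      def lqas_classification_rate(p_true, n_clusters=67, m=3, rho=0.1,
                                   threshold=15, reps=10_000, seed=1):
          # Fraction of simulated surveys whose pooled case count falls at or
          # below the decision threshold (classified "low prevalence").
          rng = np.random.default_rng(seed)
          s = (1.0 - rho) / rho                    # a + b of the beta mixing law
          p_clust = rng.beta(p_true * s, (1.0 - p_true) * s,
                             size=(reps, n_clusters))
          cases = rng.binomial(m, p_clust).sum(axis=1)
          return np.mean(cases <= threshold)

      # E.g., probability of classifying a 15%-prevalence area as "low" under this rule.
      print(lqas_classification_rate(p_true=0.15))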

  19. Dosimetric effects of patient rotational setup errors on prostate IMRT treatments

    NASA Astrophysics Data System (ADS)

    Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.

    2006-10-01

    The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. In order to implement this, different rotational setup errors around three Cartesian axes were simulated for five prostate patients and dosimetric indices, such as dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of treatment beams and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, the rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased about 1%. For seminal vesicle, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that the rotational setup error degrades the dosimetric coverage of target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.

  20. Sequential Computerized Mastery Tests--Three Simulation Studies

    ERIC Educational Resources Information Center

    Wiberg, Marie

    2006-01-01

    A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…

  1. Managing numerical errors in random sequential adsorption

    NASA Astrophysics Data System (ADS)

    Cieśla, Michał; Nowak, Aleksandra

    2016-09-01

    The aim of this study is to examine the influence of finite surface size and finite simulation time on the packing fraction estimated using random sequential adsorption simulations. Of particular interest is providing guidance on simulation setup to achieve a desired level of accuracy. The analysis is based on properties of saturated random packings of disks on continuous and flat surfaces of different sizes.
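
    A minimal random sequential adsorption sketch makes the two finite-size effects in question easy to reproduce: disks are placed at uniformly random positions and rejected on overlap, and the packing fraction is read off after a fixed number of attempts. The surface size, disk radius and attempt budget below are arbitrary assumptions; the saturation coverage for disks is known to be about 0.547 in the large-system, long-time limit.

      import numpy as np

      def rsa_disks(L=20.0, r=1.0, attempts=100_000, seed=0):
          # Random sequential adsorption of equal disks on an L x L surface:
          # try random centres, keep only those not overlapping earlier disks.
          rng = np.random.default_rng(seed)
          placed = []
          for _ in range(attempts):
              p = rng.uniform(r, L - r, size=2)        # keep disks fully inside
              if all((p[0] - q[0])**2 + (p[1] - q[1])**2 >= (2*r)**2 for q in placed):
                  placed.append(p)
          return len(placed) * np.pi * r**2 / L**2     # estimated packing fraction

      # Finite L and a finite attempt budget both bias this below ~0.547.
      print(rsa_disks())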

  2. Manual flying of curved precision approaches to landing with electromechanical instrumentation. A piloted simulation study

    NASA Technical Reports Server (NTRS)

    Knox, Charles E.

    1993-01-01

    A piloted simulation study was conducted to examine the requirements for using electromechanical flight instrumentation to provide situation information and flight guidance for manually controlled flight along curved precision approach paths to a landing. Six pilots were used as test subjects. The data from these tests indicated that flight director guidance is required for the manually controlled flight of a jet transport airplane on curved approach paths. Acceptable path tracking performance was attained with each of the three situation information algorithms tested. Approach paths with both multiple sequential turns and short final path segments were evaluated. Pilot comments indicated that all the approach paths tested could be used in normal airline operations.

  3. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis. PMID:20011037

  4. Formation of iron nanoparticles and increase in iron reactivity in mineral dust during simulated cloud processing.

    PubMed

    Shi, Zongbo; Krom, Michael D; Bonneville, Steeve; Baker, Alex R; Jickells, Timothy D; Benning, Liane G

    2009-09-01

    The formation of iron (Fe) nanoparticles and the increase in Fe reactivity in mineral dust during simulated cloud processing were investigated using high-resolution microscopy and chemical extraction methods. Cloud processing of dust was experimentally simulated via an alternation of acidic (pH 2) and circumneutral conditions (pH 5-6) over periods of 24 h each on presieved (<20 microm) Saharan soil and goethite suspensions. Microscopic analyses of the processed soil and goethite samples reveal the neo-formation of Fe-rich nanoparticle aggregates, which were not found initially. Similar Fe-rich nanoparticles were also observed in wet-deposited Saharan dusts from the western Mediterranean but not in dry-deposited dust from the eastern Mediterranean. Sequential Fe extraction of the soil samples indicated an increase in the proportion of chemically reactive Fe extractable by an ascorbate solution after simulated cloud processing. In addition, the sequential extractions on the Mediterranean dust samples revealed a higher content of reactive Fe in the wet-deposited dust compared to that of the dry-deposited dust. These results suggest that the large variations of pH commonly reported in aerosol and cloud waters can trigger the neo-formation of nanosize Fe particles and an increase in Fe reactivity in the dust.

  5. Sequential biases in accumulating evidence

    PubMed Central

    Huggins, Richard; Dogo, Samson Henry

    2015-01-01

    Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562

  6. Efficient Simulation Budget Allocation for Selecting an Optimal Subset

    NASA Technical Reports Server (NTRS)

    Chen, Chun-Hung; He, Donghai; Fu, Michael; Lee, Loo Hay

    2008-01-01

    We consider a class of the subset selection problem in ranking and selection. The objective is to identify the top m out of k designs based on simulated output. Traditional procedures are conservative and inefficient. Using the optimal computing budget allocation framework, we formulate the problem as that of maximizing the probability of correctly selecting all of the top-m designs subject to a constraint on the total number of samples available. For an approximation of this correct selection probability, we derive an asymptotically optimal allocation and propose an easy-to-implement heuristic sequential allocation procedure. Numerical experiments indicate that the resulting allocations are superior to other methods in the literature that we tested, and the relative efficiency increases for larger problems. In addition, preliminary numerical results indicate that the proposed new procedure has the potential to enhance computational efficiency for simulation optimization.

  7. The subtyping of primary aldosteronism by adrenal vein sampling: sequential blood sampling causes factitious lateralization.

    PubMed

    Rossitto, Giacomo; Battistel, Michele; Barbiero, Giulio; Bisogni, Valeria; Maiolino, Giuseppe; Miotto, Diego; Seccia, Teresa M; Rossi, Gian Paolo

    2018-02-01

    The pulsatile secretion of adrenocortical hormones and a stress reaction occurring when starting adrenal vein sampling (AVS) can affect the selectivity and also the assessment of lateralization when sequential blood sampling is used. We therefore tested the hypothesis that a simulated sequential blood sampling could decrease the diagnostic accuracy of the lateralization index for identification of aldosterone-producing adenoma (APA), as compared with bilaterally simultaneous AVS. In 138 consecutive patients who underwent subtyping of primary aldosteronism, we compared the results obtained simultaneously bilaterally when starting AVS (t-15) and 15 min after (t0), with those gained with a simulated sequential right-to-left AVS technique (R ⇒ L) created by combining hormonal values obtained at t-15 and at t0. The concordance between simultaneously obtained values at t-15 and t0, and between simultaneously obtained values and values gained with the sequential R ⇒ L technique, was also assessed. We found a marked interindividual variability of lateralization index values in the patients with bilaterally selective AVS at both time points. However, overall the lateralization index simultaneously determined at t0 provided a more accurate identification of APA than the simulated sequential lateralization index R ⇒ L (P = 0.001). Moreover, regardless of which side was sampled first, the sequential AVS technique induced a sequence-dependent overestimation of the lateralization index. While in APA patients the concordance between simultaneous AVS at t0 and t-15 and between simultaneous t0 and the sequential technique was moderate-to-good (K = 0.55 and 0.66, respectively), in non-APA patients it was poor (K = 0.12 and 0.13, respectively). Sequential AVS generates factitious between-sides gradients, which lower its diagnostic accuracy, likely because of the stress reaction arising upon starting AVS.

  8. Actively learning human gaze shifting paths for semantics-aware photo cropping.

    PubMed

    Zhang, Luming; Gao, Yue; Ji, Rongrong; Xia, Yingjie; Dai, Qionghai; Li, Xuelong

    2014-05-01

    Photo cropping is a widely used tool in the printing industry, photography, and cinematography. Conventional cropping models suffer from three challenges. First, they deemphasize semantic content, which is many times more important than low-level features in photo aesthetics. Second, they impose no sequential ordering, whereas humans look at semantically important regions sequentially when viewing a photo. Third, they have difficulty leveraging inputs from multiple users, although experience from multiple users is particularly critical in cropping, as photo assessment is quite a subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving semantically important regions of a photo. We first project the local features (graphlets in this paper) onto the semantic space, which is constructed based on the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo, and the selection process can be interpreted as a path, which simulates humans actively perceiving semantics in a photo. Furthermore, we learn a prior distribution of such active graphlet paths from training photos that are marked as aesthetically pleasing by multiple users. The learned priors enforce the corresponding active graphlet path of a test photo to be maximally similar to those from the training photos. Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and thus is more indicative of photo aesthetics than conventional saliency maps, and 2) the cropped photos produced by our approach outperform its competitors in both qualitative and quantitative comparisons.

  9. Simultaneous sequential monitoring of efficacy and safety led to masking of effects.

    PubMed

    van Eekelen, Rik; de Hoop, Esther; van der Tweel, Ingeborg

    2016-08-01

    Usually, sequential designs for clinical trials are applied to the primary (=efficacy) outcome. In practice, other outcomes (e.g., safety) will also be monitored and influence the decision whether to stop a trial early. The implications of simultaneous monitoring for trial decision making are as yet unclear. This study examines what happens to the type I error, power, and required sample sizes when one efficacy outcome and one correlated safety outcome are monitored simultaneously using sequential designs. We conducted a simulation study in the framework of a two-arm parallel clinical trial. Interim analyses on two outcomes were performed independently and simultaneously on the same data sets using four sequential monitoring designs, including O'Brien-Fleming and Triangular Test boundaries. Simulations differed in values for correlations and true effect sizes. When an effect was present in both outcomes, competition was introduced, which decreased power (e.g., from 80% to 60%). Futility boundaries for the efficacy outcome reduced overall type I errors as well as power for the safety outcome. Monitoring two correlated outcomes, given that both are essential for early trial termination, leads to masking of true effects. Scenarios must be considered carefully when designing sequential trials, and simulation results can help guide trial design. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. A Bayesian sequential design using alpha spending function to control type I error.

    PubMed

    Zhu, Han; Yu, Qingzhao

    2017-10-01

    We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than the traditional Bayesian sequential design, which sets equal critical values for all interim analyses. Compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is least likely to be rejected at an early stage of the trial. Finally, we show that adding a stop-for-futility step in the Bayesian sequential design can reduce the overall type I error and the actual sample sizes.
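
    For readers unfamiliar with alpha spending, the sketch below evaluates the standard Lan-DeMets O'Brien-Fleming-type and Pocock-type spending functions and the per-look alpha increments they release at a set of interim analyses. The look schedule is an assumption; the two formulas are the usual two-sided spending shapes.

      from math import e, log
      from scipy.stats import norm

      def obf_spend(t, alpha=0.05):
          # O'Brien-Fleming-type spending: alpha(t) = 2 - 2*Phi(z_{alpha/2} / sqrt(t)).
          return 2.0 - 2.0 * norm.cdf(norm.ppf(1.0 - alpha / 2.0) / t**0.5)

      def pocock_spend(t, alpha=0.05):
          # Pocock-type spending: alpha(t) = alpha * ln(1 + (e - 1) * t).
          return alpha * log(1.0 + (e - 1.0) * t)

      looks = [0.25, 0.50, 0.75, 1.00]            # information fractions (assumed)
      cum = [obf_spend(t) for t in looks]
      increments = [cum[0]] + [b - a for a, b in zip(cum, cum[1:])]
      print([round(x, 4) for x in increments])    # alpha spent at each look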

  11. Multiuser signal detection using sequential decoding

    NASA Astrophysics Data System (ADS)

    Xie, Zhenhua; Rushforth, Craig K.; Short, Robert T.

    1990-05-01

    The application of sequential decoding to the detection of data transmitted over the additive white Gaussian noise channel by K asynchronous transmitters using direct-sequence spread-spectrum multiple access is considered. A modification of Fano's (1963) sequential-decoding metric, allowing the messages from a given user to be safely decoded if that user's Eb/N0 exceeds -1.6 dB, is presented. Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm. In many circumstances, the sequential decoder achieves results comparable to those obtained using the much more complicated optimal receiver.

  12. Challenges in predicting climate change impacts on pome fruit phenology

    NASA Astrophysics Data System (ADS)

    Darbyshire, Rebecca; Webb, Leanne; Goodwin, Ian; Barlow, E. W. R.

    2014-08-01

    Climate projection data were applied to two commonly used pome fruit flowering models to investigate potential differences in predicted full bloom timing. The two methods, fixed thermal time and sequential chill-growth, produced different results for seven apple and pear varieties at two Australian locations. The fixed thermal time model predicted incremental advancement of full bloom, while results were mixed for the sequential chill-growth model. To further investigate how the sequential chill-growth model reacts under climate-perturbed conditions, four simulations were created to represent a wider range of species physiological requirements. These were applied to five Australian locations covering varied climates. Lengthening of the chill period and contraction of the growth period were common to most results. The relative dominance of the chill or growth component tended to determine whether full bloom advanced, remained similar or was delayed with climate warming. The simplistic structure of the fixed thermal time model and its exclusion of winter chill conditions indicate it is unlikely to be suitable for projection analyses. The sequential chill-growth model includes greater complexity; however, reservations about using this model for impact analyses remain. The results demonstrate that appropriate representation of physiological processes is essential to adequately predict changes to full bloom under climate-perturbed conditions, and that greater model development is needed.
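
    The structure of a sequential chill-growth model is simple enough to sketch: chilling accumulates first, and only once the chill requirement is satisfied does heat accumulation begin, with full bloom predicted when the heat requirement is met. The chill-hour definition (hours between 0 and 7.2 degrees C), base temperature and requirement values below are illustrative placeholders, not calibrated variety parameters.

      def full_bloom_hour(hourly_temps, chill_req=800.0, heat_req=6000.0, t_base=4.5):
          # Sequential model: chill phase completes before the growth (heat) phase starts.
          chill = heat = 0.0
          for hour, temp in enumerate(hourly_temps):
              if chill < chill_req:
                  if 0.0 <= temp <= 7.2:            # one chill hour accumulated
                      chill += 1.0
              else:
                  heat += max(temp - t_base, 0.0)   # growing degree hours
                  if heat >= heat_req:
                      return hour                    # predicted hour of full bloom
          return None                                # requirements never met

    A fixed thermal time model, by contrast, would accumulate only the heat term from a fixed start date, which is why it responds to warming with uniform advancement of bloom.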

  13. Three-dimensional Stochastic Estimation of Porosity Distribution: Benefits of Using Ground-penetrating Radar Velocity Tomograms in Simulated-annealing-based or Bayesian Sequential Simulation Approaches

    DTIC Science & Technology

    2012-05-30

    Dafflon, B.; Barrash, W. Received 13 May 2011; revised 12 March 2012; accepted 17 April 2012. (Abstract not preserved; the record retains only fragments describing a cross-validation in which a porosity log and the corresponding radar data are withheld, repeated for two wells with locally variable stratigraphy, and comparison with stratigraphic contacts between Units 1 to 4 at the BHRS.)

  14. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100-variables/425-clauses to 5000-variables/21,250-clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.

  15. Intra-individual diagnostic image quality and organ-specific radiation dose comparison between spiral cCT with iterative image reconstruction and z-axis automated tube current modulation and sequential cCT.

    PubMed

    Wenz, Holger; Maros, Máté E; Meyer, Mathias; Gawlitza, Joshua; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O; Groden, Christoph; Henzler, Thomas

    2016-01-01

    To prospectively evaluate image quality and organ-specific radiation dose of spiral cranial CT (cCT) combined with automated tube current modulation (ATCM) and iterative image reconstruction (IR) in comparison to sequential tilted cCT reconstructed with filtered back projection (FBP) without ATCM. 31 patients with a previously performed tilted non-contrast-enhanced sequential cCT acquisition on a 4-slice CT system with only FBP reconstruction and no ATCM were prospectively enrolled in this study for a clinically indicated cCT scan. All spiral cCT examinations were performed on a 3rd generation dual-source CT system using ATCM in the z-axis direction. Images were reconstructed using both FBP and IR (levels 1-5). A Monte-Carlo-simulation-based analysis was used to compare organ-specific radiation dose. Subjective image quality for various anatomic structures was evaluated using a 4-point Likert scale, and objective image quality was evaluated by comparing signal-to-noise ratios (SNR). Spiral cCT led to a significantly lower (p < 0.05) organ-specific radiation dose in all targets including the eye lens. Subjective image quality of spiral cCT datasets with IR reconstruction level 5 was rated significantly higher compared to the sequential cCT acquisitions (p < 0.0001). Correspondingly, mean SNR was significantly higher in all spiral datasets (FBP, IR 1-5) when compared to sequential cCT, with a mean SNR improvement of 44.77% (p < 0.0001). Spiral cCT combined with ATCM and IR allows for significant radiation dose reduction, including a reduced eye lens organ dose, when compared to a tilted sequential cCT while improving subjective and objective image quality.

  16. Mechanisms of electron acceptor utilization: Implications for simulating anaerobic biodegradation

    USGS Publications Warehouse

    Schreiber, M.E.; Carey, G.R.; Feinstein, D.T.; Bahr, J.M.

    2004-01-01

    Simulation of biodegradation reactions within a reactive transport framework requires information on mechanisms of terminal electron acceptor processes (TEAPs). In initial modeling efforts, TEAPs were approximated as occurring sequentially, with the highest energy-yielding electron acceptors (e.g. oxygen) consumed before those that yield less energy (e.g., sulfate). Within this framework in a steady state plume, sequential electron acceptor utilization would theoretically produce methane at an organic-rich source and Fe(II) further downgradient, resulting in a limited zone of Fe(II) and methane overlap. However, contaminant plumes often display much more extensive zones of overlapping Fe(II) and methane. The extensive overlap could be caused by several abiotic and biotic processes including vertical mixing of byproducts in long-screened monitoring wells, adsorption of Fe(II) onto aquifer solids, or microscale heterogeneity in Fe(III) concentrations. Alternatively, the overlap could be due to simultaneous utilization of terminal electron acceptors. Because biodegradation rates are controlled by TEAPs, evaluating the mechanisms of electron acceptor utilization is critical for improving prediction of contaminant mass losses due to biodegradation. Using BioRedox-MT3DMS, a three-dimensional, multi-species reactive transport code, we simulated the current configurations of a BTEX plume and TEAP zones at a petroleum-contaminated field site in Wisconsin. Simulation results suggest that BTEX mass loss due to biodegradation is greatest under oxygen-reducing conditions, with smaller but similar contributions to mass loss from biodegradation under Fe(III)-reducing, sulfate-reducing, and methanogenic conditions. Results of sensitivity calculations document that BTEX losses due to biodegradation are most sensitive to the age of the plume, while the shape of the BTEX plume is most sensitive to effective porosity and rate constants for biodegradation under Fe(III)-reducing and methanogenic conditions. Using this transport model, we had limited success in simulating overlap of redox products using reasonable ranges of parameters within a strictly sequential electron acceptor utilization framework. Simulation results indicate that overlap of redox products cannot be accurately simulated using the constructed model, suggesting either that Fe(III) reduction and methanogenesis are occurring simultaneously in the source area, or that heterogeneities in Fe(III) concentration and/or mineral type cause the observed overlap. Additional field, experimental, and modeling studies will be needed to address these questions. © 2004 Elsevier B.V. All rights reserved.
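
    A common way reactive transport codes encode strictly sequential electron acceptor use, as discussed here, is a Monod rate multiplied by a noncompetitive inhibition term that suppresses a lower-energy pathway while a higher-energy acceptor remains. The sketch below is illustrative, with placeholder constants, and is not BioRedox-MT3DMS's exact formulation.

      def fe_reduction_rate(fe3, o2, k_max=1.0, K_fe=0.5, K_inh=0.01):
          # Monod dependence on Fe(III), inhibited while oxygen persists:
          # r = k_max * Fe3/(K_fe + Fe3) * K_inh/(K_inh + O2).
          monod = fe3 / (K_fe + fe3)
          inhibition = K_inh / (K_inh + o2)   # ~0 with O2 present, ~1 once O2 is gone
          return k_max * monod * inhibition

      print(fe_reduction_rate(fe3=2.0, o2=0.25))   # suppressed under oxic conditions
      print(fe_reduction_rate(fe3=2.0, o2=0.0))    # active once O2 is exhausted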

  17. On the Lulejian-I Combat Model

    DTIC Science & Technology

    1976-08-01

    possible initial massing of the attacking side’s resources, the model tries to represent in a game-theoretic context the adversary nature of the...sequential game, as outlined in [A]. In principle, it is necessary to run the combat simulation once for each possible set of sequentially chosen...sequential game, in which the evaluative portion of the model (i.e., the combat assessment) serves to compute intermediate and terminal payoffs for the

  18. Lifelong Transfer Learning for Heterogeneous Teams of Agents in Sequential Decision Processes

    DTIC Science & Technology

    2016-06-01

    making (SDM) tasks in dynamic environments with simulated and physical robots. SUBJECT TERMS: sequential decision making, lifelong learning, transfer...sequential decision-making (SDM) tasks in dynamic environments with both simple benchmark tasks and more complex aerial and ground robot tasks. Our work...and ground robots in the presence of disturbances: we applied our methods to the problem of learning controllers for robots with novel disturbances in

  19. Increasing efficiency of preclinical research by group sequential designs

    PubMed Central

    Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich

    2017-01-01

    Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulation of data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to the saving of resources of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness in this research domain. PMID:28282371

  20. Simulation of Peptides at Aqueous Interfaces

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Wilson, M.; Chipot, C.; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    The behavior of peptides at water-membrane interfaces is of great interest in studies on cellular transport and signaling, membrane fusion, and the action of toxins and antibiotics. Many peptides, which exist in water only as random coils, can form sequence-dependent, ordered structures at aqueous interfaces, incorporate into membranes and self-assemble into functional units, such as simple ion channels. Multi-nanosecond molecular dynamics simulations have been carried out to study the mechanism and energetics of interfacial folding of both non-polar and amphiphilic peptides, their insertion into membranes and their association into higher-order structures. The simulations indicate that peptides fold non-sequentially, often through a series of amphiphilic intermediates. They further incorporate into the membrane in a preferred direction as folded monomers, and only then aggregate into dimers and, possibly, further into "dimers of dimers".

  1. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently assign newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can obtain greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
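
    The variance-minimization idea behind the adaptive randomization rate can be illustrated directly. For a difference-in-means statistic, Var(xbar1 - xbar2) = s1^2/n1 + s2^2/n2 is minimized under n1 + n2 = n by Neyman allocation, n1/n = s1/(s1 + s2); updating this rate from interim standard deviation estimates is a minimal stand-in for the paper's full Bayesian procedure, not its actual algorithm.

      def optimal_randomization_rate(s1, s2):
          # Neyman allocation: fraction of new patients assigned to arm 1.
          return s1 / (s1 + s2)

      # If interim data suggest arm 1 is twice as variable as arm 2,
      # randomize two thirds of incoming patients to arm 1.
      print(optimal_randomization_rate(s1=2.0, s2=1.0))  # 0.666...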

  2. Sequential Dependencies in Driving

    ERIC Educational Resources Information Center

    Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.

    2012-01-01

    The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…

  3. J-adaptive estimation with estimated noise statistics

    NASA Technical Reports Server (NTRS)

    Jazwinski, A. H.; Hipkins, C.

    1973-01-01

    The J-adaptive sequential estimator is extended to include simultaneous estimation of the noise statistics in a model for system dynamics. This extension completely automates the estimator, eliminating the requirement of an analyst in the loop. Simulations in satellite orbit determination demonstrate the efficacy of the sequential estimation algorithm.

  4. Parallelization and automatic data distribution for nuclear reactor simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebrock, L.M.

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine cannot run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  5. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
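
    The selection criterion can be made concrete in the simplest case: with a common Gaussian noise variance s², the Kullback-Leibler divergence between two rival models' predictive densities at a setting x reduces to (m1(x) - m2(x))² / (2s²), so the most informative next experiment is where the models disagree most. A minimal sketch under these assumptions (the two linear models and the design grid are illustrative):

    ```python
    # Hedged sketch: pick the candidate setting where two rival models' Gaussian
    # predictive densities have the largest KL divergence.
    import numpy as np

    sigma2 = 0.25                       # assumed common noise variance
    xs = np.linspace(0, 2, 21)          # candidate experimental settings
    m1 = 1.0 + 0.5 * xs                 # model 1 predictions
    m2 = 0.8 + 0.9 * xs                 # model 2 predictions
    kl = (m1 - m2) ** 2 / (2 * sigma2)  # discrimination power at each setting
    print(xs[np.argmax(kl)])            # most informative next experiment
    ```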

  6. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.

    2017-09-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which reduce the model groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
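
    A minimal stochastic EnKF analysis step makes the perturbed-observation issue concrete; the state size, ensemble size, and observed quantity below are illustrative stand-ins, not the W3RA/GRACE configuration.

    ```python
    # Hedged sketch of one stochastic EnKF update with a single scalar observation.
    import numpy as np

    rng = np.random.default_rng(1)
    n_ens, n_state = 30, 5
    X = rng.normal(size=(n_state, n_ens))         # ensemble of model states
    H = np.zeros((1, n_state)); H[0, 0] = 1.0     # observe the first state variable
    y, r = 0.5, 0.1 ** 2                          # observation and its error variance

    A = X - X.mean(axis=1, keepdims=True)         # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                     # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + r)  # Kalman gain

    # The stochastic variant perturbs the observation for each member; the
    # deterministic (square root) variants discussed above avoid this step.
    Y = y + rng.normal(0.0, np.sqrt(r), size=(1, n_ens))
    X_a = X + K @ (Y - H @ X)                     # analysis ensemble
    print(X_a.mean(axis=1))
    ```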

  7. High data rate coding for the space station telemetry links.

    NASA Technical Reports Server (NTRS)

    Lumb, D. R.; Viterbi, A. J.

    1971-01-01

    Coding systems for high data rates were examined from the standpoint of potential application in space-station telemetry links. Approaches considered included convolutional codes with sequential, Viterbi, and cascaded-Viterbi decoding. It was concluded that a high-speed (40 Mbps) sequential decoding system best satisfies the requirements for the assumed growth potential and specified constraints. Trade-off studies leading to this conclusion are reviewed, and some sequential (Fano) algorithm improvements are discussed, together with real-time simulation results.

  8. Concurrent processing simulation of the space station

    NASA Technical Reports Server (NTRS)

    Gluck, R.; Hale, A. L.; Sunkel, John W.

    1989-01-01

    The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, each of which required significant advancement of the state of the art: (1) the development of an explicit mathematical model, via symbol manipulation, of a flexible, multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent vs. sequential digital computation will grow substantially as the computational load is increased. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full-scale testing, which has become impractical.

  9. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations.

    PubMed

    Qin, Fangjun; Chang, Lubin; Jiang, Sai; Zha, Feng

    2018-05-03

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms.
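
    The generic sequential measurement update underlying both the traditional sequential EKF and the SMEKF processes scalar observations one at a time, so no matrix inversion is needed. The sketch below shows that pattern for a plain linear Kalman filter; the multiplicative attitude form, the reset operation, and the deferred covariance update of the SMEKF are not reproduced.

    ```python
    # Hedged sketch: sequential scalar measurement updates for a linear KF.
    import numpy as np

    def sequential_update(x, P, zs, hs, rs):
        """Assimilate each scalar measurement z = h @ x + noise (variance r) in turn."""
        for z, h, r in zip(zs, hs, rs):
            s = h @ P @ h + r              # innovation variance (scalar)
            k = P @ h / s                  # Kalman gain (vector)
            x = x + k * (z - h @ x)
            P = P - np.outer(k, h @ P)     # covariance updated after each scalar
        return x, P

    x0, P0 = np.zeros(3), np.eye(3)
    hs = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0])]
    x, P = sequential_update(x0, P0, zs=[0.2, -0.1], hs=hs, rs=[0.01, 0.01])
    print(x)
    ```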

  10. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    PubMed Central

    Qin, Fangjun; Jiang, Sai; Zha, Feng

    2018-01-01

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538

  11. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  12. Sequential use of simulation and optimization in analysis and planning

    Treesearch

    Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones

    2000-01-01

    Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...

  13. Comparison of Statistical Approaches Dealing with Time-dependent Confounding in Drug Effectiveness Studies

    PubMed Central

    Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W.; Tremlett, Helen

    2017-01-01

    In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models (MSCMs) are frequently used to deal with such confounding. To avoid some of the problems of fitting MSCM, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as MSCM in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995 – 2008). PMID:27659168

  14. Comparison of statistical approaches dealing with time-dependent confounding in drug effectiveness studies.

    PubMed

    Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W; Tremlett, Helen

    2018-06-01

    In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models are frequently used to deal with such confounding. To avoid some of the problems of fitting marginal structural Cox model, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as marginal structural Cox model in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008).

  15. Using Zebra-speech to study sequential and simultaneous speech segregation in a cochlear-implant simulation.

    PubMed

    Gaudrain, Etienne; Carlyon, Robert P

    2013-01-01

    Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish the target and the masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed.

  16. Using Zebra-speech to study sequential and simultaneous speech segregation in a cochlear-implant simulation

    PubMed Central

    Gaudrain, Etienne; Carlyon, Robert P.

    2013-01-01

    Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish target and masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed. PMID:23297922

  17. Temporal Characteristics of Radiologists' and Novices' Lesion Detection in Viewing Medical Images Presented Rapidly and Sequentially.

    PubMed

    Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko

    2016-01-01

    Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about this task. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, it is effective for target detection that observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images which are sequentially presented, simulating a dynamic and smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when the target appears at the beginning of a sequential presentation, where the global apparent motion onset signal (i.e., the signal of the initiation of apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performances of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal locations of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, do not affect the performance of radiologists, whereas they do affect the performance of novices. Results indicate that novices have greater difficulty in detecting a lesion appearing early rather than late in the image sequence. We suggest that radiologists have other mechanisms to detect lesions in medical images with little attention, which novices do not have. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks.

  18. Temporal Characteristics of Radiologists' and Novices' Lesion Detection in Viewing Medical Images Presented Rapidly and Sequentially

    PubMed Central

    Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko

    2016-01-01

    Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about this task. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, it is effective for target detection that observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images which are sequentially presented, simulating a dynamic and smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when the target appears at the beginning of a sequential presentation, where the global apparent motion onset signal (i.e., the signal of the initiation of apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performances of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal locations of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, do not affect the performance of radiologists, whereas they do affect the performance of novices. Results indicate that novices have greater difficulty in detecting a lesion appearing early rather than late in the image sequence. We suggest that radiologists have other mechanisms to detect lesions in medical images with little attention, which novices do not have. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks. PMID:27774080

  19. Simulations of 6-DOF Motion with a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographic-derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.

  20. Monte Carlo Simulation of Sudden Death Bearing Testing

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2003-01-01

    Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life, and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30 960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not the total number of bearings tested. Variation in L10 life as a function of the number of bearings failed was similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L(sub 50) life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.
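
    The sudden-death mechanics are simple to emulate: draw Weibull lives for virtual bearings, split them into groups, and terminate each group at its first failure. The Weibull slope, L10 life, and group layout below are illustrative assumptions, not the study's configuration.

    ```python
    # Hedged sketch of sudden-death testing on virtual bearings.
    import numpy as np

    rng = np.random.default_rng(2)
    slope, L10 = 1.5, 100.0                         # assumed Weibull slope and L10 life
    eta = L10 / (-np.log(0.9)) ** (1 / slope)       # characteristic life from L10

    lives = eta * rng.weibull(slope, size=(12, 6))  # 12 groups of 6 bearings
    first_failures = lives.min(axis=1)              # each group stops at its 1st failure

    # Rough calendar-time comparison: groups run one after another on a single rig
    # versus running every bearing to failure.
    print(f"sudden-death: {first_failures.sum():.0f}  all-to-failure: {lives.sum():.0f}")
    ```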

  1. Dry minor mergers and size evolution of high-z compact massive early-type galaxies

    NASA Astrophysics Data System (ADS)

    Oogi, Taira; Habe, Asao

    2012-09-01

    Recent observations show evidence that high-z (z ~ 2-3) early-type galaxies (ETGs) are considerably more compact than those with comparable mass at z ~ 0. The dry merger scenario is one of the most plausible explanations for such size evolution. However, previous studies based on this scenario have not succeeded in consistently explaining the properties of both high-z compact massive ETGs and local ETGs. We investigate the effects of sequential, multiple dry minor (stellar mass ratio M2/M1 < 1/4) mergers on the size evolution of compact massive ETGs. We perform N-body simulations of sequential minor mergers with parabolic and head-on orbits, including a dark matter component and a stellar component. We show that sequential minor mergers of compact satellite galaxies are the most efficient at increasing the size and decreasing the velocity dispersion of compact massive ETGs. The change of stellar size and density of the merger remnant is consistent with the recent observations. Furthermore, we construct the merger histories of candidates for high-z compact massive ETGs using the Millennium Simulation Database, and estimate the size growth of the galaxies by dry minor mergers. We can reproduce the mean size growth factor between z = 2 and z = 0, assuming the most efficient size growth obtained in the case of sequential minor mergers in our simulations.

  2. Cluster Correspondence Analysis.

    PubMed

    van de Velden, M; D'Enza, A Iodice; Palumbo, F

    2017-03-01

    A method is proposed that combines dimension reduction and cluster analysis for categorical data by simultaneously assigning individuals to clusters and optimal scaling values to categories in such a way that a single between variance maximization objective is achieved. In a unified framework, a brief review of alternative methods is provided and we show that the proposed method is equivalent to GROUPALS applied to categorical data. Performance of the methods is appraised by means of a simulation study. The results of the joint dimension reduction and clustering methods are compared with the so-called tandem approach, a sequential analysis of dimension reduction followed by cluster analysis. The tandem approach is conjectured to perform worse when variables are added that are unrelated to the cluster structure. Our simulation study confirms this conjecture. Moreover, the results of the simulation study indicate that the proposed method also consistently outperforms alternative joint dimension reduction and clustering methods.

  3. Group-sequential three-arm noninferiority clinical trial designs

    PubMed Central

    Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko

    2016-01-01

    We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481

  4. Improved coverage of cDNA-AFLP by sequential digestion of immobilized cDNA.

    PubMed

    Weiberg, Arne; Pöhler, Dirk; Morgenstern, Burkhard; Karlovsky, Petr

    2008-10-13

    cDNA-AFLP is a transcriptomics technique which does not require prior sequence information and can therefore be used as a gene discovery tool. The method is based on selective amplification of cDNA fragments generated by restriction endonucleases, electrophoretic separation of the products and comparison of the band patterns between treated samples and controls. Unequal distribution of the restriction sites used to generate cDNA fragments negatively affects the performance of cDNA-AFLP. Some transcripts are represented by more than one fragment while others escape detection, causing redundancy and reducing the coverage of the analysis, respectively. With the goal of improving the coverage of cDNA-AFLP without increasing its redundancy, we designed a modified cDNA-AFLP protocol. Immobilized cDNA is sequentially digested with several restriction endonucleases and the released DNA fragments are collected in mutually exclusive pools. To investigate the performance of the protocol, the software tool MECS (Multiple Enzyme cDNA-AFLP Simulation) was written in Perl. cDNA-AFLP protocols described in the literature and the new sequential digestion protocol were simulated on sets of cDNA sequences from mouse, human and Arabidopsis thaliana. The redundancy and coverage, the total number of PCR reactions, and the average fragment length were calculated for each protocol and cDNA set. Simulation revealed that sequential digestion of immobilized cDNA followed by the partitioning of released fragments into mutually exclusive pools outperformed other cDNA-AFLP protocols in terms of coverage, redundancy, fragment length, and the total number of PCRs. Primers generating 30 to 70 amplicons per PCR provided the highest fraction of electrophoretically distinguishable fragments suitable for normalization. For the A. thaliana, human and mouse transcriptomes, the use of two marking enzymes and three sequentially applied releasing enzymes for each of the marking enzymes is recommended.

  5. A Simulation Approach to Assessing Sampling Strategies for Insect Pests: An Example with the Balsam Gall Midge

    PubMed Central

    Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.

    2013-01-01

    Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
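
    The pre-sampling/simulation idea is straightforward to emulate: bootstrap counts from a pilot survey and track how the error of the sample mean shrinks with sample size. The counts below are synthetic negative-binomial draws standing in for real pilot data.

    ```python
    # Hedged sketch: bootstrap a pilot sample to size a future survey.
    import numpy as np

    rng = np.random.default_rng(3)
    pilot = rng.negative_binomial(n=2, p=0.3, size=200)  # stand-in pilot counts

    def rel_error(sample_size, reps=2000):
        draws = rng.choice(pilot, size=(reps, sample_size), replace=True)
        return np.mean(np.abs(draws.mean(axis=1) - pilot.mean())) / pilot.mean()

    for n in (10, 25, 40, 80):
        print(n, round(rel_error(n), 3))   # relative error of the mean vs sample size
    ```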

  6. Propagating probability distributions of stand variables using sequential Monte Carlo methods

    Treesearch

    Jeffrey H. Gove

    2009-01-01

    A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
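
    The essential SIR steps are compact enough to sketch directly: propagate particles through the state model (the predictor), weight them by the observation likelihood (the corrector), and resample in proportion to the weights. The scalar growth model and noise levels below are illustrative, not the stand-growth application.

    ```python
    # Hedged sketch of a sampling importance resampling (SIR) particle filter.
    import numpy as np

    rng = np.random.default_rng(4)
    n_part, T = 500, 20
    truth, particles = 1.0, rng.normal(1.0, 0.5, n_part)

    for _ in range(T):
        truth = 1.05 * truth + rng.normal(0, 0.05)                  # hidden state evolves
        y = truth + rng.normal(0, 0.2)                              # noisy observation
        particles = 1.05 * particles + rng.normal(0, 0.05, n_part)  # predictor step
        w = np.exp(-0.5 * ((y - particles) / 0.2) ** 2)             # corrector weights
        w /= w.sum()
        particles = particles[rng.choice(n_part, size=n_part, p=w)] # resampling step

    print(f"truth {truth:.3f}, filter mean {particles.mean():.3f}")
    ```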

  7. A group sequential adaptive treatment assignment design for proof of concept and dose selection in headache trials.

    PubMed

    Hall, David B; Meier, Ulrich; Diener, Hans-Cristoph

    2005-06-01

    The trial objective was to test whether a new mechanism of action would effectively treat migraine headaches and to select a dose range for further investigation. The motivation for a group sequential, adaptive, placebo-controlled trial design was (1) limited information about where across the range of seven doses to focus attention, (2) a need to limit sample size for a complicated inpatient treatment and (3) a desire to reduce exposure of patients to ineffective treatment. A design based on group sequential and up-and-down designs was developed and its operational characteristics were explored by trial simulation. The primary outcome was headache response at 2 h after treatment. Groups of four treated and two placebo patients were assigned to one dose. Adaptive dose selection was based on response rates of 60% seen with other migraine treatments. If more than 60% of treated patients responded, then the next dose was the next lower dose; otherwise, the dose was increased. A stopping rule of at least five groups at the target dose, and at least four groups at that dose with more than 60% response, was developed to ensure that a selected dose would be statistically significantly (p=0.05) superior to placebo. Simulations indicated good characteristics in terms of control of Type I error, sufficient power, modest expected sample size and modest bias in estimation. The trial design is attractive for phase 2 clinical trials when the response is acute and simple (ideally binary), a placebo comparator is required, and patient accrual is relatively slow, allowing for the collection and processing of results as a basis for the adaptive assignment of patients to dose groups. The acute migraine trial based on this design was successful in both proof of concept and dose range selection.

  8. Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates

    PubMed Central

    Bartroff, Jay; Song, Jinlin

    2014-01-01

    This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
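
    For reference, the fixed-sample Holm step-down procedure that inspires the sequential version fits in a few lines; the sequential variant replaces each fixed-sample p-value with a stream of sequential test statistics, which is not reproduced here.

    ```python
    # Hedged sketch of Holm's (1979) fixed-sample step-down procedure.
    def holm_reject(p_values, alpha=0.05):
        m = len(p_values)
        order = sorted(range(m), key=lambda i: p_values[i])
        rejected = [False] * m
        for rank, i in enumerate(order):
            if p_values[i] <= alpha / (m - rank):  # alpha/m, alpha/(m-1), ...
                rejected[i] = True
            else:
                break                              # step-down stops at first failure
        return rejected

    print(holm_reject([0.001, 0.04, 0.03, 0.20]))  # -> [True, False, False, False]
    ```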

  9. Group Sequential Testing of the Predictive Accuracy of a Continuous Biomarker with Unknown Prevalence

    PubMed Central

    Koopmeiners, Joseph S.; Feng, Ziding

    2015-01-01

    Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value (NPV) curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence cannot be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated, rather than assumed known, in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curves. The small sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation, and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180

  10. Use of general purpose graphics processing units with MODFLOW

    USGS Publications Warehouse

    Hughes, Joseph D.; White, Jeremy T.

    2013-01-01

    To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
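
    The simplest preconditioner in that list is easy to show end to end. Below is a Jacobi-preconditioned conjugate gradient on a CSR matrix; the tridiagonal test system is a synthetic stand-in, not a MODFLOW grid.

    ```python
    # Hedged sketch: Jacobi-preconditioned CG with compressed sparse row storage.
    import numpy as np
    import scipy.sparse as sp

    n = 100
    A = sp.diags([-1, 2.5, -1], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)
    M_inv = 1.0 / A.diagonal()            # Jacobi preconditioner: inverse diagonal

    x, r = np.zeros(n), b.copy()
    z = M_inv * r
    p, rz = z.copy(), r @ z
    for _ in range(200):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < 1e-10:     # converged
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new

    print(np.linalg.norm(A @ x - b))      # residual of the solve
    ```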

  11. Changes in thermo-tolerance and survival under simulated gastrointestinal conditions of Salmonella Enteritidis PT4 and Salmonella Typhimurium PT4 in chicken breast meat after exposure to sequential stresses.

    PubMed

    Melo, Adma Nadja Ferreira de; Souza, Geany Targino de; Schaffner, Donald; Oliveira, Tereza C Moreira de; Maciel, Janeeyre Ferreira; Souza, Evandro Leite de; Magnani, Marciane

    2017-06-19

    This study assessed changes in thermo-tolerance and the capability to survive under simulated gastrointestinal conditions of Salmonella Enteritidis PT4 and Salmonella Typhimurium PT4 inoculated in chicken breast meat following exposure to stresses (cold, acid and osmotic) commonly imposed during food processing. The effects of the stress imposed by exposure to oregano (Origanum vulgare L.) essential oil (OVEO) on thermo-tolerance were also assessed. After exposure to cold stress (5°C for 5h) in chicken breast meat, the test strains were sequentially exposed to the different stressors (lactic acid, NaCl or OVEO) at sub-lethal amounts, which were defined considering previously determined minimum inhibitory concentrations, and finally to thermal treatment (55°C for 30min). Resistant cells from distinct sequential treatments were exposed to simulated gastrointestinal conditions. The exposure to cold stress did not result in increased tolerance to acid stress (lactic acid: 5 and 2.5μL/g) for either strain. Cells of S. Typhimurium PT4 and S. Enteritidis PT4 previously exposed to acid stress showed higher (p<0.05) tolerance to osmotic stress (NaCl: 75 or 37.5mg/g) compared to non-acid-exposed cells. Exposure to osmotic stress without previous exposure to acid stress caused a salt-concentration-dependent decrease in counts for both strains. Exposure to OVEO (1.25 and 0.62μL/g) decreased the acid and osmotic tolerance of both S. Enteritidis PT4 and S. Typhimurium PT4. Sequential exposure to acid and osmotic stress conditions after cold exposure increased (p<0.05) the thermo-tolerance of both strains. The cells that survived the sequential stress exposure (resistant) showed higher tolerance (p<0.05) to acidic conditions during continuous exposure (182min) to simulated gastrointestinal conditions. Resistant cells of S. Enteritidis PT4 and S. Typhimurium PT4 showed higher survival rates (p<0.05) than control cells at the end of the in vitro digestion. These results show that sequential exposure to multiple sub-lethal stresses may increase the thermo-tolerance and enhance the survival under gastrointestinal conditions of S. Enteritidis PT4 and S. Typhimurium PT4. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Architecture and design of a 500-MHz gallium-arsenide processing element for a parallel supercomputer

    NASA Technical Reports Server (NTRS)

    Fouts, Douglas J.; Butner, Steven E.

    1991-01-01

    The design of the processing element of GASP, a GaAs supercomputer with a 500-MHz instruction issue rate and 1-GHz subsystem clocks, is presented. The novel, functionally modular, block data flow architecture of GASP is described. The architecture and design of a GASP processing element is then presented. The processing element (PE) is implemented in a hybrid semiconductor module with 152 custom GaAs ICs of eight different types. The effects of the implementation technology on both the system-level architecture and the PE design are discussed. SPICE simulations indicate that parts of the PE are capable of being clocked at 1 GHz, while the rest of the PE uses a 500-MHz clock. The architecture utilizes data flow techniques at a program block level, which allows efficient execution of parallel programs while maintaining reasonably good performance on sequential programs. A simulation study of the architecture indicates that an instruction execution rate of over 30,000 MIPS can be attained with 65 PEs.

  13. Assessing carcinogenic risks associated with ingesting arsenic in farmed smeltfish (Ayu, Plecoglossus altivelis) in an arseniasis-endemic area of Taiwan.

    PubMed

    Lee, Jin-Jing; Jang, Cheng-Shin; Liang, Ching-Ping; Liu, Chen-Wuing

    2008-09-15

    This study spatially analyzed potential carcinogenic risks associated with ingesting arsenic (As) in aquacultural smeltfish (Plecoglossus altivelis) from the Lanyang Plain of northeastern Taiwan. Sequential indicator simulation (SIS) was adopted to reproduce As exposure distributions in groundwater based on their three-dimensional variability. A target cancer risk (TR) associated with ingesting As in aquacultural smeltfish was employed to evaluate the potential risk to human health. The probabilistic risk assessment, determined by Monte Carlo simulation and SIS, is used to properly propagate parameter uncertainty. Safe and hazardous aquacultural regions were mapped to elucidate the safety of groundwater use. The TRs determined from the risks at the 95th percentiles exceed one millionth, indicating that ingesting smeltfish farmed in the highly As-affected regions represents a potential cancer threat to human health. The 95th percentile of TRs is considered in formulating a strategy for the aquacultural use of groundwater in the preliminary stage.
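
    The TR calculation itself is a Monte Carlo propagation of a simple ratio. The sketch below uses the generic USEPA-style form TR = (EF x ED x FIR x C x CSF) / (BW x AT); every distribution and constant is an illustrative assumption, not the paper's input data.

    ```python
    # Hedged sketch of a probabilistic target cancer risk (TR) calculation.
    import numpy as np

    rng = np.random.default_rng(5)
    N = 100_000
    C   = rng.lognormal(np.log(0.5), 0.4, N)   # As in fish, mg/kg (assumed)
    FIR = rng.normal(30, 5, N).clip(1)         # fish ingestion rate, g/day (assumed)
    BW  = rng.normal(60, 10, N).clip(30)       # body weight, kg (assumed)
    EF, ED, AT, CSF = 365, 30, 365 * 70, 1.5   # d/yr, yr, d, slope factor (assumed)

    TR = EF * ED * (FIR / 1000) * C * CSF / (BW * AT)
    print(f"95th-percentile TR: {np.percentile(TR, 95):.2e} (benchmark 1e-6)")
    ```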

  14. A multivariate geostatistical methodology to delineate areas of potential interest for future sedimentary gold exploration.

    PubMed

    Goovaerts, P; Albuquerque, Teresa; Antunes, Margarida

    2016-11-01

    This paper describes a multivariate geostatistical methodology to delineate areas of potential interest for future sedimentary gold exploration, with an application to an abandoned sedimentary gold mining region in Portugal. The main challenge was the existence of only a dozen gold measurements confined to the grounds of the old gold mines, which precluded the application of traditional interpolation techniques, such as cokriging. The analysis could, however, capitalize on 376 stream sediment samples that were analyzed for twenty two elements. Gold (Au) was first predicted at all 376 locations using linear regression (R2 = 0.798) and four metals (Fe, As, Sn and W), which are known to be mostly associated with the local gold's paragenesis. One hundred realizations of the spatial distribution of gold content were generated using sequential indicator simulation and a soft indicator coding of regression estimates, to supplement the hard indicator coding of gold measurements. Each simulated map then underwent a local cluster analysis to identify significant aggregates of low or high values. The one hundred classified maps were processed to derive the most likely classification of each simulated node and the associated probability of occurrence. Examining the distribution of the hot-spots and cold-spots reveals a clear enrichment in Au along the Erges River downstream from the old sedimentary mineralization.
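
    The hard/soft indicator coding at the heart of this workflow is easy to illustrate: measured locations get 0/1 indicators at each cutoff, while regression estimates get probabilities. The cutoffs and uncertainty below are illustrative assumptions, not the paper's values.

    ```python
    # Hedged sketch of hard and soft indicator coding for indicator simulation.
    import numpy as np
    from scipy.stats import norm

    thresholds = np.array([0.1, 0.5, 1.0])   # Au cutoffs in illustrative units

    def hard_indicator(z):
        """Hard coding of a measurement: 1 where the value is at or below a cutoff."""
        return (z <= thresholds).astype(float)

    def soft_indicator(z_hat, sigma):
        """Soft coding of a regression estimate: Prob(Z <= cutoff), Gaussian model."""
        return norm.cdf((thresholds - z_hat) / sigma)

    print(hard_indicator(0.3))        # -> [0., 1., 1.]
    print(soft_indicator(0.3, 0.2))   # graded probabilities instead of 0/1
    ```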

  15. Hierarchical Chunking of Sequential Memory on Neuromorphic Architecture with Reduced Synaptic Plasticity

    PubMed Central

    Li, Guoqi; Deng, Lei; Wang, Dong; Wang, Wei; Zeng, Fei; Zhang, Ziyang; Li, Huanglong; Song, Sen; Pei, Jing; Shi, Luping

    2016-01-01

    Chunking refers to a phenomenon whereby individuals group items together when performing a memory task to improve the performance of sequential memory. In this work, we build a bio-plausible hierarchical chunking of sequential memory (HCSM) model to explain why such improvement happens. We address this issue by linking hierarchical chunking with synaptic plasticity and neuromorphic engineering. We uncover that a chunking mechanism reduces the requirements on synaptic plasticity, since it allows synapses with narrow dynamic range and low precision to be used for a memory task. We validate a hardware version of the model through simulation, based on measured memristor behavior with narrow dynamic range in neuromorphic circuits, which reveals how chunking works and what role it plays in encoding sequential memory. Our work deepens the understanding of sequential memory and enables incorporating it into investigations of brain-inspired computing on neuromorphic architectures. PMID:28066223

  16. Recent advances in lossless coding techniques

    NASA Astrophysics Data System (ADS)

    Yovanof, Gregory S.

    Current lossless techniques are reviewed with reference to both sequential data files and still images. Two major groups of sequential algorithms, dictionary and statistical techniques, are discussed. In particular, attention is given to Lempel-Ziv coding, Huffman coding, and arithmetic coding. The subject of lossless compression of imagery is briefly discussed. Finally, examples of practical implementations of lossless algorithms and some simulation results are given.
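
    Of the statistical techniques mentioned, Huffman coding is the most compact to demonstrate: repeatedly merge the two least-frequent subtrees, prefixing their codes with 0 and 1. A minimal sketch:

    ```python
    # Hedged sketch of Huffman code construction over symbol frequencies.
    import heapq
    from collections import Counter

    def huffman_codes(text: str) -> dict:
        """Build a prefix code by repeatedly merging the two rarest subtrees."""
        heap = [[freq, [sym, ""]] for sym, freq in Counter(text).items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            lo, hi = heapq.heappop(heap), heapq.heappop(heap)
            for pair in lo[1:]:
                pair[1] = "0" + pair[1]   # left branch
            for pair in hi[1:]:
                pair[1] = "1" + pair[1]   # right branch
            heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
        return dict(heap[0][1:])

    print(huffman_codes("abracadabra"))   # frequent symbols get shorter codes
    ```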

  17. Analyzing multicomponent receptive fields from neural responses to natural stimuli

    PubMed Central

    Rowekamp, Ryan; Sharpee, Tatyana O

    2011-01-01

    The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916

  18. Program For Parallel Discrete-Event Simulation

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.

    1991-01-01

    The user does not have to add any special logic to aid in synchronization. The Time Warp Operating System (TWOS) computer program is a special-purpose operating system designed to support parallel discrete-event simulation. It is a complete implementation of the Time Warp mechanism and supports only simulations and other computations designed for virtual time. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface-compatible with TWOS. TWOS and TWSIM are written in, and support simulations in, the C programming language.

  19. Dry minor mergers and size evolution of high-z compact massive early-type galaxies

    NASA Astrophysics Data System (ADS)

    Oogi, Taira; Habe, Asao

    2013-01-01

    Recent observations show evidence that high-z (z ˜ 2-3) early-type galaxies (ETGs) are more compact than those with comparable mass at z ˜ 0. Such size evolution is most likely explained by the `dry merger scenario'. However, previous studies based on this scenario cannot consistently explain the properties of both high-z compact massive ETGs and local ETGs. We investigate the effect of multiple sequential dry minor mergers on the size evolution of compact massive ETGs. From an analysis of the Millennium Simulation Data Base, we show that such minor (stellar mass ratio M2/M1 < 1/4) mergers are extremely common during hierarchical structure formation. We perform N-body simulations of sequential minor mergers with parabolic and head-on orbits, including a dark matter component and a stellar component. Typical mass ratios of these minor mergers are 1/20 < M2/M1 ≤ 1/10. We show that sequential minor mergers of compact satellite galaxies are the most efficient at promoting size growth and decreasing the velocity dispersion of compact massive ETGs in our simulations. The change of stellar size and density of the merger remnants is consistent with recent observations. Furthermore, we construct the merger histories of candidates for high-z compact massive ETGs using the Millennium Simulation Data Base and estimate the size growth of the galaxies through the dry minor merger scenario. We can reproduce the mean size growth factor between z = 2 and z = 0, assuming the most efficient size growth obtained during sequential minor mergers in our simulations. However, we note that our numerical result is only valid for merger histories with typical mass ratios between 1/20 and 1/10 with parabolic and head-on orbits and that our most efficient size-growth efficiency is likely an upper limit.
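
    A standard dissipationless-merger virial estimate (a textbook plausibility check, not a result taken from this paper) makes the efficiency ranking concrete. If a galaxy accretes systems totalling a mass fraction η = M2/M1 with squared-dispersion ratio ε = σ2²/σ1², energy conservation for parabolic orbits gives

    ```latex
    \frac{r_f}{r_i} = \frac{(1+\eta)^2}{1+\eta\epsilon},
    \qquad
    \frac{\sigma_f^2}{\sigma_i^2} = \frac{1+\eta\epsilon}{1+\eta}
    ```

    so accreting many low-dispersion (small ε) satellites grows the radius nearly quadratically with mass while lowering the dispersion, which is why sequential minor mergers are the most efficient channel in these simulations.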

  20. Kriging for Simulation Metamodeling: Experimental Design, Reduced Rank Kriging, and Omni-Rank Kriging

    NASA Astrophysics Data System (ADS)

    Hosking, Michael Robert

    This dissertation improves an analyst's use of simulation through better utilization of kriging metamodels. There are three main contributions. First, an analysis is performed of what comprises good experimental designs for practical (non-toy) problems when using a kriging metamodel. Second is an explanation and demonstration of how reduced rank decompositions can improve the performance of kriging, now referred to as reduced rank kriging. Third is the development of an extension of reduced rank kriging which solves an open question regarding the usage of reduced rank kriging in practice. This extension is called omni-rank kriging. Finally, these results are demonstrated on two case studies. The first contribution focuses on experimental design. Sequential designs are generally known to be more efficient than "one shot" designs. However, sequential designs require some sort of pilot design on which the sequential stage can be based. We seek to find good initial designs for these pilot studies, as well as designs which will be effective if there is no following sequential stage. We test a wide variety of designs over a small set of test-bed problems. Our findings indicate that analysts should take advantage of any prior information they have about their problem's shape and/or their goals in metamodeling. In the event of a total lack of information, we find that Latin hypercube designs are robust default choices. Our work is most distinguished by its attention to higher levels of dimensionality. The second contribution introduces and explains an alternative method for kriging when there is noise in the data, which we call reduced rank kriging. Reduced rank kriging is based on using a reduced rank decomposition which artificially smoothes the kriging weights, similar to a nugget effect. Our primary focus is showing how the reduced rank decomposition propagates through kriging empirically. In addition, we show further evidence for our explanation through tests of reduced rank kriging's performance over different situations. In total, reduced rank kriging is a useful tool for simulation metamodeling. For the third contribution we answer the question of how to find the best rank for reduced rank kriging. We do this by creating an alternative method which does not need to search for a particular rank. Instead it uses all potential ranks; we call this approach omni-rank kriging. This modification realizes the potential gains from reduced rank kriging and provides a workable methodology for simulation metamodeling. Finally, we demonstrate the use and value of these developments on two case studies, a clinic operation problem and a location problem. These cases validate the value of this research. Simulation metamodeling always attempts to extract maximum information from limited data. Each of these contributions allows analysts to make better use of their constrained computational budgets.
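
    The smoothing role of the nugget that reduced rank kriging mimics can be seen in a few lines of simple kriging; the covariance model, nugget value, and test function below are illustrative assumptions, and the reduced rank decomposition itself is not reproduced.

    ```python
    # Hedged sketch: simple kriging with a Gaussian covariance and a nugget term.
    import numpy as np

    rng = np.random.default_rng(6)
    X = rng.uniform(0, 10, 15)                  # design points
    y = np.sin(X) + rng.normal(0, 0.1, 15)      # noisy simulation responses

    def k(a, b, ell=1.0):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

    nugget = 0.1 ** 2                           # regularizes/smooths the weights
    K = k(X, X) + nugget * np.eye(len(X))
    x_star = np.linspace(0, 10, 5)
    w = np.linalg.solve(K, k(X, x_star))        # kriging weights
    print(w.T @ y)                              # predictions at x_star
    ```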

  1. An Investigation of University Students' Collaborative Inquiry Learning Behaviors in an Augmented Reality Simulation and a Traditional Simulation

    ERIC Educational Resources Information Center

    Wang, Hung-Yuan; Duh, Henry Been-Lirn; Li, Nai; Lin, Tzung-Jin; Tsai, Chin-Chung

    2014-01-01

    The purpose of this study is to investigate and compare students' collaborative inquiry learning behaviors and their behavior patterns in an augmented reality (AR) simulation system and a traditional 2D simulation system. Their inquiry and discussion processes were analyzed by content analysis and lag sequential analysis (LSA). Forty…

  2. An Extension of a Parallel-Distributed Processing Framework of Reading Aloud in Japanese: Human Nonword Reading Accuracy Does Not Require a Sequential Mechanism

    ERIC Educational Resources Information Center

    Ikeda, Kenji; Ueno, Taiji; Ito, Yuichi; Kitagami, Shinji; Kawaguchi, Jun

    2017-01-01

    Humans can pronounce a nonword (e.g., rint). Some researchers have interpreted this behavior as requiring a sequential mechanism by which a grapheme-phoneme correspondence rule is applied to each grapheme in turn. However, several parallel-distributed processing (PDP) models in English have simulated human nonword reading accuracy without a…

  3. Sprocket- Chain Simulation: Modelling and Simulation of a Multi Physics problem by sequentially coupling MotionSolve and nanoFluidX

    NASA Astrophysics Data System (ADS)

    Jayanthi, Aditya; Coker, Christopher

    2016-11-01

    In the last decade, CFD simulations have transitioned from the stage where they are used to validate final designs to mainstream, simulation-driven product development. However, there are still niche application areas, such as oiling simulations, where traditional CFD simulation times are prohibitive for use in product development, forcing reliance on expensive experimental methods. In this paper a unique example of sprocket-chain simulation will be presented using nanoFluidX, a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The gridless nature of the SPH method has inherent advantages in applications with complex geometry, which pose a severe challenge to classical finite-volume CFD methods due to complex moving geometries, moving meshes and high resolution requirements leading to long simulation times. Simulation times using nanoFluidX can be reduced from weeks to days, allowing the flexibility to run more simulations and to use the method in mainstream product development. The example problem under consideration is a classical multiphysics problem, and a sequentially coupled solution of MotionSolve and nanoFluidX will be presented. This abstract replaces DFD16-2016-000045.
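
    Only the data-exchange pattern of such a sequential (staggered) coupling is sketched below; the two step functions are hypothetical stand-ins, not the MotionSolve or nanoFluidX APIs. Each solver advances one step using the other's most recent output.

        def motion_step(x, v, fluid_force, dt):
            # Stand-in for the multibody solver: advance a unit-mass body
            # using the fluid force from the previous fluid step.
            return x + v * dt, v + fluid_force * dt

        def fluid_step(v_body):
            # Stand-in for the SPH solver: return a drag force computed
            # from the body kinematics it just received.
            return -0.8 * v_body

        x, v, force, dt = 0.0, 1.0, 0.0, 0.01
        for step in range(1000):              # staggered exchange, once per step
            x, v = motion_step(x, v, force, dt)
            force = fluid_step(v)
        print(x, v)                           # body coasts to rest under fluid drag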

  4. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
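
    The loop below sketches the multi-point idea in one dimension, assuming a plain Gaussian-process surrogate and using posterior variance to pick a batch of q = 3 infill points per stage; the paper's objective-oriented criterion, which also weighs the optimization objective and design uncertainty, is more elaborate. All names and parameters are illustrative.

        import numpy as np

        def gp_fit_predict(X, y, Xc, length=0.2, noise=1e-6):
            # Minimal zero-mean GP: posterior mean and variance at candidates Xc.
            k = lambda a, b: np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length ** 2))
            L = np.linalg.cholesky(k(X, X) + noise * np.eye(len(X)))
            alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
            Kc = k(Xc, X)
            v = np.linalg.solve(L, Kc.T)
            return Kc @ alpha, 1.0 - (v ** 2).sum(0)

        f = lambda x: np.sin(8 * x) + 0.5 * x        # expensive simulation stand-in
        X = np.array([0.1, 0.5, 0.9]); y = f(X)      # pilot design
        cand = np.linspace(0.0, 1.0, 201)
        for stage in range(5):                       # sequential sampling stages
            mean, var = gp_fit_predict(X, y, cand)
            batch = cand[np.argsort(var)[-3:]]       # q = 3 points per iteration
            X = np.concatenate([X, batch]); y = np.concatenate([y, f(batch)])
        print(len(X), X[np.argmin(y)])               # design size and best point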

  5. Memetic computing through bio-inspired heuristics integration with sequential quadratic programming for nonlinear systems arising in different physical models.

    PubMed

    Raja, Muhammad Asif Zahoor; Kiani, Adiqa Kausar; Shehzad, Azam; Zameer, Aneela

    2016-01-01

    In this study, bio-inspired computing is exploited for solving systems of nonlinear equations using variants of genetic algorithms (GAs) as a tool for global search, hybridized with sequential quadratic programming (SQP) for efficient local search. The fitness function is constructed by defining the error function for the system of nonlinear equations in the mean-square sense. The design parameters of the mathematical models are trained by exploiting the competency of GAs, and refinement is carried out by the viable SQP algorithm. Twelve versions of the memetic approach GA-SQP are designed by taking a different set of reproduction routines in the optimization process. Performance of the proposed variants is evaluated on six numerical problems comprising systems of nonlinear equations arising in the interval arithmetic benchmark model, kinematics, neurophysiology, combustion and chemical equilibrium. Comparative studies of the proposed results in terms of accuracy, convergence and complexity are performed with the help of statistical performance indices to establish the worth of the schemes. Accuracy and convergence of the memetic computing GA-SQP are found to be better in each case of the simulation study, and the effectiveness of the scheme is further established through results of statistics based on different performance indices for accuracy and complexity.
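
    The global-search-plus-SQP pattern can be reproduced with off-the-shelf tools, sketched below on a toy two-equation system. SciPy's differential evolution (a population-based relative of the GA variants used in the paper, swapped in here for brevity) handles the global stage, and SLSQP performs the sequential-quadratic-programming refinement; the fitness is the mean-square residual, as in the paper.

        import numpy as np
        from scipy.optimize import differential_evolution, minimize

        def F(x):
            # Toy nonlinear system F(x) = 0 standing in for the benchmark models.
            return np.array([x[0] ** 2 + x[1] ** 2 - 4.0,   # circle
                             np.exp(x[0]) + x[1] - 1.0])    # exponential curve

        fitness = lambda x: np.mean(F(x) ** 2)      # mean-square-error fitness

        bounds = [(-3.0, 3.0), (-3.0, 3.0)]
        coarse = differential_evolution(fitness, bounds, seed=1)   # global stage
        refined = minimize(fitness, coarse.x, method="SLSQP",      # SQP refinement
                           bounds=bounds)
        print(refined.x, fitness(refined.x))        # residual should be near zero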

  6. Time scale of random sequential adsorption.

    PubMed

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
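
    A bare-bones version of the RSA ingredient (ii) looks as follows: one placement attempt per simulation step, rejected if the new disk overlaps any adsorbed one. Linking each step to physical time through the bulk diffusion, as the paper does, is omitted, and the parameters are illustrative.

        import numpy as np

        def rsa_disks(radius=0.02, attempts=20000, seed=0):
            # Random sequential adsorption of equal disks on the unit square:
            # one random attempt per time step, rejected on overlap.
            rng = np.random.default_rng(seed)
            placed = np.empty((0, 2))
            for t in range(attempts):
                p = rng.uniform(radius, 1 - radius, size=2)
                if placed.size == 0 or np.all(
                        np.linalg.norm(placed - p, axis=1) >= 2 * radius):
                    placed = np.vstack([placed, p])
            return placed

        disks = rsa_disks()
        print(len(disks), len(disks) * np.pi * 0.02 ** 2)  # count and coverage;
        # coverage creeps toward the RSA jamming limit (about 0.547 for disks)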

  7. Understanding and simulating the material behavior during multi-particle irradiations

    PubMed Central

    Mir, Anamul H.; Toulemonde, M.; Jegou, C.; Miro, S.; Serruys, Y.; Bouffard, S.; Peuget, S.

    2016-01-01

    A number of studies have suggested that the irradiation behavior and damage processes occurring during sequential and simultaneous particle irradiations can differ significantly. Currently, there is no definite answer as to why and when such differences are seen. Additionally, conventional multi-particle irradiation facilities cannot correctly reproduce the complex irradiation scenarios experienced in a number of environments like space and nuclear reactors. Therefore, a better understanding of multi-particle irradiation problems and possible alternatives are needed. This study shows ionization-induced thermal spikes and defect recovery during sequential and simultaneous ion irradiation of amorphous silica. The simultaneous irradiation scenario is shown to be equivalent to multiple small sequential irradiation scenarios containing latent damage formation and recovery mechanisms. The results highlight the absence of any new damage mechanism and time-space correlation between various damage events during simultaneous irradiation of amorphous silica. This offers a new and convenient way to simulate and understand complex multi-particle irradiation problems. PMID:27466040

  8. Multi-species attributes as the condition for adaptive sampling of rare species using two-stage sequential sampling with an auxiliary variable

    USGS Publications Warehouse

    Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.

    2011-01-01

    Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling, but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV compared to SRS, and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend simulations tailored to the application of interest as highly useful for evaluating designs in preparation for sampling rare and clustered populations.

  9. Aerosol specification in single-column Community Atmosphere Model version 5

    DOE PAGES

    Lebassi-Habtezion, B.; Caldwell, P. M.

    2015-03-27

    Single-column model (SCM) capability is an important tool for general circulation model development. In this study, the SCM mode of version 5 of the Community Atmosphere Model (CAM5) is shown to handle aerosol initialization and advection improperly, resulting in aerosol, cloud-droplet, and ice crystal concentrations which are typically much lower than observed or simulated by CAM5 in global mode. This deficiency has a major impact on stratiform cloud simulations but has little impact on convective case studies because aerosol is currently not used by CAM5 convective schemes and convective cases are typically longer in duration (so initialization is less important). By imposing fixed aerosol or cloud-droplet and crystal number concentrations, the aerosol issues described above can be avoided. Sensitivity studies using these idealizations suggest that the Meyers et al. (1992) ice nucleation scheme prevents mixed-phase cloud from existing by producing too many ice crystals. Microphysics is shown to strongly deplete cloud water in stratiform cases, indicating problems with sequential splitting in CAM5 and the need for careful interpretation of output from sequentially split climate models. Droplet concentration in the general circulation model (GCM) version of CAM5 is also shown to be far too low (~25 cm−3) at the southern Great Plains (SGP) Atmospheric Radiation Measurement (ARM) site.

  10. Solar wind interaction with Venus and Mars in a parallel hybrid code

    NASA Astrophysics Data System (ADS)

    Jarvinen, Riku; Sandroos, Arto

    2013-04-01

    We discuss the development and applications of a new parallel hybrid simulation, where ions are treated as particles and electrons as a charge-neutralizing fluid, for the interaction between the solar wind and Venus and Mars. The new simulation code under construction is based on the algorithm of the sequential global planetary hybrid model developed at the Finnish Meteorological Institute (FMI) and on the Corsair parallel simulation platform, also developed at the FMI. The FMI's sequential hybrid model has been used for studies of plasma interactions of several unmagnetized and weakly magnetized celestial bodies for more than a decade. In particular, the model has been used to interpret in situ particle and magnetic field observations from the plasma environments of Mars, Venus and Titan. Further, Corsair is an open-source MPI (Message Passing Interface) particle and mesh simulation platform, mainly aimed at simulations of diffusive shock acceleration in the solar corona and interplanetary space, but now also being extended for global planetary hybrid simulations. In this presentation we discuss challenges and strategies of parallelizing a legacy simulation code, as well as possible applications and prospects of a scalable parallel hybrid model for the solar wind interactions of Venus and Mars.

  11. Analyzing Communication Architectures Using Commercial Off-The-Shelf (COTS) Modeling and Simulation Tools

    DTIC Science & Technology

    1998-06-01

    [4] By 2010, we should be able to change how we conduct the most intense joint operations. Instead of relying on massed forces and sequential … not independent, sequential steps. … Data probes to support the analysis phase were required to complete the logical models. This generated a need … (Networks) Identify Granularity (System Level) - Establish Physical Bounds or Limits to Systems • Determine System Test Configuration and Lineup

  12. Spatial interpolation of forest conditions using co-conditional geostatistical simulation

    Treesearch

    H. Todd Mowrer

    2000-01-01

    In recent work the author used the geostatistical Monte Carlo technique of sequential Gaussian simulation (s.G.s.) to investigate uncertainty in a GIS analysis of potential old-growth forest areas. The current study compares this earlier technique to that of co-conditional simulation, wherein the spatial cross-correlations between variables are included. As in the...

  13. Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damiani, Rick; Wendt, Fabian; Musial, Walter

    The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.

  14. Comparative study of lesions created by high-intensity focused ultrasound using sequential discrete and continuous scanning strategies.

    PubMed

    Fan, Tingbo; Liu, Zhenbo; Zhang, Dong; Tang, Mengxing

    2013-03-01

    Lesion formation and temperature distribution induced by high-intensity focused ultrasound (HIFU) were investigated both numerically and experimentally via two energy-delivering strategies, i.e., sequential discrete and continuous scanning modes. Simulations were based on the combination of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and the bioheat equation. Measurements were performed on tissue-mimicking phantoms sonicated by a 1.12-MHz single-element focused transducer working at an acoustic power of 75 W. Both the simulated and experimental results show that, in the sequential discrete mode, obvious saw-tooth-like contours are observed in the peak temperature distribution and the lesion boundaries as the interval space between two adjacent exposure points increases. In the continuous scanning mode, more uniform peak temperature distributions and lesion boundaries are produced, and the peak temperature values decrease significantly with increasing scanning speed. In addition, compared to the sequential discrete mode, the continuous scanning mode can achieve higher treatment efficiency (lesion area generated per second) with a lower peak temperature. The present studies suggest that the peak temperature and tissue lesion resulting from HIFU exposure can be controlled by adjusting the transducer scanning speed, which is important for improving HIFU treatment efficiency.

  15. Desorption of water from hydrophilic MCM-41 mesopores: positron annihilation, FTIR and MD simulation studies.

    PubMed

    Maheshwari, Priya; Dutta, D; Muthulakshmi, T; Chakraborty, B; Raje, N; Pujari, P K

    2017-02-08

    The desorption mechanism of water from the hydrophilic mesopores of MCM-41 was studied using positron annihilation lifetime spectroscopy (PALS) and attenuated total reflection Fourier transform infrared spectroscopy, supplemented with molecular dynamics (MD) simulation. PALS results indicated that water molecules do not undergo sequential evaporation in a simple layer-by-layer manner during desorption from MCM-41 mesopores. The results suggested that the water column inside the uniform cylindrical mesopore becomes stretched during desorption and induces cavitation (as seen in ink-bottle type pores) inside it, keeping a dense water layer at the hydrophilic pore wall, as well as a water plug at both open ends of the cylindrical pore, until the water is reduced to a certain volume fraction, at which point the pore catastrophically empties. Before emptying, the water molecules form clusters inside the mesopores. The formation of molecular clusters below a certain level of hydration was corroborated by the MD simulation study. The results are discussed.

  16. An energy function for dynamics simulations of polypeptides in torsion angle space

    NASA Astrophysics Data System (ADS)

    Sartori, F.; Melchers, B.; Böttcher, H.; Knapp, E. W.

    1998-05-01

    Conventional simulation techniques to model the dynamics of proteins in atomic detail are restricted to short time scales. A simplified molecular description, in which high-frequency motions with small amplitudes are ignored, can overcome this problem. In this protein model only the backbone dihedrals φ and ψ and the χi of the side chains serve as degrees of freedom. Bond angles and lengths are fixed at ideal geometry values provided by the standard molecular dynamics (MD) energy function CHARMM. In this work a Monte Carlo (MC) algorithm is used whose elementary moves employ cooperative rotations in a small window of consecutive amide planes, leaving the polypeptide conformation outside of this window invariant. A single window MC move generates only local conformational changes, but the application of many such moves at different parts of the polypeptide backbone leads to global conformational changes. To account for the lack of flexibility in the protein model employed, the energy function used to evaluate conformational energies is split into sequentially neighbored and sequentially distant contributions. The sequentially neighbored part is represented by an effective (φ,ψ)-torsion potential. It is derived from MD simulations of a flexible model dipeptide using a conventional MD energy function. To avoid exaggeration of hydrogen bonding strengths, the electrostatic interactions involving hydrogen atoms are scaled down at short distances. With these adjustments of the energy function, the rigid polypeptide model exhibits the same equilibrium distributions as obtained by conventional MD simulation with a fully flexible molecular model. The same temperature dependence of the stability and build-up of α helices of 18-alanine as found in MD simulations is also observed using the adapted energy function for MC simulations. Analyses of transition frequencies demonstrate that dynamical aspects of MD trajectories are also faithfully reproduced. Finally, it is demonstrated that even for high-temperature unfolded polypeptides the MC simulation is more efficient by a factor of 10 than conventional MD simulation.

  17. Novel high-fidelity realistic explosion damage simulation for urban environments

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya

    2010-04-01

    Realistic building damage simulation has a significant impact on modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these simulation systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and surrounding entities. However, none of the existing building damage simulation systems realizes the criteria of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity and runtime-efficient explosion simulation system that realistically simulates destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also takes account of rubble pile formation and applies a generic and scalable multi-component-based object representation to describe scene entities, together with a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system has the capability to realistically simulate rubble generation, rubble flyout and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles and pedestrians in clusters of sequential and parallel damage events.

  18. MaMiCo: Software design for parallel molecular-continuum flow simulations

    NASA Astrophysics Data System (ADS)

    Neumann, Philipp; Flohr, Hanno; Arora, Rahul; Jarmatz, Piet; Tchipev, Nikola; Bungartz, Hans-Joachim

    2016-03-01

    The macro-micro-coupling tool (MaMiCo) was developed to ease the development of and modularize molecular-continuum simulations, retaining sequential and parallel performance. We demonstrate the functionality and performance of MaMiCo by coupling the spatially adaptive Lattice Boltzmann framework waLBerla with four molecular dynamics (MD) codes: the light-weight Lennard-Jones-based implementation SimpleMD, the node-level optimized software ls1 mardyn, and the community codes ESPResSo and LAMMPS. We detail interface implementations to connect each solver with MaMiCo. The coupling for each waLBerla-MD setup is validated in three-dimensional channel flow simulations which are solved by means of a state-based coupling method. We provide sequential and strong scaling measurements for the four molecular-continuum simulations. The overhead of MaMiCo is found to come at 10%-20% of the total (MD) runtime. The measurements further show that scalability of the hybrid simulations is reached on up to 500 Intel SandyBridge, and more than 1000 AMD Bulldozer compute cores.

  19. A multivariate geostatistical methodology to delineate areas of potential interest for future sedimentary gold exploration

    PubMed Central

    Goovaerts, P.; Albuquerque, Teresa; Antunes, Margarida

    2015-01-01

    This paper describes a multivariate geostatistical methodology to delineate areas of potential interest for future sedimentary gold exploration, with an application to an abandoned sedimentary gold mining region in Portugal. The main challenge was the existence of only a dozen gold measurements confined to the grounds of the old gold mines, which precluded the application of traditional interpolation techniques, such as cokriging. The analysis could, however, capitalize on 376 stream sediment samples that were analyzed for twenty two elements. Gold (Au) was first predicted at all 376 locations using linear regression (R2=0.798) and four metals (Fe, As, Sn and W), which are known to be mostly associated with the local gold’s paragenesis. One hundred realizations of the spatial distribution of gold content were generated using sequential indicator simulation and a soft indicator coding of regression estimates, to supplement the hard indicator coding of gold measurements. Each simulated map then underwent a local cluster analysis to identify significant aggregates of low or high values. The one hundred classified maps were processed to derive the most likely classification of each simulated node and the associated probability of occurrence. Examining the distribution of the hot-spots and cold-spots reveals a clear enrichment in Au along the Erges River downstream from the old sedimentary mineralization. PMID:27777638
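
    The indicator coding step described above can be written down in a few lines. In this sketch, a measured gold value gets hard 0/1 indicators at each threshold, while a regression estimate gets soft indicators, here taken as Gaussian probabilities around the estimate; the thresholds and the error model are illustrative assumptions, not the paper's.

        import numpy as np
        from math import erf, sqrt

        thresholds = np.array([0.5, 1.0, 2.0])     # illustrative Au cutoffs

        def hard_indicators(z):
            # Hard coding of a measured value: I_k = 1 if z <= threshold_k.
            return (z <= thresholds).astype(float)

        def soft_indicators(z_hat, sigma):
            # Soft coding of a regression estimate: probability that the true
            # value lies below each threshold, under a Gaussian error model.
            return np.array([0.5 * (1 + erf((t - z_hat) / (sigma * sqrt(2))))
                             for t in thresholds])

        print(hard_indicators(0.8))        # measured sample   -> [0. 1. 1.]
        print(soft_indicators(0.8, 0.4))   # regression value  -> probabilities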

  20. Dispersion and alignment of nanorods in cylindrical block copolymer thin films.

    PubMed

    Rasin, Boris; Chao, Huikuan; Jiang, Guoqian; Wang, Dongliang; Riggleman, Robert A; Composto, Russell J

    2016-02-21

    Although significant progress has been made in controlling the dispersion of spherical nanoparticles in block copolymer thin films, our ability to disperse and control the assembly of anisotropic nanoparticles into well-defined structures is lacking in comparison. Here we use a combination of experiments and field theoretic simulations to examine the assembly of gold nanorods (AuNRs) in a block copolymer. Experimentally, poly(2-vinylpyridine)-grafted AuNRs (P2VP-AuNRs) are incorporated into poly(styrene)-b-poly(2-vinylpyridine) (PS-b-P2VP) thin films with a vertical cylinder morphology. At sufficiently low concentrations, the AuNRs disperse in the block copolymer thin film. For these dispersed AuNR systems, atomic force microscopy combined with sequential ultraviolet ozone etching indicates that the P2VP-AuNRs segregate to the base of the P2VP cylinders. Furthermore, top-down transmission electron microscopy imaging shows that the P2VP-AuNRs mainly lie parallel to the substrate. Our field theoretic simulations indicate that the NRs are strongly attracted to the cylinder base where they can relieve the local stretching of the minority block of the copolymer. These simulations also indicate conditions that will drive AuNRs to adopt a vertical orientation, namely by increasing nanorod length and/or reducing the wetting of the short block towards the substrate.

  1. Spacecraft Data Simulator for the test of level zero processing systems

    NASA Technical Reports Server (NTRS)

    Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem

    1994-01-01

    The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS is capable of generating large test data sets up to 5 Gigabytes and outputting serial test data at rates up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data System (CCSDS) Version 1 & 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial to test LZP systems. This paper describes the system architecture, hardware and software designs, and test data designs. Examples of test data designs are included to illustrate the application of the SDS.

  2. A parallel computational model for GATE simulations.

    PubMed

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently to Positron Emission Tomography (PET) experiments, because it requires centralized coincidence processing and incurs large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing while maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. A sequential coalescent algorithm for chromosomal inversions

    PubMed Central

    Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M

    2013-01-01

    Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent (SMC), an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894

  4. Simulations and experiments of aperiodic and multiplexed gratings in volume holographic imaging systems

    PubMed Central

    Luo, Yuan; Castro, Jose; Barton, Jennifer K.; Kostuk, Raymond K.; Barbastathis, George

    2010-01-01

    A new methodology describing the effects of aperiodic and multiplexed gratings in volume holographic imaging systems (VHIS) is presented. The aperiodic gratings are treated as an ensemble of localized planar gratings using coupled wave methods in conjunction with sequential and non-sequential ray-tracing techniques to accurately predict volumetric diffraction effects in VHIS. Our approach can be applied to aperiodic and multiplexed gratings and used to theoretically predict the performance of multiplexed volume holographic gratings within a volume hologram for VHIS. We present simulation and experimental results for aperiodic and multiplexed imaging gratings formed in PQ-PMMA at 488 nm and probed with a spherical wave at 633 nm. Simulation results based on our approach, which can be easily implemented in ray-tracing packages such as Zemax®, are confirmed by experiments and demonstrate the consistency and usefulness of the proposed models. PMID:20940823

  5. A geochemical transport model for redox-controlled movement of mineral fronts in groundwater flow systems: A case of nitrate removal by oxidation of pyrite

    USGS Publications Warehouse

    Engesgaard, Peter; Kipp, Kenneth L.

    1992-01-01

    A one-dimensional prototype geochemical transport model was developed in order to handle simultaneous precipitation-dissolution and oxidation-reduction reactions governed by chemical equilibria. Total aqueous component concentrations are the primary dependent variables, and a sequential iterative approach is used for the calculation. The model was verified by analytical and numerical comparisons and is able to simulate sharp mineral fronts. At a site in Denmark, denitrification has been observed by oxidation of pyrite. Simulation of nitrate movement at this site showed a redox front movement rate of 0.58 m yr−1, which agreed with calculations of others. It appears that the sequential iterative approach is the most practical for extension to multidimensional simulation and for handling large numbers of components and reactions. However, slow convergence may limit the size of redox systems that can be handled.
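
    The structure of a sequential step is easy to show. The sketch below advects one component and then applies an idempotent equilibrium "reaction" (a solubility cap standing in for the paper's precipitation-dissolution and redox equilibria; the precipitated mineral inventory is not tracked). The paper's sequential iterative approach additionally repeats the transport-chemistry pair within each time step until the total aqueous concentrations converge, which is omitted here.

        import numpy as np

        nx, dt, dx, vel = 100, 0.5, 1.0, 1.0
        c = np.zeros(nx)                             # aqueous concentration

        def transport(c):
            cn = c.copy()
            cn[1:] -= vel * dt / dx * (c[1:] - c[:-1])   # explicit upwind advection
            cn[0] = 1.0                                  # fixed inflow concentration
            return cn

        def chemistry(c, c_eq=0.6):
            # Equilibrium precipitation: cap the aqueous concentration at
            # an assumed solubility limit c_eq.
            return np.minimum(c, c_eq)

        for step in range(150):
            c = chemistry(transport(c))              # sequential split per step
        print(np.round(c[:12], 3))                   # sharp front held at c_eq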

  6. Use of Computer Simulation in Designing and Evaluating a Proposed Rough Mill for Furniture Interior Parts

    Treesearch

    Philip A. Araman

    1977-01-01

    The design of a rough mill for the production of interior furniture parts is used to illustrate a simulation technique for analyzing and evaluating established and proposed sequential production systems. Distributions representing the real-world random characteristics of lumber, equipment feed speeds and delay times are programmed into the simulation. An example is...

  7. Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity

    DOE PAGES

    Gordiz, Kiarash; Singh, David J.; Henry, Asegun

    2015-01-29

    In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.
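
    The trade-off can be demonstrated on a cheap surrogate: below, a Green-Kubo-style transport coefficient is estimated once from a single long trajectory (time sampling, inherently sequential) and once from many short independent trajectories (ensemble sampling, embarrassingly parallel), at equal total cost. The Ornstein-Uhlenbeck process stands in for an MD observable; this is not the argon or silicon calculation of the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        def trajectory(n, dt=0.01, gamma=1.0, sigma=1.0):
            # Ornstein-Uhlenbeck "velocity", started from its equilibrium state.
            v = np.empty(n)
            v[0] = sigma / np.sqrt(2 * gamma) * rng.standard_normal()
            for i in range(1, n):
                v[i] = v[i-1] - gamma * v[i-1] * dt \
                       + sigma * np.sqrt(dt) * rng.standard_normal()
            return v

        def gk_estimate(v, nlag=200, dt=0.01):
            # Green-Kubo-style integral of the velocity autocorrelation function.
            acf = np.array([np.mean(v[:len(v)-k] * v[k:]) for k in range(nlag)])
            return acf.sum() * dt

        long_run = gk_estimate(trajectory(200_000))            # time sampling
        ensemble = np.mean([gk_estimate(trajectory(10_000))    # ensemble sampling,
                            for _ in range(20)])               # parallelizable
        print(long_run, ensemble)   # agree within noise (≈0.43 at this lag cutoff)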

  8. Parallel Multi-cycle LES of an Optical Pent-roof DISI Engine Under Motored Operating Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Dam, Noah; Sjöberg, Magnus; Zeng, Wei

    The use of Large-eddy Simulations (LES) has increased due to their ability to resolve the turbulent fluctuations of engine flows and capture the resulting cycle-to-cycle variability. One drawback of LES, however, is the requirement to run multiple engine cycles to obtain the necessary cycle statistics for full validation. The standard method to obtain the cycles by running a single simulation through many engine cycles sequentially can take a long time to complete. Recently, a new strategy has been proposed by our research group to reduce the amount of time necessary to simulate the many engine cycles by running individual engine cycle simulations in parallel. With modern large computing systems this has the potential to reduce the amount of time necessary for a full set of simulated engine cycles to finish by up to an order of magnitude. In this paper, the Parallel Perturbation Methodology (PPM) is used to simulate up to 35 engine cycles of an optically accessible, pent-roof Direct-Injection Spark-Ignition (DISI) engine at two different motored engine operating conditions, one throttled and one un-throttled. Comparisons are made against corresponding sequential-cycle simulations to verify the similarity of results using either methodology. Mean results from the PPM approach are very similar to sequential-cycle results with less than 0.5% difference in pressure and a magnitude structure index (MSI) of 0.95. Differences in cycle-to-cycle variability (CCV) predictions are larger, but close to the statistical uncertainty in the measurement for the number of cycles simulated. PPM LES results were also compared against experimental data. Mean quantities such as pressure or mean velocities were typically matched to within 5-10%. Pressure CCVs were under-predicted, mostly due to the lack of any perturbations in the pressure boundary conditions between cycles. Velocity CCVs for the simulations had the same average magnitude as experiments, but the experimental data showed greater spatial variation in the root-mean-square (RMS). Conversely, circular standard deviation results showed greater repeatability of the flow directionality and swirl vortex positioning than the simulations.

  9. Sequential slip transfer of mixed-character dislocations across Σ3 coherent twin boundary in FCC metals: a concurrent atomistic-continuum study

    DOE PAGES

    Xu, Shuozhi; Xiong, Liming; Chen, Youping; ...

    2016-01-29

    Sequential slip transfer across grain boundaries (GB) has an important role in size-dependent propagation of plastic deformation in polycrystalline metals. For example, the Hall–Petch effect, which states that a smaller average grain size results in a higher yield stress, can be rationalised in terms of dislocation pile-ups against GBs. In spite of extensive studies in modelling individual phases and grains using atomistic simulations, well-accepted criteria of slip transfer across GBs are still lacking, as well as models of predicting irreversible GB structure evolution. Slip transfer is inherently multiscale since both the atomic structure of the boundary and the long-range fields of the dislocation pile-up come into play. In this work, concurrent atomistic-continuum simulations are performed to study sequential slip transfer of a series of curved dislocations from a given pile-up on Σ3 coherent twin boundary (CTB) in Cu and Al, with dominant leading screw character at the site of interaction. A Frank-Read source is employed to nucleate dislocations continuously. It is found that subject to a shear stress of 1.2 GPa, screw dislocations transfer into the twinned grain in Cu, but glide on the twin boundary plane in Al. Moreover, four dislocation/CTB interaction modes are identified in Al, which are affected by (1) applied shear stress, (2) dislocation line length, and (3) dislocation line curvature. Our results elucidate the discrepancies between atomistic simulations and experimental observations of dislocation-GB reactions and highlight the importance of directly modeling sequential dislocation slip transfer reactions using fully 3D models.

  10. Static reservoir modeling of the Bahariya reservoirs for the oilfields development in South Umbarka area, Western Desert, Egypt

    NASA Astrophysics Data System (ADS)

    Abdel-Fattah, Mohamed I.; Metwalli, Farouk I.; Mesilhi, El Sayed I.

    2018-02-01

    3D static reservoir modeling of the Bahariya reservoirs using seismic and well data can be a relevant part of an overall strategy for the oilfields development in South Umbarka area (Western Desert, Egypt). The seismic data are used to build the 3D grid, including fault sticks for the fault modeling, and horizon interpretations and surfaces for horizon modeling. The 3D grid is the digital representation of the structural geology of the Bahariya Formation. Once a reasonably accurate representation is obtained, we fill the 3D grid with facies and petrophysical properties and simulate it, to gain a more precise understanding of the reservoir property behavior. Sequential Indicator Simulation (SIS) and Sequential Gaussian Simulation (SGS) are the stochastic algorithms used to spatially distribute discrete reservoir properties (facies) and continuous reservoir properties (shale volume, porosity, and water saturation), respectively, within the created 3D grid throughout property modeling. The structural model of the Bahariya Formation exhibits the trapping mechanism, which is a fault-assisted anticlinal closure trending NW-SE. This major fault breaks the reservoirs into two major fault blocks (North Block and South Block). Petrophysical models classified the Lower Bahariya reservoir as a moderate to good reservoir, rather than the Upper Bahariya reservoir, in terms of facies, with good porosity and permeability, low water saturation, and moderate net to gross. The Original Oil In Place (OOIP) values of the modeled Bahariya reservoirs show hydrocarbon accumulation in economic quantities, considering the high structural dips at the central part of the South Umbarka area. The power of the 3D static modeling technique has provided considerable insight into the future prediction of Bahariya reservoir performance and production behavior.
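
    For readers unfamiliar with the algorithms named above, the toy below runs a one-dimensional sequential Gaussian simulation: nodes are visited along a random path, simple-kriged from the nearest already-known values, and drawn from the resulting conditional Gaussian (SIS is the analogous loop on indicator-coded categories). It assumes a unit-sill exponential covariance and normal-score units, and is in no way the commercial implementation used in the study.

        import numpy as np

        rng = np.random.default_rng(3)
        cov = lambda h, a=10.0: np.exp(-np.abs(h) / a)   # exponential model, sill 1

        def sgs_1d(n=100, data=None, nmax=8):
            # Sequential Gaussian simulation on a 1-D grid of n nodes.
            known = dict(data or {10: 1.2, 60: -0.8})    # conditioning data
            for i in rng.permutation(n):
                if i in known:
                    continue
                pts = sorted(known, key=lambda j: abs(j - i))[:nmax]  # neighbours
                C = cov(np.subtract.outer(pts, pts)) + 1e-9 * np.eye(len(pts))
                c = cov(np.array(pts) - i)
                w = np.linalg.solve(C, c)                # simple-kriging weights
                mean = w @ np.array([known[j] for j in pts])
                var = max(1.0 - w @ c, 1e-9)             # kriging variance
                known[i] = mean + np.sqrt(var) * rng.standard_normal()
            return np.array([known[i] for i in range(n)])

        print(np.round(sgs_1d()[:10], 2))                # one realization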

  11. Investigation of flow and transport processes at the MADE site using ensemble Kalman filter

    USGS Publications Warehouse

    Liu, Gaisheng; Chen, Y.; Zhang, Dongxiao

    2008-01-01

    In this work the ensemble Kalman filter (EnKF) is applied to investigate the flow and transport processes at the macro-dispersion experiment (MADE) site in Columbus, MS. The EnKF is a sequential data assimilation approach that adjusts the unknown model parameter values based on the observed data with time. The classic advection-dispersion (AD) and the dual-domain mass transfer (DDMT) models are employed to analyze the tritium plume during the second MADE tracer experiment. The hydraulic conductivity (K), longitudinal dispersivity in the AD model, and mass transfer rate coefficient and mobile porosity ratio in the DDMT model, are estimated in this investigation. Because of its sequential feature, the EnKF allows for the temporal scaling of transport parameters during the tritium concentration analysis. Inverse simulation results indicate that for the AD model to reproduce the extensive spatial spreading of the tritium observed in the field, the K in the downgradient area needs to be increased significantly. The estimated K in the AD model becomes an order of magnitude higher than the in situ flowmeter measurements over a large portion of media. On the other hand, the DDMT model gives an estimation of K that is much more comparable with the flowmeter values. In addition, the simulated concentrations by the DDMT model show a better agreement with the observed values. The root mean square (RMS) between the observed and simulated tritium plumes is 0.77 for the AD model and 0.45 for the DDMT model at 328 days. Unlike the AD model, which gives inconsistent K estimates at different times, the DDMT model is able to invert the K values that consistently reproduce the observed tritium concentrations through all times. © 2008 Elsevier Ltd. All rights reserved.
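
    The analysis step at the core of such an assimilation is compact enough to show in full. The sketch below is a generic stochastic (perturbed-observation) EnKF update for a linear observation operator; it is not the MADE-site configuration, and the three-parameter state is purely illustrative.

        import numpy as np

        def enkf_update(ensemble, H, y_obs, obs_err):
            # Stochastic EnKF analysis step; ensemble has shape (n_ens, n_state).
            rng = np.random.default_rng(4)
            n_ens = ensemble.shape[0]
            Y = ensemble @ H.T                            # predicted observations
            A = ensemble - ensemble.mean(0)               # state anomalies
            D = Y - Y.mean(0)                             # observation anomalies
            Pyy = D.T @ D / (n_ens - 1) + np.diag(obs_err ** 2)
            Pxy = A.T @ D / (n_ens - 1)
            K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
            perturbed = y_obs + obs_err * rng.standard_normal((n_ens, len(y_obs)))
            return ensemble + (perturbed - Y) @ K.T

        # Toy: 3-parameter state (e.g., log-K values), one observation of the sum.
        ens = np.random.default_rng(5).standard_normal((50, 3))
        H = np.array([[1.0, 1.0, 1.0]])
        post = enkf_update(ens, H, y_obs=np.array([2.0]), obs_err=np.array([0.1]))
        print(post.mean(0))                               # posterior ensemble mean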

  12. An analog scrambler for speech based on sequential permutations in time and frequency

    NASA Astrophysics Data System (ADS)

    Cox, R. V.; Jayant, N. S.; McDermott, B. J.

    Permutation of speech segments is an operation that is frequently used in the design of scramblers for analog speech privacy. In this paper, a sequential procedure for segment permutation is considered. This procedure can be extended to two dimensional permutation of time segments and frequency bands. By subjective testing it is shown that this combination gives a residual intelligibility for spoken digits of 20 percent with a delay of 256 ms. (A lower bound for this test would be 10 percent). The complexity of implementing such a system is considered and the issues of synchronization and channel equalization are addressed. The computer simulation results for the system using both real and simulated channels are examined.
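
    The core permutation operation is simple to sketch: fixed-length time segments are reordered by a keyed pseudorandom permutation, and the receiver applies the inverse. The paper's sequential procedure additionally constrains the permutation so that delay stays bounded (256 ms above), and extends it to frequency bands; neither refinement is shown in this illustrative sketch.

        import numpy as np

        def scramble(signal, seg_len, key_seed):
            # Permute fixed-length time segments with a keyed permutation.
            n = len(signal) // seg_len * seg_len
            segs = signal[:n].reshape(-1, seg_len)
            perm = np.random.default_rng(key_seed).permutation(len(segs))
            return segs[perm].ravel(), perm

        def descramble(scrambled, seg_len, perm):
            segs = scrambled.reshape(-1, seg_len)
            out = np.empty_like(segs)
            out[perm] = segs                  # invert the permutation
            return out.ravel()

        x = np.sin(np.linspace(0, 40, 4096))  # stand-in for a speech frame
        s, perm = scramble(x, 256, key_seed=7)
        assert np.allclose(descramble(s, 256, perm), x)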

  13. Performance evaluation of an asynchronous multisensor track fusion filter

    NASA Astrophysics Data System (ADS)

    Alouani, Ali T.; Gray, John E.; McCabe, D. H.

    2003-08-01

    Recently the authors developed a new filter that uses data generated by asynchronous sensors to produce a state estimate that is optimal in the minimum mean square sense. The solution accounts for communication delays between the sensor platforms and the fusion center. It also deals with out-of-sequence data as well as latent data by processing the information in a batch-like manner. This paper compares, using simulated targets and Monte Carlo simulations, the performance of the filter to the optimal sequential processing approach. It was found that the performance of the new asynchronous multisensor track fusion filter (AMSTFF) is identical to that of the extended sequential Kalman filter (SEKF), while the new filter updates its track at a much lower rate than the SEKF.

  14. Modeling of a Sequential Two-Stage Combustor

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.; Liu, N.-S.; Gallagher, J. R.; Ryder, R. C.; Brankovic, A.; Hendricks, J. A.

    2005-01-01

    A sequential two-stage, natural gas fueled power generation combustion system is modeled to examine the fundamental aerodynamic and combustion characteristics of the system. The modeling methodology includes CAD-based geometry definition, and combustion computational fluid dynamics analysis. Graphical analysis is used to examine the complex vortical patterns in each component, identifying sources of pressure loss. The simulations demonstrate the importance of including the rotating high-pressure turbine blades in the computation, as this results in direct computation of combustion within the first turbine stage, and accurate simulation of the flow in the second combustion stage. The direct computation of hot-streaks through the rotating high-pressure turbine stage leads to improved understanding of the aerodynamic relationships between the primary and secondary combustors and the turbomachinery.

  15. Simulation modeling analysis of sequential relations among therapeutic alliance, symptoms, and adherence to child-centered play therapy between a child with autism spectrum disorder and two therapists.

    PubMed

    Goodman, Geoff; Chung, Hyewon; Fischel, Leah; Athey-Lloyd, Laura

    2017-07-01

    This study examined the sequential relations among three pertinent variables in child psychotherapy: therapeutic alliance (TA) (including ruptures and repairs), autism symptoms, and adherence to child-centered play therapy (CCPT) process. A 2-year CCPT of a 6-year-old Caucasian boy diagnosed with autism spectrum disorder was conducted weekly with two doctoral-student therapists, working consecutively for 1 year each, in a university-based community mental-health clinic. Sessions were video-recorded and coded using the Child Psychotherapy Process Q-Set (CPQ), a measure of the TA, and an autism symptom measure. Sequential relations among these variables were examined using simulation modeling analysis (SMA). In Therapist 1's treatment, unexpectedly, autism symptoms decreased three sessions after a rupture occurred in the therapeutic dyad. In Therapist 2's treatment, adherence to CCPT process increased 2 weeks after a repair occurred in the therapeutic dyad. The TA decreased 1 week after autism symptoms increased. Finally, adherence to CCPT process decreased 1 week after autism symptoms increased. The authors concluded that (1) sequential relations differ by therapist even though the child remains constant, (2) therapeutic ruptures can have an unexpected effect on autism symptoms, and (3) changes in autism symptoms can precede as well as follow changes in process variables.

  16. Mixing modes in a population-based interview survey: comparison of a sequential and a concurrent mixed-mode design for public health research.

    PubMed

    Mauz, Elvira; von der Lippe, Elena; Allen, Jennifer; Schilling, Ralph; Müters, Stephan; Hoebel, Jens; Schmich, Patrick; Wetzstein, Matthias; Kamtsiuris, Panagiotis; Lange, Cornelia

    2018-01-01

    Population-based surveys currently face the problem of decreasing response rates. Mixed-mode designs are now being implemented more often to account for this, to improve sample composition and to reduce overall costs. This study examines whether a concurrent or sequential mixed-mode design achieves better results on a number of indicators of survey quality. Data were obtained from a population-based health interview survey of adults in Germany that was conducted as a methodological pilot study as part of the German Health Update (GEDA). Participants were randomly allocated to one of two surveys; each of the surveys had a different design. In the concurrent mixed-mode design (n = 617), two types of self-administered questionnaires (SAQ-Web and SAQ-Paper) and computer-assisted telephone interviewing were offered simultaneously to the respondents along with the invitation to participate. In the sequential mixed-mode design (n = 561), SAQ-Web was initially provided, followed by SAQ-Paper, with an option for a telephone interview being sent out together with the reminders at a later date. Finally, this study compared the response rates, sample composition, health indicators, item non-response, the scope of fieldwork and the costs of both designs. No systematic differences were identified between the two mixed-mode designs in terms of response rates, the socio-demographic characteristics of the achieved samples, or the prevalence rates of the health indicators under study. The sequential design gained a higher rate of online respondents. Very few telephone interviews were conducted for either design. With regard to data quality, the sequential design (which had more online respondents) showed less item non-response. There were minor differences between the designs in terms of their costs. Postage and printing costs were lower in the concurrent design, but labour costs were lower in the sequential design. No differences in health indicators were found between the two designs. Modelling these results for higher response rates and larger net sample sizes indicated that the sequential design was more cost- and time-effective. This study contributes to the research available on implementing mixed-mode designs as part of public health surveys. Our findings show that SAQ-Paper and SAQ-Web questionnaires can be combined effectively. Sequential mixed-mode designs with higher rates of online respondents may be of greater benefit to studies with larger net sample sizes than concurrent mixed-mode designs.

  17. Evaluation of Ophthalmic Surgical Instrument Sterility Using Short-Cycle Sterilization for Sequential Same-Day Use.

    PubMed

    Chang, David F; Hurley, Nikki; Mamalis, Nick; Whitman, Jeffrey

    2018-03-27

    The common practice of short-cycle sterilization for ophthalmic surgical instrumentation has come under increased regulatory scrutiny. This study was undertaken to evaluate the efficacy of short-cycle sterilization processing for consecutive same-day cataract procedures. Testing of specific sterilization processing methods was performed by an independent medical device validation testing laboratory. Phaco handpieces from 3 separate manufacturers were tested along with appropriate biologic indicators and controls using 2 common steam sterilizers. A STATIM 2000 sterilizer (SciCan, Canonsburg, PA) with the STATIM metal cassette and an AMSCO Century V116 pre-vacuum sterilizer (STERIS, Mentor, OH) with a Case Medical SteriTite rigid container (Case Medical, South Hackensack, NJ) were tested using phaco tips and handpieces from 3 different manufacturers. Biological indicators were inoculated with highly resistant Geobacillus stearothermophilus, and each sterility verification test was performed in triplicate. Both wrapped and contained loads were tested with full dry cycles and a 7-day storage time to simulate prolonged storage. In adherence with the manufacturers' instructions for use (IFU), short cycles (3.0-3.5-minute exposure times) for unwrapped and contained loads were also tested after only 1 minute of dry time to simulate use on a consecutive case. Additional studies were performed to demonstrate whether any moisture present in the load containing phaco handpieces postprocessing was sterile and would affect the sterility of the contents after a 3-minute transit/storage time. This approximated the upper limit of time needed to transfer a containment device to the operating room. Presence or absence of microbial growth from cultured test samples. All inoculated test samples from both sterilizers were negative for growth of the target organism whether the full dry phase was interrupted or not. Pipetted postprocessing moisture samples and swabs of the handpieces were also negative for growth after a 3-minute transit/storage time. These studies support the use of unwrapped, short-cycle sterilization that adheres to the IFU of these 2 popular Food and Drug Administration-cleared sterilizers for sequential same-day cataract surgeries. A full drying phase is not necessary when the instruments are kept within the covered sterilizer containment device for prompt use on a sequential case. Copyright © 2018 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  19. Development, Characterization, and Resultant Properties of a Carbon, Boron, and Chromium Ternary Diffusion System

    NASA Astrophysics Data System (ADS)

    Domec, Brennan S.

    In today's industry, engineering materials are continuously pushed to their limits. Often, the application only demands high-specification properties in a narrowly defined region of the material, such as the outermost surface. This, in combination with the economic benefits, makes case hardening an attractive solution to meet industry demands. While case hardening has been in use for decades, applications demanding high hardness, deep case depth, and high corrosion resistance are often under-served by this process. Instead, new solutions are required. The goal of this study is to develop and characterize a new borochromizing process applied to a pre-carburized AISI 8620 alloy steel. The process was successfully developed using a combination of computational simulations, calculations, and experimental testing. Process kinetics were studied by fitting case depth measurement data to Fick's Second Law of Diffusion and an Arrhenius equation. Results indicate that the kinetics of the co-diffusion method are unaffected by the addition of chromium to the powder pack. The results also show that significant structural degradation of the case occurs when chromizing is applied sequentially to an existing boronized case. The amount of degradation is proportional to the chromizing parameters. Microstructural evolution was studied using metallographic methods, simulation and computational calculations, and analytical techniques. While the co-diffusion process failed to enrich the substrate with chromium, significant enrichment is obtained with the sequential diffusion process. The amount of enrichment is directly proportional to the chromizing parameters, with higher parameters resulting in more enrichment. The case consists of M7C3 and M23C6 carbides nearest the surface, minor amounts of CrB, and a balance of M2B. Corrosion resistance was measured with salt spray and electrochemical methods. These methods confirm the benefit of surface enrichment by chromium in the sequential diffusion method, with corrosion resistance increasing directly with chromium concentration. The results also confirm the deleterious effect of surface-breaking case defects and the need to reduce or eliminate them. The best combination of microstructural integrity, mean surface hardness, effective case depth, and corrosion resistance is obtained in samples sequentially boronized and chromized at 870°C for 6 hours. Additional work is required to further optimize process parameters and case properties.

  20. Numerical study on the sequential Bayesian approach for radioactive materials detection

    NASA Astrophysics Data System (ADS)

    Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng

    2013-01-01

    A new detection method, based on the sequential Bayesian approach proposed by Candy et al., opens new horizons for research on radioactive material detection. Compared with commonly adopted detection methods based on classical statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times when analysing spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation experiment platform implementing the sequential Bayesian methodology was developed. Event sequences of γ-rays consistent with the true parameters of a LaBr3(Ce) detector were produced by an event-sequence generator based on Monte Carlo sampling to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
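
    The core of such a processor can be reduced to a recursive Bayesian update. Below is a minimal sketch, assuming a simple two-hypothesis Poisson counting model with illustrative background and source rates; the paper's event-by-event formulation and detector parameters are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative rates (counts per second); not taken from the paper.
        lam_bg, lam_src = 5.0, 2.0          # background and source rates
        dt, n_bins = 1.0, 50                # time-bin width and number of bins

        # Simulate counts with a source actually present.
        counts = rng.poisson((lam_bg + lam_src) * dt, size=n_bins)

        def log_poisson(k, lam):
            # log P(k | lam) up to the k! term, which cancels in the ratio
            return k * np.log(lam) - lam

        # Sequentially update the log-odds of "source present" vs "background only".
        log_odds = 0.0                      # prior odds 1:1
        for t, k in enumerate(counts, 1):
            log_odds += (log_poisson(k, (lam_bg + lam_src) * dt)
                         - log_poisson(k, lam_bg * dt))
            p = 1.0 / (1.0 + np.exp(-log_odds))
            if p > 0.999:
                print(f"source declared after {t} bins (P = {p:.4f})")
                break

    The sequential character is what shortens verification time for low-count spectra: the decision is declared as soon as the posterior crosses a threshold, rather than after a fixed counting interval.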

  1. Simulation of diurnal thermal energy storage systems: Preliminary results

    NASA Astrophysics Data System (ADS)

    Katipamula, S.; Somasundaram, S.; Williams, H. R.

    1994-12-01

    This report describes the results of a simulation of thermal energy storage (TES) integrated with a simple-cycle gas turbine cogeneration system. Integrating TES with cogeneration can serve the electrical and thermal loads independently while firing all fuel in the gas turbine. The detailed engineering and economic feasibility of diurnal TES systems integrated with cogeneration systems has been described in two previous PNL reports. The objective of this study was to lay the groundwork for optimization of the TES system designs using a simulation tool called TRNSYS (TRaNsient SYstem Simulation). TRNSYS is a transient simulation program with a sequential-modular structure developed at the Solar Energy Laboratory, University of Wisconsin-Madison. The two TES systems selected for the base-case simulations were: (1) a one-tank storage model to represent the oil/rock TES system; and (2) a two-tank storage model to represent the molten nitrate salt TES system. Results of the study clearly indicate that an engineering optimization of the TES system using TRNSYS is possible. The one-tank stratified oil/rock storage model described here is a good starting point for parametric studies of a TES system. Further developments to the TRNSYS library of available models (economizer, evaporator, gas turbine, etc.) are recommended so that phase-change processes are accurately treated.
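
    To illustrate what a sequential-modular structure means in practice, here is a minimal sketch: component models are called one after another, and the connecting variables are iterated by successive substitution until they converge. The two components and all numbers below are hypothetical stand-ins, not TRNSYS models.

        # Hypothetical two-component loop: a storage tank feeding a load whose
        # return temperature feeds back to the tank (all values illustrative).
        def tank(T_return):      # storage component: mixes return flow with a hot source
            return 0.7 * 90.0 + 0.3 * T_return

        def load(T_supply):      # load component: extracts heat, returns cooler flow
            return T_supply - 25.0

        T_ret = 40.0             # initial guess for the feedback variable
        for it in range(100):
            T_sup = tank(T_ret)  # call the components in sequence
            T_new = load(T_sup)
            if abs(T_new - T_ret) < 1e-6:   # connecting variable has converged
                break
            T_ret = T_new
        print(f"converged in {it} iterations: supply {T_sup:.2f} C, return {T_ret:.2f} C")

    At each simulation time step, a sequential-modular solver repeats such a sweep over all connected components, which is what makes the component library easy to extend.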

  2. Effects of linking a soil-water-balance model with a groundwater-flow model

    USGS Publications Warehouse

    Stanton, Jennifer S.; Ryter, Derek W.; Peterson, Steven M.

    2013-01-01

    A previously published regional groundwater-flow model in north-central Nebraska was sequentially linked with the recently developed soil-water-balance (SWB) model to analyze effects on groundwater-flow model parameters and calibration results. The linked models provided a more detailed spatial and temporal distribution of simulated recharge based on hydrologic processes, improvement of simulated groundwater-level changes and base flows at specific sites in agricultural areas, and a physically based assessment of the relative magnitude of recharge for grassland, nonirrigated cropland, and irrigated cropland areas. Root-mean-squared (RMS) differences between the simulated and estimated or measured target values for the previously published model and linked models were relatively similar and did not improve for all types of calibration targets. However, without any adjustment to the SWB-generated recharge, the RMS difference between simulated and estimated base-flow target values for the groundwater-flow model was slightly smaller than for the previously published model, possibly indicating that the volume of recharge simulated by the SWB code was closer to actual hydrogeologic conditions than that of the previously published model. Groundwater-level and base-flow hydrographs showed that temporal patterns of simulated groundwater levels and base flows were more accurate for the linked models than for the previously published model at several sites, particularly in agricultural areas.

  3. Identification of hydraulic conductivity structure in sand and gravel aquifers: Cape Cod data set

    USGS Publications Warehouse

    Eggleston, J.R.; Rojstaczer, S.A.; Peirce, J.J.

    1996-01-01

    This study evaluates commonly used geostatistical methods to assess the reproduction of hydraulic conductivity (K) structure and their sensitivity to limited amounts of data. Extensive conductivity measurements from the Cape Cod sand and gravel aquifer are used to evaluate two geostatistical estimation methods, conditional mean as an estimate and ordinary kriging, and two stochastic simulation methods, simulated annealing and sequential Gaussian simulation. Our results indicate that for relatively homogeneous sand and gravel aquifers such as the Cape Cod aquifer, neither estimation methods nor stochastic simulation methods give highly accurate point predictions of hydraulic conductivity despite the high density of collected data. Although the stochastic simulation methods yielded higher errors than the estimation methods, the stochastic simulation methods yielded better reproduction of the measured ln(K) distribution and better reproduction of local contrasts in ln(K). The inability of kriging to reproduce high ln(K) values, as reaffirmed by this study, provides a strong impetus for choosing stochastic simulation methods to generate conductivity fields when performing fine-scale contaminant transport modeling. Results also indicate that estimation error is relatively insensitive to the number of hydraulic conductivity measurements so long as more than a threshold number of data are used to condition the realizations. This threshold occurs for the Cape Cod site when there are approximately three conductivity measurements per integral volume. The lack of improvement with additional data suggests that although fine-scale hydraulic conductivity structure is evident in the variogram, it is not accurately reproduced by geostatistical estimation methods. If the Cape Cod aquifer spatial conductivity characteristics are indicative of other sand and gravel deposits, then the results on predictive error versus data collection obtained here have significant practical consequences for site characterization. Heavily sampled sand and gravel aquifers, such as Cape Cod and Borden, may have large amounts of redundant data, while in more common real-world settings, our results suggest that denser data collection will likely improve understanding of permeability structure.
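
    For readers unfamiliar with the simulation methods compared above, the following is a minimal one-dimensional sequential Gaussian simulation sketch: simple kriging with an assumed exponential covariance on a standard-normal variable, visiting nodes along a random path. It illustrates the general technique, not the study's configuration.

        import numpy as np

        rng = np.random.default_rng(1)

        # 1-D grid and an assumed exponential covariance (unit sill, range 10).
        x = np.arange(100, dtype=float)
        cov = lambda h: np.exp(-np.abs(h) / 10.0)

        z = np.full(x.size, np.nan)          # simulated standard-normal values
        path = rng.permutation(x.size)       # random simulation path

        for node in path:
            known = np.where(~np.isnan(z))[0]
            # keep the 16 nearest previously simulated nodes as the neighbourhood
            near = known[np.argsort(np.abs(x[known] - x[node]))[:16]]
            if near.size == 0:
                mean, var = 0.0, 1.0         # unconditional draw at the first node
            else:
                C = cov(x[near, None] - x[None, near])   # data-to-data covariances
                c = cov(x[near] - x[node])               # data-to-node covariances
                w = np.linalg.solve(C, c)                # simple kriging weights
                mean, var = w @ z[near], 1.0 - w @ c     # kriging mean and variance
            z[node] = mean + np.sqrt(max(var, 0.0)) * rng.standard_normal()

        print(z[:10])

    Because each node is drawn from its conditional distribution rather than set to the kriging mean, realizations reproduce the full histogram and local contrasts, which is exactly the property the study found kriging estimates to lack.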

  4. Sequential causal inference: Application to randomized trials of adaptive treatment strategies

    PubMed Central

    Dawson, Ree; Lavori, Philip W.

    2009-01-01

    Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the ‘standard’ approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal ‘one-step’ estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies. PMID:17914714

  5. Diagnostic test accuracy and prevalence inferences based on joint and sequential testing with finite population sampling.

    PubMed

    Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O

    2004-07-30

    The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence estimation assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g. a child-care centre, a village in Africa, or a cattle herd) are sampled or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real data sets and one simulated data set, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
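
    The contrast between 'hypergeometric' and 'binomial-based' inference can be sketched with a toy posterior computation. The sketch below assumes a perfect test (the paper treats imperfect tests without a gold standard) and illustrative counts, and shows that the finite-population (hypergeometric) posterior for prevalence is the tighter one.

        import numpy as np
        from scipy.stats import hypergeom, binom

        N, n, y = 50, 30, 12          # population size, sample size, positives (illustrative)

        K = np.arange(N + 1)          # possible number of positives in the population
        # Hypergeometric (finite population) likelihood with a flat prior on K
        like_h = hypergeom.pmf(y, N, K, n)
        post_h = like_h / like_h.sum()

        # Binomial (infinite population) likelihood with prevalence p = K/N
        like_b = binom.pmf(y, n, K / N)
        post_b = like_b / like_b.sum()

        for name, post in (("hypergeometric", post_h), ("binomial", post_b)):
            m = (K * post).sum() / N                       # posterior mean prevalence
            sd = np.sqrt((((K / N) - m) ** 2 * post).sum())  # posterior sd of prevalence
            print(f"{name:15s} posterior mean {m:.3f}, sd {sd:.3f}")

    Intuitively, once 30 of 50 individuals have been observed, only 20 remain uncertain, so the finite-population posterior cannot be as dispersed as the infinite-population one.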

  6. SPMBR: a scalable algorithm for mining sequential patterns based on bitmaps

    NASA Astrophysics Data System (ADS)

    Xu, Xiwei; Zhang, Changhai

    2013-12-01

    Some existing sequential pattern mining algorithms generate too many candidate sequences, which increases the processing cost of support counting. We therefore present an effective and scalable algorithm called SPMBR (Sequential Patterns Mining based on Bitmap Representation) to mine sequential patterns in large databases. Our method differs from previous work on sequential pattern mining mainly in that the database is represented by bitmaps, for which a simplified bitmap structure is first presented. The algorithm generates candidate sequences by sequence extension (SE) and item extension (IE), and then obtains all frequent sequences by comparing the original bitmap with the extended item bitmap. This approach simplifies the mining of sequential patterns and avoids the high processing cost of support counting. Both theoretical analysis and experiments indicate that the performance of SPMBR is superior for large transaction databases, that much less memory is required for storing temporal data during mining, and that all sequential patterns can be mined feasibly.
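
    The paper's exact bitmap layout is not reproduced here, but the following sketch shows the general idea of bitmap-based sequence extension and support counting (in the spirit of SPAM-style algorithms) on a toy three-sequence database.

        # Each sequence is a list of itemsets; an item's bitmap has bit i set when
        # the item occurs in the i-th itemset of that sequence.
        def item_bitmaps(db):
            maps = {}
            for sid, seq in enumerate(db):
                for pos, itemset in enumerate(seq):
                    for item in itemset:
                        maps.setdefault(item, [0] * len(db))[sid] |= 1 << pos
            return maps

        def s_step(prefix_bits, item_bits, length):
            # sequence extension: keep item occurrences strictly AFTER the first
            # position where the prefix occurs
            if prefix_bits == 0:
                return 0
            first = (prefix_bits & -prefix_bits).bit_length() - 1
            after = ((1 << length) - 1) & ~((1 << (first + 1)) - 1)
            return item_bits & after

        db = [[{"a"}, {"b"}, {"c"}],        # toy database of three sequences
              [{"a", "b"}, {"c"}],
              [{"b"}, {"a"}, {"c"}]]
        bm = item_bitmaps(db)

        # support of the 2-sequence <a, c>: extend prefix <a> with item c
        support = sum(
            1 for sid in range(len(db))
            if s_step(bm["a"][sid], bm["c"][sid], len(db[sid])) != 0
        )
        print("support(<a,c>) =", support)   # a occurs before c in all 3 sequences

    Support counting reduces to bitwise AND and shift operations per sequence, which is the source of the efficiency that bitmap representations buy over candidate-list scanning.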

  7. Sequential interactions, in which one player plays first and another responds, promote cooperation in evolutionary-dynamical simulations of single-shot Prisoner's Dilemma and Snowdrift games.

    PubMed

    Laird, Robert A

    2018-09-07

    Cooperation is a central topic in evolutionary biology because (a) it is difficult to reconcile why individuals would act in a way that benefits others if such action is costly to themselves, and (b) it underpins many of the 'major transitions of evolution', making it essential for explaining the origins of successively higher levels of biological organization. Within evolutionary game theory, the Prisoner's Dilemma and Snowdrift games are the main theoretical constructs used to study the evolution of cooperation in dyadic interactions. In single-shot versions of these games, wherein individuals play each other only once, players typically act simultaneously rather than sequentially. Allowing one player to respond to the actions of its co-player (in the absence of any possibility of the responder being rewarded for cooperation or punished for defection, as in simultaneous or sequential iterated games) may seem to invite more incentive for exploitation and retaliation in single-shot games, compared to when interactions occur simultaneously, thereby reducing the likelihood that cooperative strategies can thrive. To the contrary, I use lattice-based, evolutionary-dynamical simulation models of single-shot games to demonstrate that under many conditions, sequential interactions have the potential to enhance unilaterally or mutually cooperative outcomes and increase the average payoff of populations, relative to simultaneous interactions; these benefits are especially prevalent in a spatially explicit context. This surprising result is attributable to the presence of conditional strategies that emerge in sequential games but cannot occur in the corresponding simultaneous versions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  8. Developing and Implementing a Framework of Participatory Simulation for Mobile Learning Using Scaffolding

    ERIC Educational Resources Information Center

    Yin, Chengjiu; Song, Yanjie; Tabata, Yoshiyuki; Ogata, Hiroaki; Hwang, Gwo-Jen

    2013-01-01

    This paper proposes a conceptual framework, scaffolding participatory simulation for mobile learning (SPSML), used on mobile devices for helping students learn conceptual knowledge in the classroom. As the pedagogical design, the framework adopts an experiential learning model, which consists of five sequential but cyclic steps: the initial stage,…

  9. Similar Neural Correlates for Language and Sequential Learning: Evidence from Event-Related Brain Potentials

    PubMed Central

    Christiansen, Morten H.; Conway, Christopher M.; Onnis, Luca

    2011-01-01

    We used event-related potentials (ERPs) to investigate the time course and distribution of brain activity while adults performed (a) a sequential learning task involving complex structured sequences, and (b) a language processing task. The same positive ERP deflection, the P600 effect, typically linked to difficult or ungrammatical syntactic processing, was found for structural incongruencies in both sequential learning as well as natural language, and with similar topographical distributions. Additionally, a left anterior negativity (LAN) was observed for language but not for sequential learning. These results are interpreted as an indication that the P600 provides an index of violations and the cost of integration of expectations for upcoming material when processing complex sequential structure. We conclude that the same neural mechanisms may be recruited for both syntactic processing of linguistic stimuli and sequential learning of structured sequence patterns more generally. PMID:23678205

  10. Double-blind photo lineups using actual eyewitnesses: an experimental test of a sequential versus simultaneous lineup procedure.

    PubMed

    Wells, Gary L; Steblay, Nancy K; Dysart, Jennifer E

    2015-02-01

    Eyewitnesses (494) to actual crimes in 4 police jurisdictions were randomly assigned to view simultaneous or sequential photo lineups using laptop computers and double-blind administration. The sequential procedure used in the field experiment mimicked how it is conducted in actual practice (e.g., using a continuation rule, witness does not know how many photos are to be viewed, witnesses resolve any multiple identifications), which is not how most lab experiments have tested the sequential lineup. No significant differences emerged in rates of identifying lineup suspects (25% overall) but the sequential procedure produced a significantly lower rate (11%) of identifying known-innocent lineup fillers than did the simultaneous procedure (18%). The simultaneous/sequential pattern did not significantly interact with estimator variables and no lineup-position effects were observed for either the simultaneous or sequential procedures. Rates of nonidentification were not significantly different for simultaneous and sequential but nonidentifiers from the sequential procedure were more likely to use the "not sure" response option than were nonidentifiers from the simultaneous procedure. Among witnesses who made an identification, 36% (41% of simultaneous and 32% of sequential) identified a known-innocent filler rather than a suspect, indicating that eyewitness performance overall was very poor. The results suggest that the sequential procedure that is used in the field reduces the identification of known-innocent fillers, but the differences are relatively small.

  11. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G' for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G', which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphic Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.

  12. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.

  13. Acceleration of discrete stochastic biochemical simulation using GPGPU.

    PubMed

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
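
    The kernel being parallelized above is Gillespie's direct-method SSA. Here is a minimal sequential sketch for a toy birth-death system with illustrative rate constants; the GPU implementation runs many such independent realizations concurrently.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy birth-death system: 0 -> X (rate k1), X -> 0 (rate k2 * X)
        k1, k2 = 10.0, 0.1
        x, t, t_end = 0, 0.0, 100.0

        while t < t_end:
            a = np.array([k1, k2 * x])        # reaction propensities
            a0 = a.sum()
            t += rng.exponential(1.0 / a0)    # exponential waiting time to next event
            # choose which reaction fires, with probability proportional to propensity
            if rng.random() * a0 < a[0]:
                x += 1
            else:
                x -= 1

        print("copy number at t =", t_end, ":", x)   # fluctuates around k1/k2 = 100

    Since each realization follows a different random trajectory, statistics require thousands of such runs, which is why mapping one realization per GPU thread (plus parallelism within a realization) pays off.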

  14. Acceleration of discrete stochastic biochemical simulation using GPGPU

    PubMed Central

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936

  15. Dynamics of the GB3 loop regions from MD simulation: how much of it is real?

    PubMed

    Li, Tong; Jing, Qingqing; Yao, Lishan

    2011-04-07

    A total of 1.1 μs of molecular dynamics (MD) simulations were performed to study the structure and dynamics of protein GB3. The simulated motional amplitude of the loop regions is generally overestimated in comparison with the experimental backbone N-H order parameters S(2). Two-state behavior is observed for several residues in these regions, with the minor state population in the range of 3-13%. Further inspection suggests that the (φ, ψ) dihedral angles of the minor states deviate from the GB3 experimental values, implying the existence of nonnative states. After fitting the MD trajectories of these residues to the NMR RDCs, the minor state populations are significantly reduced, by at least 80%, suggesting that the MD simulations are strongly biased toward the minor states and thus overestimate the dynamics of the loop regions. For these residues, the optimized trajectories produce intraresidue and sequential H(N)-H(α) RDCs and intraresidue (3)J(HNHα) couplings, not included in the trajectory fitting, that are closer to the experimental data. Unlike GB3, 0.55 μs MD simulations of protein ubiquitin do not show distinctive minor states, and the derived NMR order parameters are better converged. Our findings indicate that the artifacts of the simulations depend on the specific system studied and that one should be cautious when interpreting enhanced dihedral dynamics from long MD simulations.

  16. Some theoretical issues on computer simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, C.L.; Reidys, C.M.

    1998-02-01

    The subject of this paper is the development of mathematical foundations for a theory of simulation. Sequentially updated cellular automata (sCA) over arbitrary graphs are employed as a paradigmatic framework. In developing the theory, the authors focus on the properties of causal dependencies among local mappings in a simulation. The main object of study is the mapping between a graph representing the dependencies among entities of a simulation and a graph representing the equivalence classes of systems obtained by all possible updates.
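
    A minimal sketch of a sequentially updated cellular automaton over a graph illustrates why the update schedule, i.e., the causal dependencies among local mappings, matters. The 4-cycle graph and the NOR rule below are arbitrary illustrative choices, not taken from the paper.

        import itertools

        # Undirected 4-cycle; each vertex holds a bit, updated by NOR of itself
        # and its neighbours.
        adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}

        def sweep(state, order):
            s = list(state)
            for v in order:                            # sequential update: later
                nbhd = [s[v]] + [s[u] for u in adj[v]] # vertices see earlier results
                s[v] = int(not any(nbhd))              # local NOR rule
            return tuple(s)

        start = (0, 0, 0, 0)
        results = {sweep(start, order) for order in itertools.permutations(adj)}
        print(results)   # several distinct outcomes: the update schedule matters

    Grouping update orders by the final state they produce is exactly the kind of equivalence-class structure the mapping described in the abstract is meant to capture.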

  17. The effect of lineup member similarity on recognition accuracy in simultaneous and sequential lineups.

    PubMed

    Flowe, Heather D; Ebbesen, Ebbe B

    2007-02-01

    Two experiments investigated whether remembering is affected by the similarity of the study face relative to the alternatives in a lineup. In simultaneous and sequential lineups, choice rates and false alarms were larger in low compared to high similarity lineups, indicating criterion placement was affected by lineup similarity structure (Experiment 1). In Experiment 2, foil choices and similarity ranking data for target present lineups were compared to responses made when the target was removed from the lineup (only the 5 foils were presented). The results indicated that although foils were selected more often in target-removed lineups in the simultaneous compared to the sequential condition, responses shifted from the target to one of the foils at equal rates across lineup procedures.

  18. Predicting trends of invasive plants richness using local socio-economic data: An application in North Portugal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, Mario, E-mail: mgsantoss@gmail.com; Freitas, Raul, E-mail: raulfreitas@portugalmail.com; Crespi, Antonio L., E-mail: aluis.crespi@gmail.com

    2011-10-15

    This study assesses the potential of an integrated methodology for predicting local trends in invasive exotic plant species (invasive richness) using indirect, regional information on human disturbance. The distribution of invasive plants was assessed in North Portugal using herbarium collections and local environmental, geophysical and socio-economic characteristics. Invasive richness response to anthropogenic disturbance was predicted using a dynamic model based on a sequential modeling process (stochastic dynamic methodology, StDM). Derived scenarios showed that invasive richness trends were clearly associated with ongoing socio-economic change. Simulations including scenarios of growing urbanization showed an increase in invasive richness, while simulations in municipalities with decreasing populations showed stable or decreasing levels of invasive richness. The model simulations demonstrate the interest and feasibility of using this methodology in disturbance ecology. Highlights: Socio-economic data indicate human-induced disturbances. Socio-economic development increases disturbance in ecosystems. Disturbance promotes opportunities for invasive plants. Increased opportunities promote richness of invasive plants. An increase in the richness of invasive plants changes natural ecosystems.

  19. Sequential-Simultaneous Processing and Reading Skills in Primary Grade Children.

    ERIC Educational Resources Information Center

    McRae, Sandra G.

    1986-01-01

    The study examined relationships between two modes of information processing, simultaneous and sequential, and two sets of reading skills, word recognition and comprehension, among 40 second and third grade students. Results indicated there is a relationship between simultaneous processing and reading comprehension. (Author)

  20. Blocking for Sequential Political Experiments

    PubMed Central

    Moore, Sally A.

    2013-01-01

    In typical political experiments, researchers randomize a set of households, precincts, or individuals to treatments all at once, and characteristics of all units are known at the time of randomization. However, in many other experiments, subjects “trickle in” to be randomized to treatment conditions, usually via complete randomization. To take advantage of the rich background data that researchers often have (but underutilize) in these experiments, we develop methods that use continuous covariates to assign treatments sequentially. We build on biased coin and minimization procedures for discrete covariates and demonstrate that our methods outperform complete randomization, producing better covariate balance in simulated data. We then describe how we selected and deployed a sequential blocking method in a clinical trial and demonstrate the advantages of our having done so. Further, we show how that method would have performed in two larger sequential political trials. Finally, we compare causal effect estimates from differences in means, augmented inverse propensity weighted estimators, and randomization test inversion. PMID:24143061
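
    A minimal sketch of sequential assignment with a continuous covariate: a biased coin favours whichever arm would reduce the covariate-mean imbalance. This is an illustration in the spirit of minimization, not the authors' exact procedure; the 0.8 coin probability and the data are made up.

        import numpy as np

        rng = np.random.default_rng(3)

        def mean_gap(g0, g1):
            # absolute difference in covariate means (an empty group counts as 0)
            m0 = np.mean(g0) if g0 else 0.0
            m1 = np.mean(g1) if g1 else 0.0
            return abs(m0 - m1)

        arm0, arm1 = [], []                     # covariate values by treatment arm
        for x in rng.normal(size=200):          # subjects "trickle in" one at a time
            # imbalance that would result from each possible assignment
            d0 = mean_gap(arm0 + [x], arm1)
            d1 = mean_gap(arm0, arm1 + [x])
            favored = 0 if d0 < d1 else 1
            # biased coin: choose the balance-improving arm with probability 0.8
            arm = favored if rng.random() < 0.8 else 1 - favored
            (arm0 if arm == 0 else arm1).append(x)

        print("n per arm:", len(arm0), len(arm1))
        print("covariate mean gap:", abs(np.mean(arm0) - np.mean(arm1)))

    Keeping the coin biased rather than deterministic preserves enough randomness for valid randomization inference while still producing much better covariate balance than complete randomization.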

  1. Constrained multiple indicator kriging using sequential quadratic programming

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Erhan Tercan, A.

    2012-11-01

    Multiple indicator kriging (MIK) is a nonparametric method used to estimate conditional cumulative distribution functions (CCDF). Indicator estimates produced by MIK may not satisfy the order relations of a valid CCDF, which is ordered and bounded between 0 and 1. In this paper a new method is presented that guarantees the order relations of the cumulative distribution functions estimated by multiple indicator kriging. The method is based on minimizing the sum of kriging variances for each cutoff under unbiasedness and order-relation constraints, and on solving the constrained indicator kriging system by sequential quadratic programming. A computer code written in the Matlab environment implements the developed algorithm, and the method is applied to thickness data.
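
    As an illustration of order-relation correction, the sketch below projects raw (order-violating) indicator estimates onto a valid CCDF by constrained least squares, solved with SciPy's SLSQP, itself a sequential quadratic programming method. It is a simplified stand-in for the paper's variance-minimizing formulation, and the raw values are made up.

        import numpy as np
        from scipy.optimize import minimize

        # Raw MIK estimates at increasing cutoffs, violating order relations.
        f_raw = np.array([0.15, 0.10, 0.42, 0.38, 0.70, 1.02])

        # Minimize squared deviation subject to 0 <= F1 <= ... <= FK <= 1.
        cons = [{"type": "ineq", "fun": (lambda f, i=i: f[i + 1] - f[i])}
                for i in range(len(f_raw) - 1)]
        res = minimize(lambda f: np.sum((f - f_raw) ** 2),
                       x0=np.clip(f_raw, 0, 1), method="SLSQP",
                       bounds=[(0.0, 1.0)] * len(f_raw), constraints=cons)

        print("raw      :", f_raw)
        print("corrected:", np.round(res.x, 3))   # monotone CCDF bounded in [0, 1]

    The paper's method enforces the same constraints inside the kriging system itself, so the estimates are optimal subject to validity rather than corrected after the fact.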

  2. Risk-Constrained Dynamic Programming for Optimal Mars Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Ono, Masahiro; Kuwata, Yoshiaki

    2013-01-01

    A chance-constrained dynamic programming algorithm was developed that is capable of making optimal sequential decisions within a user-specified risk bound. This work handles stochastic uncertainties over multiple stages in the CEMAT (Combined EDL-Mobility Analyses Tool) framework. It was demonstrated by a simulation of Mars entry, descent, and landing (EDL) using real landscape data obtained from the Mars Reconnaissance Orbiter. Although standard dynamic programming (DP) provides a general framework for optimal sequential decision making under uncertainty, it typically achieves risk aversion by imposing an arbitrary penalty on failure states. Such a penalty-based approach cannot explicitly bound the probability of mission failure. A key idea behind the new approach, called risk allocation, is to decompose a joint chance constraint into a set of individual chance constraints and distribute risk over them. The joint chance constraint was reformulated into a constraint on an expectation over a sum of indicator functions, which can be incorporated into the cost function by dualizing the optimization problem. As a result, the chance-constrained optimization problem can be turned into an unconstrained optimization over a Lagrangian, which can be solved efficiently using a standard DP approach.
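
    The dualization step can be sketched on a toy problem: penalize the failure indicator with a multiplier λ in the stage cost, solve the resulting unconstrained problem, and search over λ until the risk bound is met. All numbers below are illustrative, and the i.i.d. stage structure makes the DP deliberately trivial.

        # Two actions per stage: (expected stage cost, failure probability) --
        # illustrative numbers, not from the paper.
        actions = {"fast": (1.0, 0.10), "safe": (2.0, 0.01)}
        T, risk_bound = 5, 0.20            # horizon and joint chance constraint

        def solve(lam):
            # Unconstrained problem on the Lagrangian: cost + lam * 1{failure}.
            # With identical independent stages the same action is optimal at
            # every stage, so the DP collapses to a single minimization.
            act = min(actions, key=lambda a: actions[a][0] + lam * actions[a][1])
            cost, pf = actions[act]
            p_fail = 1.0 - (1.0 - pf) ** T       # prob. of failing at least once
            return act, T * cost, p_fail

        lo, hi = 0.0, 100.0                      # bisection on the multiplier
        for _ in range(50):
            lam = 0.5 * (lo + hi)
            if solve(lam)[2] > risk_bound:
                lo = lam                         # too risky: raise the penalty
            else:
                hi = lam
        act, cost, p_fail = solve(hi)
        print(f"policy: {act}, expected cost {cost}, "
              f"P(failure) {p_fail:.3f} <= {risk_bound}")

    Unlike an arbitrary failure penalty, the multiplier here is tuned until the failure probability itself satisfies the user-specified bound.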

  3. Constrained Bayesian Active Learning of Interference Channels in Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    Tsakmalis, Anestis; Chatzinotas, Symeon; Ottersten, Bjorn

    2018-02-01

    In this paper, a sequential probing method for interference constraint learning is proposed to allow a centralized Cognitive Radio Network (CRN) to access the frequency band of a Primary User (PU) in an underlay cognitive scenario with a designed PU protection specification. The main idea is that the CRN probes the PU and subsequently eavesdrops on the reverse PU link to acquire the binary ACK/NACK packet. This feedback indicates whether the probing-induced interference is harmful or not and can be used to learn the PU interference constraint. The cognitive part of this sequential probing process is the selection of the power levels of the Secondary Users (SUs), which aims to learn the PU interference constraint with a minimum number of probing attempts while setting a limit on the number of harmful probing-induced interference events or, equivalently, of NACK packet observations over a time window. This constrained design problem is studied within the Active Learning (AL) framework, and an optimal solution is derived and implemented with a sophisticated, accurate and fast Bayesian learning method, Expectation Propagation (EP). The performance of this solution is also demonstrated through numerical simulations and compared with modified versions of AL techniques we developed in earlier work.

  4. Modeling effluent distribution and nitrate transport through an on-site wastewater system.

    PubMed

    Hassan, G; Reneau, R B; Hagedorn, C; Jantrania, A R

    2008-01-01

    Properly functioning on-site wastewater systems (OWS) are an integral component of the wastewater system infrastructure necessary to renovate wastewater before it reaches surface or ground waters. There are a large number of factors, including soil hydraulic properties, effluent quality and dispersal, and system design, that affect OWS function. The ability to evaluate these factors using a simulation model would improve the capability to determine the impact of wastewater application on the subsurface soil environment. An existing subsurface drip irrigation system (SDIS) dosed with sequential batch reactor effluent (SBRE) was used in this study. This system has the potential to solve soil and site problems that limit OWS and to reduce the potential for environmental degradation. Soil water potentials (Psi(s)) and nitrate (NO(3)) migration were simulated at 55- and 120-cm depths within and downslope of the SDIS using a two-dimensional code in HYDRUS-3D. Results show that the average measured Psi(s) were -121 and -319 cm, whereas simulated values were -121 and -322 cm at 55- and 120-cm depths, respectively, indicating unsaturated conditions. Average measured NO(3) concentrations were 0.248 and 0.176 mmol N L(-1), whereas simulated values were 0.237 and 0.152 mmol N L(-1) at 55- and 120-cm depths, respectively. Observed unsaturated conditions decreased the potential for NO(3) to migrate in more concentrated plumes away from the SDIS. The agreement (high R(2) values approximately 0.97) between the measured and simulated Psi(s) and NO(3) concentrations indicate that HYDRUS-3D adequately simulated SBRE flow and NO(3) transport through the soil domain under a range of environmental and effluent application conditions.

  5. On the origin of reproducible sequential activity in neural circuits

    NASA Astrophysics Data System (ADS)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
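
    A minimal numerical sketch of such switching dynamics uses the classic three-population generalized Lotka-Volterra (winnerless competition) setup, with an assumed asymmetric inhibition matrix and weak noise; the parameters below are illustrative, not the paper's.

        import numpy as np

        rng = np.random.default_rng(4)

        # Asymmetric inhibition matrix chosen so that no fixed point is stable,
        # producing a heteroclinic cycle among the three populations.
        rho = np.array([[1.0, 0.5, 2.0],
                        [2.0, 1.0, 0.5],
                        [0.5, 2.0, 1.0]])
        g = np.ones(3)                       # growth rates

        x = np.array([0.6, 0.3, 0.1])
        dt, steps = 0.01, 60000
        trace = np.empty((steps, 3))
        for k in range(steps):
            # generalized Lotka-Volterra rates plus weak noise
            dx = x * (g - rho @ x) + 1e-6 * rng.standard_normal(3)
            x = np.clip(x + dt * dx, 1e-12, None)
            trace[k] = x

        # the dominant population switches in a fixed, repeatable order
        dominant = trace.argmax(axis=1)
        switches = dominant[np.r_[True, dominant[1:] != dominant[:-1]]]
        print(switches[:12])                 # e.g. a repeating cycle of identities

    The order of the switches is reproducible from run to run even though the trajectory itself is transient and noisy, which is the defining property of a stable heteroclinic sequence.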

  6. On the origin of reproducible sequential activity in neural circuits.

    PubMed

    Afraimovich, V S; Zhigulin, V P; Rabinovich, M I

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model, that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in presence of noise in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.

  7. Fixed precision sampling plans for white apple leafhopper (Homoptera: Cicadellidae) on apple.

    PubMed

    Beers, Elizabeth H; Jones, Vincent P

    2004-10-01

    Constant-precision sampling plans for the white apple leafhopper, Typhlocyba pomaria McAtee, were developed so that it could be used as an indicator species for system stability as new integrated pest management programs without broad-spectrum pesticides are developed. Taylor's power law was used to model the relationship between the mean and the variance, and Green's constant-precision sequential sampling equation was used to develop sampling plans. Bootstrap simulations of the sampling plans showed greater precision (D = 0.25) than the desired precision (D0 = 0.3), particularly at low mean population densities. We found that by adjusting the D0 value in Green's equation to 0.4, we were able to reduce the average sample number by 25% while providing an average D = 0.31. The sampling plan described allows T. pomaria to be used as a reasonable indicator species of agroecosystem stability in Washington apple orchards.
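
    Green's fixed-precision stop line follows from Taylor's power law, var = a*m^b: sampling stops once the cumulative count T_n crosses (a*n^(1-b)/D^2)^(1/(2-b)). A sketch with illustrative coefficients (not the paper's fitted values):

        import numpy as np

        # Taylor's power law coefficients (illustrative, not the paper's fits)
        a, b = 2.5, 1.4
        D = 0.3                              # desired precision of the mean

        def stop_line(n):
            # Green's fixed-precision boundary for the cumulative count T_n,
            # derived from var = a * mean**b and D = SE(mean) / mean
            return (a * n ** (1.0 - b) / D ** 2) ** (1.0 / (2.0 - b))

        rng = np.random.default_rng(5)
        true_mean = 4.0
        var = a * true_mean ** b             # variance implied by Taylor's law
        p = true_mean / var                  # negative-binomial parameters that
        r = true_mean * p / (1.0 - p)        # match this mean and variance

        total, n = 0.0, 0
        while True:
            n += 1
            total += rng.negative_binomial(r, p)   # count from one more sample unit
            if total >= stop_line(n):
                break
        print(f"stopped at n = {n}, estimated mean = {total / n:.2f}")

    Because the boundary falls as n grows, dense populations stop sampling quickly while sparse ones keep sampling, which is how the plan holds precision roughly constant.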

  8. The Unfolding MD Simulations of Cyclophilin: Analyzed by Surface Contact Networks and Their Associated Metrics

    PubMed Central

    Roy, Sourav; Basu, Sankar; Dasgupta, Dipak; Bhattacharyya, Dhananjay; Banerjee, Rahul

    2015-01-01

    Currently, considerable interest exists with regard to the dissociation of close-packed amino acids within proteins, in the course of unfolding, which could result in either wet or dry molten globules. The progressive disjuncture of residues constituting the hydrophobic core of cyclophilin from L. donovani (LdCyp) has been studied during the thermal unfolding of the molecule, by molecular dynamics simulations. LdCyp has been represented as a surface contact network (SCN) based on the surface complementarity (Sm) of interacting residues within the molecular interior. The application of Sm to side-chain packing within proteins makes it a very sensitive indicator of subtle perturbations in packing during the thermal unfolding of the protein. Network-based metrics have been defined to track the sequential changes in the disintegration of the SCN spanning the hydrophobic core of LdCyp, and these metrics prove to be highly sensitive compared to traditional metrics in indicating the increased conformational (and dynamical) flexibility in the network. These metrics have been applied to suggest criteria distinguishing DMG, WMG and transition state ensembles and to identify key residues involved in crucial conformational/topological events during the unfolding process. PMID:26545107

  9. A high accuracy sequential solver for simulation and active control of a longitudinal combustion instability

    NASA Technical Reports Server (NTRS)

    Shyy, W.; Thakur, S.; Udaykumar, H. S.

    1993-01-01

    A high-accuracy convection scheme using a sequential solution technique has been developed and applied to simulate longitudinal combustion instability and its active control. The scheme has been devised in the spirit of the Total Variation Diminishing (TVD) concept with special source term treatment. Because of the substantial heat-release effect, a clear delineation has been made of the key elements employed by the scheme, i.e., the adjustable damping factor and the source term treatment. Compared with the first-order upwind scheme previously utilized, the present results exhibit less damping and are free from spurious oscillations, offering improved quantitative accuracy while confirming the spectral analysis reported earlier. A simple feedback type of active control has been found to be capable of enhancing or attenuating the magnitude of the combustion instability.

  10. Field-scale multi-phase LNAPL remediation: Validating a new computational framework against sequential field pilot trials.

    PubMed

    Sookhak Lari, Kaveh; Johnston, Colin D; Rayner, John L; Davis, Greg B

    2018-03-05

    Remediation of subsurface systems, including groundwater, soil and soil gas, contaminated with light non-aqueous phase liquids (LNAPLs) is challenging. Field-scale pilot trials of multi-phase remediation were undertaken at a site to determine the effectiveness of recovery options. Sequential LNAPL skimming and vacuum-enhanced skimming, with and without water table drawdown, were trialled over 78 days, in total extracting over 5 m3 of LNAPL. For the first time, a multi-component simulation framework (including the multi-phase multi-component code TMVOC-MP and processing codes) was developed and applied to simulate the broad range of multi-phase remediation and recovery methods used in the field trials. This framework was validated against the sequential pilot trials by comparing predicted and measured LNAPL mass removal rates and compositional changes. The framework was tested on both a Cray supercomputer and a cluster. Simulations mimicked trends in LNAPL recovery rates (from 0.14 to 3 mL/s) across all remediation techniques, each operating over periods of 4-14 days within the 78-day trial. The code also approximated order-of-magnitude compositional changes of hazardous chemical concentrations in extracted gas during vacuum-enhanced recovery. The verified framework enables longer-term prediction of the effectiveness of remediation approaches, allowing better determination of remediation endpoints and long-term risks. Copyright © 2017 Commonwealth Scientific and Industrial Research Organisation. Published by Elsevier B.V. All rights reserved.

  11. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    PubMed

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

    This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The variography of the NDVI image results demonstrates that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from 62,500 grids in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced spatial patterns of NDVI images. Overall, the proposed approach, which integrates the conditional Latin hypercube sampling approach, variogram, kriging and sequential Gaussian simulation in remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on spatial characteristics of landscape changes, including spatial variability and heterogeneity.

  12. Parallel discrete event simulation using shared memory

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1988-01-01

    With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments, using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues, is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
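
    For contrast, here is the traditional sequential event-list technique in miniature: a single global event list drives an M/M/1 queue. That global list is exactly the serial bottleneck the Chandy-Misra approach eliminates. The rates and horizon are illustrative.

        import heapq
        import random

        random.seed(6)

        ARRIVAL_RATE, SERVICE_RATE, T_END = 0.8, 1.0, 10000.0

        # the global event list: (timestamp, event kind), ordered by time
        events = [(random.expovariate(ARRIVAL_RATE), "arrival")]
        queue_len, busy, served = 0, False, 0
        while events:
            t, kind = heapq.heappop(events)      # always the earliest event
            if t > T_END:
                break
            if kind == "arrival":
                heapq.heappush(events, (t + random.expovariate(ARRIVAL_RATE), "arrival"))
                if busy:
                    queue_len += 1               # join the queue
                else:
                    busy = True                  # start service immediately
                    heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "departure"))
            else:                                # departure
                served += 1
                if queue_len > 0:
                    queue_len -= 1               # next customer enters service
                    heapq.heappush(events, (t + random.expovariate(SERVICE_RATE), "departure"))
                else:
                    busy = False

        print("customers served:", served)

    In the Chandy-Misra scheme each queueing node instead processes its own input streams, advancing whenever causality permits, so no single time-ordered structure serializes the whole simulation.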

  13. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.

    PubMed

    Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L

    2016-03-01

    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015 Cognitive Science Society, Inc.

  14. Food matrix effects on in vitro digestion of microencapsulated tuna oil powder.

    PubMed

    Shen, Zhiping; Apriani, Christina; Weerakkody, Rangika; Sanguansri, Luz; Augustin, Mary Ann

    2011-08-10

    Tuna oil, containing 53 mg of eicosapentaenoic acid (EPA) and 241 mg of docosahexaenoic acid (DHA) per gram of oil, delivered as a neat microencapsulated tuna oil powder (25% oil loading) or in food matrices (orange juice, yogurt, or cereal bar) fortified with microencapsulated tuna oil powder was digested in simulated gastric fluid or sequentially in simulated gastric fluid and simulated intestinal fluid. The level of fortification was equivalent to 1 g of tuna oil per recommended serving size (i.e., per 200 g of orange juice or yogurt or 60 g of cereal bar). The changes in particle size of oil droplets during digestion were influenced by the method of delivery of the microencapsulated tuna oil powder. Lipolysis in simulated gastric fluid was low, with only 4.4-6.1% EPA and ≤1.5% DHA released after digestion (as a % of total fatty acids present). After sequential exposure to simulated gastric and intestinal fluids, much higher extents of lipolysis of both glycerol-bound EPA and DHA were obtained (73.2-78.6% for the neat powder, fortified orange juice, and yogurt; 60.3-64.0% for the fortified cereal bar). This research demonstrates that the choice of food matrix may influence the lipolysis of microencapsulated tuna oil.

  15. The Effects of the Previous Outcome on Probabilistic Choice in Rats

    PubMed Central

    Marshall, Andrew T.; Kirkpatrick, Kimberly

    2014-01-01

    This study examined the effects of previous outcomes on subsequent choices in a probabilistic-choice task. Twenty-four rats were trained to choose between a certain outcome (1 or 3 pellets) versus an uncertain outcome (3 or 9 pellets), delivered with a probability of .1, .33, .67, and .9 in different phases. Uncertain outcome choices increased with the probability of uncertain food. Additionally, uncertain choices increased with the probability of uncertain food following both certain-choice outcomes and unrewarded uncertain choices. However, following uncertain-choice food outcomes, there was a tendency to choose the uncertain outcome in all cases, indicating that the rats continued to “gamble” after successful uncertain choices, regardless of the overall probability or magnitude of food. A subsequent manipulation, in which the probability of uncertain food varied within each session as a function of the previous uncertain outcome, examined how the previous outcome and probability of uncertain food affected choice in a dynamic environment. Uncertain-choice behavior increased with the probability of uncertain food. The rats exhibited increased sensitivity to probability changes and a greater degree of win–stay/lose–shift behavior than in the static phase. Simulations of two sequential choice models were performed to explore the possible mechanisms of reward value computations. The simulation results supported an exponentially decaying value function that updated as a function of trial (rather than time). These results emphasize the importance of analyzing global and local factors in choice behavior and suggest avenues for the future development of sequential-choice models. PMID:23205915

  16. Multiple point statistical simulation using uncertain (soft) conditional data

    NASA Astrophysics Data System (ADS)

    Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou

    2018-05-01

    Geostatistical simulation methods have been used to quantify spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have shifted from covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditioned on uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not properly account for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited before less informed ones (sketched below). The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data, and hence provide a computationally attractive approach for integrating information about a reservoir model.
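
    A minimal Python sketch of the preferential-path idea: nodes are ordered by the entropy of their local soft probabilities, so the best-informed nodes are simulated first. Scoring informedness by entropy is an illustrative assumption, not necessarily the paper's exact criterion.

    ```python
    import numpy as np

    def preferential_path(soft_probs):
        """Order grid nodes so that the most informed nodes (lowest-entropy
        soft data) are simulated first; uninformed nodes (uniform soft
        probabilities) come last.

        soft_probs: (n_nodes, n_categories) array of local prior probabilities.
        """
        p = np.clip(soft_probs, 1e-12, 1.0)
        entropy = -np.sum(p * np.log(p), axis=1)
        return np.argsort(entropy)  # ascending entropy = descending information

    # Three nodes: near-certain, mildly informed, and completely uninformed.
    soft = np.array([[0.95, 0.05],
                     [0.70, 0.30],
                     [0.50, 0.50]])
    print(preferential_path(soft))  # -> [0 1 2]
    ```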

  17. A cost and policy analysis comparing immediate sequential cataract surgery and delayed sequential cataract surgery from the physician perspective in the United States.

    PubMed

    Neel, Sean T

    2014-11-01

    A cost analysis was performed to evaluate the effect on physicians in the United States of a transition from delayed sequential cataract surgery to immediate sequential cataract surgery. Financial and efficiency impacts of this change were evaluated to determine whether efficiency gains could offset potential reduced revenue. The analysis used Medicare cataract surgery volume estimates, Medicare 2012 physician cataract surgery reimbursement schedules, and estimates of potential additional office-visit revenue to compare immediate sequential cataract surgery with delayed sequential cataract surgery for a single-specialty ophthalmology practice in West Tennessee. This model should give an indication of the effect on physicians on a national basis. The single-specialty ophthalmology practice in West Tennessee was found to have a cataract surgery revenue loss of $126,000, increased revenue from office visits of $34,449 to $106,271 (minimum and maximum offset methods), and a net loss of $19,900 to $91,700 (base case) with the conversion to immediate sequential cataract surgery. Physicians likely stand to lose financially, and this loss cannot be offset by increased patient visits under the current reimbursement system. This may result in physician resistance to converting to immediate sequential cataract surgery, gaming, and supplier-induced demand.

  18. Considering User's Access Pattern in Multimedia File Systems

    NASA Astrophysics Data System (ADS)

    Cho, KyoungWoon; Ryu, YeonSeung; Won, Youjip; Koh, Kern

    2002-12-01

    Legacy buffer cache management schemes for multimedia servers are grounded in the assumption that applications access multimedia files sequentially. However, the user access pattern may not be sequential in some circumstances: in a distance-learning application, for example, the user may exploit the VCR-like functions of the system (rewind and play) and access particular segments of video repeatedly in the middle of an otherwise sequential playback. Such a looping reference can cause a significant performance degradation of interval-based caching algorithms, and thus an appropriate buffer cache management scheme is required in order to deliver desirable performance even under workloads that exhibit looping reference behavior. We propose the Adaptive Buffer cache Management (ABM) scheme, which intelligently adapts to the file access characteristics. For each opened file, ABM applies either LRU replacement or interval-based caching depending on the Looping Reference Indicator, which indicates how strongly temporally localized the access pattern is; a policy-selection sketch follows below. According to our experiments, ABM exhibits a better buffer cache miss ratio than interval-based caching or LRU, especially when the workload exhibits not only sequential but also looping reference properties.
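
    A minimal Python sketch of such a per-file policy switch: the looping-reference indicator below is computed as the fraction of backward seeks within a recent window, and the threshold is an illustrative assumption rather than the paper's exact definition.

    ```python
    from collections import deque

    class FilePolicySelector:
        """Pick a caching policy per open file, in the spirit of ABM:
        interval-based caching for sequential streams, LRU when looping
        references dominate."""

        def __init__(self, window=100, loop_threshold=0.2):
            self.offsets = deque(maxlen=window)
            self.loop_threshold = loop_threshold

        def record_access(self, offset):
            self.offsets.append(offset)

        def looping_reference_indicator(self):
            # Fraction of accesses that jump backwards (rewind-and-replay).
            seq = list(self.offsets)
            backward = sum(1 for a, b in zip(seq, seq[1:]) if b < a)
            return backward / max(len(seq) - 1, 1)

        def policy(self):
            looping = self.looping_reference_indicator() > self.loop_threshold
            return "LRU" if looping else "interval-caching"

    sel = FilePolicySelector()
    for off in [0, 1, 2, 3, 1, 2, 3, 1, 2, 3]:  # playback with repeated rewinds
        sel.record_access(off)
    print(sel.policy())  # -> LRU
    ```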

  19. SMA texture and reorientation: simulations and neutron diffraction studies

    NASA Astrophysics Data System (ADS)

    Gao, Xiujie; Brown, Donald W.; Brinson, L. Catherine

    2005-05-01

    With the increased usage of shape memory alloys (SMA) for applications in various fields, it is important to understand how the material behavior is affected by factors such as texture, stress state and loading history, especially for complex multiaxial loading states. Using the in-situ neutron diffraction loading facility (SMARTS diffractometer) and the ex-situ inverse pole figure measurement facility (HIPPO diffractometer) at the Los Alamos Neutron Science Center (LANSCE), the macroscopic mechanical behavior and texture evolution of Nickel-Titanium (Nitinol) SMAs under sequential compression in alternating directions were studied. The simplified multivariant model developed at Northwestern University was then used to simulate the macroscopic behavior and the microstructural change of Nitinol under this sequential loading. Pole figures were obtained via post-processing of the multivariant results for volume fraction evolution and agreed quantitatively well with the experimental results. The experimental results can also be used to test or verify other SMA constitutive models.

  20. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been a significant and challenging task that strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints, as in deterministic optimization. The assessment of the multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by standard optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
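
    A minimal Python sketch of the final step, assuming the posterior approximation has already replaced the probabilistic constraint with an ordinary deterministic one (the objective and constraint below are hypothetical stand-ins). It is solved with SciPy's SLSQP routine, a standard sequential quadratic programming implementation.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical transformed RBDO problem: the posterior approximation has
    # replaced P[g(x, U) < 0] <= p_f with an ordinary deterministic constraint.
    def objective(x):
        return x[0] ** 2 + x[1] ** 2          # e.g. minimize structural weight

    def transformed_constraint(x):
        return x[0] + x[1] - 2.0              # must stay >= 0 after transformation

    res = minimize(objective, x0=np.array([5.0, 5.0]), method="SLSQP",
                   constraints=[{"type": "ineq", "fun": transformed_constraint}])
    print(res.x)                              # -> approximately [1, 1]
    ```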

  1. Dispersion Analysis Using Particle Tracking Simulations Through Heterogeneity Based on Outcrop Lidar Imagery

    NASA Astrophysics Data System (ADS)

    Klise, K. A.; Weissmann, G. S.; McKenna, S. A.; Tidwell, V. C.; Frechette, J. D.; Wawrzyniec, T. F.

    2007-12-01

    Solute plumes are believed to disperse in a non-Fickian manner due to small-scale heterogeneity and variable velocities that create preferential pathways. In order to accurately predict dispersion in naturally complex geologic media, the connection between heterogeneity and dispersion must be better understood. Since aquifer properties cannot be measured at every location, it is common to simulate small-scale heterogeneity with random field generators based on a two-point covariance (e.g., through use of sequential simulation algorithms). While these random fields can produce preferential flow pathways, it is unknown how well the results simulate solute dispersion through natural heterogeneous media. To evaluate the influence that complex heterogeneity has on dispersion, we utilize high-resolution terrestrial lidar to identify and model lithofacies from outcrop for application in particle tracking solute transport simulations using RWHet. The lidar scan data are used to produce a lab (meter) scale two-dimensional model that captures 2-8 mm scale natural heterogeneity. Numerical simulations utilize various methods to populate the outcrop structure captured by the lidar-based image with reasonable hydraulic conductivity values. The particle tracking simulations result in residence time distributions used to evaluate the nature of dispersion through complex media. Particle tracking simulations through conductivity fields produced from the lidar images are then compared to particle tracking simulations through hydraulic conductivity fields produced from sequential simulation algorithms. Based on this comparison, the study aims to quantify the difference in dispersion when using realistic and simplified representations of aquifer heterogeneity. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  2. Parallel discrete event simulation: A shared memory approach

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1987-01-01

    With traditional event list techniques, evaluating a detailed discrete event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared memory experiments is presented using the Chandy-Misra distributed simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  3. Multi-objective design optimization of antenna structures using sequential domain patching with automated patch size determination

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2018-02-01

    In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.

  4. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from the FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the difference between them. The main objective was to analyze the computational time required by both methods with respect to different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using a parallel Jacobi (PJ) method is assessed in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may require excessive processing time to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
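
    To illustrate the difference between the two sequential iterations, here is a small 2D analogue in Python (the study itself uses MATLAB and a 3D grid; the grid size, source term and tolerance below are illustrative). Jacobi updates every point from the previous iterate, which is exactly what makes it data-parallel, while Gauss-Seidel reuses freshly computed neighbours and typically converges in fewer sweeps.

    ```python
    import numpy as np

    def jacobi_sweep(u, f, h):
        # Every interior point is updated from the *previous* iterate, so all
        # updates are independent -- this is the data parallelism PJ exploits.
        new = u.copy()
        new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                  u[1:-1, :-2] + u[1:-1, 2:]
                                  - h * h * f[1:-1, 1:-1])
        return new

    def gauss_seidel_sweep(u, f, h):
        # Updates reuse freshly computed neighbours, which speeds convergence
        # but serializes the sweep.
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]
                                  - h * h * f[i, j])
        return u

    n, h = 33, 1.0 / 32
    f = np.ones((n, n))   # constant source term for laplacian(u) = f, zero walls
    for name, sweep in [("Jacobi", jacobi_sweep), ("Gauss-Seidel", gauss_seidel_sweep)]:
        u, it = np.zeros((n, n)), 0
        while True:
            it += 1
            new = sweep(u.copy(), f, h)
            if np.max(np.abs(new - u)) < 1e-6 or it > 20000:
                break
            u = new
        print(f"{name}: {it} sweeps")   # Gauss-Seidel needs roughly half as many
    ```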

  5. Exploiting neurovascular coupling: a Bayesian sequential Monte Carlo approach applied to simulated EEG fNIRS data

    NASA Astrophysics Data System (ADS)

    Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria

    2017-08-01

    Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process, and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS can provide measurements of two hemodynamic variables, namely oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies and, by taking advantage of a physical model of the coupling between the electrical and hemodynamic responses, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.

  6. Development of a software framework for data assimilation and its applications for streamflow forecasting in Japan

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Yorozu, K.; Kim, S.

    2012-04-01

    Data assimilation methods have received increased attention as means to accomplish uncertainty assessment and to enhance forecasting capability in various areas. Despite their potential, software frameworks applicable to probabilistic approaches and data assimilation are still limited, because most hydrologic modeling software is based on a deterministic approach. In this study, we developed a hydrological modeling framework for sequential data assimilation, called MPI-OHyMoS. MPI-OHyMoS allows users to develop their own element models and to easily build a total simulation system model for hydrological simulations. Unlike a process-based modeling framework, this software framework benefits from its object-oriented design to flexibly represent hydrological processes without any change to the main library. Sequential data assimilation based on particle filters is available for any hydrologic model built on MPI-OHyMoS, considering various sources of uncertainty originating from input forcing, parameters and observations. The particle filters implement a Bayesian learning process in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles without any assumptions about the nature of the distributions. In MPI-OHyMoS, ensemble simulations are parallelized, which can take advantage of a high-performance computing (HPC) system. We applied this software framework to short-term streamflow forecasting for several catchments in Japan using a distributed hydrologic model. Uncertainty in model parameters and in remotely sensed rainfall data, such as X-band or C-band radar, is estimated and mitigated in the sequential data assimilation.

  7. Simultaneous versus sequential penetrating keratoplasty and cataract surgery.

    PubMed

    Hayashi, Ken; Hayashi, Hideyuki

    2006-10-01

    To compare the surgical outcomes of simultaneous penetrating keratoplasty and cataract surgery with those of sequential surgery. Thirty-nine eyes of 39 patients scheduled for simultaneous keratoplasty and cataract surgery and 23 eyes of 23 patients scheduled for sequential keratoplasty and secondary phacoemulsification surgery were recruited. Refractive error, regular and irregular corneal astigmatism determined by Fourier analysis, and endothelial cell loss were studied at 1 week and 3, 6, and 12 months after combined surgery in the simultaneous surgery group or after subsequent phacoemulsification surgery in the sequential surgery group. At 3 and more months after surgery, mean refractive error was significantly greater in the simultaneous surgery group than in the sequential surgery group, although no difference was seen at 1 week. The refractive error at 12 months was within 2 D of that targeted in 15 eyes (39%) in the simultaneous surgery group and in 16 eyes (70%) in the sequential surgery group; the incidence was significantly greater in the sequential group (P = 0.0344). Regular and irregular astigmatism were not significantly different between the groups at 3 and more months after surgery. There was also no significant difference in the percentage of endothelial cell loss between the groups. Although corneal astigmatism and endothelial cell loss did not differ, refractive error from target refraction was greater after simultaneous keratoplasty and cataract surgery than after sequential surgery, indicating a better outcome after sequential surgery than after simultaneous surgery.

  8. On the Sequential Negotiation of Identity in Spanish-Language Discourse: Mobilizing Linguistic Resources in the Service of Social Action

    ERIC Educational Resources Information Center

    Raymond, Chase Wesley

    2014-01-01

    This dissertation takes an ethnomethodologically-grounded, conversation-analytic approach in investigating the sequential deployment of linguistic resources in Spanish-language talk-in-interaction. Three sets of resources are examined: 2nd-person singular reference forms (tú, vos, usted), indicative/subjunctive verbal mood selection, and…

  9. Multiplexed Holographic Optical Data Storage In Thick Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Timucin, Dogan A.; Gary, Charles K.; Ozcan, Meric; Smithey, Daniel T.; Crew, Marshall

    1998-01-01

    The optical data storage capacity of photochromic bacteriorhodopsin films is investigated by means of theoretical calculations, numerical simulations, and experimental measurements on sequential recording of angularly multiplexed diffraction gratings inside a thick D85N BR film.

  10. Statistical Emulator for Expensive Classification Simulators

    NASA Technical Reports Server (NTRS)

    Ross, Jerret; Samareh, Jamshid A.

    2016-01-01

    Expensive simulators prevent any kind of meaningful analysis from being performed on the phenomena they model. To get around this problem, the concept of using a statistical emulator as a surrogate representation of the simulator was introduced in the 1980s. Simulators have since become more and more complex and, as a result, running a single example on them is very expensive and can take days, weeks, or even months. Many new techniques, termed criteria, have been introduced that sequentially select the next best (most informative to the emulator) point to run on the simulator. These criteria methods allow for the creation of an emulator with only a small number of simulator runs. We follow and extend this framework to expensive classification simulators.
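
    A minimal Python sketch of one such criterion: run the simulator next wherever the emulator is most uncertain (maximum predictive standard deviation). The toy simulator and the scikit-learn Gaussian process emulator are illustrative assumptions, not the criterion developed in the paper.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expensive_simulator(x):                 # hypothetical stand-in for the costly code
        return np.sin(3 * x) + 0.5 * np.sign(x)

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(4, 1))         # small initial design
    y = expensive_simulator(X).ravel()
    candidates = np.linspace(-2, 2, 201).reshape(-1, 1)

    for _ in range(10):                         # sequential design loop
        gp = GaussianProcessRegressor().fit(X, y)   # refit the emulator
        _, std = gp.predict(candidates, return_std=True)
        x_next = candidates[np.argmax(std)]     # most informative point
        X = np.vstack([X, x_next])
        y = np.append(y, expensive_simulator(x_next))
    print(f"{len(X)} simulator runs used")
    ```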

  11. Space Station Human Factors: Designing a Human-Robot Interface

    NASA Technical Reports Server (NTRS)

    Rochlis, Jennifer L.; Clarke, John Paul; Goza, S. Michael

    2001-01-01

    The experiments described in this paper are part of a larger joint MIT/NASA research effort and focus on the development of a methodology for designing and evaluating integrated interfaces for highly dexterous and multifunctional telerobots. Specifically, a telerobotic workstation is being designed for an Extravehicular Activity (EVA) anthropomorphic space station telerobot called Robonaut. Previous researchers have designed telerobotic workstations based upon performance of discrete subsets of tasks (for example, peg-in-hole, tracking, etc.) without regard for the transitions that operators go through between tasks performed sequentially in the context of larger integrated tasks. The experiments presented here took an integrated approach to describing teleoperator performance and assessed how subjects operating a full-immersion telerobot perform during fine-position and gross-position tasks. In addition, a Robonaut simulation was developed as part of this research effort and experimentally tested against Robonaut itself to determine its utility. Results show that subject performance of teleoperated tasks using Robonaut and the simulation is virtually identical, with no significant difference between the two. These results indicate that the simulation can be utilized both as a Robonaut training tool and as a powerful design platform for telepresence displays and aids.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lin; Dai, Zhenxue; Gong, Huili

    Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the result of this study provides the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.

  13. Accelerating Sequential Gaussian Simulation with a constant path

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
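
    The saving is easiest to see in code. Below is a minimal 1D Python sketch, assuming a zero-mean, unit-variance field, a hypothetical exponential covariance, simple kriging, and an unlimited neighbourhood: with a constant path, every kriging system is solved once, before the loop over realizations.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 60
    cov = lambda h: np.exp(-np.abs(h) / 10.0)   # assumed exponential covariance
    path = rng.permutation(n)                   # ONE path shared by all realizations

    # Pre-compute simple-kriging weights and variances along the constant path.
    weights, variances = [], []
    for step, node in enumerate(path):
        prev = path[:step]
        if step == 0:
            weights.append(None)
            variances.append(1.0)
            continue
        K = cov(prev[:, None] - prev[None, :])  # data-to-data covariances
        k = cov(prev - node)                    # data-to-target covariances
        lam = np.linalg.solve(K, k)
        weights.append(lam)
        variances.append(1.0 - lam @ k)

    def realization():
        # Re-uses the pre-computed weights: no kriging system is solved here.
        z = np.empty(n)
        for step, node in enumerate(path):
            mu = 0.0 if step == 0 else weights[step] @ z[path[:step]]
            z[node] = mu + np.sqrt(max(variances[step], 0.0)) * rng.standard_normal()
        return z

    sims = np.array([realization() for _ in range(100)])
    print(sims.shape, round(float(sims.std()), 2))   # marginals close to N(0, 1)
    ```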

  14. Hemodynamic analysis of sequential graft from right coronary system to left coronary system.

    PubMed

    Wang, Wenxin; Mao, Boyan; Wang, Haoran; Geng, Xueying; Zhao, Xi; Zhang, Huixia; Xie, Jinsheng; Zhao, Zhou; Lian, Bo; Liu, Youjun

    2016-12-28

    Sequential and single grafting are two surgical procedures of coronary artery bypass grafting. However, it remains unclear whether a sequential graft can be used between the right and left coronary artery systems. The purpose of this paper is to clarify the possibility of anastomosing the right coronary artery system to the left coronary system. A patient-specific 3D model was first reconstructed based on coronary computed tomography angiography (CCTA) images. Two different grafts, the normal multi-graft (Model 1) and the novel multi-graft (Model 2), were then implemented on this patient-specific model using virtual surgery techniques. In Model 1, the single graft was anastomosed to the right coronary artery (RCA) and the sequential graft was adopted to anastomose the left anterior descending (LAD) and left circumflex artery (LCX). In Model 2, by contrast, the single graft was anastomosed to the LAD and the sequential graft was adopted to anastomose the RCA and LCX. A zero-dimensional/three-dimensional (0D/3D) coupling method was used to realize the multi-scale simulation of both the pre-operative and the two post-operative models. Flow rates in the coronary artery and grafts were obtained. Hemodynamic parameters were also examined, including wall shear stress (WSS) and the oscillatory shear index (OSI). The area of low WSS and OSI in Model 1 was much smaller than that in Model 2. Model 1 shows favorable hemodynamic modifications which may enhance the long-term patency of grafts. The anterior segments of a sequential graft have better long-term patency than the posterior segments. With a rational spatial position of the heart vessels, the last anastomosis of a sequential graft should be connected to the main branch.

  15. Sequential processing of GNSS-R delay-Doppler maps (DDM's) for ocean wind retrieval

    NASA Astrophysics Data System (ADS)

    Garrison, J. L.; Rodriguez-Alvarez, N.; Hoffman, R.; Annane, B.; Leidner, M.; Kaitie, S.

    2016-12-01

    The delay-Doppler map (DDM) is the fundamental data product of GNSS-Reflectometry (GNSS-R), generated by cross-correlating the scattered signal with a local signal model over a range of delays and Doppler frequencies. Delay and Doppler form a set of coordinates on the ocean surface, and the shape of the DDM is related to the distribution of ocean slopes. Wind speed can thus be estimated by fitting a scattering model to the shape of the observed DDM or by defining an observable (e.g., average power or leading-edge slope) which characterizes the change in DDM shape. For spaceborne measurements, the DDM is composed of signals scattered from a glistening zone, which can extend for 100 km or more. Setting a reasonable resolution requirement (25 km or less) will limit the usable portion of the DDM at each observation to only a small region near the specular point. Cyclone-GNSS (CYGNSS) is a NASA mission to study developing tropical cyclones using GNSS-R. CYGNSS science requirements call for wind retrieval with an accuracy of 10 percent above 20 m/s within a 25 km resolution. This requirement can be met using an observable defined for DDM samples between +/- 0.25 chips in delay and +/- 1 kHz in Doppler, with some filtering of the observations using a minimum threshold for range-corrected gain (RCG). An improved approach, reviewed in this presentation, sequentially processes multiple DDMs to combine observations generated from different "looks" at the same points on the surface. Applying this sequential process to synthetic data indicates a significant improvement in wind retrieval accuracy over a 10 km grid covering a region around the specular point. The attached figure illustrates this improvement, using simulated CYGNSS DDMs generated from the wind fields of hurricanes Earl and Danielle (left). The middle plots show wind retrievals using only an observable defined within the 25 km resolution cell. The plots on the right side show the retrievals from sequential processing of multiple DDMs. Recently, the assimilation of GNSS-R retrievals into weather forecast models has been studied. The authors have begun to investigate the direct assimilation of other data products, such as the DDM itself, or the results of sequential processing.

  16. PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.

    PubMed

    Xia, Jing; Wang, Michelle Yongmei

    Analyzing the blood oxygenation level dependent (BOLD) effect in functional magnetic resonance imaging (fMRI) typically relies on recent ground-breaking time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filtering based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning, to fully take advantage of the dynamic information in the BOLD signals. Third, during the learning of the unknown static parameters, we employ low-dimensional sufficient statistics for efficiency and to avoid potential degeneracy of the parameters. The performance of the proposed method is validated using both simulated data and real BOLD fMRI data.
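
    For readers unfamiliar with the machinery, here is a minimal bootstrap particle filter in Python for a hypothetical nonlinear state-space model (not the hemodynamic model itself); the sequential parameter-learning step via sufficient statistics is deliberately omitted to keep the sketch short.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative nonlinear state-space model:
    #   x_t = 0.9 x_{t-1} + sin(x_{t-1}) + process noise
    #   y_t = x_t^2 / 5 + measurement noise
    T, N = 100, 1000
    x_true = np.zeros(T)
    for t in range(1, T):
        x_true[t] = 0.9 * x_true[t-1] + np.sin(x_true[t-1]) + 0.3 * rng.standard_normal()
    y = x_true ** 2 / 5 + 0.2 * rng.standard_normal(T)

    particles = rng.standard_normal(N)
    estimates = np.zeros(T)
    for t in range(T):
        # Propagate through the transition density (bootstrap proposal).
        particles = 0.9 * particles + np.sin(particles) + 0.3 * rng.standard_normal(N)
        # Weight by the likelihood of the current observation.
        w = np.exp(-0.5 * ((y[t] - particles ** 2 / 5) / 0.2) ** 2) + 1e-300
        w /= w.sum()
        # The sign of x is unidentifiable from y = x^2/5, so track |x|.
        estimates[t] = w @ np.abs(particles)
        # Multinomial resampling to avoid weight degeneracy.
        particles = particles[rng.choice(N, size=N, p=w)]

    rmse = np.sqrt(np.mean((estimates - np.abs(x_true)) ** 2))
    print(f"RMSE of filtered |x|: {rmse:.3f}")
    ```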

  17. Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time

    NASA Astrophysics Data System (ADS)

    Urteaga, Iñigo; Bugallo, Mónica F.; Djurić, Petar M.

    2017-12-01

    We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We characterize statistically such time-series by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t+1 is used for implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with innovations correlated in time for different assumptions in knowledge of parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.

  18. Cardiac conduction velocity estimation from sequential mapping assuming known Gaussian distribution for activation time estimation error.

    PubMed

    Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian

    2016-08-01

    In this paper, we study the problem of cardiac conduction velocity (CCV) estimation for sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model as zero-mean white Gaussian noise values with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator for the case in which the synchronization times between the various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
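
    To give a feel for the underlying estimation problem, here is a minimal Python sketch of a weighted least-squares planar-wavefront fit, which coincides with the maximum-likelihood estimate under zero-mean Gaussian AT errors with known variances. It assumes a single recording with a common time reference; the paper's harder case of unknown synchronization times between sequential recordings is not attempted here.

    ```python
    import numpy as np

    def estimate_ccv(positions, times, variances):
        """Weighted least-squares fit of a planar wavefront t = t0 + s . p,
        where s is the slowness vector; speed is 1 / |s|. With known Gaussian
        AT error variances, this weighted fit is the ML estimate."""
        A = np.column_stack([np.ones(len(times)), positions])   # [t0, sx, sy]
        W = np.diag(1.0 / np.asarray(variances))
        theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ times)
        slowness = theta[1:]
        return 1.0 / np.linalg.norm(slowness)

    rng = np.random.default_rng(3)
    pos = rng.uniform(0, 20, size=(8, 2))            # electrode positions, mm
    true_slowness = np.array([1 / 0.8, 0.0])          # 0.8 mm/ms along x
    t = 5.0 + pos @ true_slowness + 0.1 * rng.standard_normal(8)
    print(f"estimated CCV: {estimate_ccv(pos, t, np.full(8, 0.01)):.2f} mm/ms")
    ```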

  19. Influence of Multidimensionality on Convergence of Sampling in Protein Simulation

    NASA Astrophysics Data System (ADS)

    Metsugi, Shoichi

    2005-06-01

    We study the problem of convergence of sampling in protein simulation, which originates in the multidimensionality of the protein's conformational space. Since several important physical quantities are given by second moments of dynamical variables, we attempt to obtain the simulation time necessary for their sufficient convergence. We perform a molecular dynamics simulation of a protein and the subsequent principal component (PC) analysis as a function of simulation time T. As T increases, PC vectors with smaller amplitudes of variation are identified and their amplitudes equilibrated, before vectors with larger amplitudes of variation are identified and equilibrated. This sequential identification and equilibration mechanism makes protein simulation a useful method despite its intrinsically multidimensional nature.

  20. A Molecular Dynamics-Quantum Mechanics Theoretical Study of DNA-Mediated Charge Transport in Hydrated Ionic Liquids.

    PubMed

    Meng, Zhenyu; Kubar, Tomas; Mu, Yuguang; Shao, Fangwei

    2018-05-08

    Charge transport (CT) through biomolecules is of high significance in the research fields of biology, nanotechnology, and molecular devices. Our previous work showed that the binding of ionic liquids (ILs) facilitates charge transport in duplex DNA, and in silico simulation is a useful means to understand the microscopic mechanism of this facilitation. Here, molecular dynamics (MD) simulations of duplex DNA in water and in hydrated ionic liquids were employed to explore the helical parameters. Principal component analysis was further applied to capture the subtle conformational changes of helical DNA under different environmental impacts. Subsequently, CT rates were calculated by a QM/MM simulation of the flickering resonance model based upon the MD trajectories. The MD simulations illustrated that the binding of ionic liquids can restrain the dynamic conformation and lower the on-site energy of the DNA bases. Confined movement among adjacent base pairs was highly related to the increase of electronic coupling among base pairs, which may lead DNA to a CT-facilitated state. By combining the MD and QM/MM analyses, the rational correlations among the binding modes, the conformational changes, and the CT rates illustrated the facilitation effects of hydrated ILs on DNA CT and supported a conformational-gating mechanism.

  1. Octree-based, GPU implementation of a continuous cellular automaton for the simulation of complex, evolving surfaces

    NASA Astrophysics Data System (ADS)

    Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.

    2011-03-01

    Presently, dynamic surface-based models are required to contain increasingly large numbers of points and to propagate them over longer time periods. For large numbers of surface points, the octree data structure can be used as a balance between low memory occupation and relatively rapid access to the stored data. For evolution rules that depend on neighborhood states, extended simulation periods can be obtained by using simplified atomistic propagation models, such as Cellular Automata (CA). This method, however, has an intrinsically parallel updating nature, and the corresponding simulations are highly inefficient when performed on classical Central Processing Units (CPUs), which are designed for the sequential execution of tasks. In this paper, a series of guidelines is presented for the efficient adaptation of octree-based CA simulations of complex, evolving surfaces to massively parallel computing hardware. A Graphics Processing Unit (GPU) is used as a cost-efficient example of such parallel architectures. For the actual simulations, we consider the surface propagation during anisotropic wet chemical etching of silicon as a computationally challenging process with wide-spread use in microengineering applications. A continuous CA model that is intrinsically parallel in nature is used for the time evolution. Our study strongly indicates that parallel computations of dynamically evolving surfaces simulated using CA methods benefit significantly from the incorporation of octrees as support data structures, substantially decreasing the overall computational time and memory usage.

  2. Accurate Monitoring and Fault Detection in Wind Measuring Devices through Wireless Sensor Networks

    PubMed Central

    Khan, Komal Saifullah; Tariq, Muhammad

    2014-01-01

    Many wind energy projects report poor performance, as low as 60% of the predicted performance. The reason for this is poor resource assessment and the use of new, untested technologies and systems in remote locations. Predictions about the potential of an area for wind energy projects (through simulated models) may vary from the actual potential of the area. Hence, introducing accurate site assessment techniques will lead to accurate predictions of energy production from a particular area. We solve this problem by installing a Wireless Sensor Network (WSN) to periodically analyze the data from anemometers installed in that area. After comparative analysis of the acquired data, the anemometers transmit their readings through the WSN to the sink node for analysis. The sink node uses an iterative algorithm that sequentially detects any faulty anemometer and passes the details of the fault to the central system or main station; a sketch of such a procedure follows below. We apply the proposed technique in simulation as well as in a practical implementation, and we study its accuracy by comparing the simulation results with the experimental results. Simulation results show that the algorithm indicates faulty anemometers with high accuracy and a low false alarm rate even when as many as 25% of the anemometers become faulty. Experimental analysis shows that anemometers incorporating this solution are better assessed, and the performance of implemented projects increases to above 86% of that of the simulated models. PMID:25421739
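
    A minimal Python sketch of an iterative fault-screening loop of this flavor: each pass compares every active anemometer against the robust consensus of the others and removes the worst outlier until none remains. The consensus statistic and threshold are illustrative assumptions, not the paper's exact algorithm.

    ```python
    import numpy as np

    def detect_faulty(readings, threshold=3.0):
        """Sequentially flag anemometers whose readings deviate from the
        robust consensus of the remaining sensors.

        readings: (n_sensors, n_samples) wind-speed time series.
        """
        active = list(range(readings.shape[0]))
        faulty = []
        while True:
            consensus = np.median(readings[active], axis=0)
            resid = np.abs(readings[active] - consensus).mean(axis=1)
            typical = np.median(resid) + 1e-9
            worst = int(np.argmax(resid))
            if resid[worst] / typical < threshold:
                return faulty                   # no remaining outlier
            faulty.append(active.pop(worst))    # remove it and re-test the rest

    rng = np.random.default_rng(7)
    wind = 8 + np.sin(np.linspace(0, 6, 200)) + 0.2 * rng.standard_normal((8, 200))
    wind[2] += 3.0                              # stuck-bias fault
    wind[5] *= 0.3                              # scaling fault
    print(detect_faulty(wind))                  # -> [2, 5]
    ```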

  3. Miniaturizing and automation of free acidity measurements for uranium (VI)-HNO3 solutions: Development of a new sequential injection analysis for a sustainable radio-analytical chemistry.

    PubMed

    Néri-Quiroz, José; Canto, Fabrice; Guillerme, Laurent; Couston, Laurent; Magnaldo, Alastair; Dugas, Vincent

    2016-10-01

    A miniaturized and automated approach for the determination of free acidity in solutions containing uranium (VI) is presented. The measurement technique is based on the concept of sequential injection analysis with on-line spectroscopic detection. The proposed methodology relies on the complexation and alkalimetric titration of nitric acid using a pH 5.6 sodium oxalate solution. The titration process is followed by UV/VIS detection at 650 nm thanks to the addition of Congo red as a universal pH indicator. The mixing sequence as well as the method validity were investigated by numerical simulation. This new analytical design allows fast (2.3 min), reliable and accurate free acidity determination of low-volume samples (10 µL) containing uranium/[H(+)] mole ratios of 1:3, with a relative standard deviation of <7.0% (n=11). The linearity range of the free nitric acid measurement is excellent up to 2.77 mol L(-1), with a correlation coefficient (R(2)) of 0.995. The method is specific: the presence of actinide ions up to 0.54 mol L(-1) does not interfere with the determination of free nitric acid. In addition to automation, the developed sequential injection analysis method greatly improves on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousandfold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. These analytical parameters are important, especially in nuclear-related applications, to improve laboratory safety, to reduce personnel exposure to radioactive samples, and to drastically reduce environmental impacts and analytical radioactive waste. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Robust sequential working memory recall in heterogeneous cognitive networks

    PubMed Central

    Rabinovich, Mikhail I.; Sokolov, Yury; Kozma, Robert

    2014-01-01

    Psychiatric disorders are often caused by partial heterogeneous disinhibition in cognitive networks controlling sequential and spatial working memory (SWM). Such dynamic connectivity changes suggest that the normal relationship between the neuronal components within the network deteriorates. As a result, the competitive network dynamics is qualitatively altered. This dynamics defines the robust recall of sequential information from memory and, thus, the SWM capacity. To understand pathological and non-pathological bifurcations of the sequential memory dynamics, we investigate here a model of recurrent inhibitory-excitatory networks with heterogeneous inhibition. We consider an ensemble of units with all-to-all inhibitory connections, in which the connection strengths are monotonically distributed over some interval. Based on computer experiments and a study of the Lyapunov exponents, we observed and analyzed a new phenomenon: clustered sequential dynamics. The results are interpreted in the context of the winnerless competition principle. Accordingly, clustered sequential dynamics is represented in the phase space of the model by two weakly interacting quasi-attractors. One of them is similar to the sequential heteroclinic chain, the regular image of SWM, while the other is a quasi-chaotic attractor. Coexistence of these quasi-attractors means that the recall of the normal information sequence is intermittently interrupted by episodes of chaotic dynamics. We indicate potential dynamic ways of augmenting damaged working memory and other cognitive functions. PMID:25452717
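
    The winnerless competition principle invoked here has a compact canonical form, the generalized Lotka-Volterra network with asymmetric all-to-all inhibition. The Python sketch below (with illustrative parameter values, not those of the paper) shows how such inhibition produces a heteroclinic chain: dominance switches between units in a repeatable sequential order.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 5
    sigma = np.ones(N)                      # excitation of each unit
    rho = np.full((N, N), 1.5)              # strong all-to-all inhibition
    np.fill_diagonal(rho, 1.0)              # self-inhibition
    for i in range(N):
        rho[(i + 1) % N, i] = 0.5           # the successor of the current winner
                                            # is only weakly inhibited by it

    a = np.full(N, 0.1) + 1e-3 * rng.random(N)
    dt, steps = 0.01, 40000
    winners = []
    for _ in range(steps):
        # Euler step of da_i/dt = a_i (sigma_i - sum_j rho_ij a_j),
        # plus tiny positive noise so passages between saddles stay finite.
        a = a + dt * a * (sigma - rho @ a) + 1e-9 * rng.random(N)
        winners.append(int(np.argmax(a)))

    # Compress the winner sequence: dominance switches in a repeatable order,
    # the heteroclinic chain that serves as the regular image of SWM.
    seq = [w for k, w in enumerate(winners) if k == 0 or w != winners[k - 1]]
    print(seq[:10])
    ```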

  5. The target-to-foils shift in simultaneous and sequential lineups.

    PubMed

    Clark, Steven E; Davey, Sherrie L

    2005-04-01

    A theoretical cornerstone in eyewitness identification research is the proposition that witnesses, in making decisions from standard simultaneous lineups, make relative judgments. The present research considers two sources of support for this proposal. An experiment by G. L. Wells (1993) showed that if the target is removed from a lineup, witnesses shift their responses to pick foils, rather than rejecting the lineups, a result we will term a target-to-foils shift. Additional empirical support is provided by results from sequential lineups which typically show higher accuracy than simultaneous lineups, presumably because of a decrease in the use of relative judgments in making identification decisions. The combination of these two lines of research suggests that the target-to-foils shift should be reduced in sequential lineups relative to simultaneous lineups. Results of two experiments showed an overall advantage for sequential lineups, but also showed a target-to-foils shift equal in size for simultaneous and sequential lineups. Additional analyses indicated that the target-to-foils shift in sequential lineups was moderated in part by an order effect and was produced with (Experiment 2) or without (Experiment 1) a shift in decision criterion. This complex pattern of results suggests that more work is needed to understand the processes which underlie decisions in simultaneous and sequential lineups.

  6. Sequential detection of web defects

    DOEpatents

    Eichel, Paul H.; Sleefe, Gerard E.; Stalker, K. Terry; Yee, Amy A.

    2001-01-01

    A system for detecting defects on a moving web having a sequential series of identical frames uses an imaging device to form a real-time camera image of a frame and a comparator to compare elements of the camera image with corresponding elements of an image of an exemplar frame. The comparator provides an acceptable indication if the pair of elements are determined to be statistically identical, and a defective indication if the pair of elements are determined to be statistically not identical. If the pair of elements is neither acceptable nor defective, the comparator recursively compares the element of said exemplar frame with corresponding elements of other frames on said web until one of the acceptable or defective indications occurs.

  7. AEROSOL TRANSPORT AND DEPOSITION IN SEQUENTIALLY BIFURCATING AIRWAYS

    EPA Science Inventory

    Deposition patterns and efficiencies of a dilute suspension of inhaled particles in three-dimensional double bifurcating airway models for both in-plane and 90 deg out-of-plane configurations have been numerically simulated assuming steady, laminar, constant-property air flow wit...

  8. Quantum mechanical/molecular mechanical free energy simulations of the self-cleavage reaction in the hepatitis delta virus ribozyme.

    PubMed

    Ganguly, Abir; Thaplyal, Pallavi; Rosta, Edina; Bevilacqua, Philip C; Hammes-Schiffer, Sharon

    2014-01-29

    The hepatitis delta virus (HDV) ribozyme catalyzes a self-cleavage reaction using a combination of nucleobase and metal ion catalysis. Both divalent and monovalent ions can catalyze this reaction, although the rate is slower with monovalent ions alone. Herein, we use quantum mechanical/molecular mechanical (QM/MM) free energy simulations to investigate the mechanism of this ribozyme and to elucidate the roles of the catalytic metal ion. With Mg(2+) at the catalytic site, the self-cleavage mechanism is observed to be concerted with a phosphorane-like transition state and a free energy barrier of ∼13 kcal/mol, consistent with free energy barrier values extrapolated from experimental studies. With Na(+) at the catalytic site, the mechanism is observed to be sequential, passing through a phosphorane intermediate, with free energy barriers of 2-4 kcal/mol for both steps; moreover, proton transfer from the exocyclic amine of protonated C75 to the nonbridging oxygen of the scissile phosphate occurs to stabilize the phosphorane intermediate in the sequential mechanism. To explain the slower rate observed experimentally with monovalent ions, we hypothesize that the activation of the O2' nucleophile by deprotonation and orientation is less favorable with Na(+) ions than with Mg(2+) ions. To explore this hypothesis, we experimentally measure the pKa of O2' by kinetic and NMR methods and find it to be lower in the presence of divalent ions rather than only monovalent ions. The combined theoretical and experimental results indicate that the catalytic Mg(2+) ion may play three key roles: assisting in the activation of the O2' nucleophile, acidifying the general acid C75, and stabilizing the nonbridging oxygen to prevent proton transfer to it.

  9. Standardization of Freeze Frame TV Codecs

    DTIC Science & Technology

    1990-06-01

    [Fragmentary table residue: the source compares freeze-frame TV codecs (Kodak SV9600 Still Video Transceiver; Colorado Video, Inc. 286 Digital Transceiver; Image Data Corp. CP-200 Photophone; Interand Corp. DISCON Imagephone) on attributes including error recovery (proprietary, by retransmission) and image build-up (sequential).] ...and information transfer is effected among terminals. An indication of the function and power of these commands can be obtained by reviewing Table

  10. The sequential structure of brain activation predicts skill.

    PubMed

    Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa

    2016-01-29

    In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum), their relative skill was better diagnosed by considering the sequential structure of whole-brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states that are involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features of information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Syndrome Surveillance Using Parametric Space-Time Clustering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    KOCH, MARK W.; MCKENNA, SEAN A.; BILISOLY, ROGER L.

    2002-11-01

    As demonstrated by the anthrax attack through the United States mail, people infected by the biological agent itself will give the first indication of a bioterror attack. Thus, a distributed information system that can rapidly and efficiently gather and analyze public health data would aid epidemiologists in detecting and characterizing emerging diseases, including bioterror attacks. We propose using clusters of adverse health events in space and time to detect possible bioterror attacks. Space-time clusters can indicate exposure to infectious diseases or localized exposure to toxins. Most space-time clustering approaches require individual patient data. To protect the patient's privacy, we have extended these approaches to aggregated data and have embedded this extension in a sequential probability ratio test (SPRT) framework. The real-time and sequential nature of health data makes the SPRT an ideal candidate. The result of space-time clustering gives the statistical significance of a cluster at every location in the surveillance area and can be thought of as a ''health-index'' of the people living in this area. As a surrogate for bioterrorism data, we have experimented with two flu data sets. For both databases, we show that space-time clustering can detect a flu epidemic up to 21 to 28 days earlier than a conventional periodic regression technique. We have also tested using simulated anthrax attack data on top of a respiratory illness diagnostic category. Results show we do very well at detecting an attack as early as the second or third day after infected people start becoming severely symptomatic.
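
    For readers unfamiliar with the SPRT framework the clustering statistic is embedded in, here is a minimal Python sketch of Wald's test on daily Bernoulli indicators, a deliberately simplified stand-in for the cluster-significance statistic; all rates and error levels are illustrative.

    ```python
    import numpy as np

    def sprt(observations, p0=0.05, p1=0.3, alpha=0.01, beta=0.1):
        """Wald's sequential probability ratio test on Bernoulli observations.
        H0: daily exceedance probability p0 (baseline); H1: p1 (outbreak).
        Returns the decision and the number of days needed to reach it."""
        upper = np.log((1 - beta) / alpha)    # accept H1 above this bound
        lower = np.log(beta / (1 - alpha))    # accept H0 below this bound
        llr = 0.0
        for day, x in enumerate(observations, start=1):
            llr += np.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
            if llr >= upper:
                return "signal outbreak", day
            if llr <= lower:
                return "no outbreak", day
        return "undecided", len(observations)

    rng = np.random.default_rng(11)
    baseline = rng.uniform(size=30) < 0.05    # days exceeding cluster threshold
    outbreak = rng.uniform(size=30) < 0.4
    print(sprt(baseline))                     # -> ('no outbreak', ...)
    print(sprt(outbreak))                     # -> ('signal outbreak', ...)
    ```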

  12. Hydrology, geomorphology, and flood profiles of Lemon Creek, Juneau, Alaska

    USGS Publications Warehouse

    Host, Randy H.; Neal, Edward G.

    2005-01-01

    Lemon Creek near Juneau, Alaska has a history of extensive gravel mining, which straightened and deepened the stream channel in the lower reaches of the study area. Gravel mining and channel excavation began in the 1940s and continued through the mid-1980s. Time sequential aerial photos and field investigations indicate that the channel morphology is reverting to pre-disturbance conditions through aggradation of sediment and re-establishment of braided channels, which may result in decreased channel conveyance and increased flooding potential. Time sequential surveys of selected channel cross sections were conducted in an attempt to determine rates of channel aggradation/degradation throughout three reaches of the study area. In order to assess flooding potential in the lower reaches of the study area the U.S. Army Corps of Engineers Hydrologic Engineering Center River Analysis System model was used to estimate the water-surface elevations for the 2-, 10-, 25-, 50-, and 100-year floods. A regionally based regression equation was used to estimate the magnitude of floods for the selected recurrence intervals. Forty-two cross sections were surveyed to define the hydraulic characteristics along a 1.7-mile reach of the stream. High-water marks from a peak flow of 1,820 cubic feet per second, or about a 5-year flood, were surveyed and used to calibrate the model throughout the study area. The stream channel at a bridge in the lower reach could not be simulated without violating assumptions of the model. A model without the lower bridge indicates flood potential is limited to a small area.

  13. Updating categorical soil maps using limited survey data by Bayesian Markov chain cosimulation.

    PubMed

    Li, Weidong; Zhang, Chuanrong; Dey, Dipak K; Willig, Michael R

    2013-01-01

    Updating categorical soil maps is necessary for providing current, higher-quality soil data to agricultural and environmental management but may not require a costly thorough field survey because latest legacy maps may only need limited corrections. This study suggests a Markov chain random field (MCRF) sequential cosimulation (Co-MCSS) method for updating categorical soil maps using limited survey data provided that qualified legacy maps are available. A case study using synthetic data demonstrates that Co-MCSS can appreciably improve simulation accuracy of soil types with both contributions from a legacy map and limited sample data. The method indicates the following characteristics: (1) if a soil type indicates no change in an update survey or it has been reclassified into another type that similarly evinces no change, it will be simply reproduced in the updated map; (2) if a soil type has changes in some places, it will be simulated with uncertainty quantified by occurrence probability maps; (3) if a soil type has no change in an area but evinces changes in other distant areas, it still can be captured in the area with unobvious uncertainty. We concluded that Co-MCSS might be a practical method for updating categorical soil maps with limited survey data.

  14. Updating Categorical Soil Maps Using Limited Survey Data by Bayesian Markov Chain Cosimulation

    PubMed Central

    Dey, Dipak K.; Willig, Michael R.

    2013-01-01

    Updating categorical soil maps is necessary for providing current, higher-quality soil data to agricultural and environmental management, but it may not require a costly, thorough field survey because the latest legacy maps may need only limited corrections. This study suggests a Markov chain random field (MCRF) sequential cosimulation (Co-MCSS) method for updating categorical soil maps using limited survey data, provided that qualified legacy maps are available. A case study using synthetic data demonstrates that Co-MCSS can appreciably improve simulation accuracy of soil types with contributions from both a legacy map and limited sample data. The method has the following characteristics: (1) if a soil type shows no change in an update survey, or it has been reclassified into another type that similarly shows no change, it is simply reproduced in the updated map; (2) if a soil type has changed in some places, it is simulated with uncertainty quantified by occurrence probability maps; (3) if a soil type has not changed in an area but has changed in other, distant areas, it can still be captured in that area with unobvious uncertainty. We concluded that Co-MCSS might be a practical method for updating categorical soil maps with limited survey data. PMID:24027447
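
    As a much-simplified illustration of sequential simulation of categorical classes from transition probabilities, the sketch below draws a 1D sequence node by node. The three soil classes and the transition matrix are hypothetical, and the conditioning on a legacy map and survey samples that defines Co-MCSS is omitted.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical one-step transition probabilities among three soil classes.
        P = np.array([[0.80, 0.15, 0.05],
                      [0.10, 0.80, 0.10],
                      [0.05, 0.15, 0.80]])

        def simulate_chain(n, start, P, rng):
            """Draw each node's class conditioned on the previously simulated node."""
            path = [start]
            for _ in range(n - 1):
                path.append(rng.choice(len(P), p=P[path[-1]]))
            return np.array(path)

        print(simulate_chain(30, start=0, P=P, rng=rng))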

  15. A computationally efficient Bayesian sequential simulation approach for the assimilation of vast and diverse hydrogeophysical datasets

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus

    2016-04-01

    Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which has notably shown significant potential for the assimilation of datasets that are diverse with regard to their spatial resolution and their interrelationships. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant-spiral and superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighboring points, using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. Finally, a multi-grid simulation path was created to enforce large-scale variance and to allow for adapting parameters, such as the log-linear weights or the type of simulation path, at the various scales. The newly implemented search method for kriging reduces the computational cost from an exponential dependence on the grid size in the original algorithm to a linear relationship, as each neighborhood search becomes independent of the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant-path techniques introduce a bias into the simulations was explored, and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weight at each scale considered in our multi-grid approach allows for reproducing the variogram, the histogram, and the spatial trend of the underlying data.
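
    The log-linear pooling step mentioned above can be sketched directly: each data component contributes a probability vector, and the weights control its influence. The two vectors and weights below are illustrative stand-ins for the kriging-derived and colocated-data components.

        import numpy as np

        def log_linear_pool(probs, weights):
            """Combine distributions p_i as prod_i p_i**w_i, then renormalize."""
            pooled = np.prod([p ** w for p, w in zip(probs, weights)], axis=0)
            return pooled / pooled.sum()

        p_kriging = np.array([0.6, 0.3, 0.1])   # from previously simulated points
        p_colocated = np.array([0.2, 0.5, 0.3])  # from a colocated secondary datum
        print(log_linear_pool([p_kriging, p_colocated], weights=[0.7, 0.3]))

    Setting a weight to zero drops that component entirely, while weights of one on each component recover an independence-style product rule.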

  16. specsim: A Fortran-77 program for conditional spectral simulation in 3D

    NASA Astrophysics Data System (ADS)

    Yao, Tingting

    1998-12-01

    A Fortran 77 program, specsim, is presented for conditional spectral simulation in 3D domains. The traditional Fourier integral method allows generating random fields with a given covariance spectrum. Conditioning to local data is achieved by an iterative identification of the conditional phase information. A flowchart of the program is given to illustrate the implementation procedures of the program. A 3D case study is presented to demonstrate application of the program. A comparison with the traditional sequential Gaussian simulation algorithm emphasizes the advantages and drawbacks of the proposed algorithm.
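
    A minimal NumPy sketch of the unconditional core of the Fourier integral method follows: white noise is filtered in the frequency domain by the square root of a target power spectrum (here an arbitrary Gaussian-shaped spectrum). The iterative identification of conditional phase information that specsim uses for conditioning to local data is not shown.

        import numpy as np

        rng = np.random.default_rng(0)

        def spectral_field_2d(n, corr_len, rng):
            """Unconditional Gaussian random field via FFT spectral filtering."""
            kx = np.fft.fftfreq(n)[:, None]
            ky = np.fft.fftfreq(n)[None, :]
            power = np.exp(-2.0 * (np.pi * corr_len) ** 2 * (kx**2 + ky**2))
            noise = np.fft.fft2(rng.standard_normal((n, n)))
            field = np.real(np.fft.ifft2(np.sqrt(power) * noise))
            return (field - field.mean()) / field.std()

        z = spectral_field_2d(128, corr_len=8.0, rng=rng)
        print(z.shape, round(float(z.std()), 3))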

  17. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as an Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code ran substantially faster than the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An OpenMP implementation on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
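
    For reference, the computational core of such 2D wave propagation codes is a stencil update over the grid, of the kind shown in the NumPy sketch below; the cardiac model in the study adds reaction terms, and the paper's point is to parallelize this loop nest with OpenACC, OpenCL, or OpenMP rather than to vectorize it as done here.

        import numpy as np

        def leapfrog_step(u_prev, u, c=0.5):
            """One time step of the 2D wave equation on interior points."""
            lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
                   - 4.0 * u[1:-1, 1:-1])
            u_next = u.copy()
            u_next[1:-1, 1:-1] = (2.0 * u[1:-1, 1:-1] - u_prev[1:-1, 1:-1]
                                  + c * c * lap)
            return u, u_next

        n = 256
        u_prev, u = np.zeros((n, n)), np.zeros((n, n))
        u[n // 2, n // 2] = 1.0          # point disturbance at the center
        for _ in range(100):
            u_prev, u = leapfrog_step(u_prev, u)
        print(round(float(np.abs(u).max()), 4))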

  18. Research on parallel algorithm for sequential pattern mining

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences, related to time or other orders, from a sequence database. Its initial motivation was to discover the regularities of customer purchasing over a time period by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction in data mining, and its application field is no longer confined to business databases; it has extended to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data in sequential pattern mining have the following characteristics: massive volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics holistically. According to the traits mentioned above, and drawing on parallel computing theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying the frequency concept and search-space partition theory, and the second task is to build frequent sequences using depth-first search at each processor. The algorithm needs to access the database only twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and differently designed information structures, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.
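
    The pattern-growth idea behind such algorithms can be sketched for the simplest case of single-item events. This serial, PrefixSpan-style sketch grows patterns depth-first over projected databases; SPP's partitioning of the search space across processors is omitted.

        from collections import defaultdict

        def frequent_items(db, min_sup):
            counts = defaultdict(int)
            for seq in db:
                for item in set(seq):
                    counts[item] += 1
            return {i for i, c in counts.items() if c >= min_sup}

        def project(db, item):
            """Keep the suffix of each sequence after its first `item`."""
            return [s[s.index(item) + 1:] for s in db if item in s]

        def prefix_span(db, min_sup, prefix=()):
            """Grow frequent sequential patterns depth-first."""
            patterns = []
            for item in sorted(frequent_items(db, min_sup)):
                pat = prefix + (item,)
                patterns.append(pat)
                patterns += prefix_span(project(db, item), min_sup, pat)
            return patterns

        db = [list("abcd"), list("acbd"), list("abd"), list("bcd")]
        print(prefix_span(db, min_sup=3))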

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Damiani, Rick R

    This poster summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between two modeling approaches (fully coupled and sequentially coupled) through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.

  20. Destruction of Spores on Building Decontamination Residue in a Commercial Autoclave

    PubMed Central

    Lemieux, P.; Sieber, R.; Osborne, A.; Woodard, A.

    2006-01-01

    The U.S. Environmental Protection Agency conducted an experiment to evaluate the effectiveness of a commercial autoclave for treating simulated building decontamination residue (BDR). The BDR was intended to simulate porous materials removed from a building deliberately contaminated with biological agents such as Bacillus anthracis (anthrax) in a terrorist attack. The purpose of the tests was to assess whether the standard operating procedure for a commercial autoclave provided sufficiently robust conditions to adequately destroy bacterial spores bound to the BDR. In this study we investigated the effects of several variables related to autoclaving BDR, including time, temperature, pressure, item type, moisture content, packing density, packing orientation, autoclave bag integrity, and autoclave process sequence. The test team created simulated BDR from wallboard, ceiling tiles, carpet, and upholstered furniture, and embedded in the BDR were Geobacillus stearothermophilus biological indicator (BI) strips containing 10⁶ spores and thermocouples to obtain time and temperature profile data associated with each BI strip. The results indicated that a single standard autoclave cycle did not effectively decontaminate the BDR. Autoclave cycles consisting of 120 min at 31.5 lb/in2 and 275°F and 75 min at 45 lb/in2 and 292°F effectively decontaminated the BDR material. Two sequential standard autoclave cycles consisting of 40 min at 31.5 lb/in2 and 275°F proved to be particularly effective, probably because the second cycle's evacuation step pulled the condensed water out of the pores of the materials, allowing better steam penetration. The results also indicated that the packing density and material type of the BDR in the autoclave could have a significant impact on the effectiveness of the decontamination process. PMID:17012597

  1. Destruction of spores on building decontamination residue in a commercial autoclave.

    PubMed

    Lemieux, P; Sieber, R; Osborne, A; Woodard, A

    2006-12-01

    The U.S. Environmental Protection Agency conducted an experiment to evaluate the effectiveness of a commercial autoclave for treating simulated building decontamination residue (BDR). The BDR was intended to simulate porous materials removed from a building deliberately contaminated with biological agents such as Bacillus anthracis (anthrax) in a terrorist attack. The purpose of the tests was to assess whether the standard operating procedure for a commercial autoclave provided sufficiently robust conditions to adequately destroy bacterial spores bound to the BDR. In this study we investigated the effects of several variables related to autoclaving BDR, including time, temperature, pressure, item type, moisture content, packing density, packing orientation, autoclave bag integrity, and autoclave process sequence. The test team created simulated BDR from wallboard, ceiling tiles, carpet, and upholstered furniture, and embedded in the BDR were Geobacillus stearothermophilus biological indicator (BI) strips containing 10(6) spores and thermocouples to obtain time and temperature profile data associated with each BI strip. The results indicated that a single standard autoclave cycle did not effectively decontaminate the BDR. Autoclave cycles consisting of 120 min at 31.5 lb/in2 and 275 degrees F and 75 min at 45 lb/in2 and 292 degrees F effectively decontaminated the BDR material. Two sequential standard autoclave cycles consisting of 40 min at 31.5 lb/in2 and 275 degrees F proved to be particularly effective, probably because the second cycle's evacuation step pulled the condensed water out of the pores of the materials, allowing better steam penetration. The results also indicated that the packing density and material type of the BDR in the autoclave could have a significant impact on the effectiveness of the decontamination process.

  2. Sequential biases on subjective judgments: Evidence from face attractiveness and ringtone agreeableness judgment.

    PubMed

    Huang, Jianrui; He, Xianyou; Ma, Xiaojin; Ren, Yian; Zhao, Tingting; Zeng, Xin; Li, Han; Chen, Yiheng

    2018-01-01

    When people make decisions about sequentially presented items in psychophysical experiments, their decisions are always biased by their preceding decisions and the preceding items, either by assimilation (shift towards the decision or item) or contrast (shift away from the decision or item). Such sequential biases also occur in naturalistic and real-world judgments such as facial attractiveness judgments. In this article, we aimed to cast light on the causes of these sequential biases. We first found significant assimilative and contrastive effects in a visual face attractiveness judgment task and an auditory ringtone agreeableness judgment task, indicating that sequential effects are not limited to the visual modality. We then found that the provision of trial-by-trial feedback of the preceding stimulus value eliminated the contrastive effect, but only weakened the assimilative effect. When participants orally reported their judgments rather than indicated them via a keyboard button press, we found a significant diminished assimilative effect, suggesting that motor response repetition strengthened the assimilation bias. Finally, we found that when visual and auditory stimuli were alternated, there was no longer a contrastive effect from the immediately previous trial, but there was an assimilative effect both from the previous trial (cross-modal) and the 2-back trial (same stimulus modality). These findings suggested that the contrastive effect results from perceptual processing, while the assimilative effect results from anchoring of the previous judgment and is strengthened by response repetition and numerical priming.

  3. High-Order Multioperator Compact Schemes for Numerical Simulation of Unsteady Subsonic Airfoil Flow

    NASA Astrophysics Data System (ADS)

    Savel'ev, A. D.

    2018-02-01

    On the basis of high-order schemes, the viscous gas flow over the NACA2212 airfoil is numerically simulated at a free-stream Mach number of 0.3 and Reynolds numbers ranging from 10³ to 10⁷. Flow regimes sequentially varying due to variations in the free-stream viscosity are considered. Vortex structures developing on the airfoil surface are investigated, and a physical interpretation of this phenomenon is given.

  4. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extrema of the current metamodel and the minima of a sampling-density function; more accurate metamodels are thereby constructed as the procedure iterates. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
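
    A toy version of this loop is sketched below, assuming a Gaussian RBF interpolant on a 1D test function; new points are taken at the metamodel's predicted minimizer and at the least-sampled location (a crude stand-in for the density criterion). All functions and constants are illustrative.

        import numpy as np

        def rbf_fit(X, y, eps=1.0):
            """Solve for Gaussian RBF weights that (nearly) interpolate samples."""
            r2 = (X[:, None] - X[None, :]) ** 2
            K = np.exp(-eps * r2) + 1e-8 * np.eye(len(X))  # jitter for stability
            return np.linalg.solve(K, y)

        def rbf_predict(xq, X, w, eps=1.0):
            return np.exp(-eps * (xq[:, None] - X[None, :]) ** 2) @ w

        f = lambda x: np.sin(3 * x) + x**2          # expensive-simulation stand-in
        X = np.linspace(0.0, 2.0, 5)
        grid = np.linspace(0.0, 2.0, 201)
        for _ in range(4):                           # sequential enrichment loop
            w = rbf_fit(X, f(X))
            pred = rbf_predict(grid, X, w)
            d_near = np.abs(grid[:, None] - X[None, :]).min(axis=1)
            for cand in (grid[np.argmin(pred)], grid[np.argmax(d_near)]):
                if np.abs(X - cand).min() > 1e-9:    # avoid duplicate samples
                    X = np.append(X, cand)
        print(np.sort(X).round(3))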

  5. Sequential capture of CO2 and SO2 in a pressurized TGA simulating FBC conditions.

    PubMed

    Sun, Ping; Grace, John R; Lim, C Jim; Anthony, Edward J

    2007-04-15

    Four FBC-based processes were investigated as possible means of sequentially capturing SO2 and CO2. Sorbent performance is the key to their technical feasibility. Two sorbents (a limestone and a dolomite) were tested in a pressurized thermogravimetric analyzer (PTGA). The sorbent behaviors were explained based on the complex interaction between carbonation, sulfation, and direct sulfation. The best option involved using limestone or dolomite as an SO2 sorbent in an FBC combustor following cyclic CO2 capture. Highly sintered limestone is a good sorbent for SO2 because of the generation of macropores during calcination/carbonation cycling.

  6. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    The error variance of the process, the prior multivariate normal distributions of the parameters of the models, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
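
    The selection criterion can be illustrated for two rival models with Gaussian predictive distributions: the closed-form KL divergence between normals identifies the design point where the models disagree most. The two linear models and the noise level below are hypothetical.

        import numpy as np

        def kl_normal(m1, s1, m0, s0):
            """KL( N(m1, s1^2) || N(m0, s0^2) ) in closed form."""
            return np.log(s0 / s1) + (s1**2 + (m1 - m0) ** 2) / (2 * s0**2) - 0.5

        model_a = lambda x: 1.0 + 2.0 * x        # rival model A (hypothetical)
        model_b = lambda x: 3.0 - 0.5 * x        # rival model B (hypothetical)
        sigma = 0.5                              # known observation error spread

        candidates = np.linspace(0.0, 2.0, 21)
        info = [kl_normal(model_a(x), sigma, model_b(x), sigma) for x in candidates]
        print(candidates[int(np.argmax(info))])  # most discriminating experiment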

  7. C-learning: A new classification framework to estimate optimal dynamic treatment regimes.

    PubMed

    Zhang, Baqun; Zhang, Min

    2017-12-11

    A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determine the next treatment based on each individual's own available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem, and we propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point, the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage until the first stage. C-learning is a direct optimization method that directly targets optimizing decision rules by exploiting powerful optimization/classification techniques; it allows incorporation of the patient's characteristics and treatment history to improve performance, hence enjoying the advantages of both the traditional outcome regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.

  8. Sequential evaporation of water molecules from protonated water clusters: measurement of the velocity distributions of the evaporated molecules and statistical analysis.

    PubMed

    Berthias, F; Feketeová, L; Abdoul-Carime, H; Calvo, F; Farizon, B; Farizon, M; Märk, T D

    2018-06-22

    Velocity distributions of neutral water molecules evaporated after collision-induced dissociation of protonated water clusters H+(H2O)n≤10 were measured using the combined correlated ion and neutral fragment time-of-flight (COINTOF) and velocity map imaging (VMI) techniques. As observed previously, all measured velocity distributions exhibit two contributions, with a low-velocity part identified by statistical molecular dynamics (SMD) simulations as events obeying the Maxwell-Boltzmann statistics and a high-velocity contribution corresponding to non-ergodic events in which energy redistribution is incomplete. In contrast to earlier studies, where the evaporation of a single molecule was probed, the present study is concerned with events involving the evaporation of up to five water molecules. In particular, we discuss here in detail the cases of two and three evaporated molecules. Evaporation of several water molecules after CID can be interpreted in general as a sequential evaporation process. In addition to the SMD calculations, a Monte Carlo (MC) based simulation was developed, allowing the reconstruction of the velocity distribution produced by the evaporation of m molecules from H+(H2O)n≤10 cluster ions using the measured velocity distributions for singly evaporated molecules as the input. The observed broadening of the low-velocity part of the distributions for the evaporation of two and three molecules as compared to the width for the evaporation of a single molecule results from the cumulative recoil velocity of the successive ion residues as well as the intrinsically broader distributions for decreasingly smaller parent clusters. Further MC simulations were carried out assuming that a certain proportion of non-ergodic events is responsible for the first evaporation in such a sequential evaporation series, thereby allowing the entire velocity distribution to be modeled.

  9. Comparison of Sequential and Variational Data Assimilation

    NASA Astrophysics Data System (ADS)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential for using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e., they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. In our view, this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to a lack of direct comparison between the two techniques. We contribute to filling this gap and present results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead-time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages/disadvantages in hydrological applications.
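
    For concreteness, a minimal stochastic ensemble Kalman filter analysis step is sketched below; the two-variable state, observation operator, and error variance are illustrative and not drawn from the study's HBV setup.

        import numpy as np

        rng = np.random.default_rng(1)

        def enkf_update(ens, y_obs, H, r_var, rng):
            """Stochastic EnKF analysis; ens has shape (n_state, n_members)."""
            n = ens.shape[1]
            A = ens - ens.mean(axis=1, keepdims=True)        # ensemble anomalies
            HA = H @ A
            P_yy = HA @ HA.T / (n - 1) + r_var * np.eye(H.shape[0])
            K = (A @ HA.T / (n - 1)) @ np.linalg.inv(P_yy)   # Kalman gain
            y_pert = y_obs[:, None] + rng.normal(0.0, np.sqrt(r_var),
                                                 (H.shape[0], n))
            return ens + K @ (y_pert - H @ ens)

        ens = rng.normal(0.0, 1.0, size=(2, 50))   # 50-member prior ensemble
        H = np.array([[1.0, 0.0]])                 # observe the first state only
        post = enkf_update(ens, np.array([1.5]), H, r_var=0.1, rng=rng)
        print(post.mean(axis=1).round(3))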

  10. Bayesian Treed Multivariate Gaussian Process with Adaptive Design: Application to a Carbon Capture Unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik

    2014-05-16

    Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. High-resolution simulations are often computationally expensive and become impractical for parametric studies at different input values. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and the prior distributions facilitates the different Markov chain Monte Carlo (MCMC) moves. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and BTMGP to model the multiphase flow in a full-scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.

  11. Differentiability of simulated MEG hippocampal, medial temporal and neocortical temporal epileptic spike activity.

    PubMed

    Stephen, Julia M; Ranken, Doug M; Aine, Cheryl J; Weisend, Michael P; Shih, Jerry J

    2005-12-01

    Previous studies have shown that magnetoencephalography (MEG) can measure hippocampal activity, despite the cylindrical shape and deep location in the brain. The current study extended this work by examining the ability to differentiate the hippocampal subfields, parahippocampal cortex, and neocortical temporal sources using simulated interictal epileptic activity. A model of the hippocampus was generated on the MRIs of five subjects. CA1, CA3, and dentate gyrus of the hippocampus were activated as well as entorhinal cortex, presubiculum, and neocortical temporal cortex. In addition, pairs of sources were activated sequentially to emulate various hypotheses of mesial temporal lobe seizure generation. The simulated MEG activity was added to real background brain activity from the five subjects and modeled using a multidipole spatiotemporal modeling technique. The waveforms and source locations/orientations for hippocampal and parahippocampal sources were differentiable from neocortical temporal sources. In addition, hippocampal and parahippocampal sources were differentiated to varying degrees depending on source. The sequential activation of hippocampal and parahippocampal sources was adequately modeled by a single source; however, these sources were not resolvable when they overlapped in time. These results suggest that MEG has the sensitivity to distinguish parahippocampal and hippocampal spike generators in mesial temporal lobe epilepsy.

  12. The Application of Neutron Transport Green's Functions to Threat Scenario Simulation

    NASA Astrophysics Data System (ADS)

    Thoreson, Gregory G.; Schneider, Erich A.; Armstrong, Hirotatsu; van der Hoeven, Christopher A.

    2015-02-01

    Radiation detectors provide deterrence and defense against nuclear smuggling attempts by scanning vehicles, ships, and pedestrians for radioactive material. Understanding detector performance is crucial to developing novel technologies, architectures, and alarm algorithms. Detection can be modeled through radiation transport simulations; however, modeling a spanning set of threat scenarios over the full transport phase-space is computationally challenging. Previous research has demonstrated Green's functions can simulate photon detector signals by decomposing the scenario space into independently simulated submodels. This paper presents decomposition methods for neutron and time-dependent transport. As a result, neutron detector signals produced from full forward transport simulations can be efficiently reconstructed by sequential application of submodel response functions.
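
    In its simplest time-dependent form, the reconstruction idea is a convolution of a source emission history with a precomputed detector response (Green's) function; both arrays below are hypothetical placeholders for transport-simulation output.

        import numpy as np

        source = np.zeros(100)
        source[10:15] = 1.0                        # brief emission burst
        response = np.exp(-np.arange(30) / 5.0)    # detector impulse response

        # Detector signal = source convolved with the response function.
        signal = np.convolve(source, response)[:100]
        print(int(np.argmax(signal)), round(float(signal.max()), 3))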

  13. Clinical reasoning in unimodal interventions in patients with non-specific neck pain in daily physiotherapy practice, a Delphi study.

    PubMed

    Maissan, Francois; Pool, Jan; Stutterheim, Eric; Wittink, Harriet; Ostelo, Raymond

    2018-06-02

    Neck pain is the fourth major cause of disability worldwide but sufficient evidence regarding treatment is not available. This study is a first exploratory attempt to gain insight into and consensus on the clinical reasoning of experts in patients with non-specific neck pain. First, we aimed to inventory expert opinions regarding the indication for physiotherapy when, other than neck pain, no positive signs and symptoms and no positive diagnostic tests are present. Secondly, we aimed to determine which measurement instruments are being used and when they are used to support and objectify the clinical reasoning process. Finally, we wanted to establish consensus among experts regarding the use of unimodal interventions in patients with non-specific neck pain, i.e. their sequential linear clinical reasoning. A Delphi study. A Web-based Delphi study was conducted. Fifteen experts (teachers and researchers) participated. Pain alone was deemed not be an indication for physiotherapy treatment. PROMs are mainly used for evaluative purposes and physical tests for diagnostic and evaluative purposes. Eighteen different variants of sequential linear clinical reasoning were investigated within our Delphi study. Only 6 out of 18 variants of sequential linear clinical reasoning reached more than 50% consensus. Pain alone is not an indication for physiotherapy. Insight has been obtained into which measurement instruments are used and when they are used. Consensus about sequential linear lines of clinical reasoning was poor. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Sequentially Simulated Outcomes: Kind Experience versus Nontransparent Description

    ERIC Educational Resources Information Center

    Hogarth, Robin M.; Soyer, Emre

    2011-01-01

    Recently, researchers have investigated differences in decision making based on description and experience. We address the issue of when experience-based judgments of probability are more accurate than are those based on description. If description is well understood ("transparent") and experience is misleading ("wicked"), it…

  15. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis.

    PubMed

    Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L

    2011-12-01

    To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). The discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted on an observational AS cohort. Published clinical data were used to estimate drug efficacy and times to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) the same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). In all, 13,000 patients were followed up individually until death. For probabilistic sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months over the simulation length of 70 years. The incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time for patients with AS. Results obtained from the simulation are plausible.
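
    The core of such a framework is an event queue processed in time order. The sketch below steps one simulated patient through a hypothetical drug sequence, with fixed durations standing in for the model's sampled times to discontinuation.

        import heapq

        def simulate_patient(horizon_months, drugs):
            """Minimal discrete event simulation of sequential drug switching."""
            events, history = [], []
            heapq.heappush(events, (0.0, 0))       # (time, index of next drug)
            while events:
                t, idx = heapq.heappop(events)
                if t > horizon_months or idx >= len(drugs):
                    break
                name, duration = drugs[idx]
                history.append((t, name))
                # Discontinuation event triggers the next drug in the sequence.
                heapq.heappush(events, (t + duration, idx + 1))
            return history

        # Hypothetical strategy: NSAIDs first, then a TNF-alpha inhibitor.
        drugs = [("nsaid_1", 6.0), ("nsaid_2", 9.0), ("tnf_inhibitor", 48.0)]
        print(simulate_patient(120.0, drugs))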

  16. Footprints of electron correlation in strong-field double ionization of Kr close to the sequential-ionization regime

    NASA Astrophysics Data System (ADS)

    Li, Xiaokai; Wang, Chuncheng; Yuan, Zongqiang; Ye, Difa; Ma, Pan; Hu, Wenhui; Luo, Sizuo; Fu, Libin; Ding, Dajun

    2017-09-01

    By combining kinematically complete measurements and a semiclassical Monte Carlo simulation we study the correlated-electron dynamics in the strong-field double ionization of Kr. Interestingly, we find that, as we step into the sequential-ionization regime, there are still signatures of correlation in the two-electron joint momentum spectrum and, more intriguingly, the scaling law of the high-energy tail is completely different from early predictions on the low-Z atom (He). These experimental observations are well reproduced by our generalized semiclassical model adapting a Green-Sellin-Zachor potential. It is revealed that the competition between the screening effect of inner-shell electrons and the Coulomb focusing of nuclei leads to a non-inverse-square central force, which twists the returned electron trajectory at the vicinity of the parent core and thus significantly increases the probability of hard recollisions between two electrons. Our results might have promising applications ranging from accurately retrieving atomic structures to simulating celestial phenomena in the laboratory.

  17. Streaming current for particle-covered surfaces: simulations and experiments

    NASA Astrophysics Data System (ADS)

    Blawzdziewicz, Jerzy; Adamczyk, Zbigniew; Ekiel-Jezewska, Maria L.

    2017-11-01

    Developing in situ methods for assessment of surface coverage by adsorbed nanoparticles is crucial for numerous technological processes, including controlling protein deposition and fabricating diverse microstructured materials (e.g., antibacterial coatings, catalytic surfaces, and particle-based optical systems). For charged surfaces and particles, promising techniques for evaluating surface coverage are based on measurements of the electrokinetic streaming current associated with ion convection in the double-layer region. We have investigated the dependence of the streaming current on the area fraction of adsorbed particles for equilibrium and random-sequential-adsorption (RSA) distributions of spherical particles, and for periodic square and hexagonal sphere arrays. The RSA results have been verified experimentally. Our numerical results indicate that the streaming current weakly depends on the microstructure of the particle monolayer. Combining simulations with the virial expansion, we provide convenient fitting formulas for the particle and surface contributions to the streaming current as functions of area fractions. For particles that have the same ζ-potential as the surface, we find that surface roughness reduces the streaming current. Supported by NSF Award No. 1603627.

  18. Parallel algorithms for islanded microgrid with photovoltaic and energy storage systems planning optimization problem: Material selection and quantity demand optimization

    NASA Astrophysics Data System (ADS)

    Cao, Yang; Liu, Chun; Huang, Yuehui; Wang, Tieqiang; Sun, Chenjun; Yuan, Yue; Zhang, Xinsong; Wu, Shuyun

    2017-02-01

    With the development of roof photovoltaic power (PV) generation technology and the increasingly urgent need to improve supply reliability levels in remote areas, the islanded microgrid with photovoltaic and energy storage systems (IMPE) is developing rapidly. The high costs of photovoltaic panel material and energy storage battery material have become the primary factors that hinder the development of IMPE. The advantages and disadvantages of different types of photovoltaic panel materials and energy storage battery materials are analyzed in this paper, and guidance is provided on material selection for IMPE planners. The time-sequential simulation method is applied to optimize the material demands of the IMPE. The model is solved by parallel algorithms that are provided by a commercial solver named CPLEX. Finally, to verify the model, an actual IMPE is selected as a case system. Simulation results on the case system indicate that the optimization model and corresponding algorithm are feasible. Guidance for material selection and quantity demand for IMPEs in remote areas is provided by this method.

  19. Navigation strategy and filter design for solar electric missions

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Hagar, H., Jr.

    1972-01-01

    Methods which have been proposed to improve the navigation accuracy for low-thrust space vehicles include modifications to the standard Sequential- and Batch-type orbit determination procedures and the use of inertial measuring units (IMUs), which directly measure the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications to the orbit determination procedures is compared with that of a combined IMU-Standard Orbit Determination algorithm. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithm will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro platform alignment of .01 deg and accelerometer signal-to-noise ratio of .07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.

  20. Substructure hybrid testing of reinforced concrete shear wall structure using a domain overlapping technique

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Pan, Peng; Gong, Runhua; Wang, Tao; Xue, Weichen

    2017-10-01

    An online hybrid test was carried out on a 40-story 120-m high concrete shear wall structure. The structure was divided into two substructures whereby a physical model of the bottom three stories was tested in the laboratory and the upper 37 stories were simulated numerically using ABAQUS. An overlapping domain method was employed for the bottom three stories to ensure the validity of the boundary conditions of the superstructure. Mixed control was adopted in the test. Displacement control was used to apply the horizontal displacement, while two controlled force actuators were applied to simulate the overturning moment, which is very large and cannot be ignored in the substructure hybrid test of high-rise buildings. A series of tests with earthquake sources of sequentially increasing intensities were carried out. The test results indicate that the proposed hybrid test method is a solution to reproduce the seismic response of high-rise concrete shear wall buildings. The seismic performance of the tested precast high-rise building satisfies the requirements of the Chinese seismic design code.

  1. Assimilating Remote Sensing Observations of Leaf Area Index and Soil Moisture for Wheat Yield Estimates: An Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Nearing, Grey S.; Crow, Wade T.; Thorp, Kelly R.; Moran, Mary S.; Reichle, Rolf H.; Gupta, Hoshin V.

    2012-01-01

    Observing system simulation experiments were used to investigate ensemble Bayesian state updating data assimilation of observations of leaf area index (LAI) and soil moisture (theta) for the purpose of improving single-season wheat yield estimates with the Decision Support System for Agrotechnology Transfer (DSSAT) CropSim-Ceres model. Assimilation was conducted in an energy-limited environment and a water-limited environment. Modeling uncertainty was prescribed to weather inputs, soil parameters and initial conditions, and cultivar parameters and through perturbations to model state transition equations. The ensemble Kalman filter and the sequential importance resampling filter were tested for the ability to attenuate effects of these types of uncertainty on yield estimates. LAI and theta observations were synthesized according to characteristics of existing remote sensing data, and effects of observation error were tested. Results indicate that the potential for assimilation to improve end-of-season yield estimates is low. Limitations are due to a lack of root zone soil moisture information, error in LAI observations, and a lack of correlation between leaf and grain growth.

  2. Bacteria and chocolate: a successful combination for probiotic delivery.

    PubMed

    Possemiers, S; Marzorati, M; Verstraete, W; Van de Wiele, T

    2010-06-30

    In this work, chocolate has been evaluated as a potential protective carrier for oral delivery of a microencapsulated mixture of Lactobacillus helveticus CNCM I-1722 and Bifidobacterium longum CNCM I-3470. A sequential in vitro setup was used to evaluate the protection of the probiotics during passage through the stomach and small intestine, when embedded in dark and milk chocolate or liquid milk. Both chocolates offered superior protection (91% and 80% survival in milk chocolate for L. helveticus and B. longum, respectively, compared to 20% and 31% found in milk). To simulate long-term administration, the Simulator of the Human Intestinal Microbial Ecosystem (SHIME) was used. Plate counts, Denaturing Gradient Gel Electrophoresis and quantitative PCR showed that the two probiotics successfully reached the simulated colon compartments. This led to an increase in lactobacilli and bifidobacteria counts and the appearance of additional species in the fingerprints. These data indicate that the coating of the probiotics in chocolate is an excellent solution to protect them from environmental stress conditions and for optimal delivery. The simulation with our gastrointestinal model showed that the formulation of a probiotic strain in a specific food matrix could offer superior protection for the delivery of the bacterium into the colon. The chocolate example could act as a trigger for new research to identify new balanced matrices. Copyright © 2010 Elsevier B.V. All rights reserved.

  3. Response of a tethered aerostat to simulated turbulence

    NASA Astrophysics Data System (ADS)

    Stanney, Keith A.; Rahn, Christopher D.

    2006-09-01

    Aerostats are lighter-than-air vehicles tethered to the ground by a cable and used for broadcasting, communications, surveillance, and drug interdiction. The dynamic response of tethered aerostats subject to extreme atmospheric turbulence often dictates survivability. This paper develops a theoretical model that predicts the planar response of a tethered aerostat subject to atmospheric turbulence and simulates the response to 1000 simulated hurricane-scale turbulent time histories. The aerostat dynamic model assumes the aerostat hull to be a rigid body with non-linear fluid loading, instantaneous weathervaning for planar response, and a continuous tether. Galerkin's method discretizes the coupled aerostat and tether partial differential equations to produce a non-linear initial value problem that is integrated numerically given initial conditions and wind inputs. The proper orthogonal decomposition theorem generates, based on Hurricane Georges wind data, turbulent time histories that possess the sequential behavior of actual turbulence, are spectrally accurate, and have non-Gaussian density functions. The generated turbulent time histories are simulated to predict the aerostat response to severe turbulence. The resulting probability distributions for the aerostat position, pitch angle, and confluence point tension predict the aerostat behavior in high-gust environments. The dynamic results can be up to twice as large as those of a static analysis, indicating the importance of dynamics in aerostat modeling. The results uncover a worst-case wind input consisting of a two-pulse vertical gust.
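
    The generation step can be sketched via the singular value decomposition of an ensemble of records: empirical modes are extracted and recombined with randomized coefficients. The synthetic ensemble below is a stand-in for the hurricane data, and Gaussian coefficients are used for brevity, whereas the study's method also preserves non-Gaussian densities and sequential behavior.

        import numpy as np

        rng = np.random.default_rng(7)

        # Stand-in ensemble: 200 noisy records of 256 time samples each.
        t = np.linspace(0.0, 10.0, 256)
        data = np.array([np.sin(2 * np.pi * (1.0 + rng.random()) * t)
                         + 0.3 * rng.standard_normal(t.size) for _ in range(200)])

        mean = data.mean(axis=0)
        U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)  # POD modes

        k = 20                                    # retained modes
        coeffs = rng.standard_normal(k) * s[:k] / np.sqrt(len(data))
        synthetic = mean + coeffs @ Vt[:k]        # one new time history
        print(synthetic.shape)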

  4. Preliminary results of sequential monitoring of simulated clandestine graves in Colombia, South America, using ground penetrating radar and botany.

    PubMed

    Molina, Carlos Martin; Pringle, Jamie K; Saumett, Miguel; Hernández, Orlando

    2015-03-01

    In most Latin American countries there are significant numbers of missing people and forced disappearances; Colombia alone currently has 68,000. Successful detection of shallowly buried human remains by forensic search teams is difficult in varying terrains and climates. This research created three simulated clandestine burial styles, at two depths commonly encountered in Latin America, to gain knowledge of optimal forensic geophysical detection techniques. Repeated monitoring of the graves post-burial was undertaken by ground penetrating radar. Radar 2D profile results show reasonable detection of half-clothed pig cadavers up to 19 weeks after burial, with decreasing confidence after this time. Simulated burials using skeletonized human remains could not be imaged after 19 weeks of burial, and beheaded and burnt human remains could not be detected throughout the survey period. Horizontal radar time slices showed good early results up to 19 weeks after burial, as more area was covered and bi-directional surveys were collected, but these decreased in amplitude over time. Deeper burials were all harder to image than shallower ones. Analysis of excavated soil found soil moisture content almost double that reported in temperate-climate studies. Vegetation variations over the simulated graves were also noted, which would provide promising indicators for grave detection. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. Monitoring and identification of spatiotemporal landscape changes in multiple remote sensing images by using a stratified conditional Latin hypercube sampling approach and geostatistical simulation.

    PubMed

    Lin, Yu-Pin; Chu, Hone-Jay; Huang, Yu-Long; Tang, Chia-Hsi; Rouhani, Shahrokh

    2011-06-01

    This study develops a stratified conditional Latin hypercube sampling (scLHS) approach for multiple remotely sensed normalized difference vegetation index (NDVI) images. The objective is to sample, monitor, and delineate spatiotemporal landscape changes, including spatial heterogeneity and variability, in a given area. The scLHS approach, which is based on the variance quadtree technique (VQT) and the conditional Latin hypercube sampling (cLHS) method, selects samples in order to delineate landscape changes from multiple NDVI images. The images are then mapped for calibration and validation by using sequential Gaussian simulation (SGS) with the scLHS-selected samples. Spatial statistical results indicate that, in terms of their statistical distribution, spatial distribution, and spatial variation, the statistics and variograms of the scLHS samples resemble those of the multiple NDVI images more closely than those of cLHS and VQT samples. Moreover, the accuracy of simulated NDVI images based on SGS with scLHS samples is significantly better than that of simulated NDVI images based on SGS with cLHS samples and VQT samples, respectively. Overall, the proposed approach efficiently monitors the spatial characteristics of landscape changes, including the statistics, spatial variability, and heterogeneity of NDVI images. In addition, SGS with the scLHS samples effectively reproduces spatial patterns and landscape changes in multiple NDVI images.
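
    Plain Latin hypercube sampling, the building block of the scLHS approach, can be sketched in a few lines; the variance-quadtree stratification and the conditioning on covariate distributions that distinguish scLHS are omitted.

        import numpy as np

        rng = np.random.default_rng(3)

        def latin_hypercube(n, d, rng):
            """n points in [0, 1]^d, one point per stratum in every dimension."""
            strata = np.tile(np.arange(n), (d, 1))
            strata = rng.permuted(strata, axis=1).T        # shuffle per dimension
            return (strata + rng.random((n, d))) / n

        print(latin_hypercube(5, 2, rng).round(2))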

  6. Mechanism of pKID/KIX Association Studied by Molecular Dynamics Free Energy Simulations.

    PubMed

    Bomblies, Rainer; Luitz, Manuel P; Zacharias, Martin

    2016-08-25

    The phosphorylated kinase-inducible domain (pKID) associates with the kinase-interacting domain (KIX) via a coupled folding and binding mechanism. The pKID domain is intrinsically disordered when unbound and, upon phosphorylation at Ser133, binds to the KIX domain, adopting a well-defined kinked two-helix structure. In order to identify putative hot-spot residues of binding that could serve as an initial stable anchor, we performed in silico alanine scanning free energy simulations. The simulations indicate that charged residues, including the phosphorylated central Ser133 of pKID, make significant contributions to binding. However, these contributions are of slightly smaller magnitude than those of several hydrophobic side chains, so no single dominant binding hot spot is defined. Both continuous molecular dynamics (MD) simulations and free energy analysis demonstrate that phosphorylation significantly stabilizes the central kinked motif around Ser133 of pKID and shifts the conformational equilibrium toward the bound conformation already in the absence of KIX. This result supports a view that pKID/KIX association follows in part a conformational selection process. During a 1.5 μs explicit-solvent MD simulation, folding of pKID on the surface of KIX was observed after an initial contact at the bound position of the phosphorylation site was enforced, following a sequential process of αA-helix association and a stepwise association and folding of the second helix, αB, compatible with available experimental results.

  7. Configural and component processing in simultaneous and sequential lineup procedures.

    PubMed

    Flowe, Heather D; Smith, Harriet M J; Karoğlu, Nilda; Onwuegbusi, Tochukwu O; Rai, Lovedeep

    2016-01-01

    Configural processing supports accurate face recognition, yet it has never been examined within the context of criminal identification lineups. We tested, using the inversion paradigm, the role of configural processing in lineups. Recent research has found that face discrimination accuracy in lineups is better in a simultaneous compared to a sequential lineup procedure. Therefore, we compared configural processing in simultaneous and sequential lineups to examine whether there are differences. We had participants view a crime video, and then they attempted to identify the perpetrator from a simultaneous or sequential lineup. The test faces were presented either upright or inverted, as previous research has shown that inverting test faces disrupts configural processing. The size of the inversion effect for faces was the same across lineup procedures, indicating that configural processing underlies face recognition in both procedures. Discrimination accuracy was comparable across lineup procedures in both the upright and inversion condition. Theoretical implications of the results are discussed.

  8. Rise and fall of political complexity in island South-East Asia and the Pacific.

    PubMed

    Currie, Thomas E; Greenhill, Simon J; Gray, Russell D; Hasegawa, Toshikazu; Mace, Ruth

    2010-10-14

    There is disagreement about whether human political evolution has proceeded through a sequence of incremental increases in complexity, or whether larger, non-sequential increases have occurred. The extent to which societies have decreased in complexity is also unclear. These debates have continued largely in the absence of rigorous, quantitative tests. We evaluated six competing models of political evolution in Austronesian-speaking societies using phylogenetic methods. Here we show that in the best-fitting model political complexity rises and falls in a sequence of small steps. This is closely followed by another model in which increases are sequential but decreases can be either sequential or in bigger drops. The results indicate that large, non-sequential jumps in political complexity have not occurred during the evolutionary history of these societies. This suggests that, despite the numerous contingent pathways of human history, there are regularities in cultural evolution that can be detected using computational phylogenetic methods.

  9. Influence of Sequential vs. Simultaneous Dual-Task Exercise Training on Cognitive Function in Older Adults.

    PubMed

    Tait, Jamie L; Duckham, Rachel L; Milte, Catherine M; Main, Luana C; Daly, Robin M

    2017-01-01

    Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people.

  10. Operative air temperature data for different measures applied on a building envelope in warm climate.

    PubMed

    Baglivo, Cristina; Congedo, Paolo Maria

    2018-04-01

    Several technical combinations have been evaluated in order to design high-energy-performance buildings for warm climates. The analysis has been developed in several steps, avoiding the use of HVAC systems. The methodological approach of this study is based on a sequential search technique and is described in the paper entitled "Envelope Design Optimization by Thermal Modeling of a Building in a Warm Climate" [1]. The Operative Air Temperature trends (TOP) for each combination have been plotted through a dynamic simulation performed using the software TRNSYS 17 (a transient system simulation program, University of Wisconsin, Solar Energy Laboratory, USA, 2010). Starting from the simplest building configuration, consisting of 9 rooms (equal-sized modules of 5 × 5 m²), the different building components are sequentially evaluated until the envelope design is optimized. The aim of this study is to perform a step-by-step simulation, simplifying the model as much as possible without introducing additional variables that can modify its performance. Walls, slab-on-ground floor, roof, shading, and windows are among the simulated building components. The results are shown for each combination and evaluated for Brindisi, a city in southern Italy with 1083 degree-days, belonging to national climatic zone C. The data show the trends of the TOP for each measure applied in the case study, for a total of 17 combinations divided into eight steps.

  11. Characteristics of sequential targeting of brain glioma for transferrin-modified cisplatin liposome.

    PubMed

    Lv, Qing; Li, Li-Min; Han, Min; Tang, Xin-Jiang; Yao, Jin-Na; Ying, Xiao-Ying; Li, Fan-Zhu; Gao, Jian-Qing

    2013-02-28

    Methods to improve the sequential targeting of glioma after a drug passes through the blood-brain barrier (BBB) have been only occasionally reported, and the characteristics involved are poorly understood. In the present study, cisplatin (Cis) liposome (lipo) was modified with transferrin (Tf) to investigate the characteristics of potential sequential targeting to glioma. In bEnd3/C6 co-culture BBB models, Cis-lipo(Tf) induced higher transport efficiency across the BBB and greater cytotoxicity in basal C6 cells than Cis-lipo and Cis-solution, suggesting a sequential targeting effect. Interestingly, liposomal morphology similar to that of the donor compartment was demonstrated for the first time in the receptor solution of the BBB models. Meanwhile, greater accumulation of Cis-lipo(Tf) in the lysosomes of bEnd3 cells, followed by sequential distribution into the nuclei of C6 cells, was observed. Pre-incubation with chlorpromazine and Tf inhibited this process, indicating that clathrin-dependent endocytosis is involved in the transport of Cis-lipo(Tf) across the BBB. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Estimating the reliability of eyewitness identifications from police lineups

    PubMed Central

    Wixted, John T.; Mickes, Laura; Dunn, John C.; Clark, Steven E.; Wells, William

    2016-01-01

    Laboratory-based mock crime studies have often been interpreted to mean that (i) eyewitness confidence in an identification made from a lineup is a weak indicator of accuracy and (ii) sequential lineups are diagnostically superior to traditional simultaneous lineups. Largely as a result, juries are increasingly encouraged to disregard eyewitness confidence, and up to 30% of law enforcement agencies in the United States have adopted the sequential procedure. We conducted a field study of actual eyewitnesses who were assigned to simultaneous or sequential photo lineups in the Houston Police Department over a 1-y period. Identifications were made using a three-point confidence scale, and a signal detection model was used to analyze and interpret the results. Our findings suggest that (i) confidence in an eyewitness identification from a fair lineup is a highly reliable indicator of accuracy and (ii) if there is any difference in diagnostic accuracy between the two lineup formats, it likely favors the simultaneous procedure. PMID:26699467

  13. Estimating the reliability of eyewitness identifications from police lineups.

    PubMed

    Wixted, John T; Mickes, Laura; Dunn, John C; Clark, Steven E; Wells, William

    2016-01-12

    Laboratory-based mock crime studies have often been interpreted to mean that (i) eyewitness confidence in an identification made from a lineup is a weak indicator of accuracy and (ii) sequential lineups are diagnostically superior to traditional simultaneous lineups. Largely as a result, juries are increasingly encouraged to disregard eyewitness confidence, and up to 30% of law enforcement agencies in the United States have adopted the sequential procedure. We conducted a field study of actual eyewitnesses who were assigned to simultaneous or sequential photo lineups in the Houston Police Department over a 1-y period. Identifications were made using a three-point confidence scale, and a signal detection model was used to analyze and interpret the results. Our findings suggest that (i) confidence in an eyewitness identification from a fair lineup is a highly reliable indicator of accuracy and (ii) if there is any difference in diagnostic accuracy between the two lineup formats, it likely favors the simultaneous procedure.

  14. Detection, mapping and estimation of rate of spread of grass fires from southern African ERTS-1 imagery

    NASA Technical Reports Server (NTRS)

    Wightman, J. M.

    1973-01-01

    Sequential band-6 imagery of the Zambesi Basin of southern Africa recorded substantial changes in burn patterns resulting from late dry season grass fires. One example from northern Botswana indicates that a fire consumed approximately 70 square miles of grassland over a 24-hour period. Another example from western Zambia indicates increased fire activity over a 19-day period. Other examples clearly define the area of widespread grass fires in Angola, Botswana, Rhodesia and Zambia. From the fire patterns visible on the sequential portions of the imagery, and the time intervals involved, the rates of spread of the fires are estimated and compared with estimates derived from experimental burning plots in Zambia and Canada. It is concluded that sequential ERTS-1 imagery, of the quality studied, clearly provides the information needed to detect and map grass fires and to monitor their rates of spread in this region during the late dry season.

  15. Proposed hardware architectures of particle filter for object tracking

    NASA Astrophysics Data System (ADS)

    Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED

    2012-12-01

    In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weight, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function, which decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture; it targets a balance between hardware resources and speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
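
    As an illustration of the sample-weight-resample loop that these architectures accelerate, the following minimal Python sketch implements a bootstrap SIRF for a toy 1-D tracking problem. All model parameters are invented for illustration, and the Gaussian weight function is kept in its exact exponential form (the paper's hardware replaces it with a piecewise linear approximation).

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy 1-D tracking model (illustrative, not from the article):
      # x_k = x_{k-1} + process noise; z_k = x_k + measurement noise.
      N, steps, q, r = 500, 50, 0.5, 1.0
      true_x = 0.0
      particles = rng.normal(0.0, 1.0, N)

      for k in range(steps):
          true_x += rng.normal(0.0, q)
          z = true_x + rng.normal(0.0, r)

          # 1) Sampling: propagate every particle through the motion model.
          particles = particles + rng.normal(0.0, q, N)

          # 2) Weighting: Gaussian likelihood of the measurement. (The first
          #    architecture above replaces this exponential with a piecewise
          #    linear function to simplify the hardware.)
          w = np.exp(-0.5 * ((z - particles) / r) ** 2)
          w /= w.sum()

          estimate = w @ particles  # output calculation

          # 3) Resampling (systematic): the serial step that the second and
          #    third architectures parallelize or distribute.
          u = (rng.random() + np.arange(N)) / N
          idx = np.minimum(np.searchsorted(np.cumsum(w), u), N - 1)
          particles = particles[idx]

      print(f"final estimate {estimate:.2f} vs truth {true_x:.2f}")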

  16. Use of personalized Dynamic Treatment Regimes (DTRs) and Sequential Multiple Assignment Randomized Trials (SMARTs) in mental health studies

    PubMed Central

    Liu, Ying; ZENG, Donglin; WANG, Yuanjia

    2014-01-01

    Dynamic treatment regimes (DTRs) are sequential decision rules, tailored at each point where a clinical decision is made, based on each patient's time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual's response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
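
    One widely used approach for estimating an optimal DTR from SMART data is Q-learning by backward induction; the abstract does not name the two methods it compares, so the sketch below is a generic illustration, not necessarily one of them. It fits linear Q-functions to a simulated two-stage SMART; all variable names and effect sizes are hypothetical.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 2000

      # Simulated two-stage SMART: X1 baseline severity, A1/A2 randomized
      # treatments in {-1, +1}, X2 intermediate outcome, Y final outcome.
      X1 = rng.normal(size=n)
      A1 = rng.choice([-1, 1], n)
      X2 = 0.5 * X1 + 0.3 * A1 + rng.normal(size=n)
      A2 = rng.choice([-1, 1], n)
      Y = X2 + A2 * (0.8 * X2 - 0.2) + A1 * (0.5 - X1) + rng.normal(size=n)

      def fit(features, y):
          beta, *_ = np.linalg.lstsq(features, y, rcond=None)
          return beta

      # Stage 2: regress Y on (1, X2, A2, A2*X2); the optimal A2 maximizes Q2.
      F2 = np.column_stack([np.ones(n), X2, A2, A2 * X2])
      b2 = fit(F2, Y)
      q2 = lambda x2, a2: b2[0] + b2[1] * x2 + b2[2] * a2 + b2[3] * a2 * x2
      V2 = np.maximum(q2(X2, 1), q2(X2, -1))   # value under the best stage-2 rule

      # Stage 1: regress the pseudo-outcome V2 on (1, X1, A1, A1*X1).
      F1 = np.column_stack([np.ones(n), X1, A1, A1 * X1])
      b1 = fit(F1, V2)

      def dtr(x1, x2):
          """Estimated decision rule: pick the sign that maximizes each Q."""
          a1 = 1 if b1[2] + b1[3] * x1 > 0 else -1
          a2 = 1 if b2[2] + b2[3] * x2 > 0 else -1
          return a1, a2

      print(dtr(x1=0.1, x2=1.5))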

  17. Speckle pattern sequential extraction metric for estimating the focus spot size on a remote diffuse target.

    PubMed

    Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing

    2017-11-10

    The speckle pattern (line by line) sequential extraction (SPSE) metric is derived from one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics are obtained for estimating the variation of the focusing spot size on a remote diffuse target. Based on simulations, we discuss the SPSE metric's range of application under theoretical conditions and show that the aperture size of the observation system affects the metric's performance. The results of the analyses are verified by experiment. The method applies to the detection of relatively static targets (speckle jitter frequency less than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance and, under some conditions, can estimate the spot size itself. Therefore, far-field spot monitoring and feedback can be implemented in laser focusing system applications, helping such systems optimize their focusing performance.

  18. An Overview of the State of the Art in Atomistic and Multiscale Simulation of Fracture

    NASA Technical Reports Server (NTRS)

    Saether, Erik; Yamakov, Vesselin; Phillips, Dawn R.; Glaessgen, Edward H.

    2009-01-01

    The emerging field of nanomechanics is providing a new focus in the study of the mechanics of materials, particularly in simulating fundamental atomic mechanisms involved in the initiation and evolution of damage. Simulating fundamental material processes using first principles in physics strongly motivates the formulation of computational multiscale methods to link macroscopic failure to the underlying atomic processes from which all material behavior originates. This report gives an overview of the state of the art in applying concurrent and sequential multiscale methods to analyze damage and failure mechanisms across length scales.

  19. Comparison of sequential left internal thoracic artery grafting and separate left internal thoracic artery and venous grafting : A 5-year follow-up.

    PubMed

    Wendt, D; Schmidt, D; Wasserfuhr, D; Osswald, B; Thielmann, M; Tossios, P; Kühl, H; Jakob, H; Massoudy, P

    2010-09-01

    The superiority of left internal thoracic artery (LITA) grafting to the left anterior descending artery (LAD) is well established. Patency rates of 80%-90% have been reported at 10-year follow-up. However, the superiority of sequential LITA grafting has not been proven. Our aim was to compare patency rates after sequential LITA grafting to a diagonal branch and the LAD with patency rates of LITA grafting to the LAD and separate vein grafting to a diagonal branch. A total of 58 coronary artery bypass graft (CABG) patients, operated on between 01/2000 and 12/2002, underwent multi-slice computed tomography (MSCT) between 2006 and 2008. Of these patients, 29 had undergone sequential LITA grafting to a diagonal branch and to the LAD ("Sequential" Group), while in 29 the LAD and a diagonal branch were separately grafted with LITA and vein ("Separate" Group). Patencies of all anastomoses were investigated. Mean follow-up was 1958±208 days. The patency rate of the LAD anastomosis was 100% in the Sequential Group and 93% in the Separate Group (p=0.04). The patency rate of the diagonal branch anastomosis was 100% in the Sequential Group and 89% in the Separate Group (p=0.04). Mean intraoperative flow on LITA graft was not different between groups (69±8ml/min in the Sequential Group and 68±9ml/min in the Separate Group, p=n.s.). Patency rates of both the LAD and the diagonal branch anastomoses were higher after sequential arterial grafting compared with separate arterial and venous grafting at 5-year follow-up. This indicates that, with regard to the antero-lateral wall of the left ventricle, there is an advantage to sequential arterial grafting compared with separate arterial and venous grafting.

  20. Adrenal vein sampling in primary aldosteronism: concordance of simultaneous vs sequential sampling.

    PubMed

    Almarzooqi, Mohamed-Karji; Chagnon, Miguel; Soulez, Gilles; Giroux, Marie-France; Gilbert, Patrick; Oliva, Vincent L; Perreault, Pierre; Bouchard, Louis; Bourdeau, Isabelle; Lacroix, André; Therasse, Eric

    2017-02-01

    Many investigators believe that basal adrenal venous sampling (AVS) should be done simultaneously, whereas others opt for sequential AVS for simplicity and reduced cost. This study aimed to evaluate the concordance of sequential and simultaneous AVS methods. Between 1989 and 2015, bilateral simultaneous sets of basal AVS were obtained twice within 5 min, in 188 consecutive patients (59 women and 129 men; mean age: 53.4 years). Selectivity was defined by an adrenal-to-peripheral cortisol ratio ≥2, and lateralization was defined as an adrenal aldosterone-to-cortisol ratio ≥2 relative to the contralateral side. Sequential AVS was simulated using right sampling at -5 min (t = -5) and left sampling at 0 min (t = 0). There was no significant difference in mean selectivity ratio (P = 0.12 and P = 0.42 for the right and left sides, respectively) or in mean lateralization ratio (P = 0.93) between t = -5 and t = 0. Kappa for selectivity between the 2 simultaneous AVS was 0.71 (95% CI: 0.60-0.82), whereas it was 0.84 (95% CI: 0.76-0.92) and 0.85 (95% CI: 0.77-0.93) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Kappa for lateralization between the 2 simultaneous AVS was 0.84 (95% CI: 0.75-0.93), whereas it was 0.86 (95% CI: 0.78-0.94) and 0.80 (95% CI: 0.71-0.90) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Concordance between simultaneous and sequential AVS was not different from that between 2 repeated simultaneous AVS in the same patient. Therefore, better diagnostic performance is not a good argument for selecting one AVS method over the other. © 2017 European Society of Endocrinology.
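
    The concordance statistic used above is Cohen's kappa, which corrects observed agreement between two paired classifications for the agreement expected by chance. A minimal sketch, with purely illustrative data, is shown below.

      import numpy as np

      def cohens_kappa(a, b):
          """Cohen's kappa for two paired binary classifications (0/1)."""
          a, b = np.asarray(a), np.asarray(b)
          po = np.mean(a == b)                      # observed agreement
          p1, p2 = a.mean(), b.mean()
          pe = p1 * p2 + (1 - p1) * (1 - p2)        # chance agreement
          return (po - pe) / (1 - pe)

      # Illustrative data only: lateralization calls (1 = lateralized) from
      # simultaneous sampling at t=0 vs simulated sequential sampling.
      simultaneous = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0]
      sequential   = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0]
      print(f"kappa = {cohens_kappa(simultaneous, sequential):.2f}")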

  1. Statistic inversion of multi-zone transition probability models for aquifer characterization in alluvial fans

    DOE PAGES

    Zhu, Lin; Dai, Zhenxue; Gong, Huili; ...

    2015-06-12

    Understanding the heterogeneity arising from the complex architecture of sedimentary sequences in alluvial fans is challenging. This study develops a statistical inverse framework in a multi-zone transition probability approach for characterizing the heterogeneity in alluvial fans. An analytical solution of the transition probability matrix is used to define the statistical relationships among different hydrofacies and their mean lengths, integral scales, and volumetric proportions. A statistical inversion is conducted to identify the multi-zone transition probability models and estimate the optimal statistical parameters using the modified Gauss–Newton–Levenberg–Marquardt method. The Jacobian matrix is computed by the sensitivity equation method, which results in an accurate inverse solution with quantification of parameter uncertainty. We use the Chaobai River alluvial fan in the Beijing Plain, China, as an example for elucidating the methodology of alluvial fan characterization. The alluvial fan is divided into three sediment zones. In each zone, the explicit mathematical formulations of the transition probability models are constructed with different optimized integral scales and volumetric proportions. The hydrofacies distributions in the three zones are simulated sequentially by the multi-zone transition probability-based indicator simulations. Finally, the results of this study provide the heterogeneous structure of the alluvial fan for further study of flow and transport simulations.
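
    The core of a transition-probability approach can be illustrated in one dimension: a transition rate matrix is assembled from facies mean lengths and volumetric proportions, converted to lag-dependent transition probabilities via a matrix exponential, and used to simulate facies sequentially along a line. The sketch below is a heavily simplified version of that general technique with invented parameters, not the paper's multi-zone 3-D inversion.

      import numpy as np
      from scipy.linalg import expm

      rng = np.random.default_rng(2)

      # Three hypothetical hydrofacies with mean lengths (m) and volumetric
      # proportions; a 1-D simplification of the multi-zone 3-D models.
      mean_len = np.array([5.0, 2.0, 3.0])
      prop = np.array([0.5, 0.2, 0.3])

      # Continuous-lag Markov chain: diagonal rates -1/mean_length,
      # off-diagonal transitions split by the proportions of the other facies.
      R = np.zeros((3, 3))
      for i in range(3):
          R[i, i] = -1.0 / mean_len[i]
          others = [j for j in range(3) if j != i]
          share = prop[others] / prop[others].sum()
          R[i, others] = share / mean_len[i]

      dz = 0.5                     # vertical step (m)
      T = expm(R * dz)             # transition probabilities over one step

      # Sequential simulation down a borehole: draw each facies from the
      # row of T selected by the previously simulated facies.
      facies = [rng.choice(3, p=prop)]
      for _ in range(99):
          facies.append(rng.choice(3, p=T[facies[-1]]))
      print(facies[:20])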

  2. Detailed Modelling of Kinetic Biodegradation Processes in a Laboratory Microcosm

    NASA Astrophysics Data System (ADS)

    Watson, I.; Oswald, S.; Banwart, S.; Mayer, U.

    2003-04-01

    Biodegradation of organic contaminants in soil and groundwater usually takes place via different redox processes occurring sequentially as well as simultaneously. We used numerical modelling of a long-term lab microcosm experiment to simulate the dynamic behaviour of fermentation and respiration in the aqueous phase in contact with the sandstone material, and to develop a conceptual model describing these processes. Aqueous speciation, surface complexation, and mineral dissolution and precipitation were also taken into account. Fermentation can be the first step of the degradation process, producing intermediate species that are subsequently consumed by terminal electron accepting processes (TEAPs). Microbial growth and substrate utilisation kinetics are coupled via a formulation that also includes aqueous speciation and other geochemical reactions, including surface complexation and mineral dissolution and precipitation. Competitive exclusion between TEAPs is integral to the conceptual model of the simulation, and the results indicate that exclusion is not complete: some overlap is found between TEAPs. The model was used to test approaches, such as the partial equilibrium approach, that currently use hydrogen levels to diagnose prevalent TEAPs in groundwater. The observed patterns of hydrogen and acetate concentrations were reproduced well by the simulations, and the results show the relevance of kinetics, lag times and inhibition, and especially that intermediate products play a key role.

  3. Formation of fivefold deformation twins in nanocrystalline face-centered-cubic copper based on molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, A. J.; Wei, Y. G.

    2006-07-24

    Fivefold deformation twins were recently reported to be observed in experiments on nanocrystalline face-centered-cubic metals and alloys. However, they were not predicted by previous molecular dynamics (MD) simulations, and the reason was thought to be the uniaxial tension considered in those simulations. In the present investigation, by introducing pretwins in the grain regions and using MD simulations, the authors predict fivefold deformation twins in the grain regions of a nanocrystalline grain cell undergoing uniaxial tension. The simulation results show that series of Shockley partial dislocations emitted from grain boundaries provide a sequential twinning mechanism, which results in fivefold deformation twins.

  4. On extending parallelism to serial simulators

    NASA Technical Reports Server (NTRS)

    Nicol, David; Heidelberger, Philip

    1994-01-01

    This paper describes an approach to discrete event simulation modeling that appears to be effective for developing portable and efficient parallel execution of models of large distributed systems and communication networks. In this approach, the modeler develops submodels using an existing sequential simulation modeling tool, using the full expressive power of the tool. A set of modeling language extensions permit automatically synchronized communication between submodels; however, the automation requires that any such communication must take a nonzero amount of simulation time. Within this modeling paradigm, a variety of conservative synchronization protocols can transparently support conservative execution of submodels on potentially different processors. A specific implementation of this approach, U.P.S. (Utilitarian Parallel Simulator), is described, along with performance results on the Intel Paragon.

  5. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have implicitly parallel nature.

  6. Application of the sequential quadratic programming algorithm for reconstructing the distribution of optical parameters based on the time-domain radiative transfer equation.

    PubMed

    Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming

    2016-10-17

    Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome ill-posedness. Simulation results show that the proposed reconstruction scheme performs efficiently and accurately.
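
    As a compact illustration of gradient-based inversion in this spirit, the sketch below solves a toy linear inverse problem with scipy's SLSQP (an SQP-type solver). The forward model is a random linear operator standing in for the TD-RTE, and a simple quadratic smoothness penalty stands in for the GGMRF prior; both substitutions, and all parameter values, are assumptions made for illustration.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(3)

      # Toy linear forward model standing in for the TD-RTE: time-resolved
      # signals y = G @ mu for an absorption profile mu (10 unknowns).
      n = 10
      G = rng.random((40, n))
      mu_true = 0.1 + 0.05 * np.sin(np.linspace(0, np.pi, n))
      y = G @ mu_true + rng.normal(0, 1e-3, 40)

      lam = 1e-2  # regularization weight (arbitrary)

      def objective(mu):
          # Data misfit plus a quadratic smoothness penalty (a simple
          # stand-in for the GGMRF prior used in the paper).
          misfit = G @ mu - y
          rough = np.diff(mu)
          return misfit @ misfit + lam * rough @ rough

      def gradient(mu):
          # Analytic gradient (the paper obtains its gradient via an
          # adjoint solve of the TD-RTE instead).
          g = 2 * G.T @ (G @ mu - y)
          g[:-1] -= 2 * lam * np.diff(mu)
          g[1:] += 2 * lam * np.diff(mu)
          return g

      res = minimize(objective, x0=np.full(n, 0.05), jac=gradient,
                     method="SLSQP", bounds=[(0.0, 1.0)] * n)
      print(np.round(res.x, 3))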

  7. Discrete filtering techniques applied to sequential GPS range measurements

    NASA Technical Reports Server (NTRS)

    Vangraas, Frank

    1987-01-01

    The basic navigation solution is described for position and velocity based on range and delta range (Doppler) measurements from NAVSTAR Global Positioning System satellites. The application of discrete filtering techniques to reduce the white noise distortions on the sequential range measurements is examined. A second order (position and velocity states) Kalman filter is implemented to obtain smoothed estimates of range by filtering the dynamics of the signal from each satellite separately. Test results using a simulated GPS receiver show a steady-state noise reduction (the input noise variance divided by the output noise variance) of a factor of four. Recommendations for further noise reduction based on higher order Kalman filters or additional delta range measurements are included.
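
    A minimal sketch of such a second-order filter is shown below: a two-state (range, range-rate) Kalman filter smooths noisy range measurements from a single simulated satellite and reports the input-to-output noise variance ratio. All noise levels and dynamics are illustrative assumptions, not values from the report.

      import numpy as np

      rng = np.random.default_rng(4)
      dt, steps = 1.0, 200
      sigma_meas, sigma_acc = 5.0, 0.05   # range noise (m), accel noise (m/s^2)

      # Constant-velocity model per satellite: state = [range, range-rate].
      F = np.array([[1.0, dt], [0.0, 1.0]])
      Q = sigma_acc**2 * np.array([[dt**4 / 4, dt**3 / 2],
                                   [dt**3 / 2, dt**2]])
      H = np.array([[1.0, 0.0]])
      R = np.array([[sigma_meas**2]])

      x = np.array([2.0e7, 100.0])        # truth: 20,000 km, closing at 100 m/s
      xhat = np.array([2.0e7 + 50.0, 0.0])
      P = np.diag([100.0**2, 50.0**2])

      err_in, err_out = [], []
      for _ in range(steps):
          x = F @ x + np.array([0.0, rng.normal(0.0, sigma_acc * dt)])
          z = H @ x + rng.normal(0.0, sigma_meas, 1)

          xhat, P = F @ xhat, F @ P @ F.T + Q        # predict
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
          xhat = xhat + K @ (z - H @ xhat)           # update
          P = (np.eye(2) - K @ H) @ P

          err_in.append((z - H @ x)[0] ** 2)         # raw measurement error
          err_out.append((xhat[0] - x[0]) ** 2)      # filtered range error

      print("noise variance ratio in/out:", np.mean(err_in) / np.mean(err_out))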

  8. Determination of the Pb, Cr, and Cd distribution patterns with various chlorine additives in the bottom ashes of a low-temperature two-stage fluidized bed incinerator by chemical sequential extraction.

    PubMed

    Peng, Tzu-Huan; Lin, Chiou-Liang; Wey, Ming-Yen

    2015-09-15

    A novel low-temperature two-stage fluidized bed (LTTSFB) incinerator has been successfully developed to control heavy-metal emissions during municipal solid waste (MSW) treatment. However, the characteristics of the residual metal patterns during this process are still unclear. The aim of this study was to investigate the metal patterns in the different partitions of the LTTSFB bottom ash by chemical sequential extraction. Artificial waste was used to simulate the MSW. Different parameters, including the first-stage temperature, chloride additives, and operating gas velocity, were also considered. Results indicated that during the low-temperature treatment process, a high metal mobility phase exists in the first-stage sand bed. The main patterns of Cd, Pb, and Cr observed were the water-soluble, exchangeable, and residual forms, respectively. With the different Cl additives, the results showed that polyvinyl chloride addition increased metal mobility in the LTTSFB bottom ash, while sodium chloride addition may have reduced metal mobility due to the formation of eutectic material. The second-stage sand bed was found to have a lower risk of metal leaching. The results also suggested that the residual ashes produced by the LTTSFB system must be taken into consideration given their high metal mobility. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Magnetometer-only attitude and angular velocity filtering estimation for attitude changing spacecraft

    NASA Astrophysics Data System (ADS)

    Ma, Hongliang; Xu, Shijie

    2014-09-01

    This paper presents an improved real-time sequential filter (IRTSF) for magnetometer-only attitude and angular velocity estimation of spacecraft during attitude changes (including fast and large angular attitude maneuvers, rapid spinning, or uncontrolled tumbling). In this new magnetometer-only attitude determination technique, both the attitude dynamics equation and the first time derivative of the measured magnetic field vector are introduced directly into the filtering equations, building on the traditional gyroless single-vector attitude determination method and the real-time sequential filter (RTSF) for magnetometer-only attitude estimation. The process noise model of the IRTSF includes the attitude kinematics and dynamics equations, and its measurement model consists of the magnetic field vector and its first time derivative. The observability of the IRTSF for spacecraft with small or large angular velocity changes is evaluated by an improved Lie differentiation, and the degrees of observability of the IRTSF for different initial estimation errors are analyzed by the condition number and a solved covariance matrix. Numerical simulation results indicate that: (1) the attitude and angular velocity of spacecraft can be estimated with sufficient accuracy using the IRTSF from magnetometer-only data; (2) compared with the RTSF, the estimation accuracies and observability degrees of attitude and angular velocity using the IRTSF are both improved; and (3) universality: the IRTSF is observable for any initial state estimation error vector.

  10. Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure

    USGS Publications Warehouse

    Salehi, M.; Smith, D.R.

    2005-01-01

    Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.

  11. Observer Training Revisited: A Comparison of in Vivo and Video Instruction

    ERIC Educational Resources Information Center

    Dempsey, Carrie M.; Iwata, Brian A.; Fritz, Jennifer N.; Rolider, Natalie U.

    2012-01-01

    We compared the effects of 2 observer-training procedures. In vivo training involved practice during actual treatment sessions. Video training involved practice while watching progressively more complex simulations. Fifty-nine undergraduate students entered 1 of the 2 training conditions sequentially according to an ABABAB design. Results showed…

  12. Say again? How complexity and format of air traffic control instructions affect pilot recall

    DOT National Transportation Integrated Search

    1999-01-01

    This study compared the recall of ATC information presented in either grouped or sequential format in a part-task simulation. It also tested the effect of complexity of ATC clearances on recall, that is, how many pieces of information a single tr...

  13. AN IN VITRO GASTROINTESTINAL METHOD TO ESTIMATE BIOAVAILABLE ARSENIC IN CONTAMINATED SOILS AND SOLID MEDIA. (R825410)

    EPA Science Inventory

    A method was developed to simulate the human gastrointestinal environment and to estimate bioavailability of arsenic in contaminated soil and solid media. In this in vitro gastrointestinal (IVG) method, arsenic is sequentially extracted from contaminated soil with ...

  14. Characteristics of sequential swallowing of liquids in young and elderly adults: an integrative review.

    PubMed

    Veiga, Helena Perrut; Bianchini, Esther Mandelbaum Gonçalves

    2012-01-01

    To perform an integrative review of studies on sequential swallowing of liquids, characterizing the methodologies of the studies and the most important findings in young and elderly adults. The review covered literature written in English and Portuguese in the PubMed, LILACS, SciELO and MEDLINE databases within the past twenty years, available in full text, using the following terms: sequential swallowing, swallowing, dysphagia, cup, straw, in various combinations. Included were research articles with a methodological approach to the characterization of sequential swallowing of liquids by young and/or elderly adults, regardless of health condition, excluding studies involving only the esophageal phase. The following research indicators were applied: objectives, number and gender of participants, age group, amount of liquid offered, intake instruction, utensil used, methods, and main findings. Eighteen studies met the established criteria. The articles were categorized according to the sample characterization and the methodology on volume intake, utensil used, and types of exams. Most studies investigated only healthy individuals with no swallowing complaints. Subjects were given different instructions as to the intake of the full volume: in the usual manner, continually, or as rapidly as possible. The findings on the characterization of sequential swallowing were varied and are described in accordance with the objectives of each study. The review found great variability in the methodology employed to characterize sequential swallowing. Some findings are not comparable, sequential swallowing is not examined in most swallowing protocols, and there is no consensus on the influence of the utensil.

  15. [Bilateral cochlear implants in children: acquisition of binaural hearing].

    PubMed

    Ramos-Macías, Angel; Deive-Maggiolo, Leopoldo; Artiles-Cabrera, Ovidio; González-Aguado, Rocío; Borkoski-Barreiro, Silvia A; Masgoret-Palau, Elizabeth; Falcón-González, Juan C; Bueno-Yanes, Jorge

    2013-01-01

    Several studies have indicated the benefit of bilateral cochlear implants in the acquisition of binaural hearing and bilateralism. In children with cochlear implants, is it possible to achieve binaurality after a second implant? When is the ideal time to implant? The objective of this study was to analyse the binaural effect in children with bilateral implants and the differences between subjects with simultaneous implants and sequential implants with both short and long intervals. There were 90 patients between 1 and 2 years of age at the first surgery, implanted between 2000 and 2008. Of these, 25 were unilateral users and 65 bilateral; 17 patients had received simultaneous implants, 29 had sequential implants within 12 months of the first one (short interimplant period) and 19 after more than 12 months (long period). All were tested for verbal perception in silence and in noise, and tonal threshold audiometry was performed. The perception test in silence showed a statistically significant difference (P=.023) between simultaneous and short-period sequential implant patients (mean: 84.67%) and unilateral and long-period sequential implant patients (mean: 79.66%). Likewise, the perception test in noise showed a statistically significant difference (P=.022) between simultaneous and short-period sequential implants (mean: 77.17%) and unilateral and long-period sequential implants (mean: 69.32%). The simultaneous and short-period sequential implants acquired the advantages of binaural hearing. Copyright © 2012 Elsevier España, S.L. All rights reserved.

  16. Persistence of opinion in the Sznajd consensus model: computer simulation

    NASA Astrophysics Data System (ADS)

    Stauffer, D.; de Oliveira, P. M. C.

    2002-12-01

    The density of never changed opinions during the Sznajd consensus-finding process decays with time t as 1/t^θ. We find θ ≈ 3/8 for a chain, compatible with the exact Ising result of Derrida et al. In higher dimensions, however, the exponent differs from the Ising θ. With simultaneous updating of sublattices instead of the usual random sequential updating, the number of persistent opinions decays roughly exponentially. Some of the simulations used multi-spin coding.
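
    A minimal sketch of this kind of measurement is given below: a 1-D Sznajd chain with random sequential updating (one common variant of the rule, in which an agreeing pair converts its outer neighbours), tracking the fraction of never-flipped opinions and roughly estimating the decay exponent from a log-log fit. Lattice size, sweep count, and fit window are arbitrary choices.

      import numpy as np

      rng = np.random.default_rng(5)
      L, sweeps = 1000, 300
      s = rng.choice([-1, 1], L)            # random initial opinions
      never_flipped = np.ones(L, dtype=bool)
      density = []

      for _ in range(sweeps):
          for _ in range(L):                # one sweep = L random updates
              i = rng.integers(L)
              j = (i + 1) % L
              if s[i] == s[j]:              # an agreeing pair convinces its
                  for k in ((i - 1) % L, (j + 1) % L):   # outer neighbours
                      if s[k] != s[i]:
                          s[k] = s[i]
                          never_flipped[k] = False
          density.append(never_flipped.mean())

      # Log-log slope over later sweeps gives a rough exponent estimate.
      t = np.arange(1, sweeps + 1)
      theta = -np.polyfit(np.log(t[30:]), np.log(np.array(density)[30:]), 1)[0]
      print(f"estimated theta ~ {theta:.2f}")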

  17. Class of cooperative stochastic models: Exact and approximate solutions, simulations, and experiments using ionic self-assembly of nanoparticles.

    PubMed

    Mazilu, I; Mazilu, D A; Melkerson, R E; Hall-Mejia, E; Beck, G J; Nshimyumukiza, S; da Fonseca, Carlos M

    2016-03-01

    We present exact and approximate results for a class of cooperative sequential adsorption models using matrix theory, mean-field theory, and computer simulations. We validate our models with two customized experiments using ionically self-assembled nanoparticles on glass slides. We also address the limitations of our models and their range of applicability. The exact results obtained using matrix theory can be applied to a variety of two-state systems with cooperative effects.

  18. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    PubMed

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
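
    The two parameters in question can be illustrated with a generic biased random-walk accumulator, a simple member of the sequential sampling family (not the authors' exact model): the starting point plays the role of the Criterion, and the boundary separation plays the role of the Threshold. All constants below are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(6)

      def trial(drift, threshold, criterion, dt=0.01, noise=1.0):
          """Accumulate noisy evidence until |x| crosses the threshold.
          criterion biases the starting point toward one response;
          threshold trades speed against accuracy."""
          x, t = criterion, 0.0
          while abs(x) < threshold:
              x += drift * dt + noise * np.sqrt(dt) * rng.normal()
              t += dt
          return ("conflict" if x > 0 else "safe"), t

      def summarize(threshold, criterion, n=2000, drift=1.0):
          picks, rts = zip(*(trial(drift, threshold, criterion) for _ in range(n)))
          acc = picks.count("conflict") / n   # drift > 0 encodes a true conflict
          return acc, float(np.mean(rts))

      # Raising the threshold slows responses but raises accuracy;
      # shifting the criterion biases responses without changing drift.
      for thr, crit in [(0.5, 0.0), (1.5, 0.0), (1.5, 0.5)]:
          acc, rt = summarize(thr, crit)
          print(f"threshold={thr}, criterion={crit}: acc={acc:.2f}, RT={rt:.2f}s")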

  19. Mapping iron oxides and the color of Australian soil using visible-near-infrared reflectance spectra

    NASA Astrophysics Data System (ADS)

    Viscarra Rossel, R. A.; Bui, E. N.; de Caritat, P.; McKenzie, N. J.

    2010-12-01

    Iron (Fe) oxide mineralogy in most Australian soils is poorly characterized, even though Fe oxides play an important role in soil function. Fe oxides reflect the conditions of pH, redox potential, moisture, and temperature in the soil environment. The strong pigmenting effect of Fe oxides gives most soils their color, which is largely a reflection of the soil's Fe mineralogy. Visible-near-infrared (vis-NIR) spectroscopy can be used to identify and measure the abundance of certain Fe oxides in soil, and the visible range can be used to derive tristimuli soil color information. The aims of this paper are (1) to measure the abundance of hematite and goethite in Australian soils from their vis-NIR spectra, (2) to compare these results to measurements of soil color, and (3) to describe the spatial variability of hematite, goethite, and soil color and map their distribution across Australia. We measured the spectra of 4606 surface soil samples from across Australia using a vis-NIR spectrometer with a wavelength range of 350-2500 nm. We determined the Fe oxide abundance for each sample using the diagnostic absorption features of hematite (near 880 nm) and goethite (near 920 nm) and derived a normalized iron oxide difference index (NIODI) to better discriminate between them. The NIODI was generalized across Australia with its spatial uncertainty using sequential indicator simulation, which resulted in a map of the probability of the occurrence of hematite and goethite. We also derived soil RGB color from the spectra and mapped its distribution and uncertainty across the country using sequential Gaussian simulations. The simulated RGB color values were made into a composite true color image and were also converted to Munsell hue, value, and chroma. These color maps were compared to the map of the NIODI, and both were used to interpret our results. The work presented here was validated by randomly splitting the data into training and test data sets, as well as by comparing our results to existing studies on the distribution of Fe oxides in Australian soils.

  20. A Methodology for the Assessment of Unconventional (Continuous) Resources with an Application to the Greater Natural Buttes Gas Field, Utah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olea, Ricardo A., E-mail: olea@usgs.gov; Cook, Troy A.; Coleman, James L.

    2010-12-15

    The Greater Natural Buttes tight natural gas field is an unconventional (continuous) accumulation in the Uinta Basin, Utah, that began production in the early 1950s from the Upper Cretaceous Mesaverde Group. Three years later, production was extended to the Eocene Wasatch Formation. With the exclusion of 1100 non-productive ('dry') wells, we estimate that the final recovery from the 2500 producing wells existing in 2007 will be about 1.7 trillion standard cubic feet (TSCF) (48.2 billion cubic meters (BCM)). The use of estimated ultimate recovery (EUR) per well is common in assessments of unconventional resources, and it is one of the main sources of information to forecast undiscovered resources. Each calculated recovery value has an associated drainage area that generally varies from well to well and that can be mathematically subdivided into elemental subareas of constant size and shape called cells. Recovery per 5-acre cells at Greater Natural Buttes shows spatial correlation; hence, statistical approaches that ignore this correlation when inferring EUR values for untested cells do not take full advantage of all the information contained in the data. More critically, resulting models do not match the style of spatial EUR fluctuations observed in nature. This study takes a new approach by applying spatial statistics to model geographical variation of cell EUR taking into account spatial correlation and the influence of fractures. We applied sequential indicator simulation to model non-productive cells, while spatial mapping of cell EUR was obtained by applying sequential Gaussian simulation to provide multiple versions of reality (realizations) having equal chances of being the correct model. For each realization, summation of EUR in cells not drained by the existing wells allowed preparation of a stochastic prediction of undiscovered resources, which range between 2.6 and 3.4 TSCF (73.6 and 96.3 BCM) with a mean of 2.9 TSCF (82.1 BCM) for Greater Natural Buttes. A second approach illustrates the application of multiple-point simulation to assess a hypothetical frontier area for which there is no production information but which is regarded as being similar to Greater Natural Buttes.

  1. Applying Reduced Generator Models in the Coarse Solver of Parareal in Time Parallel Power System Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Nan; Dimitrovski, Aleksandar D; Simunovic, Srdjan

    2016-01-01

    The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time implementation of power system simulations. One approach uses the Parareal in time framework. The Parareal algorithm has shown promising theoretical simulation speedups by temporally decomposing a simulation run into a coarse simulation over the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potential of the Parareal algorithm. This paper studies a Parareal implementation using reduced generator models for the coarse solver and reports the testing results on the IEEE 39-bus system and a 327-generator 2383-bus Polish system model.

  2. Heterogeneous Suppression of Sequential Effects in Random Sequence Generation, but Not in Operant Learning.

    PubMed

    Shteingart, Hanan; Loewenstein, Yonatan

    2016-01-01

    There is a long history of experiments in which participants are instructed to generate a long sequence of binary random numbers. The scope of this line of research has shifted over the years from identifying the basic psychological principles and/or the heuristics that lead to deviations from randomness, to one of predicting future choices. In this paper, we used generalized linear regression and the framework of Reinforcement Learning in order to address both points. In particular, we used logistic regression analysis in order to characterize the temporal sequence of participants' choices. Surprisingly, a population analysis indicated that the contribution of the most recent trial has only a weak effect on behavior compared to trials further in the past, a result that seems irreconcilable with standard sequential effects that decay monotonically with delay. However, when considering each participant separately, we found that the magnitudes of the sequential effects are a monotonically decreasing function of the delay, yet these individual sequential effects are largely averaged out in a population analysis because of heterogeneity. The substantial behavioral heterogeneity in this task is further demonstrated quantitatively by considering the predictive power of the model. We show that a heterogeneous model of sequential dependencies captures the structure available in random sequence generation. Finally, we show that the results of the logistic regression analysis can be interpreted in the framework of reinforcement learning, allowing us to compare the sequential effects in the random sequence generation task to those in an operant learning task. We show that in contrast to the random sequence generation task, sequential effects in operant learning are far more homogeneous across the population. These results suggest that in the random sequence generation task, different participants adopt different cognitive strategies to suppress sequential dependencies when generating the "random" sequences.
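
    The regression idea is easy to reproduce: regress the current binary choice on the K preceding choices and read the per-lag coefficients as sequential effects. The sketch below does this on a synthetic sequence with a built-in, lag-decaying alternation bias; the generator and all constants are invented for illustration, not taken from the paper.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(7)

      # Synthetic "random" sequence with an alternation bias whose strength
      # decays with lag (an illustrative stand-in for participant data).
      n, lags = 5000, 5
      true_w = -1.0 * 0.6 ** np.arange(1, lags + 1)   # negative = alternation
      seq = [int(rng.integers(2)) for _ in range(lags)]
      for t in range(lags, n):
          past = np.array([2 * seq[t - k] - 1 for k in range(1, lags + 1)])
          p = 1 / (1 + np.exp(-(true_w @ past)))
          seq.append(int(rng.random() < p))
      seq = np.array(seq)

      # Design matrix: choices at trials t-1..t-5 (coded -1/+1) predict t.
      X = np.column_stack([2 * seq[lags - k:n - k] - 1 for k in range(1, lags + 1)])
      y = seq[lags:]
      coefs = LogisticRegression(C=1e6).fit(X, y).coef_[0]
      print(np.round(coefs, 2))   # should roughly recover true_w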

  3. Sequential Gaussian co-simulation of rate decline parameters of longwall gob gas ventholes.

    PubMed

    Karacan, C Özgen; Olea, Ricardo A

    2013-04-01

    Gob gas ventholes (GGVs) are used to control methane inflows into a longwall mining operation by capturing the gas within the overlying fractured strata before it enters the work environment. Using geostatistical co-simulation techniques, this paper maps the parameters of their rate decline behaviors across the study area, a longwall mine in the Northern Appalachian basin. Geostatistical gas-in-place (GIP) simulations were performed, using data from 64 exploration boreholes, and GIP data were mapped within the fractured zone of the study area. In addition, methane flowrates monitored from 10 GGVs were analyzed using decline curve analyses (DCA) techniques to determine parameters of decline rates. Surface elevation showed the most influence on methane production from GGVs and thus was used to investigate its relation with DCA parameters using correlation techniques on normal-scored data. Geostatistical analysis was pursued using sequential Gaussian co-simulation with surface elevation as the secondary variable and with DCA parameters as the primary variables. The primary DCA variables were effective percentage decline rate, rate at production start, rate at the beginning of forecast period, and production end duration. Co-simulation results were presented to visualize decline parameters at an area-wide scale. Wells located at lower elevations, i.e., at the bottom of valleys, tend to perform better in terms of their rate declines compared to those at higher elevations. These results were used to calculate drainage radii of GGVs using GIP realizations. The calculated drainage radii are close to ones predicted by pressure transient tests.
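
    For readers unfamiliar with the underlying simulation engine, the sketch below shows plain sequential Gaussian simulation on a 1-D grid: nodes are visited along a random path, each is estimated by simple kriging from nearby data and previously simulated nodes, and a value is drawn from the resulting Gaussian distribution. The covariance model and all parameters are illustrative, and the secondary-variable (co-simulation) step used in the paper is omitted for brevity.

      import numpy as np

      rng = np.random.default_rng(8)

      def cov(h, sill=1.0, a=10.0):
          """Exponential covariance model (illustrative parameters)."""
          return sill * np.exp(-3.0 * np.abs(h) / a)

      values = {5: 1.2, 40: -0.8, 80: 0.5}   # conditioning data (normal scores)

      path = [i for i in range(100) if i not in values]
      rng.shuffle(path)                       # random simulation path

      for node in path:
          # Up to 8 nearest conditioning points, including already-simulated
          # nodes -- the "sequential" part of the algorithm.
          near = np.array(sorted(values, key=lambda j: abs(j - node))[:8])
          C = cov(near[:, None] - near[None, :])
          c0 = cov(near - node)
          w = np.linalg.solve(C, c0)          # simple kriging weights
          mean = w @ np.array([values[j] for j in near])
          var = max(cov(0.0) - w @ c0, 0.0)   # kriging variance
          values[node] = mean + np.sqrt(var) * rng.normal()

      realization = np.array([values[i] for i in range(100)])
      print(realization[:10].round(2))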

  4. Dietary fibers from mushroom Sclerotia: 2. In vitro mineral binding capacity under sequential simulated physiological conditions of the human gastrointestinal tract.

    PubMed

    Wong, Ka-Hing; Cheung, Peter C K

    2005-11-30

    The in vitro mineral binding capacities of three novel dietary fibers (DFs) prepared from mushroom sclerotia, namely Pleurotus tuber-regium, Polyporous rhinocerus, and Wolfiporia cocos, to Ca, Mg, Cu, Fe, and Zn under sequential simulated physiological conditions of the human stomach, small intestine, and colon were investigated and compared. Apart from releasing most of their endogenous Ca (96.9 to 97.9% removal) and Mg (95.9 to 96.7% removal), the simulated physiological conditions of the stomach also attenuated the possible adverse binding effect of the three sclerotial DFs on the exogenous minerals by lowering their cation-exchange capacity (by 20.8 to 32.3%) and removing a substantial amount of their potential mineral chelators, including protein (16.2 to 37.8%) and phytate (58.5 to 64.2%). The in vitro mineral binding capacity of the three sclerotial DFs under simulated physiological conditions of the small intestine was found to be low, especially for Ca (4.79 to 5.91% binding) and Mg (3.16 to 4.18% binding), and was highly correlated (r > 0.97) with their residual protein contents. Under simulated physiological conditions of the colon, with slightly acidic pH (5.80), only bound Ca was readily released (34.2 to 72.3% release) from the three sclerotial DFs; their potential enhancing effect on passive Ca absorption in the human large intestine is also discussed.

  5. Design of the biosonar simulator for dolphin's clicks waveform reproduction

    NASA Astrophysics Data System (ADS)

    Ishii, Ken; Akamatsu, Tomonari; Hatakeyama, Yoshimi

    1992-03-01

    The emitted clicks of Dall's porpoises consist of a pulse train of burst signals with an ultrasonic carrier frequency. The authors have designed a biosonar simulator to reproduce the waveforms associated with a dolphin's clicks underwater. The total reproduction system consists of a click signal acquisition block, a waveform analysis block, a memory unit, a click simulator, and an underwater ultrasonic wave transmitter. In operation, data stored in an EPROM (Erasable Programmable Read Only Memory) are read out sequentially by a fast clock and converted to analog output signals; an ultrasonic power amplifier then reproduces these signals through a transmitter. The click signal replaying block, referred to as the BSS (Biosonar Simulator), is what simulates the clicks; its details are described in this report. A unit waveform is defined and divided into a burst period and a waiting period. Clicks are a sequence based on the unit waveform, with digital data read out sequentially from an EPROM of waveform data. The basic parameters of the BSS are as follows: (1) reading clock, 100 ns to 25.4 microseconds; (2) number of reading clocks, 34 to 1024; (3) counter clock in a waiting period, 100 ns to 25.4 microseconds; (4) number of counter clocks, zero to 16,777,215; (5) number of burst/waiting repetition cycles, one to 128; and (6) transmission level adjustment by a programmable attenuator, zero to 86.5 dB. These basic functions enable the BSS to replay clicks of Dall's porpoise precisely.
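
    The burst/waiting structure is easy to illustrate in software. The sketch below builds one unit waveform (a windowed ultrasonic tone followed by a silent waiting period) and tiles it into a click train, mirroring the BSS's repeated EPROM readout. The carrier frequency, sample rate, and period lengths are invented values chosen loosely within the ranges listed above.

      import numpy as np

      fs = 1.0e6                       # 1 MHz playback clock (1 us per sample)
      carrier = 130e3                  # illustrative ultrasonic carrier (Hz)
      burst_samples = 128              # burst period length
      wait_samples = 5000              # waiting period length
      repeats = 16                     # burst/waiting repetition cycles

      t = np.arange(burst_samples) / fs
      envelope = np.hanning(burst_samples)           # smooth the burst edges
      burst = envelope * np.sin(2 * np.pi * carrier * t)

      # A unit waveform = burst period followed by a waiting period; the
      # click train repeats it, as the BSS does when re-reading its EPROM.
      unit = np.concatenate([burst, np.zeros(wait_samples)])
      click_train = np.tile(unit, repeats)
      print(click_train.shape, click_train.dtype)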

  6. Sequential Gaussian co-simulation of rate decline parameters of longwall gob gas ventholes

    USGS Publications Warehouse

    Karacan, C. Özgen; Olea, Ricardo A.

    2013-01-01

    Gob gas ventholes (GGVs) are used to control methane inflows into a longwall mining operation by capturing the gas within the overlying fractured strata before it enters the work environment. Using geostatistical co-simulation techniques, this paper maps the parameters of their rate decline behaviors across the study area, a longwall mine in the Northern Appalachian basin. Geostatistical gas-in-place (GIP) simulations were performed, using data from 64 exploration boreholes, and GIP data were mapped within the fractured zone of the study area. In addition, methane flowrates monitored from 10 GGVs were analyzed using decline curve analyses (DCA) techniques to determine parameters of decline rates. Surface elevation showed the most influence on methane production from GGVs and thus was used to investigate its relation with DCA parameters using correlation techniques on normal-scored data. Geostatistical analysis was pursued using sequential Gaussian co-simulation with surface elevation as the secondary variable and with DCA parameters as the primary variables. The primary DCA variables were effective percentage decline rate, rate at production start, rate at the beginning of forecast period, and production end duration. Co-simulation results were presented to visualize decline parameters at an area-wide scale. Wells located at lower elevations, i.e., at the bottom of valleys, tend to perform better in terms of their rate declines compared to those at higher elevations. These results were used to calculate drainage radii of GGVs using GIP realizations. The calculated drainage radii are close to ones predicted by pressure transient tests.

  7. Sequential Gaussian co-simulation of rate decline parameters of longwall gob gas ventholes

    PubMed Central

    Karacan, C.Özgen; Olea, Ricardo A.

    2015-01-01

    Gob gas ventholes (GGVs) are used to control methane inflows into a longwall mining operation by capturing the gas within the overlying fractured strata before it enters the work environment. Using geostatistical co-simulation techniques, this paper maps the parameters of their rate decline behaviors across the study area, a longwall mine in the Northern Appalachian basin. Geostatistical gas-in-place (GIP) simulations were performed, using data from 64 exploration boreholes, and GIP data were mapped within the fractured zone of the study area. In addition, methane flowrates monitored from 10 GGVs were analyzed using decline curve analyses (DCA) techniques to determine parameters of decline rates. Surface elevation showed the most influence on methane production from GGVs and thus was used to investigate its relation with DCA parameters using correlation techniques on normal-scored data. Geostatistical analysis was pursued using sequential Gaussian co-simulation with surface elevation as the secondary variable and with DCA parameters as the primary variables. The primary DCA variables were effective percentage decline rate, rate at production start, rate at the beginning of forecast period, and production end duration. Co-simulation results were presented to visualize decline parameters at an area-wide scale. Wells located at lower elevations, i.e., at the bottom of valleys, tend to perform better in terms of their rate declines compared to those at higher elevations. These results were used to calculate drainage radii of GGVs using GIP realizations. The calculated drainage radii are close to ones predicted by pressure transient tests. PMID:26190930

  8. Regional-scale integration of hydrological and geophysical data using Bayesian sequential simulation: application to field data

    NASA Astrophysics Data System (ADS)

    Ruggeri, Paolo; Irving, James; Gloaguen, Erwan; Holliger, Klaus

    2013-04-01

    Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches to the regional scale still represents a major challenge, yet is critically important for the development of groundwater flow and contaminant transport models. To address this issue, we have developed a regional-scale hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure. The objective is to simulate the regional-scale distribution of a hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, our approach first involves linking the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. We present the application of this methodology to a pertinent field scenario, where we consider collocated high-resolution measurements of the electrical conductivity, measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, estimated from EM flowmeter and slug test measurements, in combination with low-resolution exhaustive electrical conductivity estimates obtained from dipole-dipole ERT measurements.

  9. Sequential Bayesian Geostatistical Inversion and Evaluation of Combined Data Worth for Aquifer Characterization at the Hanford 300 Area

    NASA Astrophysics Data System (ADS)

    Murakami, H.; Chen, X.; Hahn, M. S.; Over, M. W.; Rockhold, M. L.; Vermeul, V.; Hammond, G. E.; Zachara, J. M.; Rubin, Y.

    2010-12-01

    Subsurface characterization for predicting groundwater flow and contaminant transport requires us to integrate large and diverse datasets in a consistent manner, and quantify the associated uncertainty. In this study, we sequentially assimilated multiple types of datasets for characterizing a three-dimensional heterogeneous hydraulic conductivity field at the Hanford 300 Area. The datasets included constant-rate injection tests, electromagnetic borehole flowmeter tests, lithology profiles and tracer tests. We used the method of anchored distributions (MAD), which is a modular-structured Bayesian geostatistical inversion method. MAD has two major advantages over other inversion methods. First, it can directly infer a joint distribution of parameters, which can be used as an input in stochastic simulations for prediction. In MAD, in addition to typical geostatistical structural parameters, the parameter vector includes multiple point values of the heterogeneous field, called anchors, which capture local trends and reduce uncertainty in the prediction. Second, MAD allows us to integrate the datasets sequentially in a Bayesian framework such that it updates the posterior distribution as each new dataset is included. The sequential assimilation can decrease the computational burden significantly. We applied MAD to assimilate different combinations of the datasets, and then compared the inversion results. For the injection and tracer test assimilation, we calculated temporal moments of pressure build-up and breakthrough curves, respectively, to reduce the data dimension. The massively parallel flow and transport code PFLOTRAN is used for simulating the tracer test. For comparison, we used different metrics based on the breakthrough curves not used in the inversion, such as mean arrival time, peak concentration and early arrival time. This comparison is intended to yield the combined data worth, i.e., which combination of the datasets is most effective for a given metric, which will be useful for guiding further characterization efforts at the site and future characterization projects at other sites.

  10. Health-Promoting School Indicators: Schematic Models from Students

    ERIC Educational Resources Information Center

    Gabhainn, Saoirse Nic; Sixsmith, Jane; Delaney, Ellen-Nora; Moore, Miriam; Inchley, Jo; O'Higgins, Siobhan

    2007-01-01

    Purpose: The purpose of this paper is to outline a three-stage process for engaging with students to develop school-level indicators of health: in sequential class groups, students first generated, then categorised indicators, and finally developed schematic representations of their analyses. There is a political and practical need to develop…

  11. A simulation to study the feasibility of improving the temporal resolution of LAGEOS geodynamic solutions by using a sequential process noise filter

    NASA Technical Reports Server (NTRS)

    Hartman, Brian Davis

    1995-01-01

    A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along-track drag and the J2, J3, J4, and J5 geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time, which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal resolution of solutions obtained from standard sequential filtering methods and process noise sequential filtering methods shows that the accuracy is significantly improved using process noise. The results show that the positional accuracy of the orbit is improved as well. The temporal resolution of the resulting solutions is detailed, and conclusions are drawn about the results. Benefits and drawbacks of using process noise filtering in this type of scenario are also identified.
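
    A minimal sketch of the contrast the study draws, using a scalar Kalman filter: with process noise q > 0 the filter tracks a drifting parameter, whereas setting q = 0 recovers the standard behaviour of estimating a constant. The random-walk model and noise levels are illustrative assumptions, not the LAGEOS filter setup.

    ```python
    # Scalar Kalman filter with process noise tracking a drifting parameter.
    import numpy as np

    rng = np.random.default_rng(1)
    steps, q, r = 200, 1e-4, 0.05                # process / measurement noise (assumed)
    truth = np.cumsum(rng.normal(0.0, np.sqrt(q), steps)) + 1.0  # slowly varying parameter
    obs = truth + rng.normal(0.0, np.sqrt(r), steps)

    x, p, est = 0.0, 1.0, []
    for z in obs:
        p += q                                   # predict: random-walk model (q=0 -> constant)
        k = p / (p + r)                          # Kalman gain
        x += k * (z - x)                         # update with innovation
        p *= (1.0 - k)
        est.append(x)
    print(est[-1], truth[-1])
    ```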

  12. User's Guide of TOUGH2-EGS. A Coupled Geomechanical and Reactive Geochemical Simulator for Fluid and Heat Flow in Enhanced Geothermal Systems Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fakcharoenphol, Perapon; Xiong, Yi; Hu, Litang

    TOUGH2-EGS is a numerical simulation program coupling geomechanics and chemical reactions for fluid and heat flows in porous media and fractured reservoirs of enhanced geothermal systems. The simulator includes the fully-coupled geomechanical (THM) module, the fully-coupled geochemical (THC) module, and the sequentially coupled reactive geochemistry (THMC) module. The fully-coupled flow-geomechanics model is developed from the linear elastic theory for the thermo-poro-elastic system and is formulated with the mean normal stress as well as pore pressure and temperature. The chemical reaction is sequentially coupled after solution of the flow equations, which provides the flow velocity and phase saturation for the solute transport calculation at each time step. In addition, reservoir rock properties, such as porosity and permeability, are subject to change due to rock deformation and chemical reactions. The relationships between rock properties and geomechanical and chemical effects from poro-elasticity theories and empirical correlations are incorporated into the simulator. This report provides the user with detailed information on both mathematical models and instructions for using TOUGH2-EGS for THM, THC or THMC simulations. The mathematical models include the fluid and heat flow equations, geomechanical equation, reactive geochemistry equations, and discretization methods. Although TOUGH2-EGS has the capability for simulating fluid and heat flows coupled with both geomechanical and chemical effects, it is up to the users to select the specific coupling process, such as THM, THC, or THMC, in a simulation. There are several example problems illustrating the applications of this program. These example problems are described in detail and their input data are presented. The results demonstrate that this program can be used for field-scale geothermal reservoir simulation with fluid and heat flow, geomechanical effect, and chemical reaction in porous and fractured media.
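
    The sequential coupling itself reduces to a simple loop structure: solve flow, hand the pressure to the geomechanics step, then update rock properties from empirical correlations before the next time step. The sketch below shows only that control flow; all three "solvers" are stand-in one-liners with invented constants, not TOUGH2-EGS internals.

    ```python
    # Control-flow sketch of a sequentially coupled THM time step (stubs only).
    import math

    p, phi0, k0 = 10.0e6, 0.10, 1e-15       # pressure [Pa], porosity, permeability [m^2]
    sigma_total, alpha = 25.0e6, 1.0        # mean total stress, Biot coefficient (assumed)

    for step in range(5):
        p *= 0.9                                    # flow step (stub): pressure declines
        sigma_eff = sigma_total - alpha * p         # geomechanics step: effective stress
        phi = phi0 * math.exp(-5e-9 * sigma_eff)    # empirical porosity-stress relation
        k = k0 * (phi / phi0) ** 3                  # Kozeny-Carman-style permeability update
        print(f"step {step}: p={p:.3e} Pa  phi={phi:.4f}  k={k:.3e} m^2")
    ```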

  13. Spatiotemporal stochastic models for earth science and engineering applications

    NASA Astrophysics Data System (ADS)

    Luo, Xiaochun

    1998-12-01

    Spatiotemporal processes occur in many areas of earth sciences and engineering. However, most of the available theoretical tools and techniques of space-time data processing have been designed to operate exclusively in time or in space, and the importance of spatiotemporal variability was not fully appreciated until recently. To address this problem, a systematic framework of spatiotemporal random field (S/TRF) models for geoscience/engineering applications is presented and developed in this thesis. The space-time continuity characterization is one of the most important aspects in S/TRF modelling, where the space-time continuity is displayed with experimental spatiotemporal variograms, summarized in terms of space-time continuity hypotheses, and modelled using spatiotemporal variogram functions. Permissible spatiotemporal covariance/variogram models are addressed through permissibility criteria appropriate to spatiotemporal processes. The estimation of spatiotemporal processes is developed in terms of spatiotemporal kriging techniques. Particular emphasis is given to the singularity analysis of spatiotemporal kriging systems. The impacts of covariance functions, trend forms, and data configurations on the singularity of spatiotemporal kriging systems are discussed. In addition, the tensorial invariance of universal spatiotemporal kriging systems is investigated in terms of the space-time trend. The conditional simulation of spatiotemporal processes is proposed with the development of the sequential group Gaussian simulation techniques (SGGS), which comprise a series of sequential simulation algorithms associated with different group sizes. The simulation error is analyzed with different covariance models and simulation grids. A simulated annealing technique honoring experimental variograms is also proposed, providing a way of conditional simulation without the covariance-model fitting that is a prerequisite for most simulation algorithms. The proposed techniques were first applied to the modelling of the pressure system in a carbonate reservoir, and then to the modelling of spring water contents in the Dyle watershed. The results of these case studies as well as the theory suggest that these techniques are realistic and feasible.
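
    Space-time continuity characterization starts from the experimental semivariogram, gamma(h) = (1/2N(h)) * sum of squared increments over pairs separated by lag h. A minimal sketch for a one-dimensional series follows; the spatiotemporal case adds a second lag axis, but the estimator is the same.

    ```python
    # Experimental semivariogram of a 1-D series (synthetic, illustrative data).
    import numpy as np

    rng = np.random.default_rng(2)
    z = np.cumsum(rng.normal(size=300))          # synthetic correlated series

    def semivariogram(z, max_lag):
        # gamma(h) = (1 / 2N(h)) * sum over pairs at lag h of (z_i - z_{i+h})^2
        return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2)
                         for h in range(1, max_lag + 1)])

    gamma = semivariogram(z, 30)
    print(gamma[:5])
    ```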

  14. Geostatistical mapping of effluent-affected sediment distribution on the Palos Verdes shelf

    USGS Publications Warehouse

    Murray, C.J.; Lee, H.J.; Hampton, M.A.

    2002-01-01

    Geostatistical techniques were used to study the spatial continuity of the thickness of effluent-affected sediment in the offshore Palos Verdes Margin area. The thickness data were measured directly from cores and indirectly from high-frequency subbottom profiles collected over the Palos Verdes Margin. Strong spatial continuity of the sediment thickness data was identified, with a maximum range of correlation in excess of 1.4 km. The spatial correlation showed a marked anisotropy, and was more than twice as continuous in the alongshore direction as in the cross-shelf direction. Sequential indicator simulation employing models fit to the thickness data variograms was used to map the distribution of the sediment, and to quantify the uncertainty in those estimates. A strong correlation between sediment thickness data and measurements of the mass of the contaminant p,p'-DDE per unit area was identified. A calibration based on the bivariate distribution of the thickness and p,p'-DDE data was applied using Markov-Bayes indicator simulation to extend the geostatistical study and map the contamination levels in the sediment. Integrating the map grids produced by the geostatistical study of the two variables indicated that 7.8 million m^3 of effluent-affected sediment exist in the map area, containing approximately 61-72 Mg (metric tons) of p,p'-DDE. Most of the contaminated sediment (about 85% of the sediment and 89% of the p,p'-DDE) occurs in water depths < 100 m. The geostatistical study also indicated that the samples available for mapping are well distributed and the uncertainty of the estimates of the thickness and contamination level of the sediments is lowest in areas where the contaminated sediment is most prevalent. © 2002 Elsevier Science Ltd. All rights reserved.
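
    A hedged sketch of the core of one sequential-indicator-simulation step is given below: the data are indicator-coded at a set of thresholds, a local ccdf is estimated at the node being simulated (inverse-distance weights stand in here for the indicator kriging actually used), order relations are corrected, and a class is drawn by inverting the estimated cdf. Locations, values, and thresholds are invented.

    ```python
    # One node of a simplified sequential indicator simulation (illustrative data).
    import numpy as np

    rng = np.random.default_rng(3)
    xy = np.array([[0.0, 0.0], [1.0, 0.5], [0.2, 1.0]])   # data locations
    vals = np.array([0.4, 2.0, 1.1])                       # sediment thickness data
    thresholds = np.array([0.5, 1.0, 1.5])
    target = np.array([0.5, 0.5])                          # node being simulated

    d = np.linalg.norm(xy - target, axis=1)
    w = (1.0 / d) / np.sum(1.0 / d)                        # inverse-distance weights

    # F(z_k) estimated from indicator data I(u; z_k) = 1 if val <= z_k
    F = np.array([(w * (vals <= zk)).sum() for zk in thresholds])
    F = np.clip(np.maximum.accumulate(F), 0.0, 1.0)        # order-relation correction

    u = rng.uniform()
    k = np.searchsorted(F, u)                              # invert the estimated cdf
    classes = np.append(thresholds, np.inf)
    print("simulated class upper bound:", classes[k])
    ```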

  15. Effects of neostriatal 6-OHDA lesion on performance in a rat sequential reaction time task.

    PubMed

    Domenger, D; Schwarting, R K W

    2008-10-31

    Work in humans and monkeys has provided evidence that the basal ganglia, and the neurotransmitter dopamine therein, play an important role for sequential learning and performance. Compared to primates, experimental work in rodents is rather sparse, largely due to the fact that tasks comparable to the human ones, especially serial reaction time tasks (SRTT), had been lacking until recently. We have developed a rat model of the SRTT, which allows the study of neural correlates of sequential performance and motor sequence execution. Here, we report the effects of dopaminergic neostriatal lesions, performed using bilateral 6-hydroxydopamine injections, on performance of well-trained rats tested in our SRTT. Sequential behavior was measured in two ways: for one, the effects of small violations of otherwise well-trained sequences were examined as a measure of attention and automation. Secondly, sequential versus random performance was compared as a measure of sequential learning. Neurochemically, the lesions led to sub-total dopamine depletions in the neostriatum, which ranged around 60% in the lateral, and around 40% in the medial neostriatum. These lesions led to a general instrumental impairment in terms of reduced speed (response latencies) and response rate, and these deficits were correlated with the degree of striatal dopamine loss. Furthermore, the violation test indicated that responding in the lesion group was less automated. The comparison of random versus sequential responding showed that the lesion group did not retain its superior sequential performance in terms of speed, whereas it did in terms of accuracy. Also, rats with lesions did not improve further in overall performance as compared to pre-lesion values, whereas controls did. These results support previous findings that neostriatal dopamine is involved in instrumental behaviour in general. Also, such lesions are not sufficient to completely abolish sequential performance, at least when the sequence was acquired before the lesion, as tested here.

  16. Influence of Sequential vs. Simultaneous Dual-Task Exercise Training on Cognitive Function in Older Adults

    PubMed Central

    Tait, Jamie L.; Duckham, Rachel L.; Milte, Catherine M.; Main, Luana C.; Daly, Robin M.

    2017-01-01

    Emerging research indicates that exercise combined with cognitive training may improve cognitive function in older adults. Typically these programs have incorporated sequential training, where exercise and cognitive training are undertaken separately. However, simultaneous or dual-task training, where cognitive and/or motor training are performed simultaneously with exercise, may offer greater benefits. This review summary provides an overview of the effects of combined simultaneous vs. sequential training on cognitive function in older adults. Based on the available evidence, there are inconsistent findings with regard to the cognitive benefits of sequential training in comparison to cognitive or exercise training alone. In contrast, simultaneous training interventions, particularly multimodal exercise programs in combination with secondary tasks regulated by sensory cues, have significantly improved cognition in both healthy older and clinical populations. However, further research is needed to determine the optimal characteristics of a successful simultaneous training program for optimizing cognitive function in older people. PMID:29163146

  17. Sensor-Augmented Virtual Labs: Using Physical Interactions with Science Simulations to Promote Understanding of Gas Behavior

    ERIC Educational Resources Information Center

    Chao, Jie; Chiu, Jennifer L.; DeJaegher, Crystal J.; Pan, Edward A.

    2016-01-01

    Deep learning of science involves integration of existing knowledge and normative science concepts. Past research demonstrates that combining physical and virtual labs sequentially or side by side can take advantage of the unique affordances each provides for helping students learn science concepts. However, providing simultaneously connected…

  18. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  19. Landscape analysis software tools

    Treesearch

    Don Vandendriesche

    2008-01-01

    Recently, several new computer programs have been developed to assist in landscape analysis. The “Sequential Processing Routine for Arraying Yields” (SPRAY) program was designed to run a group of stands with particular treatment activities to produce vegetation yield profiles for forest planning. SPRAY uses existing Forest Vegetation Simulator (FVS) software coupled...

  20. Numerical simulation of transport and sequential biodegradation of chlorinated aliphatic hydrocarbons using CHAIN_2D

    NASA Astrophysics Data System (ADS)

    Schaerlaekens, J.; Mallants, D.; Šimůnek, J.; van Genuchten, M. Th.; Feyen, J.

    1999-12-01

    Microbiological degradation of perchloroethylene (PCE) under anaerobic conditions follows a series of chain reactions, in which, sequentially, trichloroethylene (TCE), cis-dichloroethylene (c-DCE), vinyl chloride (VC) and ethene are generated. First-order degradation rate constants, partitioning coefficients and mass exchange rates for PCE, TCE, c-DCE and VC were compiled from the literature. The parameters were used in a case study of pump-and-treat remediation of a PCE-contaminated site near Tilburg, The Netherlands. Transport, non-equilibrium sorption and biodegradation chain processes at the site were simulated using the CHAIN_2D code without further calibration. Modelled PCE concentrations compared reasonably well with those observed in the pumped water. We also performed a scenario analysis by applying several increased reductive dechlorination rates, reflecting different degradation conditions (e.g. addition of yeast extract and citrate). The scenario analysis predicted considerably higher concentrations of the degradation products as a result of enhanced reductive dechlorination of PCE. The predicted levels of the very toxic compound VC were now an order of magnitude above the maximum permissible concentration levels.
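
    The sequential degradation chain is a set of linear first-order ODEs that can be integrated directly. A minimal sketch with placeholder rate constants (not the calibrated values for the Tilburg site):

    ```python
    # First-order chain PCE -> TCE -> c-DCE -> VC -> ethene (illustrative rates).
    import numpy as np
    from scipy.integrate import solve_ivp

    k = np.array([0.05, 0.03, 0.02, 0.01])   # first-order rates [1/d] (assumed)

    def chain(t, c):
        pce, tce, dce, vc, eth = c
        return [-k[0] * pce,
                k[0] * pce - k[1] * tce,
                k[1] * tce - k[2] * dce,
                k[2] * dce - k[3] * vc,
                k[3] * vc]

    sol = solve_ivp(chain, (0.0, 365.0), [1.0, 0.0, 0.0, 0.0, 0.0],
                    t_eval=np.linspace(0.0, 365.0, 8))
    print(sol.y.round(3))    # rows: PCE, TCE, c-DCE, VC, ethene
    ```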

  1. Multisensor surveillance data augmentation and prediction with optical multipath signal processing

    NASA Astrophysics Data System (ADS)

    Bush, G. T., III

    1980-12-01

    The spatial characteristics of an oil spill on the high seas are examined in the interest of determining whether linear-shift-invariant data processing implemented on an optical computer would be a useful tool in analyzing spill behavior. Simulations were performed on a digital computer using data obtained from a 25,000-gallon spill of soybean oil in the open ocean. Marked changes occurred in the observed spatial frequencies when the oil spill was encountered. An optical detector may readily be developed to sound an alarm automatically when this happens. The average extent of oil spread between sequential observations was quantified by a simulation of non-holographic optical computation. Because a zero crossover was available in this computation, it may be possible to construct a system to measure automatically the amount of spread. Oil images were subjected to deconvolutional filtering to reveal the force field which acted upon the oil to cause spreading. Some features of spill-size prediction were observed. Calculations based on two sequential photos produced an image which exhibited characteristics of the third photo in that sequence.

  2. High-Fidelity Simulation for Advanced Cardiac Life Support Training

    PubMed Central

    Davis, Lindsay E.; Storjohann, Tara D.; Spiegel, Jacqueline J.; Beiber, Kellie M.

    2013-01-01

    Objective. To determine whether a high-fidelity simulation technique compared with lecture would produce greater improvement in advanced cardiac life support (ACLS) knowledge, confidence, and overall satisfaction with the training method. Design. This sequential, parallel-group, crossover trial randomized students into 2 groups distinguished by the sequence of teaching technique delivered for ACLS instruction (ie, classroom lecture vs high-fidelity simulation exercise). Assessment. Test scores on a written examination administered at baseline and after each teaching technique improved significantly from baseline in all groups but were highest when lecture was followed by simulation. Simulation was associated with a greater degree of overall student satisfaction compared with lecture. Participation in a simulation exercise did not improve pharmacy students’ knowledge of ACLS more than attending a lecture, but it was associated with improved student confidence in skills and satisfaction with learning and application. Conclusions. College curricula should incorporate simulation to complement but not replace lecture for ACLS education. PMID:23610477

  3. High-fidelity simulation for advanced cardiac life support training.

    PubMed

    Davis, Lindsay E; Storjohann, Tara D; Spiegel, Jacqueline J; Beiber, Kellie M; Barletta, Jeffrey F

    2013-04-12

    OBJECTIVE. To determine whether a high-fidelity simulation technique compared with lecture would produce greater improvement in advanced cardiac life support (ACLS) knowledge, confidence, and overall satisfaction with the training method. DESIGN. This sequential, parallel-group, crossover trial randomized students into 2 groups distinguished by the sequence of teaching technique delivered for ACLS instruction (ie, classroom lecture vs high-fidelity simulation exercise). ASSESSMENT. Test scores on a written examination administered at baseline and after each teaching technique improved significantly from baseline in all groups but were highest when lecture was followed by simulation. Simulation was associated with a greater degree of overall student satisfaction compared with lecture. Participation in a simulation exercise did not improve pharmacy students' knowledge of ACLS more than attending a lecture, but it was associated with improved student confidence in skills and satisfaction with learning and application. CONCLUSIONS. College curricula should incorporate simulation to complement but not replace lecture for ACLS education.

  4. An extended sequential goodness-of-fit multiple testing method for discrete data.

    PubMed

    Castro-Conde, Irene; Döhler, Sebastian; de Uña-Álvarez, Jacobo

    2017-10-01

    The sequential goodness-of-fit (SGoF) multiple testing method has recently been proposed as an alternative to the familywise error rate- and the false discovery rate-controlling procedures in high-dimensional problems. For discrete data, the SGoF method may be very conservative. In this paper, we introduce an alternative SGoF-type procedure that takes into account the discreteness of the test statistics. Like the original SGoF, our new method provides weak control of the false discovery rate/familywise error rate but attains false discovery rate levels closer to the desired nominal level, and thus it is more powerful. We study the performance of this method in a simulation study and illustrate its application to a real pharmacovigilance data set.
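
    A rough sketch of the SGoF idea in one common formulation (not necessarily the exact discrete extension this paper proposes): count the p-values at or below gamma, compare the count with a binomial critical value under the null, and declare the excess smallest p-values significant.

    ```python
    # Simplified SGoF-style metatest (one assumed formulation, illustrative data).
    import numpy as np
    from scipy.stats import binom

    def sgof(pvals, alpha=0.05, gamma=0.05):
        p = np.sort(np.asarray(pvals))
        n = p.size
        r = int(np.sum(p <= gamma))                   # observed "successes"
        crit = int(binom.ppf(1.0 - alpha, n, gamma))  # null upper quantile
        n_sig = max(r - crit, 0)                      # excess over the null
        return p[:n_sig]                              # smallest p-values declared significant

    rng = np.random.default_rng(4)
    pv = np.concatenate([rng.uniform(size=900), rng.beta(0.2, 5.0, size=100)])
    print(len(sgof(pv)))
    ```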

  5. Sequential deconvolution from wave-front sensing using bivariate simplex splines

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai

    2015-05-01

    Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on simultaneous recording of short-exposure images and wave-front sensor data. This paper employs the multivariate splines method for sequential DWFS: first, a bivariate simplex splines based model of the average slopes measured by a Shack-Hartmann wave-front sensor is built; next, a well-conditioned least squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; the distorted wave-front is then uniquely determined by the estimated spline coefficients; finally, the object image is obtained by non-blind deconvolution. Simulated experiments at different turbulence strengths show that our method delivers superior image restoration and noise rejection, especially when extracting the multidirectional phase derivatives.

  6. Electrophysiological Correlates of Familiarity and Recollection in Associative Recognition: Contributions of Perceptual and Conceptual Processing to Unitization

    PubMed Central

    Li, Bingcan; Mao, Xinrui; Wang, Yujuan; Guo, Chunyan

    2017-01-01

    It is generally accepted that associative recognition memory is supported by recollection. In addition, recent research indicates that familiarity can support associative memory, especially when two items are unitized into a single item. Both perceptual and conceptual manipulations can be used to unitize items, but few studies have compared these two methods of unitization directly. In the present study, we investigated the effects of familiarity and recollection on successful retrieval of items that were unitized perceptually or conceptually. Participants were instructed to remember either a Chinese two-character compound or unrelated word-pairs, which were presented simultaneously or sequentially. Participants were then asked to recognize whether word-pairs were intact or rearranged. Event-related potential (ERP) recordings were performed during the recognition phase of the study. Two-character compounds were better discriminated than unrelated word-pairs and simultaneous presentation was found to elicit better discrimination than sequential presentation for unrelated word-pairs only. ERP recordings indicated that the early intact/rearranged effects (FN400), typically associated with familiarity, were elicited in compound word-pairs with both simultaneous and sequential presentation, and in simultaneously presented unrelated word-pairs, but not in sequentially presented unrelated word-pairs. In contrast, the late positive complex (LPC) effects associated with recollection were elicited in all four conditions. Together, these results indicate that while the engagement of familiarity in associative recognition is affected by both perceptual and conceptual unitization, conceptual unitization promotes a higher level of unitization (LOU). In addition, the engagement of recollection was not affected by unitized manipulations. It should be noted, however, that due to experimental design, the effects presented here may be due to semantic rather than episodic memory and future studies should take this into consideration when manipulating rearranged pairs. PMID:28400723

  7. Building merger trees from cosmological N-body simulations. Towards improving galaxy formation models using subhaloes

    NASA Astrophysics Data System (ADS)

    Tweed, D.; Devriendt, J.; Blaizot, J.; Colombi, S.; Slyz, A.

    2009-11-01

    Context: In the past decade or so, using numerical N-body simulations to describe the gravitational clustering of dark matter (DM) in an expanding universe has become the tool of choice for tackling the issue of hierarchical galaxy formation. As mass resolution increases with the power of supercomputers, one is able to grasp finer and finer details of this process, resolving more and more of the inner structure of collapsed objects. This begs one to revisit time and again the post-processing tools with which one transforms particles into “invisible” dark matter haloes and from thereon into luminous galaxies. Aims: Although a fair amount of work has been devoted to growing Monte-Carlo merger trees that resemble those built from an N-body simulation, comparatively little effort has been invested in quantifying the caveats one necessarily encounters when one extracts trees directly from such a simulation. To help reverse this trend, this paper seeks to provide its reader with a comprehensive study of the problems one faces when following this route. Methods: The first step in building merger histories of dark matter haloes and their subhaloes is to identify these structures in each of the time outputs (snapshots) produced by the simulation. Even though we discuss a particular implementation of such an algorithm (called AdaptaHOP) in this paper, we believe that our results do not depend on the exact details of the implementation but instead extend to most if not all (sub)structure finders. To illustrate this point, in the appendix we compare AdaptaHOP's results to the standard friend-of-friend (FOF) algorithm, widely utilised in the astrophysical community. We then highlight different ways of building merger histories from AdaptaHOP haloes and subhaloes, contrasting their various advantages and drawbacks. Results: We find that the best approach to (sub)halo merging histories is through an analysis that goes back and forth between identification and tree building rather than one that conducts a straightforward sequential treatment of these two steps. This is rooted in the complexity of the merging trees that have to depict an inherently dynamical process from the partial temporal information contained in the collection of instantaneous snapshots available from the N-body simulation. However, we also propose a simpler sequential “Most massive Substructure Method” (MSM) whose trees approximate those obtained via the more complicated non-sequential method. Appendices are only available in electronic form at: http://www.aanda.org

  8. astroABC: An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files are output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
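
    The likelihood-free principle behind such samplers can be shown in a few lines. The sketch below is a plain ABC rejection sampler with a shrinking tolerance schedule, far simpler than astroABC's SMC machinery (no weighting, perturbation kernels, or MPI), and uses an invented Gaussian toy problem:

    ```python
    # ABC rejection with a decreasing tolerance (toy problem, not astroABC's API).
    import numpy as np

    rng = np.random.default_rng(5)
    data = rng.normal(3.0, 1.0, 200)
    obs_summary = data.mean()                         # summary statistic of the "observation"

    accepted = rng.uniform(-10.0, 10.0, 5000)         # draws from a flat prior on the mean
    for tol in (1.0, 0.3, 0.1):                       # decreasing tolerance schedule
        sims = rng.normal(accepted, 1.0, (200, accepted.size)).mean(axis=0)
        accepted = accepted[np.abs(sims - obs_summary) <= tol]
    print(accepted.mean(), accepted.size)             # approximate posterior mean
    ```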

  9. A Sequential Ensemble Prediction System at Convection Permitting Scales

    NASA Astrophysics Data System (ADS)

    Milan, M.; Simmer, C.

    2012-04-01

    A Sequential Assimilation Method (SAM), following some aspects of particle filtering with resampling, also called SIR (Sequential Importance Resampling), is introduced and applied in the framework of an Ensemble Prediction System (EPS) for weather forecasting on convection-permitting scales, with a focus on precipitation forecasting. At this scale and beyond, the atmosphere increasingly exhibits chaotic behaviour and non-linear state space evolution due to convectively driven processes. One way to take full account of non-linear state developments is provided by particle filter methods; their basic idea is the representation of the model probability density function by a number of ensemble members weighted by their likelihood with respect to the observations. In particular, particle filtering with resampling abandons ensemble members (particles) with low weights and restores the original number of particles by adding multiple copies of the members with high weights. In our SIR-like implementation we replace the likelihood-based definition of weights with a metric that quantifies the "distance" between the observed atmospheric state and the states simulated by the ensemble members. We also introduce a methodology to counteract filter degeneracy, i.e. the collapse of the simulated state space. To this end we propose a combination of nudging and resampling that takes account of clustering in the simulated state space. By keeping cluster representatives during resampling and filtering, the method maintains the potential for non-linear system state development. We assume that a particle cluster with initially low likelihood may evolve into a state space region of higher likelihood at a subsequent filter time, thus mimicking non-linear system state developments (e.g. sudden convection initiation) and remedying timing errors for convection due to model errors and/or imperfect initial conditions. We apply a simplified version of the resampling: the particles with the highest weights in each cluster are duplicated; during the model evolution, one particle of each pair evolves freely using the forward model, while the second is nudged towards the radar and satellite observations.
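
    For reference, the SIR skeleton the method builds on is compact: propagate particles with the forward model, weight them against the observation, and resample. The sketch below shows a bootstrap particle filter on an invented scalar nonlinear model, without the clustering, distance metric, or nudging extensions the abstract proposes.

    ```python
    # Bootstrap particle filter with resampling (toy scalar nonlinear model).
    import numpy as np

    rng = np.random.default_rng(6)
    T, N = 50, 500
    x_true, xs, ys = 0.0, [], []
    for t in range(T):                                   # simulate truth and observations
        x_true = 0.5 * x_true + 2.0 * np.sin(x_true) + rng.normal(0, 0.5)
        xs.append(x_true)
        ys.append(x_true + rng.normal(0, 1.0))

    particles = rng.normal(0.0, 1.0, N)
    for y in ys:
        particles = (0.5 * particles + 2.0 * np.sin(particles)
                     + rng.normal(0, 0.5, N))            # propagate with forward model
        w = np.exp(-0.5 * (y - particles) ** 2)          # Gaussian likelihood weights
        w /= w.sum()
        particles = particles[rng.choice(N, N, p=w)]     # resample (SIR step)
    print("final estimate", particles.mean(), "truth", xs[-1])
    ```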

  10. Program Completion of a Web-Based Tailored Lifestyle Intervention for Adults: Differences between a Sequential and a Simultaneous Approach

    PubMed Central

    Schneider, Francine; de Vries, Hein; van Osch, Liesbeth ADM; van Nierop, Peter WM; Kremers, Stef PJ

    2012-01-01

    Background Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). Objectives The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Methods Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice was provided, about one or more lifestyle behaviors. Results Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more incompletes in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998). In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; P < .001; CI = 1.02-1.05; simultaneous condition: OR = 1.04; P < .001; CI = 1.02-1.05) and an unhealthy lifestyle (sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49; P < .001; CI = 0.42-0.58). In the sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout. When respondents failed to adhere to at least 2 of the guidelines, those receiving the simultaneous intervention were more inclined to drop out than were those receiving the sequential intervention. Conclusion Possible reasons for the higher dropout rate in our simultaneous intervention may be the amount of time required and information overload. Strategies to optimize program completion as well as continued use of computer-tailored interventions should be studied. Trial Registration Dutch Trial Register NTR2168 PMID:22403770

  11. Program completion of a web-based tailored lifestyle intervention for adults: differences between a sequential and a simultaneous approach.

    PubMed

    Schulz, Daniela N; Schneider, Francine; de Vries, Hein; van Osch, Liesbeth A D M; van Nierop, Peter W M; Kremers, Stef P J

    2012-03-08

    Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice was provided, about one or more lifestyle behaviors. Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more incompletes in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998). In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; P < .001; CI = 1.02-1.05; simultaneous condition: OR = 1.04; P < .001; CI = 1.02-1.05) and an unhealthy lifestyle (sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49; P < .001; CI = 0.42-0.58). In the sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout. When respondents failed to adhere to at least 2 of the guidelines, those receiving the simultaneous intervention were more inclined to drop out than were those receiving the sequential intervention. Possible reasons for the higher dropout rate in our simultaneous intervention may be the amount of time required and information overload. Strategies to optimize program completion as well as continued use of computer-tailored interventions should be studied. Dutch Trial Register NTR2168.

  12. Lattice modification in KTiOPO4 by sequential hydrogen and helium implantation at submicrometer depth

    NASA Astrophysics Data System (ADS)

    Ma, Changdong; Lu, Fei; Xu, Bo; Fan, Ranran

    2016-05-01

    We investigated lattice modification and its physical mechanism in H and He co-implanted, z-cut potassium titanyl phosphate (KTiOPO4). The samples were implanted with 110 keV H and 190 keV He, both to a fluence of 4 × 10^16 cm^-2, at room temperature. Rutherford backscattering/channeling, high-resolution x-ray diffraction, and transmission electron microscopy were used to examine the implantation-induced structural changes and strain. Experimental and simulated x-ray diffraction results show that the strain in the implanted KTiOPO4 crystal is caused by interstitial atoms. The strain and stress are anisotropic and depend on the crystal's orientation. Transmission electron microscopy studies indicate that ion implantation produces many dislocations in the as-implanted samples. Annealing can induce ion aggregation to form nanobubbles, but plastic deformation and ion out-diffusion prevent the KTiOPO4 surface from blistering.

  13. Correlations between emission timescale of fragments and isospin dynamics in 124Sn+64Ni and 112Sn+58Ni reactions at 35A MeV

    NASA Astrophysics Data System (ADS)

    De Filippo, E.; Pagano, A.; Russotto, P.; Amorini, F.; Anzalone, A.; Auditore, L.; Baran, V.; Berceanu, I.; Borderie, B.; Bougault, R.; Bruno, M.; Cap, T.; Cardella, G.; Cavallaro, S.; Chatterjee, M. B.; Chbihi, A.; Colonna, M.; D'Agostino, M.; Dayras, R.; Di Toro, M.; Frankland, J.; Galichet, E.; Gawlikowicz, W.; Geraci, E.; Grzeszczuk, A.; Guazzoni, P.; Kowalski, S.; La Guidara, E.; Lanzalone, G.; Lanzanò, G.; Le Neindre, N.; Lombardo, I.; Maiolino, C.; Papa, M.; Piasecki, E.; Pirrone, S.; Płaneta, R.; Politi, G.; Pop, A.; Porto, F.; Rivet, M. F.; Rizzo, F.; Rosato, E.; Schmidt, K.; Siwek-Wilczyńska, K.; Skwira-Chalot, I.; Trifirò, A.; Trimarchi, M.; Verde, G.; Vigilante, M.; Wieleczko, J. P.; Wilczyński, J.; Zetta, L.; Zipper, W.

    2012-07-01

    We present a new experimental method to correlate the isotopic composition of intermediate mass fragments (IMF) emitted at midrapidity in semiperipheral collisions with the emission timescale: IMFs emitted in the early stage of the reaction show larger values of isospin asymmetry, stronger angular anisotropies, and reduced odd-even staggering effects in neutron to proton ratio distributions than those produced in sequential statistical emission. All these effects support the concept of isospin “migration”, which is sensitive to the density gradient between participant and quasispectator nuclear matter, in the so-called neck fragmentation mechanism. By comparing the data to a stochastic mean field (SMF) simulation we show that this method gives valuable constraints on the symmetry energy term of the nuclear equation of state at subsaturation densities. An indication emerges for a linear density dependence of the symmetry energy.

  14. Cost-effectiveness of the sequential application of tyrosine kinase inhibitors for the treatment of chronic myeloid leukemia.

    PubMed

    Rochau, Ursula; Sroczynski, Gaby; Wolf, Dominik; Schmidt, Stefan; Jahn, Beate; Kluibenschaedl, Martina; Conrads-Frank, Annette; Stenehjem, David; Brixner, Diana; Radich, Jerald; Gastl, Günther; Siebert, Uwe

    2015-01-01

    Several tyrosine kinase inhibitors (TKIs) are approved for chronic myeloid leukemia (CML) therapy. We evaluated the long-term cost-effectiveness of seven sequential therapy regimens for CML in Austria. A cost-effectiveness analysis was performed using a state-transition Markov model. As model parameters, we used published trial data, clinical, epidemiological and economic data from the Austrian CML registry and national databases. We performed a cohort simulation over a life-long time-horizon from a societal perspective. Nilotinib without second-line TKI yielded an incremental cost-utility ratio of 121,400 €/quality-adjusted life year (QALY) compared to imatinib without second-line TKI after imatinib failure. Imatinib followed by nilotinib after failure resulted in 131,100 €/QALY compared to nilotinib without second-line TKI. Nilotinib followed by dasatinib yielded 152,400 €/QALY compared to imatinib followed by nilotinib after failure. Remaining strategies were dominated. The sequential application of TKIs is standard-of-care, and thus, our analysis points toward imatinib followed by nilotinib as the most cost-effective strategy.
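
    The underlying machinery is a Markov cohort simulation: a transition matrix is applied cycle by cycle while discounted costs and QALYs accumulate. The sketch below uses a three-state toy model with invented transition probabilities, costs, and utilities (not the Austrian registry inputs); running it once per strategy and dividing the cost difference by the QALY difference yields the incremental cost-utility ratio.

    ```python
    # Three-state Markov cohort model with discounting (all numbers invented).
    import numpy as np

    P = np.array([[0.92, 0.06, 0.02],     # chronic phase -> {chronic, progressed, dead}
                  [0.00, 0.85, 0.15],     # progressed
                  [0.00, 0.00, 1.00]])    # dead (absorbing)
    cost = np.array([40_000.0, 60_000.0, 0.0])     # per-cycle costs [EUR] (assumed)
    utility = np.array([0.85, 0.55, 0.0])          # per-cycle QALY weights (assumed)

    state = np.array([1.0, 0.0, 0.0])              # cohort starts in chronic phase
    total_cost = total_qaly = 0.0
    for cycle in range(40):                        # yearly cycles, life-long horizon
        d = 1.03 ** -cycle                         # 3% annual discounting (assumed)
        total_cost += d * state @ cost
        total_qaly += d * state @ utility
        state = state @ P                          # advance the cohort one cycle
    print(f"{total_cost:,.0f} EUR, {total_qaly:.2f} QALYs")
    ```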

  15. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and a fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models, and verifying consistency between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  16. Flexible sequential designs for multi-arm clinical trials.

    PubMed

    Magirr, D; Stallard, N; Jaki, T

    2014-08-30

    Adaptive designs that are based on group-sequential approaches have the benefit of being efficient, as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so-called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs', on the other hand, can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive designs' can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well-chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.

  17. Feel, imagine and learn! - Haptic augmented simulation and embodied instruction in physics learning

    NASA Astrophysics Data System (ADS)

    Han, In Sook

    The purpose of this study was to investigate the potential and effects of an embodied instructional model in abstract concept learning. This embodied instructional process included haptic augmented educational simulation as an instructional tool to provide perceptual experiences, as well as further instruction to activate those previous experiences with perceptual simulation. In order to verify the effectiveness of this instructional model, haptic augmented simulations with three different haptic levels (force and kinesthetic, kinesthetic, and non-haptic) and instructional materials (narrative and expository) were developed and their effectiveness was tested. A total of 220 fifth-grade students were recruited to participate in the study from three elementary schools located in lower-SES neighborhoods in the Bronx, New York. The study was conducted for three consecutive weeks in regular class periods. The data were analyzed using ANCOVA, ANOVA, and MANOVA. The results indicate that the haptic augmented simulations, both force-and-kinesthetic and kinesthetic-only, were more effective than the non-haptic simulation in providing perceptual experiences and helping elementary students to create multimodal representations about machines' movements. However, in most cases, force feedback was needed to construct a fully loaded multimodal representation that could be activated when instruction engaging fewer sensory modalities was given. In addition, the force and kinesthetic simulation was effective in providing cognitive grounding to comprehend new learning content based on the multimodal representation created with enhanced force feedback. Regarding the instruction type, it was found that the narrative and the expository instructions did not make any difference in activating previous perceptual experiences. These findings suggest that it is important to help students build solid cognitive grounding with a perceptual anchor. Also, a sequential abstraction process would deepen students' understanding by providing an opportunity to practice mental simulation as the sensory modalities used are removed one by one, gradually reaching an abstract level of understanding at which students can imagine the machines' movements and working mechanisms from abstract language alone, without any perceptual supports.

  18. The effects of using high-fidelity simulators and standardized patients on the thorax, lung, and cardiac examination skills of undergraduate nursing students.

    PubMed

    Tuzer, Hilal; Dinc, Leyla; Elcin, Melih

    2016-10-01

    Existing research literature indicates that the use of various simulation techniques in the training of physical examination skills develops students' cognitive and psychomotor abilities in a realistic learning environment while improving patient safety. The study aimed to compare the effects of the use of a high-fidelity simulator and standardized patients on the knowledge and skills of students conducting thorax-lungs and cardiac examinations, and to explore the students' views and learning experiences. A mixed-method explanatory sequential design. The study was conducted in the Simulation Laboratory of a Nursing School, the Training Center at the Faculty of Medicine, and in the inpatient clinics of the Education and Research Hospital. Fifty-two fourth-year nursing students. Students were randomly assigned to Group 1 and Group 2. The students in Group 1 attended the thorax-lungs and cardiac examination training using a high-fidelity simulator, while those in Group 2 trained with standardized patients. After the training sessions, all students practiced their skills on real patients in the clinical setting under the supervision of the investigator. Knowledge and performance scores of all students increased following the simulation activities; however, the students who worked with standardized patients achieved significantly higher knowledge scores than those who worked with the high-fidelity simulator; there was, however, no significant difference in performance scores between the groups. The mean performance scores of students on real patients were significantly higher compared to the post-simulation assessment scores (p<0.001). Results of this study revealed that the use of standardized patients was more effective than the use of a high-fidelity simulator in increasing the knowledge scores of students on thorax-lungs and cardiac examinations; however, practice on real patients increased performance scores of all students without any significant difference between the two groups. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Error in telemetry studies: Effects of animal movement on triangulation

    USGS Publications Warehouse

    Schmutz, Joel A.; White, Gary C.

    1990-01-01

    We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.
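
    A hedged sketch of the underlying geometry: each bearing defines a ray from an observer position, a fix is the intersection of two rays, and movement between sequentially taken bearings shifts that intersection away from the true position. Coordinates and distances below are illustrative.

    ```python
    # Triangulating a location from two sequential bearings (illustrative geometry).
    import numpy as np

    def fix(o1, theta1, o2, theta2):
        # Intersect rays o_i + t_i * d_i, with d = (sin(az), cos(az)), az from north.
        d1 = np.array([np.sin(theta1), np.cos(theta1)])
        d2 = np.array([np.sin(theta2), np.cos(theta2)])
        t = np.linalg.solve(np.column_stack([d1, -d2]), o2 - o1)
        return o1 + t[0] * d1

    o1, o2 = np.array([0.0, 0.0]), np.array([800.0, 0.0])
    animal_at_b1 = np.array([400.0, 600.0])
    animal_at_b2 = animal_at_b1 + np.array([250.0, 0.0])   # moved between bearings

    az1 = np.arctan2(*(animal_at_b1 - o1))                 # bearing taken first
    az2 = np.arctan2(*(animal_at_b2 - o2))                 # bearing taken later
    est = fix(o1, az1, o2, az2)
    print("estimated:", est, " error vs first position:", np.linalg.norm(est - animal_at_b1))
    ```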

  20. Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity

    NASA Astrophysics Data System (ADS)

    Miah, Md Mamun

    This research describes the importance of hydro-geomechanical coupling in the geologic subsurface environment arising from fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during well construction in the oil and gas industries. A sequential computational code is developed to capture the multiphysics interaction behavior by linking the flow simulation code TOUGH2 and the geomechanics modeling code PyLith. The numerical formulation of each code is discussed to demonstrate its modeling capabilities. The computational framework involves sequential coupling and solution of two sub-problems: fluid flow through fractured and porous media, and reservoir geomechanics. For each time step of the flow calculation, the pressure field is passed to the geomechanics code to compute the effective stress field and fault slips. A simplified permeability model is implemented in the code that accounts for the permeability of porous and saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem. The modeling capability of coupled poroelasticity is validated by benchmarking it against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault from the combined effect of far-field tectonic loading and fluid injection by using an appropriate fault constitutive friction model. Results from the quasi-static induced earthquake simulations show a delayed response in earthquake nucleation. This is attributed to the increased total stress in the domain and to the fact that pressure on the fault is not accounted for. However, this issue is resolved in the final chapter in simulating a single-event earthquake dynamic rupture. Simulation results show that fluid pressure has a positive effect on slip nucleation and subsequent crack propagation. This is confirmed by running a sensitivity analysis that shows that an increase in injection-well distance results in delayed slip nucleation and rupture propagation on the fault.
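
    The Terzaghi benchmark mentioned above has a classical series solution for the normalized excess pore pressure, u/u0 = sum over m of (2/M) sin(M z/H) exp(-M^2 Tv), with M = pi(2m+1)/2 and time factor Tv = c_v t / H^2. A minimal sketch (inputs illustrative):

    ```python
    # Terzaghi 1-D consolidation series solution (verification-style check).
    import numpy as np

    def terzaghi(z_over_H, Tv, terms=100):
        m = np.arange(terms)
        M = np.pi * (2 * m + 1) / 2.0
        return np.sum((2.0 / M) * np.sin(M * z_over_H) * np.exp(-(M ** 2) * Tv))

    for Tv in (0.05, 0.2, 0.5):            # dimensionless time factor c_v t / H^2
        print(Tv, terzaghi(0.5, Tv))       # mid-depth pore pressure ratio u/u0
    ```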

  1. Progressive transmission of pseudo-color images. Appendix 1: Item 4. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Hadenfeldt, Andrew C.

    1991-01-01

    The transmission of digital images can require considerable channel bandwidth. The cost of obtaining such a channel can be prohibitive, or the channel might simply not be available. In this case, progressive transmission (PT) can be useful. PT presents the user with a coarse initial image approximation, and then proceeds to refine it. In this way, the user tends to receive information about the content of the image sooner than if a sequential transmission method is used. PT finds use in image database browsing, teleconferencing, medical, and other applications. A PT scheme is developed for use with a particular type of image data, the pseudo-color or color mapped image. Such images consist of a table of colors called a colormap, plus a 2-D array of index values which indicate which colormap entry is to be used to display a given pixel. This type of image presents some unique problems for a PT coder, and techniques for overcoming these problems are developed. A computer simulation of the color mapped PT scheme is developed to evaluate its performance. Results of simulations using several test images are presented.
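
    The data structure involved is simply a palette plus an index array, and the progressive idea can be shown with a naive two-stage schedule: send a subsampled index array first, let the receiver upsample it by nearest neighbour, then refine. This sketch illustrates the representation only, not the thesis' actual coder.

    ```python
    # Colormapped image: palette + index array, with a naive progressive stage.
    import numpy as np

    rng = np.random.default_rng(7)
    colormap = rng.integers(0, 256, (16, 3))            # 16 RGB palette entries
    indices = rng.integers(0, 16, (64, 64))             # per-pixel colormap indices

    coarse = indices[::4, ::4]                          # stage 1: subsampled indices
    approx = np.repeat(np.repeat(coarse, 4, 0), 4, 1)   # receiver's nearest-neighbour view
    stage1_pixels = colormap[approx]                    # displayable coarse approximation
    stage2_pixels = colormap[indices]                   # after full refinement
    print(stage1_pixels.shape, (approx == indices).mean())  # fraction already correct
    ```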

  2. Sequentially-crosslinked biomimetic bioactive glass/gelatin methacryloyl composite hydrogels for bone regeneration.

    PubMed

    Zheng, Jiafu; Zhao, Fujian; Zhang, Wen; Mo, Yunfei; Zeng, Lei; Li, Xian; Chen, Xiaofeng

    2018-08-01

    In recent years, gelatin-based composite hydrogels have been intensively investigated because of their inherent bioactivity, biocompatibility and biodegradability. Herein, we fabricated photocrosslinkable biomimetic composite hydrogels from bioactive glass (BG) and gelatin methacryloyl (GelMA) by a sequential physical and chemical crosslinking (gelation + UV) approach. The results showed that the compressive modulus of the composite hydrogels increased significantly through the sequential crosslinking approach. The addition of BG resulted in a significant increase in physiological stability and apatite-forming ability. In vitro data indicated that BG/GelMA composite hydrogels promoted cell attachment, proliferation and differentiation. Overall, the BG/GelMA composite hydrogels combined the advantages of good biocompatibility and bioactivity, and have potential applications in bone regeneration. Copyright © 2018. Published by Elsevier B.V.

  3. Repeated significance tests of linear combinations of sensitivity and specificity of a diagnostic biomarker

    PubMed Central

    Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi

    2016-01-01

    A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
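
    For concreteness, the equal-weights case of such a linear combination is Youden's index, J = sensitivity + specificity - 1. A minimal sketch with invented counts, checking J against a minimal acceptable level:

    ```python
    # Youden's index from a 2x2 diagnostic table (counts are invented).
    tp, fn, tn, fp = 42, 8, 45, 5            # verified diseased / healthy subjects

    sens = tp / (tp + fn)                    # sensitivity: 0.84
    spec = tn / (tn + fp)                    # specificity: 0.90
    youden = sens + spec - 1.0               # Youden's index J
    print(youden, "meets minimal level 0.5:", youden > 0.5)
    ```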

  4. A posteriori model validation for the temporal order of directed functional connectivity maps.

    PubMed

    Beltz, Adriene M; Molenaar, Peter C M

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
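
    A sketch of the white-noise check applied to one-step-ahead prediction errors, implemented as a Ljung-Box portmanteau test on the residual autocorrelations; the Lagrange Multiplier-guided map revision used in the study is omitted.

    ```python
    import numpy as np
    from scipy import stats

    def ljung_box(residuals, max_lag=10):
        """Ljung-Box Q statistic and p-value for H0: residuals are white noise."""
        x = residuals - residuals.mean()
        n = len(x)
        denom = np.dot(x, x)
        acf = np.array([np.dot(x[:-k], x[k:]) / denom
                        for k in range(1, max_lag + 1)])
        q = n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, max_lag + 1)))
        return q, stats.chi2.sf(q, df=max_lag)

    rng = np.random.default_rng(1)
    white = rng.standard_normal(200)
    q, p = ljung_box(white)
    print(f"Q = {q:.2f}, p = {p:.3f}")   # large p: no unmodeled dependence
    ```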

  5. Isobaric yield ratio difference between the 140 A MeV 58Ni + 9Be and 64Ni +9Be reactions studied by the antisymmetric molecular dynamics model

    NASA Astrophysics Data System (ADS)

    Qiao, C. Y.; Wei, H. L.; Ma, C. W.; Zhang, Y. L.; Wang, S. S.

    2015-07-01

    Background: The isobaric yield ratio difference (IBD) method is found to be sensitive to the density difference in neutron-rich nucleus-induced reactions around the Fermi energy. Purpose: An investigation is performed to study the IBD results in a transport model. Methods: The antisymmetric molecular dynamics (AMD) model plus the sequential decay model GEMINI are adopted to simulate the 140 A MeV 58,64Ni + 9Be reactions. A relatively small coalescence radius Rc = 2.5 fm is used for the phase space at t = 500 fm/c to form the hot fragments. Two limitations on the impact parameter (b1 = 0-2 fm and b2 = 0-9 fm) are used to study the effect of central collisions on the IBD. Results: The isobaric yield ratios (IYRs) for the large-A fragments are found to be suppressed in the symmetric reaction. The IBD results for fragments with neutron excess I = 0 and 1 are obtained. A small difference is found between the IBDs with the b1 and b2 limitations in the AMD-simulated reactions, while the IBDs with b1 and b2 are quite similar in the AMD + GEMINI simulated reactions. Conclusions: The IBDs for the I = 0 and 1 chains are mainly determined by the central collisions, which reflects the nuclear density in the core region of the reaction system. The increasing part of the IBD distribution is due to the difference between the densities in the peripheral collisions of the reactions. The sequential decay process influences the IBD results. The AMD + GEMINI simulation reproduces the experimental IBDs better than the AMD simulation alone.

  6. Constant speed control of four-stroke micro internal combustion swing engine

    NASA Astrophysics Data System (ADS)

    Gao, Dedong; Lei, Yong; Zhu, Honghai; Ni, Jun

    2015-09-01

    The increasing demands on safety, emissions and fuel consumption require more accurate control models of the micro internal combustion swing engine (MICSE). The objective of this paper is to investigate the constant speed control models of the four-stroke MICSE. The operation principle of the four-stroke MICSE is presented based on the description of the MICSE prototype. A two-level Petri-net-based hybrid model is proposed to model the four-stroke MICSE engine cycle. The Petri net subsystem at the upper level controls and synchronizes the four Petri net subsystems at the lower level. The continuous sub-models, including breathing dynamics of the intake manifold, thermodynamics of the chamber and dynamics of the torque generation, are investigated and integrated with the discrete model in MATLAB Simulink. Through the comparison of experimental data and simulated DC voltage output, it is demonstrated that the hybrid model is valid for the four-stroke MICSE system. A nonlinear model is obtained from the cycle average data via the regression method, and it is linearized around a given nominal equilibrium point for the controller design. The feedback controller of the spark timing and valve duration timing is designed with a sequential loop closing design approach. The simulation of the sequential loop closure control design applied to the hybrid model is implemented in MATLAB. The simulation results show that the system is able to reach its desired operating point within 0.2 s, and the designed controller shows good MICSE engine performance with a constant speed. This paper presents the constant speed control models of the four-stroke MICSE and carries out simulation tests; the models and simulation results can be used for further study on the precision control of the four-stroke MICSE.

  7. Stochastic uncertainty analysis for solute transport in randomly heterogeneous media using a Karhunen‐Loève‐based moment equation approach

    USGS Publications Warehouse

    Liu, Gaisheng; Lu, Zhiming; Zhang, Dongxiao

    2007-01-01

    A new approach has been developed for solving solute transport problems in randomly heterogeneous media using the Karhunen‐Loève‐based moment equation (KLME) technique proposed by Zhang and Lu (2004). The KLME approach combines the Karhunen‐Loève decomposition of the underlying random conductivity field and the perturbative and polynomial expansions of dependent variables including the hydraulic head, flow velocity, dispersion coefficient, and solute concentration. The equations obtained in this approach are sequential, and their structure is formulated in the same form as the original governing equations such that any existing simulator, such as Modular Three‐Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems (MT3DMS), can be directly applied as the solver. Through a series of two‐dimensional examples, the validity of the KLME approach is evaluated against the classical Monte Carlo simulations. Results indicate that under the flow and transport conditions examined in this work, the KLME approach provides an accurate representation of the mean concentration. For the concentration variance, the accuracy of the KLME approach is good when the conductivity variance is 0.5. As the conductivity variance increases up to 1.0, the mismatch on the concentration variance becomes large, although the mean concentration can still be accurately reproduced by the KLME approach. Our results also indicate that when the conductivity variance is relatively large, neglecting the effects of the cross terms between velocity fluctuations and local dispersivities, as done in some previous studies, can produce noticeable errors, and a rigorous treatment of the dispersion terms becomes more appropriate.
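
    A sketch of the Karhunen-Loeve decomposition that the KLME approach builds on, assuming a 1D log-conductivity field with an exponential covariance; the moment equations themselves are not shown, and all parameters are illustrative.

    ```python
    import numpy as np

    # Covariance of a stationary log-conductivity field on [0, 1]
    n, corr_len, variance = 100, 0.2, 0.5
    x = np.linspace(0.0, 1.0, n)
    cov = variance * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

    # Eigenpairs of the covariance matrix, largest eigenvalues first
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    # Truncated KL expansion: field = sum_i sqrt(lambda_i) * xi_i * phi_i(x)
    n_terms = 20
    rng = np.random.default_rng(2)
    xi = rng.standard_normal(n_terms)
    log_k = np.sqrt(np.maximum(eigvals[:n_terms], 0)) * xi @ eigvecs[:, :n_terms].T
    print(f"variance captured: {eigvals[:n_terms].sum() / eigvals.sum():.2%}")
    ```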

  8. Simulated impact of climate change on hydrology of multiple watersheds using traditional and recommended snowmelt runoff model methodology

    USDA-ARS?s Scientific Manuscript database

    For more than three decades, researchers have utilized the Snowmelt Runoff Model (SRM) to test the impacts of climate change on streamflow of snow-fed systems. In this study, the hydrological effects of climate change are modeled over three sequential years using SRM with both typical and recommende...

  9. The influence of spatial congruency and movement preparation time on saccade curvature in simultaneous and sequential dual-tasks.

    PubMed

    Moehler, Tobias; Fiehler, Katja

    2015-11-01

    Saccade curvature represents a sensitive measure of oculomotor inhibition with saccades curving away from covertly attended locations. Here we investigated whether and how saccade curvature depends on movement preparation time when a perceptual task is performed during or before saccade preparation. Participants performed a dual-task including a visual discrimination task at a cued location and a saccade task to the same location (congruent) or to a different location (incongruent). Additionally, we varied saccade preparation time (time between saccade cue and Go-signal) and the occurrence of the discrimination task (during saccade preparation=simultaneous vs. before saccade preparation=sequential). We found deteriorated perceptual performance in incongruent trials during simultaneous task performance while perceptual performance was unaffected during sequential task performance. Saccade accuracy and precision were deteriorated in incongruent trials during simultaneous and, to a lesser extent, also during sequential task performance. Saccades consistently curved away from covertly attended non-saccade locations. Saccade curvature was unaffected by movement preparation time during simultaneous task performance but decreased and finally vanished with increasing movement preparation time during sequential task performance. Our results indicate that the competing saccade plan to the covertly attended non-saccade location is maintained during simultaneous task performance until the perceptual task is solved while in the sequential condition, in which the discrimination task is solved prior to the saccade task, oculomotor inhibition decays gradually with movement preparation time. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Comparative Implementation of High Performance Computing for Power System Dynamic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng

    Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.

  11. A high-performance ground-based prototype of horn-type sequential vegetable production facility for life support system in space

    NASA Astrophysics Data System (ADS)

    Fu, Yuming; Liu, Hui; Shao, Lingzhi; Wang, Minjuan; Berkovich, Yu A.; Erokhin, A. N.; Liu, Hong

    2013-07-01

    Vegetable cultivation plays a crucial role in dietary supplements and psychosocial benefits of the crew during manned space flight. Here we developed a ground-based prototype of a horn-type sequential vegetable production facility, named the Horn-type Producer (HTP), which is capable of simulating the microgravity effect and the continuous cultivation of leaf vegetables on root modules. The growth chamber of the facility had a volume of 0.12 m³, characterized by a three-stage space expansion with plant growth. The planting surface of 0.154 m² was comprised of six ring-shaped root modules with a fibrous ion-exchange resin substrate. Root modules were fastened to a central porous tube supplying water, and moved forward with plant growth. The total illuminated crop area of 0.567 m² was provided by a combination of red and white light-emitting diodes on the internal surfaces. In tests with a 24-h photoperiod, the productivity of the HTP at 0.3 kW for lettuce reached 254.3 g edible biomass per week. Long-term operation of the HTP did not alter vegetable nutrition composition to any great extent. Furthermore, the efficiency of the HTP, based on the Q-criterion, was 7 × 10⁻⁴ g² m⁻³ J⁻¹. These results show that the HTP exhibited high productivity, stable quality, and good efficiency in the process of planting lettuce, indicative of an interesting design for space vegetable production.

  12. Distributed Wireless Power Transfer With Energy Feedback

    NASA Astrophysics Data System (ADS)

    Lee, Seunghyun; Zhang, Rui

    2017-04-01

    Energy beamforming (EB) is a key technique for achieving efficient radio-frequency (RF) transmission enabled wireless energy transfer (WET). By optimally designing the waveforms from multiple energy transmitters (ETs) over the wireless channels, they can be constructively combined at the energy receiver (ER) to achieve an EB gain that scales with the number of ETs. However, the optimal design of EB waveforms requires accurate channel state information (CSI) at the ETs, which is challenging to obtain practically, especially in a distributed system with ETs at separate locations. In this paper, we study practical and efficient channel training methods to achieve optimal EB in a distributed WET system. We propose two protocols with and without centralized coordination, respectively, where distributed ETs either sequentially or in parallel adapt their transmit phases based on a low-complexity energy feedback from the ER. The energy feedback only depends on the received power level at the ER, where each feedback indicates one particular transmit phase that results in the maximum harvested power over a set of previously used phases. Simulation results show that the two proposed training protocols converge very fast in practical WET systems even with a large number of distributed ETs, while the protocol with sequential ET phase adaptation is also analytically shown to converge to the optimal EB design with perfect CSI by increasing the training time. Numerical results are also provided to evaluate the performance of the proposed distributed EB and training designs as compared to other benchmark schemes.
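
    A sketch of the sequential phase-adaptation protocol described above: each ET in turn tries a small set of transmit phases, and the ER's power-only energy feedback tells it which phase to keep, with no CSI ever exchanged. The channel model, candidate set, and sweep count are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_et = 8
    h = np.exp(1j * rng.uniform(0, 2 * np.pi, n_et))   # unknown unit-gain channels
    phases = np.zeros(n_et)                            # ET transmit phases
    candidates = np.linspace(0, 2 * np.pi, 16, endpoint=False)

    def received_power(ph):
        """Power at the ER: coherent sum of all ET signals."""
        return np.abs(np.sum(h * np.exp(1j * ph))) ** 2

    for sweep in range(3):                 # a few sequential training rounds
        for i in range(n_et):
            best = max(candidates,
                       key=lambda c: received_power(
                           np.where(np.arange(n_et) == i, c, phases)))
            phases[i] = best               # keep the phase the feedback favors

    print(f"power: {received_power(phases):.2f} of coherent max {n_et**2}")
    ```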

  13. The B Cell Response to Foot-and-Mouth Disease Virus in Cattle following Sequential Vaccination with Multiple Serotypes.

    PubMed

    Grant, Clare F J; Carr, B Veronica; Kotecha, Abhay; van den Born, Erwin; Stuart, David I; Hammond, John A; Charleston, Bryan

    2017-05-01

    Foot-and-mouth disease virus (FMDV) causes a highly contagious viral disease. Antibodies are pivotal in providing protection against FMDV infection. Serological protection against one FMDV serotype does not confer interserotype protection. However, some historical data have shown that interserotype protection can be induced following sequential FMDV challenge with multiple FMDV serotypes. In this study, we have investigated the kinetics of the FMDV-specific antibody-secreting cell (ASC) response following homologous and heterologous inactivated FMDV vaccination regimes. We have demonstrated that the kinetics of the B cell response are similar for all four FMDV serotypes tested following a homologous FMDV vaccination regime. When a heterologous vaccination regime was used with the sequential inoculation of three different inactivated FMDV serotypes (O, A, and Asia1 serotypes), a B cell response to FMDV SAT1 and serotype C was induced. The studies also revealed that the local lymphoid tissue had detectable FMDV-specific ASCs in the absence of circulating FMDV-specific ASCs, indicating the presence of short-lived ASCs, a hallmark of a T-independent 2 (TI-2) antigenic response to inactivated FMDV capsid. IMPORTANCE We have demonstrated the development of an interserotype response following a sequential vaccination regime of four different FMDV serotypes. We have found indications of short-lived ASCs in the local lymphoid tissue, further evidence of a TI-2 response to FMDV. Copyright © 2017 American Society for Microbiology.

  14. In vivo comparison of simultaneous versus sequential injection technique for thermochemical ablation in a porcine model.

    PubMed

    Cressman, Erik N K; Shenoi, Mithun M; Edelman, Theresa L; Geeslin, Matthew G; Hennings, Leah J; Zhang, Yan; Iaizzo, Paul A; Bischof, John C

    2012-01-01

    To investigate simultaneous and sequential injection thermochemical ablation in a porcine model, and compare them to sham and acid-only ablation. This IACUC-approved study involved 11 pigs in an acute setting. Ultrasound was used to guide placement of a thermocouple probe and coaxial device designed for thermochemical ablation. Solutions of 10 M acetic acid and NaOH were used in the study. Four injections per pig were performed in identical order at a total rate of 4 mL/min: saline sham, simultaneous, sequential, and acid only. Volume and sphericity of zones of coagulation were measured. Fixed specimens were examined by H&E stain. Average coagulation volumes were 11.2 mL (simultaneous), 19.0 mL (sequential) and 4.4 mL (acid). The highest temperature, 81.3°C, was obtained with simultaneous injection. Average temperatures were 61.1°C (simultaneous), 47.7°C (sequential) and 39.5°C (acid only). Sphericity coefficients (0.83-0.89) had no statistically significant difference among conditions. Thermochemical ablation produced substantial volumes of coagulated tissues relative to the amounts of reagents injected, considerably greater than acid alone in either technique employed. The largest volumes were obtained with sequential injection, yet this came at a price in one case of cardiac arrest. Simultaneous injection yielded the highest recorded temperatures and may be tolerated as well as or better than acid injection alone. Although this pilot study did not show a clear advantage for either sequential or simultaneous methods, the results indicate that thermochemical ablation is attractive for further investigation with regard to both safety and efficacy.

  15. A sequential analysis of classroom discourse in Italian primary schools: the many faces of the IRF pattern.

    PubMed

    Molinari, Luisa; Mameli, Consuelo; Gnisci, Augusto

    2013-09-01

    A sequential analysis of classroom discourse is needed to investigate the conditions under which the triadic initiation-response-feedback (IRF) pattern may host different teaching orientations. The purpose of the study is twofold: first, to describe the characteristics of classroom discourse and, second, to identify and explore the different interactive sequences that can be captured with a sequential statistical analysis. Twelve whole-class activities were video recorded in three Italian primary schools. We observed classroom interaction as it occurs naturally on an everyday basis. In total, we collected 587 min of video recordings. Subsequently, 828 triadic IRF patterns were extracted from this material and analysed with the programme Generalized Sequential Query (GSEQ). The results indicate that classroom discourse may unfold in different ways. In particular, we identified and described four types of sequences. Dialogic sequences were triggered by authentic questions, and continued through further relaunches. Monologic sequences were directed to fulfil the teachers' pre-determined didactic purposes. Co-constructive sequences fostered deduction, reasoning, and thinking. Scaffolding sequences helped and sustained children with difficulties. The application of sequential analyses allowed us to show that interactive sequences may account for a variety of meanings, thus making a significant contribution to the literature and research practice in classroom discourse. © 2012 The British Psychological Society.

  16. 3D Reservoir Modeling of Semutang Gas Field: A lonely Gas field in Chittagong-Tripura Fold Belt, with Integrated Well Log, 2D Seismic Reflectivity and Attributes.

    NASA Astrophysics Data System (ADS)

    Salehin, Z.; Woobaidullah, A. S. M.; Snigdha, S. S.

    2015-12-01

    Bengal Basin, with its prolific gas-rich province, provides needed energy to Bangladesh, and the present energy situation demands more hydrocarbon exploration. Only 'Semutang' has been discovered in the high-amplitude structures, while the rest lie in the gentle to moderate structures of the western part of the Chittagong-Tripura Fold Belt. The field has some major thrust faults which have strongly breached the reservoir zone. The major objectives of this research are the interpretation of gas horizons and faults, followed by velocity modeling and structural and property modeling to obtain reservoir properties; the faults and reservoir heterogeneities must be properly identified. 3D modeling is widely used to reveal the subsurface structure in faulted zones where planning and development drilling is a major challenge. Thirteen 2D seismic lines and six well logs have been used to identify six gas-bearing horizons and a network of faults, and to map the structure at reservoir level. Variance attributes were used to identify faults, and a velocity model was built for domain conversion. Synthetics were prepared from the two wells where sonic and density logs are available. The well-to-seismic tie at the reservoir zone shows a good match with the Direct Hydrocarbon Indicator on the seismic section. Vsh, porosity, water saturation and permeability have been calculated, and various cross-plots among porosity logs are shown. Structural modeling is used to define zones and layering in accordance with minimum sand thickness. The fault model shows the possible fault network responsible for several dry wells. Facies models have been constrained with the Sequential Indicator Simulation method to show the facies distribution along the depth surfaces, and petrophysical models have been prepared with Sequential Gaussian Simulation to estimate petrophysical parameters away from the existing wells and to observe heterogeneities in the reservoir. An average porosity map was constructed for each gas zone. The outcomes of the research are an improved subsurface image from the seismic data (model), a porosity prediction for the reservoir, a reservoir quality map and a fault map. The result is a complex geologic model which may contribute to the economic potential of the field. For better understanding, a 3D seismic survey and uncertainty and attribute analyses are necessary.
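
    Since the facies model here rests on Sequential Indicator Simulation, a highly simplified SIS-style sketch follows: nodes on a random path draw a category from a local conditional distribution built from already-simulated neighbors. Inverse-distance weighting stands in for the indicator kriging a real SIS would use, and the grid size and facies proportions are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    nx, ny, n_cat = 40, 40, 3
    global_prop = np.array([0.5, 0.3, 0.2])       # target facies proportions
    grid = -np.ones((nx, ny), dtype=int)          # -1 means not yet simulated

    path = [(i, j) for i in range(nx) for j in range(ny)]
    rng.shuffle(path)                             # random simulation path

    for (i, j) in path:
        done = np.argwhere(grid >= 0)             # already-simulated nodes
        if len(done) == 0:
            probs = global_prop
        else:
            d = np.hypot(done[:, 0] - i, done[:, 1] - j)
            near = done[np.argsort(d)[:8]]        # 8 nearest conditioning nodes
            w = 1.0 / (np.hypot(near[:, 0] - i, near[:, 1] - j) + 1e-9)
            probs = np.zeros(n_cat)
            for (a, b), wk in zip(near, w):
                probs[grid[a, b]] += wk           # weighted indicator average
            probs = 0.7 * probs / probs.sum() + 0.3 * global_prop
        grid[i, j] = rng.choice(n_cat, p=probs / probs.sum())
    ```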

  17. United Kingdom national paediatric bilateral project: Demographics and results of localization and speech perception testing.

    PubMed

    Cullington, H E; Bele, D; Brinton, J C; Cooper, S; Daft, M; Harding, J; Hatton, N; Humphries, J; Lutman, M E; Maddocks, J; Maggs, J; Millward, K; O'Donoghue, G; Patel, S; Rajput, K; Salmon, V; Sear, T; Speers, A; Wheeler, A; Wilson, K

    2017-01-01

    To assess longitudinal outcomes in a large and varied population of children receiving bilateral cochlear implants both simultaneously and sequentially. This observational non-randomized service evaluation collected localization and speech recognition in noise data from simultaneously and sequentially implanted children at four time points: before bilateral cochlear implants or before the sequential implant, 1 year, 2 years, and 3 years after bilateral implants. No inclusion criteria were applied, so children with additional difficulties, cochleovestibular anomalies, varying educational placements, 23 different home languages, a full range of outcomes and varying device use were included. 1001 children were included: 465 implanted simultaneously and 536 sequentially, representing just over 50% of children receiving bilateral implants in the UK in this period. In simultaneously implanted children the median age at implant was 2.1 years; 7% were implanted at less than 1 year of age. In sequentially implanted children the interval between implants ranged from 0.1 to 14.5 years. Children with simultaneous bilateral implants localized better than those with one implant. On average children receiving a second (sequential) cochlear implant showed improvement in localization and listening in background noise after 1 year of bilateral listening. The interval between sequential implants had no effect on localization improvement although a smaller interval gave more improvement in speech recognition in noise. Children with sequential implants on average were able to use their second device to obtain spatial release from masking after 2 years of bilateral listening. Although ranges were large, bilateral cochlear implants on average offered an improvement in localization and speech perception in noise over unilateral implants. These data represent the diverse population of children with bilateral cochlear implants in the UK from 2010 to 2012. Predictions of outcomes for individual patients are not possible from these data. However, there are no indications to preclude children with long inter-implant interval having the chance of a second cochlear implant.

  18. Sequential transformation of the structural and thermodynamic parameters of the complex particles, combining covalent conjugate (sodium caseinate + maltodextrin) with polyunsaturated lipids stabilized by a plant antioxidant, in the simulated gastro-intestinal conditions in vitro.

    PubMed

    Antipova, Anna S; Zelikina, Darya V; Shumilina, Elena A; Semenova, Maria G

    2016-10-01

    The present work is focused on the structural transformation of the complexes, formed between covalent conjugate (sodium caseinate + maltodextrin) and an equimass mixture of the polyunsaturated lipids (PULs): (soy phosphatidylcholine + triglycerides of flaxseed oil) stabilized by a plant antioxidant (an essential oil of clove buds), in the simulated conditions of the gastrointestinal tract. The conjugate was used here as a food-grade delivery vehicle for the PULs. The release of these PULs at each stage of the simulated digestion was estimated. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. A strategy for comprehensive identification of sequential constituents using ultra-high-performance liquid chromatography coupled with linear ion trap-Orbitrap mass spectrometer, application study on chlorogenic acids in Flos Lonicerae Japonicae.

    PubMed

    Zhang, Jia-yu; Wang, Zi-jian; Li, Yun; Liu, Ying; Cai, Wei; Li, Chen; Lu, Jian-qiu; Qiao, Yan-jiang

    2016-01-15

    The analytical methodologies for evaluation of multi-component systems in traditional Chinese medicines (TCMs) have been inadequate or unacceptable. As a result, the unclear composition of these multi-component systems hinders sufficient interpretation of their bioactivities. In this paper, an ultra-high-performance liquid chromatography coupled with linear ion trap-Orbitrap (UPLC-LTQ-Orbitrap)-based strategy focused on the comprehensive identification of TCM sequential constituents was developed. The strategy was characterized by molecular design, multiple ion monitoring (MIM), targeted database hits and mass spectral trees similarity filter (MTSF), and additionally isomerism discrimination. It was successfully applied in the HRMS data acquisition and processing of chlorogenic acids (CGAs) in Flos Lonicerae Japonicae (FLJ), and a total of 115 chromatographic peaks attributed to 18 categories were characterized, allowing a comprehensive revelation of CGAs in FLJ for the first time. This demonstrated that MIM based on molecular design could improve the efficiency of triggering MS/MS fragmentation reactions. Targeted database hits and MTSF searching greatly facilitated the processing of extremely large information data. Besides, the introduction of diagnostic product ion (DPI) discrimination, ClogP analysis, and molecular simulation raised the efficiency and accuracy of characterizing sequential constituents, especially position and geometric isomers. In conclusion, the results expanded our understanding of CGAs in FLJ, and the strategy could be exemplary for future research on the comprehensive identification of sequential constituents in TCMs. Meanwhile, it may propose a novel idea for analyzing sequential constituents, and is promising for the quality control and evaluation of TCMs. Copyright © 2015 Elsevier B.V. All rights reserved.

  20. Sequentially reweighted TV minimization for CT metal artifact reduction.

    PubMed

    Zhang, Xiaomeng; Xing, Lei

    2013-07-01

    Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems where weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
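
    A sketch of the sequential reweighting idea on a denoising surrogate: each outer pass minimizes a weighted TV whose weights come from the previous solution's gradients, promoting gradient sparsity. The paper's projection-onto-convex-sets data constraint and metal-trace handling are omitted; this is weighted-TV gradient descent on a toy image.

    ```python
    import numpy as np

    def grad(u):
        """Forward differences with replicated boundary."""
        gx = np.diff(u, axis=0, append=u[-1:, :])
        gy = np.diff(u, axis=1, append=u[:, -1:])
        return gx, gy

    def weighted_tv_denoise(f, w, lam=0.2, steps=200, tau=0.1, eps=1e-6):
        """Gradient descent on 0.5*||u - f||^2 + lam * sum(w * |grad u|)."""
        u = f.copy()
        for _ in range(steps):
            gx, gy = grad(u)
            mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
            px, py = w * gx / mag, w * gy / mag      # weighted TV subgradient
            div = (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))
            u = u - tau * ((u - f) - lam * div)
        return u

    rng = np.random.default_rng(5)
    clean = np.zeros((64, 64)); clean[20:44, 20:44] = 1.0
    noisy = clean + 0.2 * rng.standard_normal(clean.shape)

    u, w = noisy, np.ones_like(noisy)
    for outer in range(3):                           # sequential reweighting
        u = weighted_tv_denoise(noisy, w)
        gx, gy = grad(u)
        w = 1.0 / (np.sqrt(gx ** 2 + gy ** 2) + 1e-2)  # weights from current u
        w /= w.mean()
    ```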

  1. Particle connectedness and cluster formation in sequential depositions of particles: integral-equation theory.

    PubMed

    Danwanichakul, Panu; Glandt, Eduardo D

    2004-11-15

    We applied the integral-equation theory to the connectedness problem. The method originally applied to the study of continuum percolation in various equilibrium systems was modified for our sequential quenching model, a particular limit of an irreversible adsorption. The development of the theory based on the (quenched-annealed) binary-mixture approximation includes the Ornstein-Zernike equation, the Percus-Yevick closure, and an additional term involving the three-body connectedness function. This function is simplified by introducing a Kirkwood-like superposition approximation. We studied the three-dimensional (3D) system of randomly placed spheres and 2D systems of square-well particles, both with a narrow and with a wide well. The results from our integral-equation theory are in good accordance with simulation results within a certain range of densities.

  2. Particle connectedness and cluster formation in sequential depositions of particles: Integral-equation theory

    NASA Astrophysics Data System (ADS)

    Danwanichakul, Panu; Glandt, Eduardo D.

    2004-11-01

    We applied the integral-equation theory to the connectedness problem. The method originally applied to the study of continuum percolation in various equilibrium systems was modified for our sequential quenching model, a particular limit of an irreversible adsorption. The development of the theory based on the (quenched-annealed) binary-mixture approximation includes the Ornstein-Zernike equation, the Percus-Yevick closure, and an additional term involving the three-body connectedness function. This function is simplified by introducing a Kirkwood-like superposition approximation. We studied the three-dimensional (3D) system of randomly placed spheres and 2D systems of square-well particles, both with a narrow and with a wide well. The results from our integral-equation theory are in good accordance with simulation results within a certain range of densities.

  3. The application of intraoperative transit time flow measurement to accurately assess anastomotic quality in sequential vein grafting

    PubMed Central

    Yu, Yang; Zhang, Fan; Gao, Ming-Xin; Li, Hai-Tao; Li, Jing-Xing; Song, Wei; Huang, Xin-Sheng; Gu, Cheng-Xiong

    2013-01-01

    OBJECTIVES Intraoperative transit time flow measurement (TTFM) is widely used to assess anastomotic quality in coronary artery bypass grafting (CABG). However, in sequential vein grafting, the flow characteristics collected by the conventional TTFM method are usually associated with total graft flow and might not accurately indicate the quality of every distal anastomosis in a sequential graft. The purpose of our study was to examine a new TTFM method that could assess the quality of each distal anastomosis in a sequential graft more reliably than the conventional TTFM approach. METHODS Two TTFM methods were tested in 84 patients who underwent sequential saphenous off-pump CABG in Beijing An Zhen Hospital between April and August 2012. In the conventional TTFM method, normal blood flow in the sequential graft was maintained during the measurement, and the flow probe was placed a few centimetres above the anastomosis to be evaluated. In the new method, blood flow in the sequential graft was temporarily reduced during the measurement by placing an atraumatic bulldog clamp at the graft a few centimetres distal to the anastomosis to be evaluated, while the position of the flow probe remained the same as in the conventional method. This new TTFM method was named the flow reduction TTFM. Graft flow parameters measured by both methods were compared. RESULTS Compared with the conventional TTFM, the flow reduction TTFM resulted in significantly lower mean graft blood flow (P < 0.05); in contrast, yielded significantly higher pulsatility index (P < 0.05). Diastolic filling was not significantly different between the two methods and was >50% in both cases. Interestingly, the flow reduction TTFM identified two defective middle distal anastomoses that the conventional TTFM failed to detect. Graft flows near the defective distal anastomoses were improved substantially after revision. CONCLUSIONS In this study, we found that temporary reduction of graft flow during TTFM seemed to enhance the sensitivity of TTFM to less-than-critical anastomotic defects in a sequential graft and to improve the overall accuracy of the intraoperative assessment of anastomotic quality in sequential vein grafting. PMID:24000314

  4. Numerical Simulation of Rolling-Airframes Using a Multi-Level Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A supersonic rolling missile with two synchronous canard control surfaces is analyzed using an automated, inviscid, Cartesian method. Sequential-static and time-dependent dynamic simulations of the complete motion are computed for canard dither schedules for level flight, pitch, and yaw maneuvers. The dynamic simulations are compared directly against both high-resolution viscous simulations and relevant experimental data, and are also utilized to compute dynamic stability derivatives. The results show that both the body roll rate and canard dither motion influence the roll-averaged forces and moments on the body. At the relatively low roll rates analyzed in the current work these dynamic effects are modest; however, the dynamic computations are effective in predicting the dynamic stability derivatives, which can be significant for highly-maneuverable missiles.

  5. First-principles simulations of heat transport

    NASA Astrophysics Data System (ADS)

    Puligheddu, Marcello; Gygi, Francois; Galli, Giulia

    2017-11-01

    Advances in understanding heat transport in solids were recently reported by both experiment and theory. However, an efficient and predictive quantum simulation framework to investigate thermal properties of solids, with the same complexity as classical simulations, has not yet been developed. Here we present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at close-to-equilibrium conditions, which only requires calculations of first-principles trajectories and atomic forces, thus avoiding direct computation of heat currents and energy densities. In addition, the method requires much shorter sequential simulation times than ordinary molecular dynamics techniques, making it applicable within density functional theory. We discuss results for a representative oxide, MgO, at different temperatures and for ordered and nanostructured morphologies, showing the performance of the method in different conditions.

  6. Sequential mediating effects of provided and received social support on trait emotional intelligence and subjective happiness: A longitudinal examination in Hong Kong Chinese university students.

    PubMed

    Ye, Jiawen; Yeung, Dannii Y; Liu, Elaine S C; Rochelle, Tina L

    2018-04-03

    Past research has often focused on the effects of emotional intelligence and received social support on subjective well-being yet paid limited attention to the effects of provided social support. This study adopted a longitudinal design to examine the sequential mediating effects of provided and received social support on the relationship between trait emotional intelligence and subjective happiness. A total of 214 Hong Kong Chinese undergraduates were asked to complete two assessments with a 6-month interval in between. The results of the sequential mediation analysis indicated that the trait emotional intelligence measured in Time 1 indirectly influenced the level of subjective happiness in Time 2 through a sequential pathway of social support provided for others in Time 1 and social support received from others in Time 2. These findings highlight the importance of trait emotional intelligence and the reciprocal exchanges of social support in the subjective well-being of university students. © 2018 International Union of Psychological Science.

  7. Judgments relative to patterns: how temporal sequence patterns affect judgments and memory.

    PubMed

    Kusev, Petko; Ayton, Peter; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Stewart, Neil; Chater, Nick

    2011-12-01

    Six experiments studied relative frequency judgment and recall of sequentially presented items drawn from 2 distinct categories (i.e., city and animal). The experiments show that judged frequencies of categories of sequentially encountered stimuli are affected by certain properties of the sequence configuration. We found (a) a first-run effect whereby people overestimated the frequency of a given category when that category was the first repeated category to occur in the sequence and (b) a dissociation between judgments and recall; respondents may judge 1 event more likely than the other and yet recall more instances of the latter. Specifically, the distribution of recalled items does not correspond to the frequency estimates for the event categories, indicating that participants do not make frequency judgments by sampling their memory for individual items as implied by other accounts such as the availability heuristic (Tversky & Kahneman, 1973) and the availability process model (Hastie & Park, 1986). We interpret these findings as reflecting the operation of a judgment heuristic sensitive to sequential patterns and offer an account for the relationship between memory and judged frequencies of sequentially encountered stimuli.

  8. Effect of metaphorical verbal instruction on modeling of sequential dance skills by young children.

    PubMed

    Sawada, Misako; Mori, Shiro; Ishii, Motonobu

    2002-12-01

    Metaphorical verbal instruction was compared to specific verbal instruction about movement in the modeling of sequential dance skills by young children. Two groups of participants (Younger, mean age 5:3 yr., n = 30; Older, mean age 6:2 yr., n = 30) were randomly assigned to conditions in a 2 (sex) x 2 (age [Younger and Older]) x 3 (verbal instruction [Metaphorical, Movement-relevant, and None]) factorial design. Order scores were calculated for both performance and recognition tests, comprising five acquisition trials and two retention trials after 24 hr., respectively. Analysis of variance indicated that the group given metaphorical instruction performed better than the groups given the other two instructions, for both younger and older children. The results suggest that metaphorical verbal instruction aids the recognition and performance of sequential dance skills in young children.

  9. Error Control Coding Techniques for Space and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Costello, Daniel J., Jr.; Cabral, Hermano A.; He, Jiali

    1997-01-01

    Bootstrap Hybrid Decoding (BHD) (Jelinek and Cocke, 1971) is a coding/decoding scheme that adds extra redundancy to a set of convolutionally encoded codewords and uses this redundancy to provide reliability information to a sequential decoder. Theoretical results indicate that the bit error rate (BER) performance of BHD is close to that of turbo codes, without some of their drawbacks. In this report we study the use of the Multiple Stack Algorithm (MSA) (Chevillat and Costello, Jr., 1977) as the underlying sequential decoding algorithm in BHD, which makes possible an iterative version of BHD.

  10. Fractionation and ecotoxicological implication of potentially toxic metals in sediments of three urban rivers and the Lagos Lagoon, Nigeria, West Africa.

    PubMed

    Oyeyiola, Aderonke O; Davidson, Christine M; Olayinka, Kehinde O; Alo, Babajide I

    2014-11-01

    The potential environmental impact of sediment-bound Cd, Cr, Cu, Pb and Zn in three trans-urban rivers in Lagos state and in the Lagos Lagoon was assessed by use of the modified Community Bureau of Reference (BCR) sequential extraction. The quality of the data was checked using BCR CRM 143R and BCR CRM 701. Good agreement was obtained between found and certified/indicative values. Of the rivers, the Odo-Iyaalaro was generally the most contaminated and the Ibeshe the least. Higher concentrations of metals were generally found in the dry season compared to the wet season. Cadmium and Zn were released mostly in the acid-exchangeable step of the sequential extraction, indicating that they have the greatest potential mobility and bioavailability of the analytes studied. Chromium and Cu were associated mainly with the reducible and oxidisable fractions, and Pb predominantly with the reducible and residual fractions. Sediments with the highest pseudototal analyte concentrations also released higher proportions of analytes earlier in the sequential extraction procedure. The study suggests that, during the dry season, potentially toxic metals (PTM) may accumulate in sediments in relatively labile forms that are released and can potentially be transported or bioaccumulate in the rainy season. Application of risk assessment codes and Hakanson potential risk indices indicated that Cd was the element of greatest concern in the Lagos Lagoon system. The study indicated that there is a need to strengthen environmental management and pollution control measures to reduce risk from PTM, but that even relatively simple strategies, such as seasonal restrictions on dredging and fishing, could be beneficial.

  11. The Condition of Education 2010 in Brief. NCES 2010-029

    ERIC Educational Resources Information Center

    Aud, Susan, Ed.; Hannes, Gretchen, Ed.

    2010-01-01

    This publication contains a sample of the indicators in "The Condition of Education 2010." The indicators in this publication are numbered sequentially, rather than according to their numbers in the complete edition. Since 1870, the federal government has gathered data about students, teachers, schools, and education funding. As mandated…

  12. Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning

    NASA Astrophysics Data System (ADS)

    Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.

    2005-12-01

    A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km2) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.
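
    A toy sketch of the sub-basin partitioning idea described above: sub-basins are ordered by the stream reach graph, assigned round-robin to processors, and each downstream reach consumes the routed flow of its upstream neighbors, the main hydrologic data exchange in the parallel scheme. The graph, basin names, and runoff values are hypothetical.

    ```python
    # child -> upstream parents (stream reach graph)
    reaches = {
        "outlet": ["mid1", "mid2"],
        "mid1": ["head1", "head2"],
        "mid2": ["head3"],
        "head1": [], "head2": [], "head3": [],
    }

    def topo_order(graph):
        """Upstream-to-downstream ordering via depth-first search."""
        order, seen = [], set()
        def visit(node):
            if node in seen:
                return
            seen.add(node)
            for up in graph[node]:
                visit(up)
            order.append(node)       # all upstream work done before node
        for node in graph:
            visit(node)
        return order

    n_proc = 2
    assignment = {b: i % n_proc for i, b in enumerate(topo_order(reaches))}

    flow = {}
    for basin in topo_order(reaches):
        inflow = sum(flow[u] for u in reaches[basin])  # exchanged streamflow
        flow[basin] = inflow + 1.0                     # + local runoff (placeholder)
    print(assignment, flow["outlet"])  # outlet accumulates all sub-basins
    ```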

  13. Sequential Geoacoustic Filtering and Geoacoustic Inversion

    DTIC Science & Technology

    2015-09-30

    ... and online algorithms. We show here that compressive sensing (CS) obtains higher resolution than MVDR, even in scenarios which favor classical high-resolution methods, and performs better than conventional beamforming and MVDR/MUSIC (see Figs. 1-2). Reported results include histograms based on 100 Monte Carlo simulations, and the performance of CS, exhaustive search, CBF, MVDR, and MUSIC versus SNR. The true source positions ...

  14. Practical Sequential Design Procedures for Submarine ASW Search Operational Testing: A Simulation Study

    DTIC Science & Technology

    1998-10-01

    The efficient design of a free-play, 24-hour-per-day operational test (OT) of an ASW search system remains a challenge to the OT community. It will ... efficient, realistic, free-play, 24-hour-per-day OT. The basic test control premise described here is to stop the test event if the time without a ...

  15. Genetic consequences of sequential founder events by an island-colonizing bird.

    PubMed

    Clegg, Sonya M; Degnan, Sandie M; Kikkawa, Jiro; Moritz, Craig; Estoup, Arnaud; Owens, Ian P F

    2002-06-11

    The importance of founder events in promoting evolutionary changes on islands has been a subject of long-running controversy. Resolution of this debate has been hindered by a lack of empirical evidence from naturally founded island populations. Here we undertake a genetic analysis of a series of historically documented, natural colonization events by the silvereye species-complex (Zosterops lateralis), a group used to illustrate the process of island colonization in the original founder effect model. Our results indicate that single founder events do not affect levels of heterozygosity or allelic diversity, nor do they result in immediate genetic differentiation between populations. Instead, four to five successive founder events are required before indices of diversity and divergence approach that seen in evolutionarily old forms. A Bayesian analysis based on computer simulation allows inferences to be made on the number of effective founders and indicates that founder effects are weak because island populations are established from relatively large flocks. Indeed, statistical support for a founder event model was not significantly higher than for a gradual-drift model for all recently colonized islands. Taken together, these results suggest that single colonization events in this species complex are rarely accompanied by severe founder effects, and multiple founder events and/or long-term genetic drift have been of greater consequence for neutral genetic diversity.

  16. A random walk rule for phase I clinical trials.

    PubMed

    Durham, S D; Flournoy, N; Rosenberger, W F

    1997-06-01

    We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
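
    A sketch of a biased-coin random walk rule in the spirit described above: step down after a toxicity, and after a non-toxicity step up with probability gamma/(1 - gamma) (else stay), so that allocations center on the dose whose toxicity probability is the target quantile gamma. The true dose-toxicity curve below is a hypothetical stand-in used only to simulate patient responses.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    tox_prob = np.array([0.05, 0.12, 0.25, 0.40, 0.60])  # true curve (unknown)
    gamma = 0.25                                         # target quantile
    level = 2
    visits = np.zeros(len(tox_prob), dtype=int)

    for patient in range(200):
        visits[level] += 1
        toxic = rng.random() < tox_prob[level]
        if toxic:
            level = max(level - 1, 0)                    # de-escalate
        elif rng.random() < gamma / (1.0 - gamma):
            level = min(level + 1, len(tox_prob) - 1)    # biased-coin escalate

    print(visits)   # allocations pile up around the dose with ~25% toxicity
    ```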

  17. Spiking neural network model for memorizing sequences with forward and backward recall.

    PubMed

    Borisyuk, Roman; Chik, David; Kazanovich, Yakov; da Silva Gomes, João

    2013-06-01

    We present an oscillatory network of conductance-based spiking neurons of Hodgkin-Huxley type as a model of memory storage and retrieval of sequences of events (or objects). The model is inspired by psychological and neurobiological evidence on sequential memories. The building block of the model is an oscillatory module which contains excitatory and inhibitory neurons with all-to-all connections. The connection architecture comprises two layers. A lower layer represents consecutive events during their storage and recall. This layer is composed of oscillatory modules. Plastic excitatory connections between the modules are implemented using an STDP-type learning rule for sequential storage. Excitatory neurons in the upper layer project star-like modifiable connections toward the excitatory lower-layer neurons. These neurons in the upper layer are used to tag sequences of events represented in the lower layer. Computer simulations demonstrate good performance of the model, including difficult cases when different sequences contain overlapping events. We show that the model with STDP-type or anti-STDP-type learning rules can be applied for the simulation of forward and backward replay of neural spikes, respectively. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
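
    A sketch of a pairwise STDP-type update of the kind the lower-layer connections use for sequential storage: a connection is potentiated when the presynaptic spike precedes the postsynaptic one and depressed otherwise. Parameters and spike times are illustrative, not the paper's conductance-based model.

    ```python
    import numpy as np

    def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Weight change for a spike pair with dt = t_post - t_pre (ms)."""
        if dt > 0:
            return a_plus * np.exp(-dt / tau)    # pre before post: potentiate
        return -a_minus * np.exp(dt / tau)       # post before pre: depress

    # Events presented in order A -> B: module A spikes at 10 ms, B at 25 ms.
    w_ab, w_ba = 0.5, 0.5
    w_ab += stdp(25.0 - 10.0)    # forward connection strengthens
    w_ba += stdp(10.0 - 25.0)    # backward connection weakens
    print(f"w_AB = {w_ab:.3f}, w_BA = {w_ba:.3f}")
    ```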

  18. Comparison of DNA testing strategies in monitoring human papillomavirus infection prevalence through simulation.

    PubMed

    Lin, Carol Y; Li, Ling

    2016-11-07

    HPV DNA diagnostic tests for epidemiology monitoring (research purpose) or cervical cancer screening (clinical purpose) have often been considered separately. Women with positive Linear Array (LA) polymerase chain reaction (PCR) research test results typically are neither informed nor referred for colposcopy. Recently, a sequential testing by using Hybrid Capture 2 (HC2) HPV clinical test as a triage before genotype by LA has been adopted for monitoring HPV infections. Also, HC2 has been reported as a more feasible screening approach for cervical cancer in low-resource countries. Thus, knowing the performance of testing strategies incorporating HPV clinical test (i.e., HC2-only or using HC2 as a triage before genotype by LA) compared with LA-only testing in measuring HPV prevalence will be informative for public health practice. We conducted a Monte Carlo simulation study. Data were generated using mathematical algorithms. We designated the reported HPV infection prevalence in the U.S. and Latin America as the "true" underlying type-specific HPV prevalence. Analytical sensitivity of HC2 for detecting 14 high-risk (oncogenic) types was considered to be less than LA. Estimated-to-true prevalence ratios and percentage reductions were calculated. When the "true" HPV prevalence was designated as the reported prevalence in the U.S., with LA genotyping sensitivity and specificity of (0.95, 0.95), estimated-to-true prevalence ratios of 14 high-risk types were 2.132, 1.056, 0.958 for LA-only, HC2-only, and sequential testing, respectively. Estimated-to-true prevalence ratios of two vaccine-associated high-risk types were 2.359 and 1.063 for LA-only and sequential testing, respectively. When designated type-specific prevalence of HPV16 and 18 were reduced by 50 %, using either LA-only or sequential testing, prevalence estimates were reduced by 18 %. Estimated-to-true HPV infection prevalence ratios using LA-only testing strategy are generally higher than using HC2-only or using HC2 as a triage before genotype by LA. HPV clinical testing can be incorporated to monitor HPV prevalence or vaccine effectiveness. Caution is needed when comparing apparent prevalence from different testing strategies.
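
    A sketch of why estimated-to-true prevalence ratios exceed 1 under imperfect testing: with sensitivity Se and specificity Sp, the expected apparent prevalence is p*Se + (1 - p)*(1 - Sp). Only the LA figure Se = Sp = 0.95 comes from the abstract; the true prevalence and the triage operating point are invented for illustration.

    ```python
    def apparent_prevalence(p_true, se, sp):
        """Expected apparent prevalence under an imperfect binary test."""
        return p_true * se + (1.0 - p_true) * (1.0 - sp)

    p_true = 0.05                       # illustrative type-specific prevalence
    strategies = {"LA-only (Se=Sp=0.95)": (0.95, 0.95),
                  "triage-style (hypothetical)": (0.80, 0.99)}
    for name, (se, sp) in strategies.items():
        ratio = apparent_prevalence(p_true, se, sp) / p_true
        print(f"{name}: estimated-to-true ratio {ratio:.3f}")
    ```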

  19. In vitro pharmacodynamics of human simulated exposures of ceftaroline and daptomycin against MRSA, hVISA, and VISA with and without prior vancomycin exposure.

    PubMed

    Bhalodi, Amira A; Hagihara, Mao; Nicolau, David P; Kuti, Joseph L

    2014-01-01

    The effects of prior vancomycin exposure on ceftaroline and daptomycin therapy against methicillin-resistant Staphylococcus aureus (MRSA) have not been widely studied. Humanized free-drug exposures of vancomycin at 1 g every 12 h (q12h), ceftaroline at 600 mg q12h, and daptomycin at 10 mg/kg of body weight q24h were simulated in a 96-h in vitro pharmacodynamic model against three MRSA isolates, including one heteroresistant vancomycin-intermediate S. aureus (hVISA) isolate and one VISA isolate. A total of five regimens were tested: vancomycin, ceftaroline, and daptomycin alone for the entire 96 h, and then sequential therapy with vancomycin for 48 h followed by ceftaroline or daptomycin for 48 h. Microbiological responses were measured by the changes in log10 CFU during 96 h from baseline. Control isolates grew to 9.16 ± 0.32, 9.13 ± 0.14, and 8.69 ± 0.28 log10 CFU for MRSA, hVISA, and VISA, respectively. Vancomycin initially achieved ≥3 log10 CFU reductions against the MRSA and hVISA isolates, followed by regrowth beginning at 48 h; minimal activity was observed against VISA. The change in 96-h log10 CFU was largest for sequential therapy with vancomycin followed by ceftaroline (-5.22 ± 1.2, P = 0.010 versus ceftaroline) and for sequential therapy with vancomycin followed by ceftaroline (-3.60 ± 0.6, P = 0.037 versus daptomycin), compared with daptomycin (-2.24 ± 1.0), vancomycin (-1.40 ± 1.8), and sequential therapy with vancomycin followed by daptomycin (-1.32 ± 1.0, P > 0.5 for the last three regimens). Prior exposure of vancomycin at 1 g q12h reduced the initial microbiological response of daptomycin, particularly for hVISA and VISA isolates, but did not affect the response of ceftaroline. In the scenario of poor vancomycin response for high-inoculum MRSA infection, a ceftaroline-containing regimen may be preferred.

  20. A specific PFT and sub-canopy structure for simulating oil palm in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Fan, Y.; Knohl, A.; Roupsard, O.; Bernoux, M.; LE Maire, G.; Panferov, O.; Kotowska, M.; Meijide, A.

    2015-12-01

    As part of an effort to quantify the effects of rainforest-to-oil-palm conversion on land-atmosphere carbon, water and energy fluxes, a specific plant functional type (PFT) and sub-canopy structure are developed for simulating oil palm within the Community Land Model (CLM4.5). Current global land surface models simulate only annual crops besides natural vegetation. In this study, a multilayer oil palm subroutine is developed in CLM4.5 to simulate oil palm phenology and carbon and nitrogen allocation. The oil palm has a monopodial morphology and sequential phenology of around 40 stacked phytomers, each carrying a large leaf and a fruit bunch, forming a natural multilayer canopy. A sub-canopy phenological and physiological parameterization is thus introduced, so that multiple phytomer components develop simultaneously but follow their own phenological stages (growth, yield and senescence) at different canopy layers. This multilayer structure proved useful for simulating canopy development in terms of leaf area index (LAI) and fruit yield in terms of carbon and nitrogen outputs in Jambi, Sumatra (Fan et al. 2015). The study indicates that species-specific traits, such as the palm's monopodial morphology and sequential phenology, must be represented in terrestrial biosphere models to accurately simulate vegetation dynamics and feedbacks to climate. Further, the multilayer structure allows all canopy-level calculations of radiation, photosynthesis, stomatal conductance and respiration, in addition to phenology, to be carried out at the sub-canopy level as well, eliminating the scale mismatch among processes. A series of adaptations are made to the CLM model. Initial results show that the adapted multilayer radiative transfer scheme and the explicit representation of the oil palm canopy structure improve the simulated photosynthesis-light response curve. The explicit photosynthesis and dynamic leaf nitrogen calculations per canopy layer also improve simulated CO2 flux compared with eddy covariance flux data. Further investigations on energy and water fluxes and nitrogen balance are underway. These new schemes should advance understanding of the climatic effects of this tropical land use transformation system.
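    The phytomer bookkeeping described above can be sketched in a few lines of Python. The stage lengths, rates and initiation interval below are invented for illustration and bear no relation to the actual CLM4.5 (Fortran) parameterization:

        PHASES = ("growth", "yield", "senescence")

        class Phytomer:
            """One leaf-plus-fruit-bunch unit; ~40 of these stack into a canopy."""
            def __init__(self, rank, init_interval=15):
                self.age = -init_interval * rank   # phytomers initiate sequentially
                self.lai = 0.0

            def phase(self):
                # assumed stage length of 300 days per phase, capped at senescence
                return PHASES[min(max(self.age, 0) // 300, 2)]

            def advance(self, dt=1):
                self.age += dt
                if self.age < 0:
                    return                         # not yet initiated
                if self.phase() == "growth":
                    self.lai = min(self.lai + 0.002 * dt, 0.35)
                elif self.phase() == "senescence":
                    self.lai = max(self.lai - 0.004 * dt, 0.0)

        palm = [Phytomer(r) for r in range(40)]    # ~40 stacked phytomers
        for day in range(2 * 365):                 # two years of daily steps
            for p in palm:
                p.advance()
        print("canopy LAI:", round(sum(p.lai for p in palm), 2))

    Canopy-level quantities (LAI here; radiation, photosynthesis and respiration in the model) are then sums over per-layer values, which is the scale alignment the abstract refers to.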

  1. Simulated spaceflight effects on mating and pregnancy of rats

    NASA Technical Reports Server (NTRS)

    Sabelman, E. E.; Chetirkin, P. V.; Howard, R. M.

    1981-01-01

    The mating of rats was studied to determine the effects of: simulated reentry stresses at known stages of pregnancy, and full flight simulation, consisting of sequential launch stresses, group housing, mating opportunity, diet, simulated reentry, and postreentry isolation of male and female rats. Uterine contents, adrenal mass and abdominal fat as a proportion of body mass, duration of pregnancy, and number and sex of offspring were studied. It is found that: (1) parturition following full flight simulation was delayed relative to that of controls; (2) litter size was reduced and resorptions increased compared with previous matings in the same group of animals; and (3) abdominal fat was highly elevated in animals that were fed the Soviet paste diet. It is suggested that the combined effects of diet, stress, spacecraft environment, and weightlessness decreased the probability of mating or of viable pregnancies in the Cosmos 1129 flight and control animals.

  2. Data parallel sorting for particle simulation

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1992-01-01

    Sorting on a parallel architecture is a communication-intensive operation which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
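    The key observation, that bounded integer keys admit an O(N) sort, is easy to show with a counting sort; the merge reduction then follows because after one timestep each particle's cell index changes only slightly, leaving a few sorted runs to merge. The Python below is a serial illustration of the counting-sort baseline only, not Dagum's data parallel Connection Machine algorithm:

        import numpy as np

        def bin_particles(cell_ids, n_cells):
            """O(N) counting sort of particles by integer cell index."""
            counts = np.bincount(cell_ids, minlength=n_cells)
            starts = np.concatenate(([0], np.cumsum(counts)[:-1]))
            order = np.empty(len(cell_ids), dtype=np.intp)
            slot = starts.copy()
            for i, c in enumerate(cell_ids):      # stable, single pass
                order[slot[c]] = i
                slot[c] += 1
            return order                          # particle indices, cell-sorted

        cells = np.array([3, 0, 2, 0, 3, 1])
        print(bin_particles(cells, 4))            # -> [1 3 5 2 0 4]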

  3. Sequential protein unfolding through a carbon nanotube pore

    NASA Astrophysics Data System (ADS)

    Xu, Zhonghe; Zhang, Shuang; Weber, Jeffrey K.; Luan, Binquan; Zhou, Ruhong; Li, Jingyuan

    2016-06-01

    An assortment of biological processes, like protein degradation and the transport of proteins across membranes, depend on protein unfolding events mediated by nanopore interfaces. In this work, we exploit fully atomistic simulations of an artificial, CNT-based nanopore to investigate the nature of ubiquitin unfolding. With one end of the protein subjected to an external force, we observe non-canonical unfolding behaviour as ubiquitin is pulled through the pore opening. Secondary structural elements are sequentially detached from the protein and threaded into the nanotube; interestingly, the remaining portion maintains native-like characteristics. The constraints of the nanopore interface thus facilitate the formation of stable ``unfoldon'' motifs above the nanotube aperture that can exist in the absence of specific native contacts with the rest of the secondary structure. Destruction of these unfoldons gives rise to distinct force peaks in our simulations, providing us with a sensitive probe for studying the kinetics of serial unfolding events. Our detailed analysis of nanopore-mediated protein unfolding events not only provides insight into how related processes might proceed in the cell, but also serves to deepen our understanding of the structural arrangements that form the basis for protein conformational stability. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00410e

  4. A 2D modeling approach for fluid propagation during FE-forming simulation of continuously reinforced composites in wet compression moulding

    NASA Astrophysics Data System (ADS)

    Poppe, Christian; Dörr, Dominik; Henning, Frank; Kärger, Luise

    2018-05-01

    Wet compression moulding (WCM) provides large-scale production potential for continuously fiber reinforced components as a promising alternative to resin transfer moulding (RTM). Lower cycle times are possible due to parallelization of the process steps draping, infiltration and curing during moulding (viscous draping). Experimental and theoretical investigations indicate a strong mutual dependency between the physical mechanisms, which occur during draping and mould filling (fluid-structure-interaction). Thus, key process parameters, like fiber orientation, fiber volume fraction, cavity pressure and the amount and viscosity of the resin are physically coupled. To enable time and cost efficient product and process development throughout all design stages, accurate process simulation tools are desirable. Separated draping and mould filling simulation models, as appropriate for the sequential RTM-process, cannot be applied for the WCM process due to the above outlined physical couplings. Within this study, a two-dimensional Darcy-Propagation-Element (DPE-2D) based on a finite element formulation with additional control volumes (FE/CV) is presented, verified and applied to forming simulation of a generic geometry, as a first step towards a fluid-structure-interaction model taking into account simultaneous resin infiltration and draping. The model is implemented in the commercial FE-Solver Abaqus by means of several user subroutines considering simultaneous draping and 2D-infiltration mechanisms. Darcy's equation is solved with respect to a local fiber orientation. Furthermore, the material model can access the local fluid domain properties to update the mechanical forming material parameter, which enables further investigations on the coupled physical mechanisms.

  5. The simulation model of growth and cell divisions for the root apex with an apical cell in application to Azolla pinnata.

    PubMed

    Piekarska-Stachowiak, Anna; Nakielski, Jerzy

    2013-12-01

    In contrast to seed plants, the roots of most ferns have a single apical cell which is the ultimate source of all cells in the root. The apical cell has a tetrahedral shape and divides asymmetrically. The root cap derives from the distal division face, while merophytes derived from three proximal division faces contribute to the root proper. The merophytes are produced sequentially forming three sectors along a helix around the root axis. During development, they divide and differentiate in a predictable pattern. Such growth causes cell pattern of the root apex to be remarkably regular and self-perpetuating. The nature of this regularity remains unknown. This paper shows the 2D simulation model for growth of the root apex with the apical cell in application to Azolla pinnata. The field of growth rates of the organ, prescribed by the model, is of a tensor type (symplastic growth) and cells divide taking principal growth directions into account. The simulations show how the cell pattern in a longitudinal section of the apex develops in time. The virtual root apex grows realistically and its cell pattern is similar to that observed in anatomical sections. The simulations indicate that the cell pattern regularity results from cell divisions which are oriented with respect to principal growth directions. Such divisions are essential for maintenance of peri-anticlinal arrangement of cell walls and coordinated growth of merophytes during the development. The highly specific division program that takes place in merophytes prior to differentiation seems to be regulated at the cellular level.

  6. Radio Frequency Ablation Registration, Segmentation, and Fusion Tool

    PubMed Central

    McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.

    2008-01-01

    The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716

  7. A posteriori model validation for the temporal order of directed functional connectivity maps

    PubMed Central

    Beltz, Adriene M.; Molenaar, Peter C. M.

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data). PMID:26379489
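    The residual check at the heart of the procedure can be reproduced with a standard portmanteau test. The paper's exact white noise test is not specified here, so this Python sketch uses the Ljung-Box statistic from statsmodels as a stand-in:

        import numpy as np
        from statsmodels.stats.diagnostic import acorr_ljungbox

        rng = np.random.default_rng(0)

        # Stand-ins for one-step-ahead prediction errors from a fitted map;
        # a real analysis would use the connectivity model's residuals.
        resid_white = rng.standard_normal(200)
        resid_lagged = np.convolve(rng.standard_normal(201), [1.0, 0.6], "valid")

        for name, r in [("white", resid_white), ("autocorrelated", resid_lagged)]:
            p = acorr_ljungbox(r, lags=[10], return_df=True)["lb_pvalue"].iloc[0]
            # a small p-value signals unmodeled sequential dependence, i.e.
            # the map needs higher order (lagged) connections added
            print(f"{name:>14}: Ljung-Box p = {p:.3f}")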

  8. Self-Consistent Chaotic Transport in a High-Dimensional Mean-Field Hamiltonian Map Model

    DOE PAGES

    Martínez-del-Río, D.; del-Castillo-Negrete, D.; Olvera, A.; ...

    2015-10-30

    We studied the self-consistent chaotic transport in a Hamiltonian mean-field model. This model provides a simplified description of transport in marginally stable systems including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean-field that couples all the degrees-of-freedom. The model is formulated as a large set of N coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant like in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Furthermore, numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system that appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of the onset of global transport. A turnstile-type transport mechanism that allows transport across instantaneous KAM invariant circles in non-autonomous systems is discussed. As a first step to understand transport, we study a special type of orbits referred to as sequential periodic orbits. Using symmetry properties we show that, through replication, high-dimensional sequential periodic orbits can be generated starting from low-dimensional periodic orbits. We show that sequential periodic orbits in the self-consistent map can be continued from trivial (uncoupled) periodic orbits of standard-like maps using numerical and asymptotic methods. Normal forms are used to describe these orbits and to find the values of the map parameters that guarantee their existence. Numerical simulations are used to verify the prediction from the asymptotic methods.
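    The coupling structure, each degree of freedom evolving as a standard-like map whose kick amplitude and phase are themselves dynamical, can be illustrated with a short Python loop. The paper's exact update rule and normalization are not reproduced; this is a generic sketch of a mean-field coupled map of this family:

        import numpy as np

        rng = np.random.default_rng(1)
        N, steps, eps = 1024, 500, 0.05

        x = rng.uniform(0, 2 * np.pi, N)   # angles
        y = rng.normal(0, 0.5, N)          # momenta
        a, phi = 0.1, 0.0                  # mean-field amplitude and phase

        for _ in range(steps):
            # standard-like twist map: the kick comes from the mean field,
            # not from a fixed constant as in the standard map
            y = y + a * np.sin(x + phi)
            x = (x + y) % (2 * np.pi)
            # self-consistency: rebuild the mean field from all particles
            m = np.mean(np.exp(-1j * x))
            a, phi = eps * np.abs(m), np.angle(m)

        print("final mean-field amplitude:", a)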

  9. Self-Consistent Chaotic Transport in a High-Dimensional Mean-Field Hamiltonian Map Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-del-Río, D.; del-Castillo-Negrete, D.; Olvera, A.

    We studied the self-consistent chaotic transport in a Hamiltonian mean-field model. This model provides a simplified description of transport in marginally stable systems including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean-field that couples all the degrees-of-freedom. The model is formulated as a large set of N coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant like in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Furthermore, numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system that appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of the onset of global transport. A turnstile-type transport mechanism that allows transport across instantaneous KAM invariant circles in non-autonomous systems is discussed. As a first step to understand transport, we study a special type of orbits referred to as sequential periodic orbits. Using symmetry properties we show that, through replication, high-dimensional sequential periodic orbits can be generated starting from low-dimensional periodic orbits. We show that sequential periodic orbits in the self-consistent map can be continued from trivial (uncoupled) periodic orbits of standard-like maps using numerical and asymptotic methods. Normal forms are used to describe these orbits and to find the values of the map parameters that guarantee their existence. Numerical simulations are used to verify the prediction from the asymptotic methods.

  10. Robust patella motion tracking using intensity-based 2D-3D registration on dynamic bi-plane fluoroscopy: towards quantitative assessment in MPFL reconstruction surgery

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Esnault, Matthieu; Grupp, Robert; Kosugi, Shinichi; Sato, Yoshinobu

    2016-03-01

    The determination of the in vivo motion of multiple bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to registering three rigid objects (femur, tibia-fibula and patella) were implemented: 1) an individual bone approach registering one bone at a time, each optimized over six degrees of freedom (6DOF); 2) a sequential approach registering one bone at a time but using the previous bone results as the background in DRR generation; 3) a simultaneous approach registering all the bones together (18DOF); and 4) a combination of the sequential and simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared with the individual approach, with less than 0.68 mm root-mean-square error (RMSE) for translation and less than 1.12° RMSE for rotation. A robustness evaluation conducted with 45 trials of randomly perturbed initializations showed that the sequential approach improved robustness significantly (74% success rate) compared with the individual bone approach (34% success rate) for patella registration (femur and tibia-fibula registration had a 100% success rate with each approach).
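    The optimizer layer of such a pipeline can be sketched with the pycma package (assuming `pip install cma`). The true image-similarity term, gradient correlation between DRRs and the fluoroscopy frames, is replaced here by a toy quadratic misalignment cost, and the 6DOF poses are invented:

        import numpy as np
        import cma  # pycma: CMA-ES optimizer

        # Hidden "true" poses (3 translations, 3 rotations) for each bone
        true_poses = {"femur":   np.array([1.0, -2.0, 5.0, 0.02, -0.01, 0.03]),
                      "tibia":   np.array([-0.5, 1.0, 4.0, 0.01, 0.02, -0.02]),
                      "patella": np.array([0.2, 0.3, 6.0, -0.03, 0.01, 0.02])}

        est = {}
        for bone, truth in true_poses.items():
            # toy stand-in for the similarity metric; in the sequential scheme
            # previously registered bones would be baked into the DRR background
            cost = lambda p: float(np.sum((np.asarray(p) - truth) ** 2))
            xbest, _ = cma.fmin2(cost, x0=np.zeros(6), sigma0=1.0,
                                 options={"verbose": -9})
            est[bone] = xbest

        print({b: np.round(est[b] - true_poses[b], 3) for b in est})

    The sequential variant's benefit in the paper comes from rendering already-registered bones into the DRR background, which a toy cost of this kind cannot capture.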

  11. A time-efficient implementation of Extended Kalman Filter for sequential orbit determination and a case study for onboard application

    NASA Astrophysics Data System (ADS)

    Tang, Jingshi; Wang, Haihong; Chen, Qiuli; Chen, Zhonggui; Zheng, Jinjun; Cheng, Haowen; Liu, Lin

    2018-07-01

    Onboard orbit determination (OD) is often used in space missions, with which mission support can be partially accomplished autonomously, with less dependency on ground stations. In major Global Navigation Satellite Systems (GNSS), the inter-satellite link is also an essential upgrade in future generations. To serve autonomous operation, a sequential OD method is crucial for providing real-time or near real-time solutions. The Extended Kalman Filter (EKF) is an effective and convenient sequential estimator that is widely used in onboard applications. The filter requires the solutions of the state transition matrix (STM) and the process noise transition matrix, which are conventionally obtained by numerical integration. However, numerically integrating the differential equations is a CPU-intensive process and consumes a large portion of the time in EKF procedures. In this paper, we present an implementation that uses analytical solutions of these transition matrices to replace the numerical calculations. This analytical implementation is demonstrated and verified using a fictitious constellation based on selected medium Earth orbit (MEO) and inclined geosynchronous orbit (IGSO) satellites. We show that this implementation performs effectively and converges quickly, steadily and accurately in the presence of considerable errors in the initial values, measurements and force models. The filter is able to converge within 2-4 h of flight time in our simulation. The observation residual is consistent with the simulated measurement error, which is about a few centimeters in our scenarios. Compared with results implemented with a numerically integrated STM, the analytical implementation shows consistent accuracy while taking only about half the CPU time to filter a 10-day measurement series. Possible extensions to various missions are also discussed.
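    For reference, the filter cycle itself is standard; what changes in the paper is how the STM (F below) and the process noise transition are obtained, analytical formulas rather than numerical integration of the variational equations. A generic NumPy sketch of one EKF step:

        import numpy as np

        def ekf_step(x, P, z, f, F, h, H, Q, R):
            """One predict/update cycle of an Extended Kalman Filter.

            f, h : nonlinear state-transition and measurement functions
            F, H : their Jacobians evaluated at the current estimate
                   (F plays the role of the one-step STM; supplying it
                   analytically leaves the structure below unchanged)
            Q, R : process and measurement noise covariances
            """
            # predict
            x_pred = f(x)
            P_pred = F @ P @ F.T + Q
            # update
            S = H @ P_pred @ H.T + R                 # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
            x_new = x_pred + K @ (z - h(x_pred))
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new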

  12. Impact of He and H relative depth distributions on the result of sequential He+ and H+ ion implantation and annealing in silicon

    NASA Astrophysics Data System (ADS)

    Cherkashin, N.; Daghbouj, N.; Seine, G.; Claverie, A.

    2018-04-01

    Sequential He+ + H+ ion implantation, being more effective than the sole implantation of H+ or He+, is used by many to transfer thin layers of silicon onto different substrates. However, due to the poor understanding of the basic mechanisms involved in the process, the implantation parameters to be used for efficient delamination of a superficial layer are still subject to debate. In this work, using various experimental techniques, we have studied the influence of the relative He and H depth distributions, imposed by the ion energies, on the result of sequential implantation and annealing of the same fluences of He and H ions. Analyzing the characteristics of the blister populations observed after annealing and deducing the composition of the gas they contain from FEM simulations, we show that the trapping efficiency of He atoms in platelets and blisters during annealing depends on the behavior of the vacancies generated by the two implants within the H-rich region before and after annealing. Maximum efficiency of the sequential ion implantation is obtained when the H-rich region is able to trap all implanted He ions, while the vacancies it generates are not available to favor the formation of V-rich complexes after implantation and then He-filled nano-bubbles after annealing. A technological option is to implant He+ ions first, at an energy such that the damage they generate is located on the deeper side of the H profile.

  13. Sequential programmable self-assembly: Role of cooperative interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halverson, Jonathan D.; Tkachenko, Alexei V.

    Here, we propose a general strategy of “sequential programmable self-assembly” that enables a bottom-up design of arbitrary multi-particle architectures on nano- and microscales. We show that a naive realization of this scheme, based on the pairwise additive interactions between particles, has fundamental limitations that lead to a relatively high error rate. This can be overcome by using cooperative interparticle binding. The cooperativity is a well known feature of many biochemical processes, responsible, e.g., for signaling and regulations in living systems. Here we propose to utilize a similar strategy for high precision self-assembly, and show that DNA-mediated interactions provide a convenient platform for its implementation. In particular, we outline a specific design of a DNA-based complex which we call “DNA spider,” that acts as a smart interparticle linker and provides a built-in cooperativity of binding. We demonstrate versatility of the sequential self-assembly based on spider-functionalized particles by designing several mesostructures of increasing complexity and simulating their assembly process. This includes a number of finite and repeating structures, in particular, the so-called tetrahelix and its several derivatives. Due to its generality, this approach allows one to design and successfully self-assemble virtually any structure made of a “GEOMAG” magnetic construction toy, out of nanoparticles. According to our results, once the binding cooperativity is strong enough, the sequential self-assembly becomes essentially error-free.

  14. Sequential programmable self-assembly: Role of cooperative interactions

    DOE PAGES

    Halverson, Jonathan D.; Tkachenko, Alexei V.

    2016-03-04

    Here, we propose a general strategy of “sequential programmable self-assembly” that enables a bottom-up design of arbitrary multi-particle architectures on nano- and microscales. We show that a naive realization of this scheme, based on the pairwise additive interactions between particles, has fundamental limitations that lead to a relatively high error rate. This can be overcome by using cooperative interparticle binding. The cooperativity is a well known feature of many biochemical processes, responsible, e.g., for signaling and regulations in living systems. Here we propose to utilize a similar strategy for high precision self-assembly, and show that DNA-mediated interactions provide a convenient platform for its implementation. In particular, we outline a specific design of a DNA-based complex which we call “DNA spider,” that acts as a smart interparticle linker and provides a built-in cooperativity of binding. We demonstrate versatility of the sequential self-assembly based on spider-functionalized particles by designing several mesostructures of increasing complexity and simulating their assembly process. This includes a number of finite and repeating structures, in particular, the so-called tetrahelix and its several derivatives. Due to its generality, this approach allows one to design and successfully self-assemble virtually any structure made of a “GEOMAG” magnetic construction toy, out of nanoparticles. According to our results, once the binding cooperativity is strong enough, the sequential self-assembly becomes essentially error-free.

  15. Sequential growth factor application in bone marrow stromal cell ligament engineering.

    PubMed

    Moreau, Jodie E; Chen, Jingsong; Horan, Rebecca L; Kaplan, David L; Altman, Gregory H

    2005-01-01

    In vitro bone marrow stromal cell (BMSC) growth may be enhanced through culture medium supplementation, mimicking the biochemical environment in which cells optimally proliferate and differentiate. We hypothesize that the sequential administration of growth factors to first proliferate and then differentiate BMSCs cultured on silk fiber matrices will support the enhanced development of ligament tissue in vitro. Confluent second passage (P2) BMSCs obtained from purified bone marrow aspirates were seeded on RGD-modified silk matrices. Seeded matrices were divided into three groups for 5 days of static culture, with medium supplemented with basic fibroblast growth factor (B; 1 ng/mL), epidermal growth factor (E; 1 ng/mL), or growth factor-free control (C). After day 5, medium supplementation was changed to transforming growth factor-beta1 (T; 5 ng/mL) or C for an additional 9 days of culture. Real-time RT-PCR, SEM, MTT, histology, and ELISA for collagen type I of all sample groups were performed. Results indicated that BT supported the greatest cell ingrowth after 14 days of culture in addition to the greatest cumulative collagen type I expression measured by ELISA. Sequential growth factor application promoted significant increases in collagen type I transcript expression from day 5 of culture to day 14, for five of six groups tested. All T-supplemented samples surpassed their respective control samples in both cell ingrowth and collagen deposition. All samples supported spindle-shaped, fibroblast cell morphology, aligning with the direction of silk fibers. These findings indicate significant in vitro ligament development after only 14 days of culture when using a sequential growth factor approach.

  16. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents a reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method, employing the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method supports the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and efficiently approximates the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve computational efficiency. The cycle of SO, reliability assessment and constraint update is repeated in the RBSO until the reliability requirements for constraint satisfaction are met. Finally, the RBSO is compared with traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
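    The nonintrusive step, building a polynomial chaos surrogate purely from model samples, can be sketched by regression in Python. The one-dimensional Hermite example and the toy quantity of interest below are illustrative assumptions, not the paper's entry-dynamics model:

        import numpy as np
        from math import factorial
        from numpy.polynomial.hermite_e import hermevander

        rng = np.random.default_rng(3)

        def model(xi):
            """Stand-in for an expensive quantity of interest (e.g. a
            terminal-state constraint evaluated by trajectory integration)."""
            return np.exp(0.3 * xi) + 0.1 * xi**2

        # Nonintrusive: sample the standard-normal germ and run the model
        xi = rng.standard_normal(200)
        y = model(xi)

        # Least-squares fit of a degree-4 probabilists' Hermite expansion
        deg = 4
        V = hermevander(xi, deg)                  # columns He_0 ... He_4
        c, *_ = np.linalg.lstsq(V, y, rcond=None)

        # Orthogonality gives moments for free: mean = c0, var = sum c_k^2 k!
        var = sum(c[k] ** 2 * factorial(k) for k in range(1, deg + 1))
        print("PCE mean %.4f, variance %.4f" % (c[0], var))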

  17. Predictive Place-Cell Sequences for Goal-Finding Emerge from Goal Memory and the Cognitive Map: A Computational Model

    PubMed Central

    Gönner, Lorenz; Vitay, Julien; Hamker, Fred H.

    2017-01-01

    Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions. PMID:29075187

  18. Sequential parallel comparison design with binary and time-to-event outcomes.

    PubMed

    Silverman, Rachel Kloss; Ivanova, Anastasia; Fine, Jason

    2018-04-30

    Sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials, especially trials with a potentially high placebo response. SPCD is conducted in two stages. Participants are randomized between active therapy and placebo in stage 1. Then, stage 1 placebo nonresponders are rerandomized between active therapy and placebo. Data from the two stages are pooled to yield a single P value. We consider SPCD with binary and with time-to-event outcomes. For time-to-event outcomes, response is defined as a favorable event prior to the end of follow-up for a given stage of SPCD. We show that for these cases, the usual test statistics from stages 1 and 2 are asymptotically normal and uncorrelated under the null hypothesis, leading to a straightforward combined testing procedure. In addition, we show that the estimators of the treatment effects from the two stages are asymptotically normal and uncorrelated under the null and alternative hypotheses, yielding confidence interval procedures with correct coverage. Simulations and real data analysis demonstrate the utility of the binary and time-to-event SPCD. Copyright © 2018 John Wiley & Sons, Ltd.
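    Because the stage-wise statistics are asymptotically normal and uncorrelated under the null, a weighted combination is again standard normal, which is what yields the single P value. A Python sketch (the stage-1 weight w is a design choice; the value here is an arbitrary illustration):

        import numpy as np
        from scipy import stats

        def spcd_pvalue(z1, z2, w=0.6):
            """Pool uncorrelated stage-wise z-statistics into one SPCD test."""
            z = (w * z1 + (1 - w) * z2) / np.sqrt(w**2 + (1 - w) ** 2)
            return stats.norm.sf(z)   # one-sided p-value

        # e.g. stage 1 on all participants, stage 2 on placebo nonresponders
        print(round(spcd_pvalue(1.8, 1.5), 4))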

  19. Students' conceptual performance on synthesis physics problems with varying mathematical complexity

    NASA Astrophysics Data System (ADS)

    Ibrahim, Bashirah; Ding, Lin; Heckler, Andrew F.; White, Daniel R.; Badeau, Ryan

    2017-06-01

    A body of research on physics problem solving has focused on single-concept problems. In this study we use "synthesis problems" that involve multiple concepts typically taught in different chapters. We use two types of synthesis problems, sequential and simultaneous synthesis tasks. Sequential problems require a consecutive application of fundamental principles, and simultaneous problems require a concurrent application of pertinent concepts. We explore students' conceptual performance when they solve quantitative synthesis problems with varying mathematical complexity. Conceptual performance refers to the identification, follow-up, and correct application of the pertinent concepts. Mathematical complexity is determined by the type and the number of equations to be manipulated concurrently due to the number of unknowns in each equation. Data were collected from written tasks and individual interviews administered to physics major students (N =179 ) enrolled in a second year mechanics course. The results indicate that mathematical complexity does not impact students' conceptual performance on the sequential tasks. In contrast, for the simultaneous problems, mathematical complexity negatively influences the students' conceptual performance. This difference may be explained by the students' familiarity with and confidence in particular concepts coupled with cognitive load associated with manipulating complex quantitative equations. Another explanation pertains to the type of synthesis problems, either sequential or simultaneous task. The students split the situation presented in the sequential synthesis tasks into segments but treated the situation in the simultaneous synthesis tasks as a single event.

  20. Spatial-simultaneous and spatial-sequential working memory in individuals with Down syndrome: the effect of configuration.

    PubMed

    Carretti, Barbara; Lanfranchi, Silvia; Mammarella, Irene C

    2013-01-01

    Earlier research showed that visuospatial working memory (VSWM) is better preserved in Down syndrome (DS) than verbal WM. Some differences emerged, however, when VSWM performance was broken down into its various components, and more recent studies revealed that the spatial-simultaneous component of VSWM is more impaired than the spatial-sequential one. The difficulty of managing more than one item at a time is also evident when the information to be recalled is structured. To further analyze this issue, we investigated the advantage of material being structured in spatial-simultaneous and spatial-sequential tasks by comparing the performance of a group of individuals with DS and a group of typically-developing children matched for mental age. Both groups were presented with VSWM tasks in which both the presentation format (simultaneous vs. sequential) and the type of configuration (pattern vs. random) were manipulated. Findings indicated that individuals with DS took less advantage of the pattern configuration in the spatial-simultaneous task than TD children; in contrast, the two groups' performance did not differ in the pattern configuration of the spatial-sequential task. Taken together, these results confirmed difficulties relating to the spatial-simultaneous component of VSWM in individuals with DS, supporting the importance of distinguishing between different components within this system. The findings are discussed in terms of factors influencing this specific deficit. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    NASA Astrophysics Data System (ADS)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of canonical Monte Carlo simulation for Coulomb many-body systems on graphics processing units (GPUs). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architecture and adopts the sequential updating scheme of the Metropolis algorithm. It makes no approximation in the computation of energy and reaches a remarkable 440-fold speedup compared with the serial implementation on a CPU. We further use this method to simulate primitive-model electrolytes and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of the constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
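    The sequential-update Metropolis scheme being parallelized can be written serially in a few lines of Python. Hard cores, periodic boundary handling via Ewald summation, and all GPU detail are omitted, and the parameters are illustrative only:

        import numpy as np

        rng = np.random.default_rng(7)
        N, L, beta, lb = 64, 10.0, 1.0, 1.0     # ions, box, 1/kT, Bjerrum length
        pos = rng.uniform(0, L, (N, 3))
        q = np.tile([1.0, -1.0], N // 2)        # overall-neutral 1:1 electrolyte

        def energy_of(i, r):
            """Unapproximated Coulomb energy of ion i at trial position r;
            this O(N) sum is the part mapped onto the GPU's SIMD lanes."""
            d = np.linalg.norm(pos - r, axis=1)
            d[i] = np.inf                       # skip self-interaction
            return lb * q[i] * np.sum(q / d)

        def sweep(step=0.3):
            for i in range(N):                  # sequential updating scheme
                trial = (pos[i] + rng.uniform(-step, step, 3)) % L
                dE = energy_of(i, trial) - energy_of(i, pos[i])
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    pos[i] = trial

        sweep()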

  2. Mystery Boxes, X Rays, and Radiology.

    ERIC Educational Resources Information Center

    Thomson, Norman

    2000-01-01

    Indicates the difficulties of teaching concepts beyond light and color and creating memorable learning experiences. Recommends sequential activities using the mystery box approach to explain how scientists and doctors use photon applications. (YDS)

  3. A novel approach for small sample size family-based association studies: sequential tests.

    PubMed

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies of complex genetic diseases. The results of this novel approach are compared with those obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Whereas TDT classifies single-nucleotide polymorphisms (SNPs) into only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, namely those for which we do not yet have enough evidence and should keep sampling. It is shown that SPRT results in smaller rates of false positives and negatives, as well as better accuracy and sensitivity, when classifying SNPs compared with TDT. By using SPRT, data with small sample sizes become usable for an accurate association analysis.
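    Wald's SPRT, with its two boundaries and a third "keep sampling" outcome, can be sketched generically in Python. The log-likelihood ratio appropriate to family-based SNP data is specific to the paper, so a generic stream of per-observation LLRs is assumed here:

        import numpy as np

        def sprt(llrs, alpha=0.05, beta=0.05):
            """Cumulate log-likelihood ratios until a Wald boundary is hit.

            Returns 'H1' (associated), 'H0' (not associated), or 'continue'
            (keep sampling) -- the third outcome that TDT cannot express.
            """
            upper = np.log((1 - beta) / alpha)
            lower = np.log(beta / (1 - alpha))
            s = 0.0
            for llr in llrs:
                s += llr
                if s >= upper:
                    return "H1"
                if s <= lower:
                    return "H0"
            return "continue"

        rng = np.random.default_rng(11)
        print(sprt(rng.normal(0.2, 1.0, size=50)))   # LLR stream drifting toward H1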

  4. Transaction costs and sequential bargaining in transferable discharge permit markets.

    PubMed

    Netusil, N R; Braden, J B

    2001-03-01

    Market-type mechanisms have been introduced and are being explored for various environmental programs. Several existing programs, however, have not attained the cost savings that were initially projected. Modeling that acknowledges the role of transaction costs and the discrete, bilateral, and sequential manner in which trades are executed should provide a more realistic basis for calculating potential cost savings. This paper presents empirical evidence on potential cost savings by examining a market for the abatement of sediment from farmland. Empirical results based on a market simulation model find no statistically significant change in mean abatement costs under several transaction cost levels when contracts are randomly executed. An alternative method of contract execution, in which trades are ranked by gains, yields similar results. At the highest transaction cost level studied, trading reduces the total cost of compliance relative to a uniform standard that reflects current regulations.
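    The discrete, bilateral, sequential execution with a transaction cost wedge can be mimicked by a toy trading loop in Python. Constant marginal costs and every number below are illustrative inventions, not the paper's sediment-market model:

        import numpy as np

        rng = np.random.default_rng(5)
        n = 20
        mc = rng.uniform(5, 50, n)       # hypothetical marginal abatement costs
        permits = np.full(n, 10.0)       # uniform initial allocation
        t_cost = 4.0                     # per-unit transaction cost

        # Random bilateral execution: a pair trades one permit whenever the
        # cost gap still exceeds the transaction cost.
        for _ in range(5000):
            i, j = rng.choice(n, 2, replace=False)
            buyer, seller = (i, j) if mc[i] > mc[j] else (j, i)
            if mc[buyer] - mc[seller] > t_cost and permits[seller] > 0:
                permits[seller] -= 1.0
                permits[buyer] += 1.0

        print("permits after trading:", permits.round(1))

    Raising t_cost leaves more cost-reducing trades unexecuted, which is the mechanism by which transaction costs erode projected savings.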

  5. Sequential solvent extraction for forms of antimony in five selected coals

    USGS Publications Warehouse

    Qi, C.; Liu, Gaisheng; Kong, Y.; Chou, C.-L.; Wang, R.

    2008-01-01

    The abundance of antimony in bulk samples has been determined in five selected coals: three coals from the Huaibei Coalfield, Anhui, China, and two from the Illinois Basin in the United States. The Sb abundance in these samples is in the range of 0.11-0.43 µg/g. The forms of Sb in the coals were studied by sequential solvent extraction. The six forms of Sb are water-soluble, ion-exchangeable, organic matter-bound, carbonate-bound, silicate-bound, and sulfide-bound. Results of the sequential extraction show that silicate-bound Sb is the most abundant form in these coals. Silicate- plus sulfide-bound Sb accounts for more than half of the total Sb in all coals. Bituminous coals are higher in organic matter-bound Sb than anthracite and natural coke, indicating that Sb in the organic matter may be incorporated into silicate and sulfide minerals during metamorphism. © 2008 by The University of Chicago. All rights reserved.

  6. Comparison of precursor infiltration into polymer thin films via atomic layer deposition and sequential vapor infiltration using in-situ quartz crystal microgravimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padbury, Richard P.; Jur, Jesse S., E-mail: jsjur@ncsu.edu

    Previous research exploring inorganic materials nucleation behavior on polymers via atomic layer deposition indicates the formation of hybrid organic–inorganic materials that form within the subsurface of the polymer. This has inspired adaptations to the process, such as sequential vapor infiltration, which enhances the diffusion of organometallic precursors into the subsurface of the polymer to promote the formation of a hybrid organic–inorganic coating. This work highlights the fundamental difference in mass uptake behavior between atomic layer deposition and sequential vapor infiltration using in-situ methods. In particular, in-situ quartz crystal microgravimetry is used to compare the mass uptake behavior of trimethyl aluminum in poly(butylene terephthalate) and polyamide-6 polymer thin films. The importance of trimethyl aluminum diffusion into the polymer subsurface and the subsequent chemical reactions with polymer functional groups are discussed.

  7. The effect of sequential exposure of color conditions on time and accuracy of graphic symbol location.

    PubMed

    Alant, Erna; Kolatsis, Anna; Lilienfeld, Margi

    2010-03-01

    An important aspect of AAC concerns the user's ability to locate an aided visual symbol on a communication display in order to facilitate meaningful interaction with partners. Recent studies have suggested that the use of different colored symbols may influence the visual search process and, in turn, the speed and accuracy of symbol location. This study examined the effect of color on the rate and accuracy of identifying symbols on an 8-location overlay using three color conditions (same, mixed and unique). Sixty typically developing preschool children completed two different sequential exposures (Set 1 and Set 2), searching for a target stimulus (either meaningful symbols or arbitrary forms) in a stimulus array. Findings indicated that the sequential exposures (orderings) affected both time and accuracy for both types of symbols in specific instances.

  8. Systematic assessment of benefits and risks: study protocol for a multi-criteria decision analysis using the Analytic Hierarchy Process for comparative effectiveness research

    PubMed Central

    Singh, Sonal

    2013-01-01

    Background: Regulatory decision-making involves assessment of risks and benefits of medications at the time of approval or when relevant safety concerns arise with a medication. The Analytic Hierarchy Process (AHP) facilitates decision-making in complex situations involving tradeoffs by considering risks and benefits of alternatives. The AHP allows a more structured method of synthesizing and understanding evidence in the context of importance assigned to outcomes. Our objective is to evaluate the use of an AHP in a simulated committee setting selecting oral medications for type 2 diabetes.  Methods: This study protocol describes the AHP in five sequential steps using a small group of diabetes experts representing various clinical disciplines. The first step will involve defining the goal of the decision and developing the AHP model. In the next step, we will collect information about how well alternatives are expected to fulfill the decision criteria. In the third step, we will compare the ability of the alternatives to fulfill the criteria and judge the importance of eight criteria relative to the decision goal of the optimal medication choice for type 2 diabetes. We will use pairwise comparisons to sequentially compare the pairs of alternative options regarding their ability to fulfill the criteria. In the fourth step, the scales created in the third step will be combined to create a summary score indicating how well the alternatives met the decision goal. The resulting scores will be expressed as percentages and will indicate the alternative medications' relative abilities to fulfill the decision goal. The fifth step will consist of sensitivity analyses to explore the effects of changing the estimates. We will also conduct a cognitive interview and process evaluation.  Discussion: Multi-criteria decision analysis using the AHP will aid, support and enhance the ability of decision makers to make evidence-based informed decisions consistent with their values and preferences. PMID:24555077

  9. Systematic assessment of benefits and risks: study protocol for a multi-criteria decision analysis using the Analytic Hierarchy Process for comparative effectiveness research.

    PubMed

    Maruthur, Nisa M; Joy, Susan; Dolan, James; Segal, Jodi B; Shihab, Hasan M; Singh, Sonal

    2013-01-01

    Regulatory decision-making involves assessment of risks and benefits of medications at the time of approval or when relevant safety concerns arise with a medication. The Analytic Hierarchy Process (AHP) facilitates decision-making in complex situations involving tradeoffs by considering risks and benefits of alternatives. The AHP allows a more structured method of synthesizing and understanding evidence in the context of importance assigned to outcomes. Our objective is to evaluate the use of an AHP in a simulated committee setting selecting oral medications for type 2 diabetes.  This study protocol describes the AHP in five sequential steps using a small group of diabetes experts representing various clinical disciplines. The first step will involve defining the goal of the decision and developing the AHP model. In the next step, we will collect information about how well alternatives are expected to fulfill the decision criteria. In the third step, we will compare the ability of the alternatives to fulfill the criteria and judge the importance of eight criteria relative to the decision goal of the optimal medication choice for type 2 diabetes. We will use pairwise comparisons to sequentially compare the pairs of alternative options regarding their ability to fulfill the criteria. In the fourth step, the scales created in the third step will be combined to create a summary score indicating how well the alternatives met the decision goal. The resulting scores will be expressed as percentages and will indicate the alternative medications' relative abilities to fulfill the decision goal. The fifth step will consist of sensitivity analyses to explore the effects of changing the estimates. We will also conduct a cognitive interview and process evaluation.  Multi-criteria decision analysis using the AHP will aid, support and enhance the ability of decision makers to make evidence-based informed decisions consistent with their values and preferences.
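    Step three's pairwise comparisons feed the standard AHP weight computation: the priority vector is the principal eigenvector of the reciprocal comparison matrix, and Saaty's consistency index flags contradictory judgements. A Python sketch with a hypothetical 3-criterion matrix (the protocol itself uses eight criteria):

        import numpy as np

        # Hypothetical Saaty-scale comparisons: A[i, j] = importance of
        # criterion i relative to criterion j (reciprocal by construction)
        A = np.array([[1.0,   3.0,   5.0],
                      [1/3.0, 1.0,   2.0],
                      [1/5.0, 1/2.0, 1.0]])

        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()                       # normalized priority weights

        n = A.shape[0]
        CI = (vals.real[k] - n) / (n - 1)  # consistency index (0 = consistent)
        print("weights:", np.round(w, 3), " CI: %.3f" % CI)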

  10. Two zinc-binding domains in the transporter AdcA from Streptococcus pyogenes facilitate high-affinity binding and fast transport of zinc.

    PubMed

    Cao, Kun; Li, Nan; Wang, Hongcui; Cao, Xin; He, Jiaojiao; Zhang, Bing; He, Qing-Yu; Zhang, Gong; Sun, Xuesong

    2018-04-20

    Zinc is an essential metal in bacteria. One important bacterial zinc transporter is AdcA, and most bacteria possess AdcA homologs that are small single-domain proteins, reflecting the greater efficiency of protein biogenesis for small proteins. However, a double-domain AdcA with two zinc-binding sites is significantly overrepresented in Streptococcus species, many of which are major human pathogens. Using molecular simulation and experimental validation of AdcA from Streptococcus pyogenes, we found that the two AdcA domains sequentially stabilize the structure upon zinc binding, indicating an organization required for both increased zinc affinity and transfer speed. This structural organization appears to endow Streptococcus species with distinct advantages in zinc-depleted environments, which would not be achieved by each single AdcA domain alone. This enhanced zinc transport mechanism sheds light on the significance of the evolution of the AdcA domain fusion, provides new insights into double-domain transporter proteins with two binding sites for the same ion, and indicates a potential target for antimicrobial drugs against pathogenic Streptococcus species. © 2018 by The American Society for Biochemistry and Molecular Biology, Inc.

  11. Sequential processing deficits in schizophrenia: relationship to neuropsychology and genetics.

    PubMed

    Hill, S Kristian; Bjorkquist, Olivia; Carrathers, Tarra; Roseberry, Jarett E; Hochberger, William C; Bishop, Jeffrey R

    2013-12-01

    Utilizing a combination of neuropsychological and cognitive neuroscience approaches may be essential for characterizing cognitive deficits in schizophrenia and eventually assessing cognitive outcomes. This study was designed to compare the stability of select exemplars for these approaches and their correlations in schizophrenia patients with stable treatment and clinical profiles. Reliability estimates for serial order processing were comparable to neuropsychological measures and indicate that experimental serial order processing measures may be less susceptible to practice effects than traditional neuropsychological measures. Correlations were moderate and consistent with a global cognitive factor. Exploratory analyses indicated a potentially critical role of the Met allele of the Catechol-O-methyltransferase (COMT) Val158Met polymorphism in externally paced sequential recall. Experimental measures of serial order processing may reflect frontostriatal dysfunction and be a useful supplement to large neuropsychological batteries. © 2013.

  12. Sequential Processing Deficits in Schizophrenia: Relationship to Neuropsychology and Genetics

    PubMed Central

    Hill, S. Kristian; Bjorkquist, Olivia; Carrathers, Tarra; Roseberry, Jarett E.; Hochberger, William C.; Bishop, Jeffrey R.

    2014-01-01

    Utilizing a combination of neuropsychological and cognitive neuroscience approaches may be essential for characterizing cognitive deficits in schizophrenia and eventually assessing cognitive outcomes. This study was designed to compare the stability of select exemplars for these approaches and their correlations in schizophrenia patients with stable treatment and clinical profiles. Reliability estimates for serial order processing were comparable to neuropsychological measures and indicate that experimental serial order processing measures may be less susceptible to practice effects than traditional neuropsychological measures. Correlations were moderate and consistent with a global cognitive factor. Exploratory analyses indicated a potentially critical role of the Met allele of the Catechol-O-methyltransferase (COMT) Val158Met polymorphism in externally paced sequential recall. Experimental measures of serial order processing may reflect frontostriatal dysfunction and be a useful supplement to large neuropsychological batteries. PMID:24119464

  13. Sequential injection titration method using second-order signals: determination of acidity in plant oils and biodiesel samples.

    PubMed

    del Río, Vanessa; Larrechi, M Soledad; Callao, M Pilar

    2010-06-15

    A new concept of flow titration is proposed and demonstrated for the determination of total acidity in plant oils and biodiesel. We use sequential injection analysis (SIA) with a diode array spectrophotometric detector linked to chemometric tools such as multivariate curve resolution-alternating least squares (MCR-ALS). The system is based on the evolution of the basic species of an acid-base indicator, alizarin, when it comes into contact with a sample that contains free fatty acids. The gradual pH change in the reactor coil due to diffusion and reaction phenomena allows the sequential appearance of both species of the indicator in the detector coil, recording a data matrix for each sample. The SIA-MCR-ALS method reduces the amounts of sample and reagents and the time consumed. Each determination consumes 0.413 ml of sample, 0.250 ml of indicator and 3 ml of carrier (ethanol) and generates 3.333 ml of waste. The analysis frequency is high (12 samples per hour including all steps, i.e., cleaning, preparation and analysis). The reagents are in common laboratory use, and it is not necessary to know their concentrations exactly. The method was applied to determine acidity in plant oil and biodiesel samples. Results obtained by the proposed method compare well with those obtained by the official European Community method, which is time-consuming and uses large amounts of organic solvents.
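    The chemometric core, alternating least squares between concentration profiles and spectra, fits in a few lines. Real MCR-ALS toolboxes use proper non-negative least squares plus closure and unimodality constraints; the Python below is only the skeleton, with an invented two-species data matrix:

        import numpy as np

        def mcr_als(D, C0, n_iter=100):
            """Resolve D (times x wavelengths) ~= C @ S.T by alternating
            least squares, clipping negatives as a crude non-negativity step."""
            C = C0.copy()
            for _ in range(n_iter):
                S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0, None)
                C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0, None)
            return C, S

        # Toy data: acid form of an indicator converting to its basic form
        t = np.linspace(0, 1, 60)[:, None]
        C_true = np.hstack([1 - t, t])                        # concentrations
        S_true = np.array([[1.0, 0.2, 0.0],                   # acid spectrum
                           [0.0, 0.3, 1.0]]).T                # base spectrum
        D = C_true @ S_true.T
        C, S = mcr_als(D, C0=np.hstack([1 - t + 0.1, t + 0.1]))
        print("reconstruction residual:", np.linalg.norm(D - C @ S.T))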

  14. Comparison of different strategies in prenatal screening for Down's syndrome: cost effectiveness analysis of computer simulation.

    PubMed

    Gekas, Jean; Gagné, Geneviève; Bujold, Emmanuel; Douillard, Daniel; Forest, Jean-Claude; Reinharz, Daniel; Rousseau, François

    2009-02-13

    To assess and compare the cost effectiveness of three different strategies for prenatal screening for Down's syndrome (integrated test, sequential screening, and contingent screening) and to determine the most useful cut-off values for risk. Computer simulations were used to study integrated, sequential, and contingent screening strategies with various cut-offs, leading to 19 potential screening algorithms. The simulation was populated with data from the Serum Urine and Ultrasound Screening Study (SURUSS), real unit costs for healthcare interventions, and a population of 110,948 pregnancies from the province of Québec for the year 2001. Outcomes were cost effectiveness ratios, incremental cost effectiveness ratios, and screening outcomes. The contingent screening strategy dominated all other screening options: it had the best cost effectiveness ratio ($C26,833 per case of Down's syndrome) with fewer procedure-related euploid miscarriages and unnecessary terminations (respectively, 6 and 16 per 100,000 pregnancies). It also outperformed serum screening in the second trimester. In terms of the incremental cost effectiveness ratio, contingent screening was still dominant: compared with screening based on maternal age alone, the savings were $C30,963 per additional birth with Down's syndrome averted. Contingent screening was the only screening strategy that offered early reassurance to the majority of women (77.81%) in the first trimester and minimised costs by limiting retesting during the second trimester (21.05%). For the contingent and sequential screening strategies, the choice of cut-off value for risk in the first trimester test significantly affected the cost effectiveness ratios (respectively, from $C26,833 to $C37,260 and from $C35,215 to $C45,314 per case of Down's syndrome), the number of procedure-related euploid miscarriages (from 6 to 46 and from 6 to 45 per 100,000 pregnancies), and the number of unnecessary terminations (from 16 to 26 and from 16 to 25 per 100,000 pregnancies). Contingent screening, with a first trimester cut-off value for high risk of 1 in 9, is the preferred option for prenatal screening of women for pregnancies affected by Down's syndrome.
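    The incremental cost effectiveness ratio driving these comparisons is simple arithmetic: the cost difference between two strategies divided by their difference in effect. An illustrative Python computation (numbers invented, not the study's):

        # Comparator strategy versus contingent screening (hypothetical values)
        cost_a, detected_a = 2_950_000, 100
        cost_b, detected_b = 3_260_000, 110

        icer = (cost_b - cost_a) / (detected_b - detected_a)
        print(f"ICER: ${icer:,.0f} per additional case of Down's syndrome detected")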

  15. High performance hybrid functional Petri net simulations of biological pathway models on CUDA.

    PubMed

    Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Hybrid functional Petri nets are a widespread tool for representing and simulating biological models. Due to their potential for providing virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance accelerations by efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) have enabled the scientific community to port a variety of compute-intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU-accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating cell boundary formation by Delta-Notch signaling on a CUDA-enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
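
    For readers unfamiliar with the formalism, here is a minimal, sequential (CPU-side) sketch of hybrid functional Petri net simulation in Python. The data layout, function names and fixed-step Euler integration are my own assumptions for illustration; they are not the paper's CUDA scheme, which parallelizes the per-place and per-transition updates across GPU threads.

    ```python
    import numpy as np

    def simulate_hfpn(m_disc, x_cont, disc_transitions, cont_rates,
                      dt=0.01, t_end=1.0):
        """Hybrid Petri net sketch: discrete transitions fire when
        enabled; continuous places follow marking-dependent rates."""
        t = 0.0
        while t < t_end:
            # Continuous update: dx/dt from marking-dependent rates
            x_cont += dt * cont_rates(m_disc, x_cont)
            # Discrete update: fire every enabled transition once
            for pre, post, enabled in disc_transitions:
                if enabled(m_disc, x_cont):
                    for p, w in pre.items():
                        m_disc[p] -= w
                    for p, w in post.items():
                        m_disc[p] += w
            t += dt
        return m_disc, x_cont

    # Toy usage: one discrete place gating a continuous decay
    m, x = {"gene_on": 1}, np.array([10.0])
    rates = lambda m, x: np.array([-0.5 * x[0] if m["gene_on"] else 0.0])
    print(simulate_hfpn(m, x, [], rates, dt=0.01, t_end=2.0))
    ```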

  16. Night vision goggle stimulation using LCoS and DLP projection technology, which is better?

    NASA Astrophysics Data System (ADS)

    Ali, Masoud H.; Lyon, Paul; De Meerleer, Peter

    2014-06-01

    High fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete and in recent years training simulators do NVG stimulation with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.

  17. Scorpion Hybrid Optical-based Inertial Tracker (HObIT) test results

    NASA Astrophysics Data System (ADS)

    Atac, Robert; Spink, Scott; Calloway, Tom; Foxlin, Eric

    2014-06-01

  18. Infiltration Processes and Flow Velocities Across the Landscape: When and Where is Macropore Flow Relevant?

    NASA Astrophysics Data System (ADS)

    Demand, D.; Blume, T.; Weiler, M.

    2017-12-01

    Preferential flow in macropores significantly affects the distribution of water and solutes in soil, and many studies have shown its relevance worldwide. Although some models include this process as a second pore domain, little is known about its spatial patterns and temporal dynamics. For example, while flow in the matrix is usually modeled and parameterized based on soil texture, the influence of texture on non-capillary flow within a given land-use class is poorly understood. To investigate the temporal and spatial dynamics of preferential flow, we used a four-year soil moisture dataset from the mesoscale Attert catchment (288 km²) in Luxembourg. This dataset contains time series from 126 soil profiles in different textures and two land-use classes (forest, grassland). The soil moisture probes were installed at 10, 30 and 50 cm depth and measured at 5-minute temporal resolution. Events were defined by a soil moisture increase greater than the instrument noise after a precipitation sum of more than 1 mm. Precipitation was measured next to the profiles, so each location could be associated with its own precipitation characteristics. For every event and profile, the soil moisture reaction was classified as a sequential (ordered by depth) or non-sequential response; a non-sequential soil moisture reaction was used as an indicator of preferential flow, as sketched in the example below. For sequential flow, the velocity was determined from the first reaction between two vertically adjacent sensors. The sensor reaction and wetting-front velocity were analyzed in the context of precipitation characteristics and initial soil water content. Grassland sites showed a lower proportion of non-sequential flow than forest sites. For forest, the non-sequential response depends on texture, rainfall intensity and initial water content; this is less distinct for the grassland sites. Furthermore, sequential reactions show higher flow velocities at sites that also have a high percentage of non-sequential responses. In contrast, grassland sites show a more homogeneous wetting front independent of soil texture. Compared against common modelling approaches for soil water flow, the measured velocities show clear evidence of preferential flow, especially in forest soils. The analysis also shows that vegetation can alter soil properties beyond what textural properties alone would suggest.
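
    The sequential/non-sequential classification can be sketched in a few lines of Python. This is an assumed reconstruction from the abstract's description, not the authors' code; the dictionary layout, function name, and noise handling are invented for illustration.

    ```python
    def classify_profile_response(arrival_times):
        """Classify one event at one profile. arrival_times maps sensor
        depth (cm) to the time (s) soil moisture first rose above the
        instrument noise (None = no response). A response that is not
        ordered top-down indicates preferential (macropore) flow."""
        responding = [(d, t) for d, t in sorted(arrival_times.items())
                      if t is not None]
        if len(responding) < 2:
            return "insufficient response"
        times = [t for _, t in responding]
        if all(t1 <= t2 for t1, t2 in zip(times, times[1:])):
            # Ordered by depth: wetting-front velocity from the first
            # pair of vertically adjacent responding sensors
            (d1, t1), (d2, t2) = responding[0], responding[1]
            v = (d2 - d1) / (t2 - t1) if t2 > t1 else float("inf")
            return f"sequential (wetting front ~{v:.3g} cm/s)"
        return "non-sequential (preferential flow indicator)"

    # Example: the 30 cm sensor reacts before the 10 cm sensor
    print(classify_profile_response({10: 1200.0, 30: 600.0, 50: None}))
    ```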

  19. Evaluating the parent-adolescent communication toolkit: Usability and preliminary content effectiveness of an online intervention.

    PubMed

    Toombs, Elaine; Unruh, Anita; McGrath, Patrick

    2018-01-01

    This study aimed to assess the Parent-Adolescent Communication Toolkit (PACT), an online intervention designed to help parents improve communication with their adolescents. Participant preferences for two module delivery systems (sequential and unrestricted module access) were identified. Usability assessment of the PACT intervention was completed using pre-test and post-test comparisons. Usability data, including participant completion and satisfaction ratings, were examined. Parents (N = 18) of adolescents were randomized to a sequential or unrestricted module access group. Parent participants completed pre-test measures, the PACT intervention and post-test measures. Participants provided feedback on the intervention to improve the modules and provided usability ratings. Adolescent pre- and post-test ratings were evaluated. Usability ratings were high and parent feedback was positive. The sequential module access group rated the intervention content higher and completed more content than the unrestricted module access group, indicating support for the sequential access design. Parent mean post-test communication scores were significantly higher (p < .05) than pre-test scores. No significant differences were detected for adolescent participants. Findings suggest that the Parent-Adolescent Communication Toolkit has the potential to improve parent-adolescent communication, but further effectiveness assessment is required.

  20. What model resolution is required in climatological downscaling over complex terrain?

    NASA Astrophysics Data System (ADS)

    El-Samra, Renalda; Bou-Zeid, Elie; El-Fadel, Mutasem

    2018-05-01

    This study presents results from the Weather Research and Forecasting (WRF) model applied for climatological downscaling simulations over highly complex terrain along the Eastern Mediterranean. We sequentially downscale general circulation model results, for a mild and wet year (2003) and a hot and dry year (2010), to three local horizontal resolutions of 9, 3 and 1 km. Simulated near-surface hydrometeorological variables are compared at different time scales against data from an observational network over the study area comprising rain gauges, anemometers, and thermometers. The overall performance of WRF at 1 and 3 km horizontal resolution was satisfactory, with significant improvement over the 9 km downscaling simulation. The total yearly precipitation from WRF's 1 km and 3 km domains exhibited < 10% bias with respect to observational data. The errors in minimum and maximum temperatures were reduced by the downscaling, along with a high-quality delineation of temperature variability and extremes for both the 1 and 3 km resolution runs. Wind speeds, on the other hand, are generally overestimated at all model resolutions in comparison with observational data, particularly on the coast (up to 50%) compared to inland stations (up to 40%). The findings therefore indicate that a 3 km resolution is sufficient for the downscaling, especially as it would allow more years and scenarios to be investigated compared to the higher 1 km resolution at the same computational effort. In addition, the results provide a quantitative measure of the potential errors for various hydrometeorological variables.

  1. Transport of U(VI) through sediments amended with phosphate to induce in situ uranium immobilization.

    PubMed

    Mehta, Vrajesh S; Maillot, Fabien; Wang, Zheming; Catalano, Jeffrey G; Giammar, Daniel E

    2015-02-01

    Phosphate amendments can be added to U(VI)-contaminated subsurface environments to promote in situ remediation. The primary objective of this study was to evaluate the impacts of phosphate addition on the transport of U(VI) through contaminated sediments. In batch experiments using sediments (<2 mm size fraction) from a site in Rifle, Colorado, U(VI) only weakly adsorbed due to the dominance of the aqueous speciation by Ca-U(VI)-carbonate complexes. Column experiments with these sediments were performed with flow rates that correspond to a groundwater velocity of 1.1 m/day. In the absence of phosphate, the sediments took up 1.68-1.98 μg U/g of sediments when the synthetic groundwater influent contained 4 μM U(VI). When U(VI)-free influents were then introduced with and without phosphate, substantially more uranium was retained within the column when phosphate was present in the influent. Sequential extractions of sediments from the columns revealed that uranium was uniformly distributed along the length of the columns and was primarily in forms that could be extracted by ion exchange and contact with a weak acid. Laser induced fluorescence spectroscopy (LIFS) analysis along with sequential extraction results suggest adsorption as the dominant uranium uptake mechanism. The response of dissolved uranium concentrations to stopped-flow events and the comparison of experimental data with simulations from a simple reactive transport model indicated that uranium adsorption to and desorption from the sediments was not always at local equilibrium.
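
    The non-equilibrium behavior inferred from the stopped-flow response can be captured by a first-order kinetic sorption model. The Python sketch below is purely illustrative: the rate law, all parameter values and the batch (no-flow) setting are assumptions, not the study's calibrated reactive transport model.

    ```python
    from scipy.integrate import solve_ivp

    # Kinetic sorption sketch: dS/dt = k * (Kd * C - S), so the sorbed
    # phase relaxes toward the Kd equilibrium rather than tracking it.
    Kd, k = 0.5, 0.2          # L/g and 1/h (invented values)
    rho_b, theta = 1.6, 0.4   # bulk density (g/mL), porosity (invented)

    def rhs(t, y):
        C, S = y              # dissolved (uM) and sorbed (umol/g) U(VI)
        dS = k * (Kd * C - S)
        dC = -dS * rho_b / theta   # closed-batch mass balance
        return [dC, dS]

    sol = solve_ivp(rhs, (0.0, 48.0), [4.0, 0.0])  # 4 uM U(VI) initially
    print(sol.y[:, -1])       # gradual approach to equilibrium
    ```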

  2. A comparison of two worlds: How does Bayes hold up to the status quo for the analysis of clinical trials?

    PubMed

    Pressman, Alice R; Avins, Andrew L; Hubbard, Alan; Satariano, William A

    2011-07-01

    There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien-Fleming and Lan-DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient.
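
    The bootstrap of the stopping-time distribution can be sketched compactly in Python. Everything below is invented for illustration (the data, the interim look sizes, and the boundary values, which only roughly follow an O'Brien-Fleming shape); the paper's analysis also covers Bayesian posterior-probability stopping rules not shown here.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def stopping_time(data, looks, boundary):
        """Return the first interim look at which the z statistic
        crosses the efficacy boundary, else the final look."""
        for k, (n, z_crit) in enumerate(zip(looks, boundary), start=1):
            x = data[:n]
            z = x.mean() / (x.std(ddof=1) / np.sqrt(n))
            if abs(z) >= z_crit:
                return k
        return len(looks)

    # Hypothetical trial data and illustrative boundary values
    data = rng.normal(0.2, 1.0, size=400)
    looks = [100, 200, 300, 400]
    boundary = [4.05, 2.86, 2.34, 2.02]

    # Nonparametric bootstrap of the stopping-time distribution
    stops = [stopping_time(rng.choice(data, size=400, replace=True),
                           looks, boundary) for _ in range(2000)]
    print(np.bincount(stops, minlength=5)[1:] / 2000.0)
    ```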

  3. A comparison of two worlds: How does Bayes hold up to the status quo for the analysis of clinical trials?

    PubMed Central

    Pressman, Alice R.; Avins, Andrew L.; Hubbard, Alan; Satariano, William A.

    2014-01-01

    Background: There is a paucity of literature comparing Bayesian analytic techniques with traditional approaches for analyzing clinical trials using real trial data. Methods: We compared Bayesian and frequentist group sequential methods using data from two published clinical trials. We chose two widely accepted frequentist rules, O'Brien–Fleming and Lan–DeMets, and conjugate Bayesian priors. Using the nonparametric bootstrap, we estimated a sampling distribution of stopping times for each method. Because current practice dictates the preservation of an experiment-wise false positive rate (Type I error), we approximated these error rates for our Bayesian and frequentist analyses with the posterior probability of detecting an effect in a simulated null sample. Thus for the data-generated distribution represented by these trials, we were able to compare the relative performance of these techniques. Results: No final outcomes differed from those of the original trials. However, the timing of trial termination differed substantially by method and varied by trial. For one trial, group sequential designs of either type dictated early stopping of the study. In the other, stopping times were dependent upon the choice of spending function and prior distribution. Conclusions: Results indicate that trialists ought to consider Bayesian methods in addition to traditional approaches for analysis of clinical trials. Though findings from this small sample did not demonstrate either method to consistently outperform the other, they did suggest the need to replicate these comparisons using data from varied clinical trials in order to determine the conditions under which the different methods would be most efficient. PMID:21453792

  4. Effects of Phenylalanine Substitutions in Gramicidin A on the Kinetics of Channel Formation in Vesicles and Channel Structure in SDS Micelles

    PubMed Central

    Jordan, J. B.; Easton, P. L.; Hinton, J. F.

    2005-01-01

    The common occurrence of Trp residues at the aqueous-lipid interface region of transmembrane channels is thought to be indicative of their importance for insertion and stabilization of the channel in membranes. To further investigate the effects of Trp→Phe substitution on the structure and function of the gramicidin channel, four analogs of gramicidin A have been synthesized in which the tryptophan residues at positions 9, 11, 13, and 15 are sequentially replaced with phenylalanine. The three-dimensional structure of each viable analog has been determined using a combination of two-dimensional NMR techniques and distance geometry-simulated annealing structure calculations. These phenylalanine analogs adopt a homodimer motif, consisting of two β6.3 helices joined by six hydrogen bonds at their NH2-termini. The replacement of the tryptophan residues does not have a significant effect on the backbone structure of the channels when compared to native gramicidin A, and only small effects are seen on side-chain conformations. Single-channel conductance measurements have shown that the conductance and lifetime of the channels are significantly affected by the replacement of the tryptophan residues (Wallace, 2000; Becker et al., 1991). The variation in conductance appears to be caused by the sequential removal of a tryptophan dipole, thereby removing the ion-dipole interaction at the channel entrance and at the ion binding site. Channel lifetime variations appear to be related to changing side chain-lipid interactions. This is supported by data relating to transport and incorporation kinetics. PMID:15501932

  5. Effects of phenylalanine substitutions in gramicidin A on the kinetics of channel formation in vesicles and channel structure in SDS micelles.

    PubMed

    Jordan, J B; Easton, P L; Hinton, J F

    2005-01-01

    The common occurrence of Trp residues at the aqueous-lipid interface region of transmembrane channels is thought to be indicative of their importance for insertion and stabilization of the channel in membranes. To further investigate the effects of Trp→Phe substitution on the structure and function of the gramicidin channel, four analogs of gramicidin A have been synthesized in which the tryptophan residues at positions 9, 11, 13, and 15 are sequentially replaced with phenylalanine. The three-dimensional structure of each viable analog has been determined using a combination of two-dimensional NMR techniques and distance geometry-simulated annealing structure calculations. These phenylalanine analogs adopt a homodimer motif, consisting of two β6.3 helices joined by six hydrogen bonds at their NH2-termini. The replacement of the tryptophan residues does not have a significant effect on the backbone structure of the channels when compared to native gramicidin A, and only small effects are seen on side-chain conformations. Single-channel conductance measurements have shown that the conductance and lifetime of the channels are significantly affected by the replacement of the tryptophan residues (Wallace, 2000; Becker et al., 1991). The variation in conductance appears to be caused by the sequential removal of a tryptophan dipole, thereby removing the ion-dipole interaction at the channel entrance and at the ion binding site. Channel lifetime variations appear to be related to changing side chain-lipid interactions. This is supported by data relating to transport and incorporation kinetics.

  6. Perceptually Guided Photo Retargeting.

    PubMed

    Xia, Yingjie; Zhang, Luming; Hong, Richang; Nie, Liqiang; Yan, Yan; Shao, Ling

    2016-04-22

    We propose perceptually guided photo retargeting, which shrinks a photo by simulating a human's process of sequentially perceiving visually/semantically important regions in a photo. In particular, we first project the local features (graphlets in this paper) onto a semantic space, wherein visual cues such as global spatial layout and rough geometric context are exploited. Thereafter, a sparsity-constrained learning algorithm is derived to select semantically representative graphlets of a photo, and the selection process can be interpreted as a path which simulates how a human actively perceives semantics in a photo. Furthermore, we learn the prior distribution of such active graphlet paths (AGPs) from training photos that are marked as esthetically pleasing by multiple users. The learned priors enforce the corresponding AGP of a retargeted photo to be maximally similar to those from the training photos. On top of the retargeting model, we further design an online learning scheme to incrementally update the model with new photos that are esthetically pleasing. The online update module makes the algorithm less dependent on the number and contents of the initial training data. Experimental results show that: 1) the proposed AGP is over 90% consistent with the human gaze-shifting path, as verified by eye-tracking data, and 2) the retargeting algorithm outperforms its competitors significantly, as the AGP is more indicative of photo esthetics than conventional saliency maps.

  7. Development of a Prototype Simulation Executive with Zooming in the Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Afjeh, Abdollah A.

    1995-01-01

    A major difficulty in designing aeropropulsion systems is that of identifying and understanding the interactions between the separate engine components and disciplines (e.g., fluid mechanics, structural mechanics, heat transfer, material properties, etc.). The traditional analysis approach is to decompose the system into separate components, with the interaction between components being evaluated by the application of each of the single disciplines in a sequential manner. Here, one discipline uses information from the calculation of another discipline to determine the effects of component coupling. This approach, however, may not properly identify the consequences of these effects during the design phase, leaving the interactions to be discovered and evaluated during engine testing. This contributes to the time and cost of developing new propulsion systems as, typically, several design-build-test cycles are needed to fully identify multidisciplinary effects and reach the desired system performance. The alternative to sequential isolated component analysis is to use multidisciplinary coupling at a more fundamental level. This approach has been made more plausible by recent advancements in computational simulation along with the application of concurrent engineering concepts. Computer simulation systems designed to provide an environment capable of integrating the various disciplines into a single simulation system have been proposed and are currently being developed. One such system is being developed by the Numerical Propulsion System Simulation (NPSS) project. The NPSS project, being developed at the Interdisciplinary Technology Office at the NASA Lewis Research Center, is a 'numerical test cell' designed to provide for comprehensive computational design and analysis of aerospace propulsion systems. It will provide multidisciplinary analyses on a variety of computational platforms, and a user interface consisting of expert systems, database management and visualization tools, to allow the designer to investigate the complex interactions inherent in these systems. An interactive programming software system, known as the Application Visualization System (AVS), was utilized for the development of the propulsion system simulation. The modularity of this system provides the ability to couple propulsion system components, as well as disciplines, and to integrate existing, well-established analysis codes into the overall system simulation. This feature allows the user to customize the simulation model by inserting desired analysis codes. The prototypical simulation environment for multidisciplinary analysis, called Turbofan Engine System Simulation (TESS), which incorporates many of the characteristics of the simulation environment proposed herein, is detailed.

  8. Performance evaluation of the multiple root node approach to the Rete pattern matcher for production systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sohn, A.; Gaudiot, J.-L.

    1991-12-31

    Much effort has been expended on special architectures and algorithms dedicated to efficient processing of the pattern matching step of production systems. In this paper, the authors investigate possible improvements to the Rete pattern matcher for production systems. Inefficiencies in the Rete match algorithm have been identified, based on which they introduce a pattern matcher with multiple root nodes. A complete implementation of the multiple root node-based production system interpreter is presented to investigate its relative algorithmic behavior with respect to the Rete-based Ops5 production system interpreter. Benchmark production system programs are executed (not simulated) on a sequential machine, a Sun 4/490, using both interpreters, and various experimental results are presented. Their investigation indicates that the multiple root node-based production system interpreter gives a maximum of up to a 6-fold improvement over the Lisp implementation of the Rete-based Ops5 for the match step.
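
    The core idea can be illustrated with a toy alpha-network dispatcher in Python. This is a loose illustration of "multiple root nodes" under invented data shapes and names; it is not the paper's Ops5 implementation, and a real Rete network would add join nodes and beta memories below this layer.

    ```python
    from collections import defaultdict

    class MultiRootAlphaNetwork:
        """Working-memory elements are dispatched by class to a
        dedicated root, so only the condition tests registered for
        that class run (a single-root Rete feeds every element
        through one shared entry node)."""

        def __init__(self):
            self.roots = defaultdict(list)   # wme class -> (test, memory)

        def add_pattern(self, wme_class, test, memory):
            self.roots[wme_class].append((test, memory))

        def add_wme(self, wme):
            # Only patterns registered under this element's class run
            for test, memory in self.roots[wme["class"]]:
                if test(wme):
                    memory.append(wme)

    # Usage: patterns on different classes never interfere
    net = MultiRootAlphaNetwork()
    goal_mem, block_mem = [], []
    net.add_pattern("goal", lambda w: w["status"] == "active", goal_mem)
    net.add_pattern("block", lambda w: w["color"] == "red", block_mem)
    net.add_wme({"class": "goal", "status": "active"})
    net.add_wme({"class": "block", "color": "blue"})
    print(len(goal_mem), len(block_mem))   # 1 0
    ```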

  9. A smart sensor architecture based on emergent computation in an array of outer-totalistic cells

    NASA Astrophysics Data System (ADS)

    Dogaru, Radu; Dogaru, Ioana; Glesner, Manfred

    2005-06-01

    A novel smart-sensor architecture is proposed that is capable of segmenting and recognizing characters in a monochrome image. It provides a list of ASCII codes representing the characters recognized in the monochrome visual field, and it can operate as an aid for the blind or in industrial applications. A bio-inspired cellular model with simple linear neurons was found to be the best at performing the nontrivial task of cropping isolated compact objects such as handwritten digits or characters. By attaching a simple outer-totalistic cell to each pixel sensor, emergent computation in the resulting cellular automata lattice provides a straightforward and compact solution to the otherwise computationally intensive problem of character segmentation. A simple and robust recognition algorithm is built into a compact sequential controller accessing the array of cells, so that the integrated device can directly provide a list of codes of the recognized characters. Preliminary simulation tests indicate good performance and robustness to various distortions of the visual field.
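
    For reference, here is what one synchronous update of a binary outer-totalistic cellular automaton looks like in Python: each cell's next state depends only on its own state and the sum of its eight neighbours. The rule table shown is an illustrative (Conway-style) example, not the paper's segmentation rule, and the np.roll neighbourhood wraps toroidally.

    ```python
    import numpy as np

    def outer_totalistic_step(grid, table):
        """One synchronous update: next state = table[state][sum of
        the 8 neighbours], the defining form of an outer-totalistic CA."""
        s = sum(np.roll(np.roll(grid, di, 0), dj, 1)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0))
        out = np.zeros_like(grid)
        for state in (0, 1):
            for total in range(9):
                out[(grid == state) & (s == total)] = table[state][total]
        return out

    # Illustrative rule written as an outer-totalistic table
    table = {0: [0, 0, 0, 1, 0, 0, 0, 0, 0],
             1: [0, 0, 1, 1, 0, 0, 0, 0, 0]}
    grid = (np.random.default_rng(1).random((32, 32)) < 0.3).astype(int)
    grid = outer_totalistic_step(grid, table)
    ```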

  10. A novel design of membrane mirror with small deformation and imaging performance analysis in infrared system

    NASA Astrophysics Data System (ADS)

    Zhang, Shuqing; Wang, Yongquan; Zhi, Xiyang

    2017-05-01

    A method of diminishing the shape error of a membrane mirror is proposed in this paper. The inner inflating pressure is considerably decreased by adopting a pre-shaped membrane; small deformation of the membrane mirror, with greatly reduced shape error, is subsequently achieved. First, a finite element model of the pre-shaped membrane is built on the basis of its mechanical properties. Then accurate shape data under different pressures are acquired by iteratively calculating the node displacements of the model. The shape data are used to build up deformed reflecting surfaces for the simulation analysis in ZEMAX. Finally, ground-based imaging experiments with 4-bar targets and a natural scene are conducted. Experimental results indicate that the MTF of the infrared system can reach 0.3 at a high spatial resolution of 10 lp/mm, and texture details of the natural scene are well presented. The method can provide a theoretical basis and technical support for applications in lightweight optical components with ultra-large apertures.

  11. The effect of earthworms on the fractionation and bioavailability of heavy metals before and after soil remediation.

    PubMed

    Udovic, Metka; Lestan, Domen

    2007-07-01

    The effect of two earthworm species, Lumbricus rubellus and Eisenia fetida, on the fractionation/bioavailability of Pb and Zn before and after soil leaching with EDTA was studied. Four leaching steps with a total of 12.5 mmol kg(-1) EDTA removed 39.8% and 6.1% of Pb and Zn, respectively. EDTA removed Pb from all soil fractions fairly uniformly (assessed using sequential extractions). Zn was mostly present in the chemically inert residual soil fraction, which explains its poor removal. Analysis of earthworm casts and the remainder of the soil indicated that L. rubellus and E. fetida actively regulated soil pH, but did not significantly change Pb and Zn fractionation in non-remediated and remediated soil. However, the bioavailability of Pb (assessed using Ruby's physiologically based extraction test) in E. fetida casts was significantly higher than in the bulk of the soil. In remediated soil, the Pb bioavailability in the simulated stomach phase increased 5.1-fold.

  12. Stable Eutectoid Transformation in Nodular Cast Iron: Modeling and Validation

    NASA Astrophysics Data System (ADS)

    Carazo, Fernando D.; Dardati, Patricia M.; Celentano, Diego J.; Godoy, Luis A.

    2017-01-01

    This paper presents a new microstructural model of the stable eutectoid transformation in a spheroidal cast iron. The model takes into account the nucleation and growth of ferrite grains and the growth of graphite spheroids. Different laws are assumed for the growth of both phases within and below the intercritical interval of the stable eutectoid. At the microstructural level, the initial conditions for the phase transformations are obtained from the microstructural simulation of solidification of the material, which considers the divorced eutectic and the subsequent growth of graphite spheroids up to the initiation of the stable eutectoid transformation. The temperature field is obtained by solving the energy equation by means of finite elements. The microstructural (phase change) and macrostructural (energy balance) models are coupled by a sequential multiscale procedure. Experimental validation of the model is achieved by comparison with measured values of the fractions and radii of ferrite grains in 2D sections. The agreement with such experiments indicates that the present model is capable of predicting ferrite phase fraction and grain size with reasonable accuracy.

  13. Orbit Determination and Maneuver Detection Using Event Representation with Thrust-Fourier-Coefficients

    NASA Astrophysics Data System (ADS)

    Lubey, D.; Ko, H.; Scheeres, D.

    The classical orbit determination (OD) method of dealing with unknown maneuvers is to restart the OD process with post-maneuver observations. However, it is also possible to continue the OD process through such unknown maneuvers by representing them with an appropriate event representation. It has been shown in previous work (Ko & Scheeres, JGCD 2014) that any maneuver performed by a satellite transitioning between two arbitrary orbital states can be represented as an equivalent maneuver connecting those two states using Thrust-Fourier-Coefficients (TFCs). Event representation using TFCs rigorously provides a unique control law that can generate the desired secular behavior for a given unknown maneuver. This paper presents applications of this representation approach to the orbit prediction and maneuver detection problems across unknown maneuvers. The TFCs are appended to a sequential filter as an adjoint state to compensate for unknown perturbing accelerations, and the modified filter estimates the satellite state and thrust coefficients by processing OD across the time of an unknown maneuver. This modified sequential filter with TFCs is capable of fitting tracking data and maintaining an OD solution in the presence of unknown maneuvers. The modified filter is also found to be effective in detecting a sudden change in TFC values, which indicates a maneuver. In order to illustrate that the event representation approach with TFCs is robust and sufficiently general to be easily adjustable, different types of measurement data are processed with the filter in a realistic LEO setting. Further, cases with mis-modeling of non-gravitational forces are included in our study to verify the versatility and efficiency of the presented algorithm. Simulation results show that the modified sequential filter with TFCs can detect maneuvers and estimate the orbit and thrust parameters in the presence of unknown maneuvers, with or without measurement data during the maneuvers. With no measurement data during maneuvers, the modified filter uses an existing pre-maneuver orbit solution to compute a post-maneuver orbit solution by forcing the TFCs to compensate for the unknown maneuver. With observation data available during maneuvers, the maneuver start and stop times are determined as well.
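
    A minimal sketch of the state-augmentation idea in Python follows. It makes illustrative assumptions throughout: a 1-D position/velocity model, the unknown thrust collapsed to a single constant acceleration treated as a random-walk state, and invented noise levels and measurements. The actual filter carries the full TFC dynamics of Ko & Scheeres rather than this toy model.

    ```python
    import numpy as np

    def kf_step(x, P, F, Q, z, H, R):
        """One predict/update cycle of a Kalman filter whose state is
        augmented with an unknown acceleration: x = [pos, vel, a]."""
        x = F @ x                      # predict through the maneuver model
        P = F @ P @ F.T + Q
        y = z - H @ x                  # measurement residual
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    dt = 10.0
    F = np.array([[1.0, dt, 0.5 * dt**2],   # the appended coefficient
                  [0.0, 1.0, dt],           # feeds position and velocity
                  [0.0, 0.0, 1.0]])         # and is held nearly constant
    Q = np.diag([0.0, 0.0, 1e-8])           # random walk on the coefficient
    H = np.array([[1.0, 0.0, 0.0]])         # range-like position measurement
    R = np.array([[25.0]])

    x, P = np.zeros(3), np.eye(3) * 1e3
    for z in [100.2, 205.1, 330.8, 480.3]:  # invented tracking data
        x, P = kf_step(x, P, F, Q, np.array([z]), H, R)
    # A sudden jump in x[2] between passes would flag a maneuver.
    ```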

  14. Mechanisms and energetics of hydride dissociation reactions on surfaces of plasma-deposited silicon thin films

    NASA Astrophysics Data System (ADS)

    Singh, Tejinder; Valipa, Mayur S.; Mountziaris, T. J.; Maroudas, Dimitrios

    2007-11-01

    We report results from a detailed analysis of the fundamental silicon hydride dissociation processes on silicon surfaces and discuss their implications for the surface chemical composition of plasma-deposited hydrogenated amorphous silicon (a-Si:H) thin films. The analysis is based on a synergistic combination of first-principles density functional theory (DFT) calculations of hydride dissociation on the hydrogen-terminated Si(001)-(2×1) surface and molecular-dynamics (MD) simulations of adsorbed SiH3 radical precursor dissociation on surfaces of MD-grown a-Si:H films. Our DFT calculations reveal that, in the presence of fivefold coordinated surface Si atoms, surface trihydride species dissociate sequentially to form surface dihydrides and surface monohydrides via thermally activated pathways with reaction barriers of 0.40-0.55 eV. The presence of dangling bonds (DBs) lowers the activation barrier for hydride dissociation to 0.15-0.20 eV, but such DB-mediated reactions are infrequent. Our MD simulations on a-Si:H film growth surfaces indicate that surface hydride dissociation reactions are predominantly mediated by fivefold coordinated surface Si atoms, with resulting activation barriers of 0.35-0.50 eV. The results are consistent with experimental measurements of a-Si:H film surface composition using in situ attenuated total reflection Fourier transform infrared spectroscopy, which indicate that the a-Si:H surface is predominantly covered with the higher hydrides at low temperatures, while the surface monohydride, SiH(s), becomes increasingly more dominant as the temperature is increased.

  15. Anti-tumor activity of high-dose EGFR tyrosine kinase inhibitor and sequential docetaxel in wild type EGFR non-small cell lung cancer cell nude mouse xenografts

    PubMed Central

    Tang, Ning; Zhang, Qianqian; Fang, Shu; Han, Xiao; Wang, Zhehai

    2017-01-01

    Treatment of non-small-cell lung cancer (NSCLC) with wild-type epidermal growth factor receptor (EGFR) is still a challenge. This study explored the antitumor activity of high-dose icotinib (an EGFR tyrosine kinase inhibitor) plus sequential docetaxel against nude mouse xenografts generated from wild-type EGFR NSCLC cells. Nude mice were subcutaneously injected with wild-type EGFR NSCLC A549 cells and divided into different groups for 3 weeks of treatment. Tumor xenograft volumes were monitored and recorded, and at the end of the experiments, tumor xenografts were removed for Western blot and immunohistochemical analyses. Compared to the control groups (negative control, regular-dose icotinib [IcoR], high-dose icotinib [IcoH], and docetaxel [DTX]) and to the regular icotinib dose (60 mg/kg) with docetaxel, treatment of mice with a high dose (1200 mg/kg) of icotinib plus sequential docetaxel for 3 weeks (IcoH-DTX) had an additive effect on the suppression of tumor xenograft size and volume (P < 0.05). Icotinib-containing treatments markedly reduced phosphorylation of EGFR, mitogen-activated protein kinase (MAPK), and protein kinase B (Akt), but only the high-dose icotinib-containing treatments showed an additive effect on CD34 inhibition (P < 0.05), an indication of reduced microvessel density in tumor xenografts. Moreover, high-dose icotinib plus docetaxel had a similar effect on mouse weight loss (a common way to measure adverse reactions in mice) compared to the other treatment combinations. The study indicates that a high dose of icotinib plus sequential docetaxel (IcoH-DTX) has an additive effect on suppressing the growth of wild-type EGFR NSCLC cell nude mouse xenografts, possibly through microvessel density reduction. Future clinical trials are needed to confirm the findings of this study. PMID:27852073

  16. Allocating Deceased Donor Kidneys to Candidates with High Panel-Reactive Antibodies.

    PubMed

    Gebel, Howard M; Kasiske, Bertram L; Gustafson, Sally K; Pyke, Joshua; Shteyn, Eugene; Israni, Ajay K; Bray, Robert A; Snyder, Jon J; Friedewald, John J; Segev, Dorry L

    2016-03-07

    In December of 2014, the Organ Procurement and Transplant Network implemented a new Kidney Allocation System (KAS) for deceased donor transplant, with increased priority for highly sensitized candidates (calculated panel-reactive antibody [cPRA] >99%). We used a modified version of the new KAS to address issues of access and equity for these candidates. In a simulation, 10,988 deceased donor kidneys transplanted into waitlisted recipients in 2010 were instead allocated to candidates with cPRA≥80% (n=18,004). Each candidate's unacceptable donor HLA antigens had been entered into the allocation system by the transplant center. In simulated match runs, kidneys were allocated sequentially to adult ABO identical or permissible candidates with cPRA 100%, 99%, 98%, etc. to 80%. Allocations were restricted to donor/recipient pairs with negative virtual crossmatches. The simulation indicated that 2111 of 10,988 kidneys (19.2%) would have been allocated to patients with cPRA 100% versus 74 of 10,988 (0.7%) that were actually transplanted. Of cPRA 100% candidates, 74% were predicted to be compatible with an average of six deceased donors; the remaining 26% seemed to be incompatible with every deceased donor organ that entered the system. Of kidneys actually allocated to cPRA 100% candidates in 2010, 66% (49 of 74) were six-antigen HLA matched/zero-antigen mismatched (HLA-A, -B, and -DR) with their recipients versus only 11% (237 of 2111) in the simulation. The simulation predicted that 10,356 of 14,433 (72%) candidates with cPRA 90%-100% could be allocated an organ compared with 7.3% who actually underwent transplant. Data in this simulation are consistent with early results of the new KAS; specifically, nearly 20% of deceased donor kidneys were (virtually) compatible with cPRA 100% candidates. Although most of these candidates were predicted to be compatible with multiple donors, approximately one-quarter are unlikely to receive a single offer.
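
    A toy version of the match-run logic might look as follows in Python. This is a loose sketch under invented data structures: the real KAS ordering involves many more factors than cPRA, and the virtual crossmatch here is reduced to a set intersection between donor HLA and the candidate's listed unacceptable antigens.

    ```python
    def simulate_match_runs(kidneys, candidates, unacceptable):
        """Offer each kidney to compatible candidates in descending
        cPRA order (100%, 99%, ... 80%), requiring ABO identity and a
        negative virtual crossmatch."""
        matched = {}
        pool = sorted(candidates, key=lambda c: -c["cPRA"])
        for kidney in kidneys:
            for cand in pool:
                if cand["id"] in matched or cand["abo"] != kidney["abo"]:
                    continue
                if kidney["hla"] & unacceptable[cand["id"]]:
                    continue  # positive virtual crossmatch: skip
                matched[cand["id"]] = kidney["id"]
                break
        return matched

    kidneys = [{"id": "K1", "abo": "O", "hla": {"A1", "B8", "DR3"}}]
    candidates = [{"id": "C1", "abo": "O", "cPRA": 100},
                  {"id": "C2", "abo": "O", "cPRA": 95}]
    unacceptable = {"C1": {"A2", "B7"}, "C2": {"B8"}}
    print(simulate_match_runs(kidneys, candidates, unacceptable))
    ```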

  17. Simulating Bioremediation of Chloroethenes in a Fractured Rock Aquifer.

    NASA Astrophysics Data System (ADS)

    Curtis, G. P.

    2016-12-01

    Reactive transport simulations are being conducted to synthesize the results of a field experiment on the enhanced bioremediation of chloroethenes in a heterogeneous fractured-rock aquifer near West Trenton, NJ. The aquifer consists of a sequence of dipping mudstone beds, with water-conducting bedding-plane fractures separated by low-permeability rock where transport is diffusion-limited. The enhanced bioremediation experiment was conducted by injecting emulsified vegetable oil as an electron donor (EOS™) and a microbial consortium (KB1™) that contained Dehalococcoides ethenogenes into a fracture zone that had maximum trichloroethene (TCE) concentrations of 84 µM. TCE was significantly biodegraded to dichloroethene, chloroethene and ethene or CO2 at the injection well and at a downgradient well. The results also show the concomitant reduction of Fe(III) and S(6) and the production of methane. The results were used to calibrate transport models for quantifying the dominant mass-removal mechanisms. A nonreactive transport model was developed to simulate advection, dispersion and matrix diffusion of bromide and deuterium tracers present in the injection solution. This calibrated model matched tracer concentrations at the injection well and a downgradient observation well and demonstrated that matrix diffusion was a dominant control on tracer transport. A reactive transport model was developed to extend the nonreactive transport model to simulate the microbially mediated sequential dechlorination reactions, the reduction of Fe(III) and S(6), and methanogenesis. The reactive transport model was calibrated to concentrations of chloride, chloroethenes, pH, alkalinity, redox-sensitive species and major ions to estimate key biogeochemical kinetic parameters. The simulation results generally match the diverse set of observations at the injection and observation wells throughout the three-year experiment. In addition, the observations and model simulations indicate that a significant pool of TCE that was initially sorbed to either the fracture surfaces or the matrix was degraded during the field experiment. The calibrated reactive transport model will be used to quantify the extent of chloroethene mass removal from a range of hypothetical aquifers.
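
    The sequential dechlorination chain at the heart of such models is a classic first-order reaction cascade. The Python sketch below uses the 84 µM initial TCE from the abstract, but the rate constants are invented, and the field model additionally couples transport, Fe(III)/sulfate reduction and methanogenesis not shown here.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # First-order chain TCE -> cDCE -> VC -> ethene (rates in 1/day,
    # illustrative values only)
    k = {"TCE": 0.30, "DCE": 0.10, "VC": 0.05}

    def rhs(t, c):
        tce, dce, vc, eth = c
        return [-k["TCE"] * tce,
                k["TCE"] * tce - k["DCE"] * dce,
                k["DCE"] * dce - k["VC"] * vc,
                k["VC"] * vc]

    sol = solve_ivp(rhs, (0.0, 60.0), [84.0, 0.0, 0.0, 0.0],  # 84 uM TCE
                    t_eval=np.linspace(0.0, 60.0, 7))
    print(sol.y[:, -1])   # concentrations after 60 days
    ```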

  18. Clinical results of computerized tomography-based simulation with laser patient marking.

    PubMed

    Ragan, D P; Forman, J D; He, T; Mesina, C F

    1996-02-01

    The accuracy of a patient treatment portal marking device and computerized tomography (CT) simulation has been clinically tested. A CT-based simulator has been assembled based on a commercial CT scanner. This includes visualization software and a computer-controlled laser drawing device. The laser drawing device is used to transfer the setup, central axis, and/or radiation portals from the CT simulator to the patient for appropriate patient skin marking. A protocol for clinical testing is reported. Twenty-five prospectively, sequentially accessioned patients have been analyzed. The simulation process can be completed in an average time of 62 min. In many cases, the treatment portals can be designed and the patient marked in one session. The mechanical accuracy of the system was found to be within +/- 1 mm. The portal projection accuracy in clinical cases is observed to be better than +/- 1.2 mm. Operating costs are equivalent to those of the conventional simulation process it replaces. Computed tomography simulation is a clinically accurate substitute for conventional simulation when used with an appropriate patient marking system and digitally reconstructed radiographs. Personnel time spent in CT simulation is equivalent to time in conventional simulation.

  19. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    PubMed Central

    Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significantly and negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress. PMID:29719522

  20. Work-Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress.

    PubMed

    Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work-family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work-family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work-family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women's perceptions of both work-to-family conflict and family-to-work conflict were significantly and negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work-family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work-family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress.
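
    To make "sequential mediation" concrete, here is a minimal Python sketch of bootstrapping the sequential indirect effect a·b1·b2 for X → M1 → M2 → Y. It uses simple OLS path fits on synthetic arrays; the published analysis used structural equation modeling, and all variable names, coefficients and data below are illustrative assumptions.

    ```python
    import numpy as np

    def sequential_indirect_effect(x, m1, m2, y, n_boot=5000, seed=0):
        """Percentile bootstrap CI for a * b1 * b2, the sequential
        indirect effect of X on Y through M1 then M2."""
        rng = np.random.default_rng(seed)
        n = len(x)

        def paths(idx):
            a = np.polyfit(x[idx], m1[idx], 1)[0]            # X -> M1
            b1 = np.linalg.lstsq(
                np.column_stack([m1[idx], x[idx], np.ones(len(idx))]),
                m2[idx], rcond=None)[0][0]                   # M1 -> M2 | X
            b2 = np.linalg.lstsq(
                np.column_stack([m2[idx], m1[idx], x[idx],
                                 np.ones(len(idx))]),
                y[idx], rcond=None)[0][0]                    # M2 -> Y | M1, X
            return a * b1 * b2

        boot = [paths(rng.integers(0, n, n)) for _ in range(n_boot)]
        return np.percentile(boot, [2.5, 97.5])

    rng = np.random.default_rng(1)
    x = rng.normal(size=351)                    # work-family conflict
    m1 = 0.5 * x + rng.normal(size=351)         # negative affect
    m2 = 0.6 * m1 + rng.normal(size=351)        # perceived stress
    y = -0.4 * m2 + rng.normal(size=351)        # mental health
    print(sequential_indirect_effect(x, m1, m2, y, n_boot=1000))
    ```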

  1. Realistic page-turning of electronic books

    NASA Astrophysics Data System (ADS)

    Fan, Chaoran; Li, Haisheng; Bai, Yannan

    2014-01-01

    Booming electronic books (e-books), as an extension of the paper book, are popular with readers. Recently, much effort has been put into realistic page-turning simulation for e-books to improve the reading experience. This paper presents a new 3D page-turning simulation approach, which employs piecewise time-dependent cylindrical surfaces to describe the turning page and constructs a smooth transition method between the time-dependent cylinders. The page-turning animation is produced by sequentially mapping the turning page onto cylinders with different radii and positions. Compared to previous approaches, our method is able to imitate various effects efficiently and obtains a more natural animation of the turning page.
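
    The cylinder mapping itself is compact: bending a flat page onto a cylinder while preserving arc length keeps the paper unstretched. The Python sketch below is my own illustration of that geometric step (the function name, parameters and page width are assumptions; the paper's piecewise, time-dependent blending between cylinders is only noted in the comments).

    ```python
    import numpy as np

    def bend_page_on_cylinder(u, v, r, theta0=0.0):
        """Map flat page coordinates (u = distance from the spine,
        v = height) onto a cylinder of radius r whose axis lies along
        the spine; arc length along the cylinder equals u, so the
        paper does not stretch."""
        theta = theta0 + u / r
        x = r * np.sin(theta)           # horizontal position
        z = r * (1.0 - np.cos(theta))   # lift of the turning page
        return x, v, z

    # Decreasing r over time tightens the curl; blending successive
    # radii/positions (per the paper's piecewise time-dependent
    # cylinders) yields the smooth page-turning animation.
    u = np.linspace(0.0, 0.21, 50)      # a 21 cm wide page
    x, y, z = bend_page_on_cylinder(u, np.zeros_like(u), r=0.08)
    ```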

  2. A formal language for the specification and verification of synchronous and asynchronous circuits

    NASA Technical Reports Server (NTRS)

    Russinoff, David M.

    1993-01-01

    A formal hardware description language for the intended application of verifiable asynchronous communication is described. The language is developed within the logical framework of the Nqthm system of Boyer and Moore and is based on the event-driven behavioral model of VHDL, including the basic VHDL signal propagation mechanisms, the notion of simulation deltas, and the VHDL simulation cycle. A core subset of the language corresponds closely with a subset of VHDL and is adequate for the realistic gate-level modeling of both combinational and sequential circuits. Various extensions to this subset provide means for convenient expression of behavioral circuit specifications.
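
    To give a flavor of the event-driven model being formalized, here is a toy delta-cycle simulator in Python. The structure and names are my own illustration of the VHDL-style cycle (scheduled signal updates take effect together, then sensitive processes run, until no deltas remain); the paper's language itself is embedded in the Boyer-Moore (Nqthm) logic, not Python.

    ```python
    from collections import defaultdict

    class DeltaSim:
        """Toy event-driven simulator in the spirit of the VHDL
        simulation cycle with simulation deltas."""

        def __init__(self):
            self.values = {}
            self.pending = {}                      # signal -> next value
            self.sensitivity = defaultdict(list)   # signal -> processes

        def schedule(self, sig, val):
            self.pending[sig] = val

        def process(self, *sigs):
            def reg(fn):
                for s in sigs:
                    self.sensitivity[s].append(fn)
                return fn
            return reg

        def run(self, max_deltas=100):
            for _ in range(max_deltas):
                if not self.pending:
                    return
                updates, self.pending = self.pending, {}
                changed = [s for s, v in updates.items()
                           if self.values.get(s) != v]
                self.values.update(updates)
                for s in changed:
                    for fn in self.sensitivity[s]:
                        fn()            # may schedule further deltas

    sim = DeltaSim()

    @sim.process("a")
    def inverter():
        sim.schedule("y", not sim.values["a"])  # y follows a, one delta later

    sim.schedule("a", True)
    sim.run()
    print(sim.values)                   # {'a': True, 'y': False}
    ```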

  3. An analysis of approach navigation accuracy and guidance requirements for the grand tour mission to the outer planets

    NASA Technical Reports Server (NTRS)

    Jones, D. W.

    1971-01-01

    The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types were evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.

  4. Simulated Space Radiation: Murine Skeletal Responses During Recovery and with Mechanical Stimulation

    NASA Technical Reports Server (NTRS)

    Shirazi-Fard, Yasaman; Zaragoza, Josergio; Schreurs, Ann-Sofie; Truong, Tiffany; Tahimic, Candice; Alwood, Joshua S.; Castillo, Alesha B.; Globus, R. K.

    2016-01-01

    Simulated space radiation at doses similar to those of solar particle events or a round-trip sojourn to Mars (1-2Gy) may cause skeletal tissue degradation and deplete stem/progenitor cell pools throughout the body. We hypothesized that simulated space radiation (SSR) causes late, time-dependent deficits in bone structure and bone cell function reflected by changes in gene expression in response to anabolic stimuli. We used a unique sequential dual ion exposure (proton and iron) for SSR to investigate time-dependence of responses in gene expression, cell function, and microarchitecture with respect to radiation and an anabolic stimulus of axial loading (AL). Male 16-wk C57BL6/J mice (n=120 total) were exposed to 0Gy (Sham, n=10), 56Fe (2Gy, positive control dose, n=10), or sequential ions for SSR (1Gy 1H/56Fe/1H, n=10) by total body irradiation (IR), and the tissues were harvested 2 or 6 mo. later. Further, to assess the response to anabolic stimuli, we subjected additional Sham-AL (n=15) and SSR-AL (n=15) groups to rest-inserted tibial axial loading (AL) starting at 1 and 5 months post-IR (-9N, 60 cycles/day, 3 days/wk, 4 wks). Exposure to 56Fe caused a significant reduction in cancellous bone volume fraction (BV/TV) compared to Sham (-34%) and SSR (-20%) in the proximal tibia metaphysis at 2-months post-IR; however BV/TV for SSR group was not different than Sham. Both 56Fe and SSR caused significant reduction in trabecular number (Tb.N) compared to Sham (-33% and -16%, respectively). Further, Tb.N for 56Fe (2Gy) was significantly lower than SSR (-21%). Ex vivo culture of marrow cells to assess growth and differentiation of osteoblast lineage cells 6 months post-IR showed that both 56Fe and SSR exposures significantly impaired colony formation compared to Sham (-66% and -54%, respectively), as well as nodule mineralization (-90% and -51%, respectively). Two-way analysis of variance showed that both mechanical loading and radiation reduced BV/TV, mechanical loading reduced trabecular thickness (Tb.Th), and radiation reduced Tb.N, at both time points. To assess acute response to mechanical stimuli, samples were harvested from a subset of Sham-AL (n=5) and SSR-AL (n=5) to measure changes in gene expression levels. Preliminary results indicate that axial loading increased expression of the antioxidant response gene Nfe2l2 and the osteoprogenitor-associated marker Runx2 in the bone marrow cells, and there was an interaction effect between axial loading and radiation at 2-months post-IR. Additional analyses of gene expression levels in the mineralized tissue are in progress. Results indicate that SSR caused persistent impairment of osteoblast colony formation and nodule mineralization 6-mo post-IR. Contrary to our hypothesis, simulated space radiation did not impair the ability of cancellous bone to respond to a mechanical anabolic stimulus, consistent with our previous findings [1]. Hence, compressive loading may be a potential countermeasure against spaceflight-induced bone loss.

  5. Simulated Space Radiation: Murine Skeletal Responses During Recovery and with Mechanical Stimulation

    NASA Technical Reports Server (NTRS)

    Shirazi-Fard, Yasaman; Zaragoza, Josergio; Schreurs, Ann-Sofie; Truong, Tiffany; Tahimic, Candice; Alwood, Joshua S.; Globus, R. K.

    2016-01-01

    Simulated space radiation at doses similar to those of solar particle events or a round-trip sojourn to Mars (1-2Gy) may cause skeletal tissue degradation and deplete stem/progenitor cell pools throughout the body. We hypothesized that simulated space radiation (SSR) causes late, time-dependent deficits in bone structure and bone cell function reflected by changes in gene expression in response to anabolic stimuli. We used a unique sequential dual ion exposure (proton and iron) for SSR to investigate time-dependence of responses in gene expression, cell function, and microarchitecture with respect to radiation and an anabolic stimulus of axial loading (AL). Male 16-wk C57BL6/J mice (n=120 total) were exposed to 0Gy (Sham, n=10), 56Fe (2Gy, positive control dose, n=10), or sequential ions for SSR (1Gy 1H/56Fe/1H, n=10) by total body irradiation (IR), and the tissues were harvested 2 or 6 mo. later. Further, to assess the response to anabolic stimuli, we subjected additional Sham-AL (n=15) and SSR-AL (n=15) groups to rest-inserted tibial axial loading (AL) starting at 1 and 5 months post-IR (-9N, 60 cycles/day, 3 days/wk, 4 wks). Exposure to 56Fe caused a significant reduction in cancellous bone volume fraction (BV/TV) compared to Sham (-34%) and SSR (-20%) in the proximal tibia metaphysis at 2-months post-IR; however BV/TV for SSR group was not different than Sham. Both 56Fe and SSR caused significant reduction in trabecular number (Tb.N) compared to Sham (-33% and -16%, respectively). Further, Tb.N for 56Fe (2Gy) was significantly lower than SSR (-21%). Ex vivo culture of marrow cells to assess growth and differentiation of osteoblast lineage cells 6 months post-IR showed that both 56Fe and SSR exposures significantly impaired colony formation compared to Sham (-66% and -54%, respectively), as well as nodule mineralization (-90% and -51%, respectively). Two-way analysis of variance showed that both mechanical loading and radiation reduced BV/TV, mechanical loading reduced trabecular thickness (Tb.Th), and radiation reduced Tb.N, at both time points. To assess acute response to mechanical stimuli, samples were harvested from a subset of Sham-AL (n=5) and SSR-AL (n=5) to measure changes in gene expression levels. Preliminary results indicate that axial loading increased expression of the antioxidant response gene Nfe2l2 and the osteoprogenitor-associated marker Runx2 in the bone marrow cells, and there was an interaction effect between axial loading and radiation at 2-months post-IR. Additional analyses of gene expression levels in the mineralized tissue are in progress. Results indicate that SSR caused persistent impairment of osteoblast colony formation and nodule mineralization 6-mo post-IR. Contrary to our hypothesis, simulated space radiation did not impair the ability of cancellous bone to respond to a mechanical anabolic stimulus, consistent with our previous findings. Hence, compressive loading may be a potential countermeasure against spaceflight-induced bone loss.

  6. Biaxially mechanical tuning of 2-D reversible and irreversible surface topologies through simultaneous and sequential wrinkling.

    PubMed

    Yin, Jie; Yagüe, Jose Luis; Boyce, Mary C; Gleason, Karen K

    2014-02-26

    Controlled buckling is a facile means of structuring surfaces. The resulting ordered wrinkling topologies provide surface properties and features desired for multifunctional applications. Here, we study the biaxially dynamic tuning of two-dimensional wrinkled micropatterns under cyclic mechanical stretching/releasing/restretching simultaneously or sequentially. A biaxially prestretched PDMS substrate is coated with a stiff polymer deposited by initiated chemical vapor deposition (iCVD). Applying a mechanical release/restretch cycle in two directions loaded simultaneously or sequentially to the wrinkled system results in a variety of dynamic and tunable wrinkled geometries, the evolution of which is investigated using in situ optical profilometry, numerical simulations, and theoretical modeling. Results show that restretching ordered herringbone micropatterns, created through sequential release of biaxial prestrain, leads to reversible and repeatable surface topography. The initial flat surface and the same wrinkled herringbone pattern are obtained alternatively after cyclic release/restretch processes, owing to the highly ordered structure leaving no avenue for trapping irregular topological regions during cycling as further evidenced by the uniformity of strains distributions and negligible residual strain. Conversely, restretching disordered labyrinth micropatterns created through simultaneous release shows an irreversible surface topology whether after sequential or simultaneous restretching due to creation of irregular surface topologies with regions of highly concentrated strain upon formation of the labyrinth which then lead to residual strains and trapped topologies upon cycling; furthermore, these trapped topologies depend upon the subsequent strain histories as well as the cycle. The disordered labyrinth pattern varies after each cyclic release/restretch process, presenting residual shallow patterns instead of achieving a flat state. The ability to dynamically tune the highly ordered herringbone patterning through mechanical stretching or other actuation makes these wrinkles excellent candidates for tunable multifunctional surfaces properties such as reflectivity, friction, anisotropic liquid flow or boundary layer control.

  7. Sequential quadratic programming-based fast path planning algorithm subject to no-fly zone constraints

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang

    2016-08-01

    Path planning plays an important role in aircraft guidance systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternating line segments and circular arcs, in order to reformulate the problem into a static optimization one in terms of the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects with them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
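
    As an illustration of the geometric feasibility conditions described above, the following sketch (hypothetical helper name, not the authors' code) tests whether a straight waypoint leg crosses a circular no-fly zone by clamping the projection of the zone centre onto the segment:

        import numpy as np

        def segment_intersects_circle(p, q, center, radius):
            # True if the leg p->q passes through a circular no-fly zone
            # (assumes p != q; all inputs are 2-D points in consistent units)
            p, q, c = (np.asarray(v, dtype=float) for v in (p, q, center))
            d = q - p
            t = np.clip(np.dot(c - p, d) / np.dot(d, d), 0.0, 1.0)  # clamp projection to the segment
            return np.linalg.norm(c - (p + t * d)) < radius

        # a leg from (0,0) to (6,0) clips a zone of radius 2 centred at (3,1)
        print(segment_intersects_circle((0, 0), (6, 0), (3, 1), 2.0))  # True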

  8. High energy protons generation by two sequential laser pulses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiaofeng; Shen, Baifei, E-mail: bfshen@mail.shcnc.ac.cn, E-mail: zhxm@siom.ac.cn; Zhang, Xiaomei, E-mail: bfshen@mail.shcnc.ac.cn, E-mail: zhxm@siom.ac.cn

    2015-04-15

    The sequential proton acceleration by two laser pulses of relativistic intensity is proposed to produce high energy protons. In the scheme, a relativistic super-Gaussian (SG) laser pulse followed by a Laguerre-Gaussian (LG) pulse irradiates a dense plasma attached to an underdense plasma. A proton beam is produced from the target and accelerated in the radiation pressure regime by the short SG pulse, and then trapped and re-accelerated in a special bubble driven by the LG pulse in the underdense plasma. The advantages of radiation pressure acceleration and the LG transverse structure are combined to achieve the effective trapping and acceleration of protons. In a two-dimensional particle-in-cell simulation, protons of 6.7 GeV are obtained from a 2 × 10²² W/cm² SG laser pulse and an LG pulse at a lower peak intensity.

  9. Virtual reality laparoscopic simulator for assessment in gynaecology.

    PubMed

    Gor, Mounna; McCloy, Rory; Stone, Robert; Smith, Anthony

    2003-02-01

    A validated virtual reality laparoscopic simulator, the Minimally Invasive Surgical Trainer (MIST) 2, was used to assess the psychomotor skills of 21 gynaecologists (2 consultants, 8 registrars and 11 senior house officers). Nine gynaecologists failed to complete the VR tasks at the first attempt and were excluded from sequential evaluation. Each of the remaining 12 gynaecologists was tested on MIST 2 on four occasions within four weeks. The MIST 2 simulator provided quantitative data on time to complete tasks, errors, economy of movement and economy of diathermy use, for both right- and left-hand performance. The results show a significant early learning curve for the majority of tasks, which plateaued by the third session. This suggests a high-quality surgeon-computer interface. MIST 2 provides objective assessment of laparoscopic skills in gynaecologists.

  10. Advanced Numerical Techniques of Performance Evaluation. Volume 2

    DTIC Science & Technology

    1990-06-01

    multiprocessor environment. This factor is determined by the overhead of the primitives available in the system (semaphore, monitor, or message... semaphore, monitor, or message passing primitives) and the programming ability of the user who implements the simulation. ...the sequential... Time Warp Operating System. In Proceedings of the Eleventh ACM Symposium on Operating Systems Principles, pages 77-93, Austin, TX, November 1987. ACM. [12] D.R. Jefferson

  11. A Methodology for Improving the Shipyard Planning Process: Using KVA Analysis, Risk Simulation and Strategic Real Options

    DTIC Science & Technology

    2006-09-30

    allocated to intangible assets. With Procter & Gamble's $53.5 billion acquisition of Gillette, $31.5 billion or 59% of the total purchase price was... outsourcing, alliances, joint ventures) • Compound Option (platform options) • Sequential Options (stage-gate development, R&D, phased...Comparisons • RO/KVA could enhance outsourcing comparisons between the Government's Most Efficient Organization (MEO) and private-sector

  12. Optical architecture design for detection of absorbers embedded in visceral fat.

    PubMed

    Francis, Robert; Florence, James; MacFarlane, Duncan

    2014-05-01

    Optically absorbing ducts embedded in scattering adipose tissue can be injured during laparoscopic surgery. Non-sequential simulations and theoretical analysis compare optical system configurations for detecting these absorbers. For absorbers in deep scattering volumes, trans-illumination is preferred instead of diffuse reflectance. For improved contrast, a scanning source with a large area detector is preferred instead of a large area source with a pixelated detector.

  13. Optical architecture design for detection of absorbers embedded in visceral fat

    PubMed Central

    Francis, Robert; Florence, James; MacFarlane, Duncan

    2014-01-01

    Optically absorbing ducts embedded in scattering adipose tissue can be injured during laparoscopic surgery. Non-sequential simulations and theoretical analysis compare optical system configurations for detecting these absorbers. For absorbers in deep scattering volumes, trans-illumination is preferred instead of diffuse reflectance. For improved contrast, a scanning source with a large area detector is preferred instead of a large area source with a pixelated detector. PMID:24877008

  14. A Sequential Monte Carlo Approach for Streamflow Forecasting

    NASA Astrophysics Data System (ADS)

    Hsu, K.; Sorooshian, S.

    2008-12-01

    As alternatives to traditional physically-based models, Artificial Neural Network (ANN) models offer some advantages with respect to the flexibility of not requiring the precise quantitative mechanism of the process and the ability to train themselves from the data directly. In this study, an ANN model was used to generate one-day-ahead streamflow forecasts from the precipitation input over a catchment. Meanwhile, the ANN model parameters were trained using a Sequential Monte Carlo (SMC) approach, namely the Regularized Particle Filter (RPF). The SMC approaches are known for their capabilities in tracking the states and parameters of a nonlinear dynamic process based on Bayes' rule and effective sampling and resampling strategies. In this study, five years of daily rainfall and streamflow measurements were used for model training. Variable sample sizes of the RPF, from 200 to 2000, were tested. The results show that, after 1000 RPF samples, the simulation statistics, in terms of correlation coefficient, root mean square error, and bias, were stabilized. It is also shown that the forecasted daily flows fit the observations very well, with a correlation coefficient higher than 0.95. The results of RPF simulations were also compared with those from the popular back-propagation ANN training approach. The pros and cons of using the SMC approach and the traditional back-propagation approach will be discussed.
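
    The generic SMC update underlying such filters can be sketched as follows (toy scalar state and invented noise levels; the kernel jitter applied after resampling is what makes the filter "regularized"):

        import numpy as np

        rng = np.random.default_rng(0)

        def rpf_step(particles, weights, observation, predict, likelihood, bandwidth=0.05):
            # one sequential Monte Carlo update with regularized resampling
            particles = predict(particles)                     # propagate each particle
            weights = weights * likelihood(observation, particles)
            weights /= weights.sum()                           # normalize importance weights
            # resample, then jitter with a Gaussian kernel (the "regularized" part)
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles = particles[idx] + bandwidth * rng.standard_normal(len(particles))
            return particles, np.full(len(particles), 1.0 / len(particles))

        # toy use: track a slowly drifting rainfall-runoff coefficient theta
        theta_true, particles = 0.5, rng.uniform(0, 1, 500)
        weights = np.full(500, 1 / 500)
        for t in range(50):
            theta_true += 0.01 * rng.standard_normal()
            obs = theta_true + 0.05 * rng.standard_normal()    # noisy measurement
            particles, weights = rpf_step(
                particles, weights, obs,
                predict=lambda p: p + 0.01 * rng.standard_normal(len(p)),
                likelihood=lambda y, p: np.exp(-0.5 * ((y - p) / 0.05) ** 2))
        print("estimate:", particles.mean(), "truth:", theta_true)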

  15. Evaluation of Lead Release in a Simulated Lead-Free Premise Plumbing System Using a Sequential Sampling Approach

    PubMed Central

    Ng, Ding-Quan; Lin, Yi-Pin

    2016-01-01

    In this pilot study, a modified sampling protocol was evaluated for the detection of lead contamination and locating the source of lead release in a simulated premise plumbing system with one-, three- and seven-day stagnation for a total period of 475 days. Copper pipes, stainless steel taps and brass fittings were used to assemble the “lead-free” system. Sequential sampling using 100 mL was used to detect lead contamination while that using 50 mL was used to locate the lead source. Elevated lead levels, far exceeding the World Health Organization (WHO) guideline value of 10 µg·L⁻¹, persisted for as long as five months in the system. “Lead-free” brass fittings were identified as the source of lead contamination. Physical disturbances, such as renovation works, could cause short-term spikes in lead release. Orthophosphate was able to suppress total lead levels below 10 µg·L⁻¹, but caused “blue water” problems. When orthophosphate addition was ceased, total lead levels began to spike within one week, implying that a continuous supply of orthophosphate was required to control total lead levels. Occasional total lead spikes were observed in one-day stagnation samples throughout the course of the experiments. PMID:26927154

  16. Evaluation of Lead Release in a Simulated Lead-Free Premise Plumbing System Using a Sequential Sampling Approach.

    PubMed

    Ng, Ding-Quan; Lin, Yi-Pin

    2016-02-27

    In this pilot study, a modified sampling protocol was evaluated for the detection of lead contamination and locating the source of lead release in a simulated premise plumbing system with one-, three- and seven-day stagnation for a total period of 475 days. Copper pipes, stainless steel taps and brass fittings were used to assemble the "lead-free" system. Sequential sampling using 100 mL was used to detect lead contamination while that using 50 mL was used to locate the lead source. Elevated lead levels, far exceeding the World Health Organization (WHO) guideline value of 10 µg·L⁻¹, persisted for as long as five months in the system. "Lead-free" brass fittings were identified as the source of lead contamination. Physical disturbances, such as renovation works, could cause short-term spikes in lead release. Orthophosphate was able to suppress total lead levels below 10 µg·L⁻¹, but caused "blue water" problems. When orthophosphate addition was ceased, total lead levels began to spike within one week, implying that a continuous supply of orthophosphate was required to control total lead levels. Occasional total lead spikes were observed in one-day stagnation samples throughout the course of the experiments.

  17. Speciation and transformation of heavy metals during vermicomposting of animal manure.

    PubMed

    Lv, Baoyi; Xing, Meiyan; Yang, Jian

    2016-06-01

    This work was conducted to evaluate the effects of vermicomposting on the speciation and mobility of heavy metals (Zn, Pb, Cr, and Cu) in cattle dung (CD) and pig manure (PM) using the Tessier sequential extraction method. Results showed that the pH, total organic carbon and C/N ratio were reduced, while the electric conductivity and humic acid increased after 90 days of vermicomposting. Moreover, the addition of earthworms could accelerate organic stabilization in vermicomposting. The total heavy metals in final vermicompost from CD and PM were higher than the initial values and the control without worms. Sequential extraction indicated that vermicomposting decreased the migration and availability of heavy metals, and the earthworms could reduce the mobile fraction while increasing the stable fraction of heavy metals. Furthermore, these results indicated that vermicomposting played a positive role in stabilizing heavy metals in the treatment of animal manure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. A hierarchical analysis of terrestrial ecosystem model Biome-BGC: Equilibrium analysis and model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornton, Peter E; Wang, Weile; Law, Beverly E.

    2009-01-01

    The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers by a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrate/analyze Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
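
    The equilibrium-equation idea, solving the steady-state mass balance directly rather than spinning the model up, reduces for a linear toy pool model (values invented, not Biome-BGC parameters) to a single linear solve:

        import numpy as np

        # toy linear pool model dC/dt = I + A @ C with first-order transfers;
        # values are illustrative only, not Biome-BGC parameters
        I = np.array([100.0, 0.0, 0.0])        # external input to pool 1 (gC/m2/yr)
        A = np.array([[-0.50,  0.00,  0.00],   # losses and transfers between
                      [ 0.30, -0.10,  0.00],   # plant -> litter -> soil pools
                      [ 0.00,  0.05, -0.02]])

        # spin-up would integrate dC/dt until it vanishes; the equilibrium
        # equations give the steady state directly from the mass balance:
        C_star = np.linalg.solve(A, -I)
        print("steady-state pools:", np.round(C_star, 1))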

  19. Reliability based design optimization: Formulations and methodologies

    NASA Astrophysics Data System (ADS)

    Agarwal, Harish

    Modern products ranging from simple components to complex systems should be designed to be optimal and reliable. The challenge of modern engineering is to ensure that manufacturing costs are reduced and design cycle times are minimized while achieving requirements for performance and reliability. If the market for the product is competitive, improved quality and reliability can generate very strong competitive advantages. Simulation based design plays an important role in designing almost any kind of automotive, aerospace, and consumer products under these competitive conditions. Single discipline simulations used for analysis are being coupled together to create complex coupled simulation tools. This investigation focuses on the development of efficient and robust methodologies for reliability based design optimization in a simulation based design environment. Original contributions of this research are the development of a novel efficient and robust unilevel methodology for reliability based design optimization, the development of an innovative decoupled reliability based design optimization methodology, the application of homotopy techniques in unilevel reliability based design optimization methodology, and the development of a new framework for reliability based design optimization under epistemic uncertainty. The unilevel methodology for reliability based design optimization is shown to be mathematically equivalent to the traditional nested formulation. Numerical test problems show that the unilevel methodology can reduce computational cost by at least 50% as compared to the nested approach. The decoupled reliability based design optimization methodology is an approximate technique to obtain consistent reliable designs at lower computational expense. Test problems show that the methodology is computationally efficient compared to the nested approach. A framework for performing reliability based design optimization under epistemic uncertainty is also developed. A trust region managed sequential approximate optimization methodology is employed for this purpose. Results from numerical test studies indicate that the methodology can be used for performing design optimization under severe uncertainty.

  20. Biodegradation of PAHs and PCBs in soils and sludges

    USGS Publications Warehouse

    Liu, L.; Tindall, J.A.; Friedel, M.J.

    2007-01-01

    Results from a multi-year, pilot-scale land treatment project for PAHs and PCBs biodegradation were evaluated. A mathematical model, capable of describing sorption, sequestration, and biodegradation in soil/water systems, is applied to interpret the efficacy of a sequential active-passive biotreatment process of organic chemicals on remediation sites. To account for the recalcitrance of PAHs and PCBs in soils and sludges during long-term biotreatment, this model comprises a kinetic equation for the organic chemical intraparticle sequestration process. Model responses were verified by comparison to measurements of biodegradation of PAHs and PCBs in land treatment units; a favorable match was found between them. Model simulations were performed to predict ongoing biodegradation behavior of PAHs and PCBs in land treatment units. Simulation results indicate that complete biostabilization will be achieved when the concentration of reversibly sorbed chemical (S_RA) reduces to undetectable levels, with a certain amount of irreversibly sequestrated residual chemical (S_IA) remaining within the soil particle solid phase. The residual fraction (S_IA) tends to lose its original chemical and biological activity, and hence is much less available, toxic, and mobile than the "free" compounds. Therefore, little or no PAHs and PCBs will leach from the treatment site, posing no threat to human health or the environment. Biotreatment of PAHs and PCBs can be terminated accordingly. Results from the pilot-scale testing data and model calculations also suggest that a significant fraction (10-30%) of high-molecular-weight PAHs and PCBs could be sequestrated and become unavailable for biodegradation. Bioavailability (large K_d, i.e., slow desorption rate) is the key factor limiting PAHs degradation. However, both bioavailability and bioactivity (K in Monod kinetics, i.e., number of microbes, nutrients, electron acceptor, etc.) regulate PCBs biodegradation. The sequential active-passive biotreatment can be a cost-effective approach for remediation of highly hydrophobic organic contaminants. The mathematical model proposed here would be useful in the design and operation of such organic chemical biodegradation processes on remediation sites. © 2007 Springer Science+Business Media B.V.
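
    A caricature of the coupled desorption, sequestration, and biodegradation dynamics described above, with purely illustrative rate constants, can be written as a small ODE system:

        import numpy as np
        from scipy.integrate import solve_ivp

        # illustrative rate constants (not the paper's calibrated values)
        k_des, k_seq = 0.05, 0.01          # desorption, sequestration (1/day)
        mu_max, K = 0.2, 1.0               # Monod kinetics of the degraders

        def rhs(t, y):
            s_ra, s_ia, c = y              # reversibly sorbed, sequestered, dissolved
            desorb = k_des * s_ra
            seq = k_seq * s_ra             # irreversible transfer to the S_IA pool
            degrade = mu_max * c / (K + c) # Monod degradation of dissolved chemical
            return [-desorb - seq, seq, desorb - degrade]

        sol = solve_ivp(rhs, (0, 365), [10.0, 0.0, 0.5])
        print("after 1 yr: S_RA=%.3f  S_IA=%.3f  C=%.3f" % tuple(sol.y[:, -1]))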

  1. Building occupancy simulation and data assimilation using a graph-based agent-oriented model

    NASA Astrophysics Data System (ADS)

    Rai, Sanish; Hu, Xiaolin

    2018-07-01

    Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer from high computation costs when simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
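
    A toy rendering of the graph-based idea, with rooms as graph nodes carrying occupant counts rather than individually simulated agents, and with invented structure and movement rates:

        import random

        # building graph: node -> neighbouring nodes (doors/corridors), invented layout
        graph = {"lobby": ["hall"], "hall": ["lobby", "office", "lab"],
                 "office": ["hall"], "lab": ["hall"]}
        occupancy = {"lobby": 30, "hall": 0, "office": 0, "lab": 0}
        move_prob = 0.2                       # chance an occupant moves each step

        def step(occupancy):
            flows = {n: 0 for n in graph}
            for node, count in occupancy.items():
                movers = sum(random.random() < move_prob for _ in range(count))
                flows[node] -= movers
                for _ in range(movers):       # each mover picks a random neighbour
                    flows[random.choice(graph[node])] += 1
            return {n: occupancy[n] + flows[n] for n in graph}

        for t in range(10):
            occupancy = step(occupancy)
        print(occupancy)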

  2. A Flexible Monitoring Infrastructure for the Simulation Requests

    NASA Astrophysics Data System (ADS)

    Spinoso, V.; Missiato, M.

    2014-06-01

    Running and monitoring simulations usually involves several different aspects of the entire workflow: the configuration of the job, site issues, the software deployment at the site, the file catalogue, and the transfers of the simulated data. In addition, the final product of the simulation is often the result of several sequential steps. This project takes a different approach to monitoring simulation requests. All the necessary data are collected from the central services which drive the submission of the requests and the data management, and stored by a backend into a NoSQL-based data cache; those data can be queried through a Web Service interface, which returns JSON responses and allows users, sites, and physics groups to easily create their own web frontends, aggregating only the needed information. As an example, it will be shown how it is possible to monitor the CMS services (ReqMgr, DAS/DBS, PhEDEx) using a central backend and multiple customized cross-language frontends.

  3. A fast and precise indoor localization algorithm based on an online sequential extreme learning machine.

    PubMed

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-15

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics.
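
    A compact sketch of the recursive least-squares update at the heart of OS-ELM (random, fixed hidden layer; dimensions and data invented for illustration):

        import numpy as np

        rng = np.random.default_rng(1)
        n_in, n_hidden = 5, 40                       # e.g. 5 RSSI features
        W = rng.standard_normal((n_in, n_hidden))    # random, fixed hidden weights
        b = rng.standard_normal(n_hidden)

        def hidden(X):
            return np.tanh(X @ W + b)                # hidden-layer activations

        # initialization phase on a first batch (X0, T0)
        X0, T0 = rng.standard_normal((100, n_in)), rng.standard_normal((100, 2))
        H0 = hidden(X0)
        P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))
        beta = P @ H0.T @ T0

        # online sequential phase: fold in each new chunk without retraining
        def oselm_update(P, beta, Xk, Tk):
            Hk = hidden(Xk)
            S = np.linalg.inv(np.eye(len(Xk)) + Hk @ P @ Hk.T)
            P = P - P @ Hk.T @ S @ Hk @ P
            beta = beta + P @ Hk.T @ (Tk - Hk @ beta)
            return P, beta

        P, beta = oselm_update(P, beta, rng.standard_normal((10, n_in)),
                               rng.standard_normal((10, 2)))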

  4. Simultaneous Versus Sequential Presentation in Testing Recognition Memory for Faces.

    PubMed

    Finley, Jason R; Roediger, Henry L; Hughes, Andrea D; Wahlheim, Christopher N; Jacoby, Larry L

    2015-01-01

    Three experiments examined the issue of whether faces could be better recognized in a simultaneous test format (2-alternative forced choice [2AFC]) or a sequential test format (yes-no). All experiments showed that when target faces were present in the test, the simultaneous procedure led to superior performance (area under the ROC curve), whether lures were high or low in similarity to the targets. However, when a target-absent condition was used in which no lures resembled the targets but the lures were similar to each other, the simultaneous procedure yielded higher false alarm rates (Experiments 2 and 3) and worse overall performance (Experiment 3). This pattern persisted even when we excluded responses that participants opted to withhold rather than volunteer. We conclude that for the basic recognition procedures used in these experiments, simultaneous presentation of alternatives (2AFC) generally leads to better discriminability than does sequential presentation (yes-no) when a target is among the alternatives. However, our results also show that the opposite can occur when there is no target among the alternatives. An important future step is to see whether these patterns extend to more realistic eyewitness lineup procedures. The pictures used in the experiment are available online at http://www.press.uillinois.edu/journals/ajp/media/testing_recognition/.

  5. Novel Designs of Quantum Reversible Counters

    NASA Astrophysics Data System (ADS)

    Qi, Xuemei; Zhu, Haihong; Chen, Fulong; Zhu, Junru; Zhang, Ziyang

    2016-11-01

    Reversible logic, as an interesting and important issue, has been widely used in designing combinational and sequential circuits for low-power and high-speed computation. Though a significant amount of work has been done on reversible combinational logic, the realization of reversible sequential circuits is still at a premature stage. The reversible counter is not only an important part of sequential circuits but also an essential part of quantum circuit systems. In this paper, we designed two kinds of novel reversible counters. In order to construct the counters, the innovative reversible T Flip-flop Gate (TFG), T flip-flop block (T_FF) and JK flip-flop block (JK_FF) are proposed. Based on the above blocks and some existing reversible gates, a 4-bit binary-coded decimal (BCD) counter and a controlled Up/Down synchronous counter are designed. With the help of the Verilog hardware description language (Verilog HDL), the counters above have been modeled and verified. According to the simulation results, our circuits' logic structures are validated. Compared to existing designs in terms of quantum cost (QC), delay (DL) and garbage outputs (GBO), it can be concluded that our designs perform better than the others. There is no doubt that they can be used as a kind of important storage component to be applied in future low-power computing systems.

  6. Using a water-confined carbon nanotube to probe the electricity of sequential charged segments of macromolecules

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Zhao, Yan-Jiao; Huang, Ji-Ping

    2012-07-01

    The detection of macromolecular conformation is particularly important in many physical and biological applications. Here we theoretically explore a method for achieving this detection by probing the electricity of sequential charged segments of macromolecules. Our analysis is based on molecular dynamics simulations, and we investigate a single file of water molecules confined in a half-capped single-walled carbon nanotube (SWCNT) with an external electric charge of +e or -e (e is the elementary charge). The charge is located in the vicinity of the cap of the SWCNT and along the centerline of the SWCNT. We reveal the picosecond timescale for the re-orientation (namely, from one direction to the other) of the water molecules in response to a switch in the charge signal, -e → +e or +e → -e. Our results are well understood by taking into account the electrical interactions between the water molecules and between the water molecules and the external charge. Because such signals of re-orientation can be magnified and transported according to Tu et al. [2009 Proc. Natl. Acad. Sci. USA 106 18120], it becomes possible to record fingerprints of electric signals arising from sequential charged segments of a macromolecule, which are expected to be useful for recognizing the conformations of some particular macromolecules.

  7. Sequential change detection and monitoring of temporal trends in random-effects meta-analysis.

    PubMed

    Dogo, Samson Henry; Clark, Allan; Kulinskaya, Elena

    2017-06-01

    Temporal changes in magnitude of effect sizes reported in many areas of research are a threat to the credibility of the results and conclusions of meta-analysis. Numerous sequential methods for meta-analysis have been proposed to detect changes and monitor trends in effect sizes so that a meta-analysis can be updated when necessary and interpreted based on the time it was conducted. The difficulties of sequential meta-analysis under the random-effects model are caused by dependencies in increments introduced by the estimation of the heterogeneity parameter τ². In this paper, we propose the use of a retrospective cumulative sum (CUSUM)-type test with bootstrap critical values. This method allows retrospective analysis of the past trajectory of cumulative effects in random-effects meta-analysis and its visualization on a chart similar to a CUSUM chart. Simulation results show that the new method demonstrates good control of Type I error regardless of the number or size of the studies and the amount of heterogeneity. Application of the new method is illustrated on two examples of medical meta-analyses. © 2016 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
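
    The retrospective CUSUM idea can be sketched in a few lines; here a permutation of study order stands in for the paper's bootstrap when calibrating the critical value, and the effect sizes are toy data:

        import numpy as np

        rng = np.random.default_rng(2)

        def cusum(effects):
            # cumulative sums of deviations from the overall mean effect
            return np.cumsum(effects - effects.mean())

        effects = rng.normal(0.3, 0.1, 25)           # 25 study effect sizes (toy data)
        effects[15:] += 0.25                         # drift in the later studies
        stat = np.max(np.abs(cusum(effects)))

        # critical value under "no change": permute study order many times
        boot = [np.max(np.abs(cusum(rng.permutation(effects)))) for _ in range(2000)]
        print("CUSUM stat %.3f, 95%% critical value %.3f" % (stat, np.quantile(boot, 0.95)))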

  8. Applying spatial analysis techniques to assess the suitability of multipurpose uses of spring water in the Jiaosi Hot Spring Region, Taiwan

    NASA Astrophysics Data System (ADS)

    Jang, Cheng-Shin

    2016-04-01

    The Jiaosi Hot Spring Region is located in northeastern Taiwan and is rich in geothermal springs. The geothermal development of the Jiaosi Hot Spring Region dates back to the 18th century, and currently the spring water is processed for various uses, including irrigation, aquaculture, swimming, bathing, foot spas, and recreational tourism. Because of the proximity of the Jiaosi Hot Spring Region to the metropolitan area of Taipei City, the hot spring resources in this region attract millions of tourists annually. Recently, the Taiwan government has been paying more attention to surveying the spring water temperatures in the Jiaosi Hot Spring Region because of severe spring water overexploitation, which has caused a significant decline in spring water temperatures. Furthermore, the temperature of spring water is a reliable indicator for exploring the occurrence and evolution of springs and strongly affects hydrochemical reactions, components, and magnitudes. The multipurpose uses of spring water can be dictated by the temperature of the water. Therefore, accurately estimating the temperature distribution of the spring water in the Jiaosi Hot Spring Region is critical to facilitate the sustainable development and management of the multipurpose uses of the hot spring resources. To evaluate the suitability of spring water for these various uses, this study spatially characterized the spring water temperatures of the Jiaosi Hot Spring Region using ordinary kriging (OK), sequential Gaussian simulation (SGS), and a geographical information system (GIS). First, variogram analyses were used to determine the spatial variability of spring water temperatures. Next, OK and SGS were adopted to model the spatial distributions and uncertainty of the spring water temperatures. Finally, the land use (i.e., agriculture, dwelling, public land, and recreation) was determined and combined with the estimated distributions of the spring water temperatures using GIS. A suitable development strategy for the multipurpose uses of spring water is proposed according to the integration of the land use and spring water temperatures. The study results indicate that OK, SGS, and GIS are capable of characterizing spring water temperatures and the suitability of multipurpose uses of spring water. SGS realizations are more robust than OK estimates for characterizing spring water temperatures. Furthermore, current land use is almost ideal in the Jiaosi Hot Spring Region according to the estimated spatial pattern of spring water temperatures. Keywords: Hot spring; Temperature; Land use; Ordinary kriging; Sequential Gaussian simulation; Geographical information system
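
    Since SGS is the central estimator here, a minimal one-dimensional sketch may be useful: simple kriging from data plus previously simulated nodes along a random path, assuming normal-score transformed, zero-mean values and an exponential covariance (all names and parameters illustrative):

        import numpy as np

        rng = np.random.default_rng(3)

        def cov(h, sill=1.0, a=10.0):
            return sill * np.exp(-np.abs(h) / a)      # exponential covariance model

        def sgs_1d(xs, data_x, data_v, nugget=1e-9):
            # minimal sequential Gaussian simulation along a line of nodes xs
            known_x, known_v = list(data_x), list(data_v)
            out = np.empty(len(xs))
            for i in rng.permutation(len(xs)):        # random simulation path
                kx, kv = np.array(known_x), np.array(known_v)
                C = cov(kx[:, None] - kx[None, :]) + nugget * np.eye(len(kx))
                c0 = cov(kx - xs[i])
                w = np.linalg.solve(C, c0)            # simple kriging weights
                mean = w @ kv                         # assumes a zero-mean normal-score field
                var = max(cov(0.0) - w @ c0, 0.0)
                out[i] = rng.normal(mean, np.sqrt(var))  # draw, then condition on it
                known_x.append(xs[i]); known_v.append(out[i])
            return out

        grid = np.linspace(0.0, 100.0, 50)
        realization = sgs_1d(grid, data_x=[10.0, 60.0], data_v=[1.2, -0.4])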

  9. Solving constrained minimum-time robot problems using the sequential gradient restoration algorithm

    NASA Technical Reports Server (NTRS)

    Lee, Allan Y.

    1991-01-01

    Three constrained minimum-time control problems of a two-link manipulator are solved using the Sequential Gradient and Restoration Algorithm (SGRA). The inequality constraints considered are reduced via Valentine-type transformations to nondifferential path equality constraints. The SGRA is then used to solve these transformed problems with equality constraints. The results obtained indicate that at least one of the two controls is at its limits at any instant in time. The remaining control then adjusts itself so that none of the system constraints is violated. Hence, the minimum-time control is either a pure bang-bang control or a combined bang-bang/singular control.

  10. Applying the recurrence quantification analysis method for analyzing the recurrence of simulated multiple African easterly waves in 2006

    NASA Astrophysics Data System (ADS)

    Reyes, T.; Shen, B. W.; Wu, Y.; Faghih-Naini, S.; Li, J.

    2017-12-01

    In late August 2006, six African easterly waves (AEWs) appeared sequentially over the African continent during a 30-day period. With a global model of 1/4-degree resolution, the statistics of these AEWs were realistically captured. More interestingly, the formation, subsequent intensification, and movement of Hurricane Helene (2006) were simulated to a degree of satisfaction during the model integration from Day 22 to 30 (Shen et al., 2010). We then developed a parallel ensemble empirical mode decomposition method (PEEMD; Shen et al. 2012, 2017; Cheung et al. 2013) to reveal the role of downscaling processes associated with the environmental flows in determining the timing and location of Helene's formation (Wu and Shen, 2016), supporting its practical predictability at extended-range time scales. Recently, further analysis of the correlation coefficients (CCs) between the simulated temperature and reanalysis data showed that the CCs are above 0.65 during the 30-day simulations but display oscillations. While high CCs are consistent with the accurate simulations of the AEWs and Hurricane Helene, oscillations may indicate inaccurate simulations of moving speeds (i.e., an inaccurate phase) as compared to observations. The observed AEWs have comparable but slightly different periods. To quantitatively examine this space-varying feature in observations and the temporal oscillations in the CCs of the simulations, we select recurrence quantification analysis (RQA) methods and the recurrence plot (RP) in order to account for the local nature of these features. A recurrence is defined when the trajectory returns to the neighborhood of a previously visited state. With the RQA methods, we can compute the "recurrence rate" and "determinism" present in the RP in order to reveal the degree of recurrence and determinism (or "predictability") of the recurrent solutions. To verify our implementations in Python, we applied our methods to analyze idealized solutions (e.g., quasi-periodic solutions and the limit torus) from the three-dimensional and five-dimensional dissipative or non-dissipative Lorenz models (Shen and Faghih-Naini, 2017). After verification, we apply the RQA methods to analyze the 30-day reanalysis and simulation data. In this talk, we will present preliminary but promising results.
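
    The RQA quantities named here are straightforward to compute from a binary recurrence matrix; a minimal sketch for a scalar time series (the threshold eps and the inclusion of the main diagonal are simplifications) is:

        import numpy as np

        def recurrence_plot(x, eps):
            # binary recurrence matrix: R[i,j] = 1 if states i and j are close
            return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

        def recurrence_rate(R):
            return R.mean()

        def determinism(R, lmin=2):
            # fraction of recurrent points on diagonal lines of length >= lmin
            n = len(R)
            det_points, all_points = 0, R.sum()
            for k in range(-(n - 1), n):
                run = 0
                for v in list(np.diagonal(R, k)) + [0]:   # sentinel flushes last run
                    if v:
                        run += 1
                    else:
                        det_points += run if run >= lmin else 0
                        run = 0
            return det_points / all_points if all_points else 0.0

        t = np.linspace(0, 20 * np.pi, 400)
        R = recurrence_plot(np.sin(t), eps=0.1)           # periodic: high determinism
        print(recurrence_rate(R), determinism(R))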

  11. Modeling of the TSDE Heater Test to Investigate Crushed Salt Reconsolidation and Rock Salt Creep for the Underground Disposal of High-Level Nuclear Waste

    NASA Astrophysics Data System (ADS)

    Blanco Martin, L.; Rutqvist, J.; Birkholzer, J. T.; Wolters, R.; Lux, K. H.

    2014-12-01

    Rock salt is a potential medium for the underground disposal of nuclear waste because it has several assets, in particular its water and gas tightness in the undisturbed state, its ability to heal induced fractures and its high thermal conductivity as compared to other shallow-crustal rocks. In addition, the run-of-mine granular salt may be used to backfill the mined open spaces. We present simulation results associated with coupled thermal, hydraulic and mechanical processes in the TSDE (Thermal Simulation for Drift Emplacement) experiment, conducted in the Asse salt mine in Germany [1]. During this unique test, conceived to simulate reference repository conditions for spent nuclear fuel, a significant amount of data (temperature, stress changes and displacements, among others) was measured at 20 cross-sections, distributed in two drifts in which a total of six electrical heaters were emplaced. The drifts were subsequently backfilled with crushed salt. This test has been modeled in three dimensions, using two sequential simulators for flow (mass and heat) and geomechanics, TOUGH-FLAC and FLAC-TOUGH [2]. These simulators have recently been updated to accommodate large strains and time-dependent rheology. The numerical predictions obtained by the two simulators are compared within the framework of an international benchmark exercise, and also with experimental data. Subsequently, a re-calibration of some parameters has been performed. Modeling coupled processes in saliniferous media for nuclear waste disposal is a novel approach, and in this study it has led to the determination of some creep parameters that are very difficult to assess at the laboratory scale because they require extremely low strain rates. Moreover, the results from the benchmark are very satisfactory and validate the capabilities of the two simulators used to study coupled thermal, mechanical and hydraulic (multi-component, multi-phase) processes relative to the underground disposal of high-level nuclear waste in rock salt. References: [1] Bechthold et al., 1999. BAMBUS-I Project. Euratom, Report EUR19124-EN. [2] Blanco Martín et al., 2014. Comparison of two sequential simulators to investigate thermal-hydraulic-mechanical processes related to nuclear waste isolation in saliniferous formations. In preparation.

  12. Multilevel Mixture Kalman Filter

    NASA Astrophysics Data System (ADS)

    Guo, Dong; Wang, Xiaodong; Chen, Rong

    2004-12-01

    The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with sampling from the highest-level sampling space and then drawing samples from the associated subspace of the newly drawn samples in a lower-level sampling space, until reaching the desired sampling space. Such a multilevel sampling scheme can be used in conjunction with delayed estimation methods, such as the delayed-sample method, resulting in the delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.

  13. Model for wind resource analysis and for wind farm planning

    NASA Astrophysics Data System (ADS)

    Rozsavolgyi, K.

    2008-12-01

    Due to the ever increasing anthropogenic environmental pollution and the worldwide energy demand, the research and exploitation of environment-friendly renewable energy sources like wind, solar, geothermal, and biomass become more and more important. During the last decade wind energy utilization has developed dynamically. Over just the past seven years, annual worldwide growth in installed wind capacity has been near 30%, and over 94,000 MW are currently installed worldwide. Besides important economic incentives, the most extensive and most accurate scientific results are required in order to provide beneficial help for the regional planning of wind farms, to find appropriate sites for the optimal exploitation of this renewable energy source. This research is on the spatial allocation of possible wind energy usage for wind farms. In order to carry this out, a new model (CMPAM = Complex Multifactoral Polygenetic Adaptive Model) is being developed, which basically is a wind climate-oriented system, but other kinds of factors are also considered. With this model, those areas and terrains can be located where the construction of large wind farms would be reasonable under the given conditions. This model consists of different sub-modules, such as the wind field modeling sub-module (CMPAM/W), which is the main focus of this model development procedure. The wind field modeling core of CMPAM is mainly based on sGs (sequential Gaussian simulation) and hence geostatistics, but atmospheric physics and GIS are used as well. For the application developed for the test area (Hungary), WAsP visualization results from 10 m height were used as input data. These data were geocorrected (GIS geometric correction) before being used for further calculations. Using optimized variography and sequential Gaussian simulation, results were produced for the test area (Hungary) at different heights and summarized. Furthermore, an exponential regression function describing the vertical wind profile was also established. The following altitudes were examined: 10 m, 30 m, 60 m, 80 m, 100 m, 120 m and 140 m. With the help of the complex analyses of CMPAM, in which not just wind climatic and meteorological factors are considered, detailed results have been produced up to 100 m height. Results at this altitude were analyzed and explained in more detail because this altitude proved to be the first height that can ensure adequate wind speed for larger wind farms for wind energy exploitation in the test area. Keywords: wind site assessment, wind field modeling, complex modeling for planning of wind farm, sequential Gaussian simulation, GIS, wind profile
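
    As an illustration of regressing a vertical wind profile on the examined heights, the following sketch fits the common power-law profile in log-log space (speeds invented; the paper's own exponential regression form is not reproduced here):

        import numpy as np

        # illustrative mean speeds at the examined heights (m, m/s); invented numbers
        z = np.array([10, 30, 60, 80, 100, 120, 140], dtype=float)
        v = np.array([4.1, 5.0, 5.7, 6.0, 6.2, 6.4, 6.5])

        # fit the power-law profile v(z) = v_ref * (z / z_ref)**alpha by linear
        # regression in log-log space (one common wind-profile regression form)
        alpha, ln_vref = np.polyfit(np.log(z / 10.0), np.log(v), 1)
        v_ref = np.exp(ln_vref)
        print("alpha=%.3f, predicted v(100 m)=%.2f m/s"
              % (alpha, v_ref * (100 / 10.0) ** alpha))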

  14. Bridging the gap between computation and clinical biology: validation of cable theory in humans

    PubMed Central

    Finlay, Malcolm C.; Xu, Lei; Taggart, Peter; Hanson, Ben; Lambiase, Pier D.

    2013-01-01

    Introduction: Computerized simulations of cardiac activity have significantly contributed to our understanding of cardiac electrophysiology, but techniques of simulation based on patient-acquired data remain in their infancy. We sought to integrate data acquired from human electrophysiological studies into patient-specific models, and validated this approach by testing whether electrophysiological responses to sequential premature stimuli could be predicted in a quantitatively accurate manner. Methods: Eleven patients with structurally normal hearts underwent electrophysiological studies. Semi-automated analysis was used to reconstruct activation and repolarization dynamics for each electrode. These S2 extrastimulus data were used to inform individualized models of cardiac conduction, including a novel derivation of conduction velocity restitution. Activation dynamics of multiple premature extrastimuli were then predicted from this model and compared against measured patient data as well as data derived from the ten Tusscher cell-ionic model. Results: Activation dynamics following a premature S3 were significantly different from those after an S2. Patient-specific models demonstrated accurate prediction of the S3 activation wave (Pearson's R2 = 0.90, median error 4%). Examination of the modeled conduction dynamics allowed inferences into the spatial dispersion of activation delay. Further validation was performed against data from the ten Tusscher cell-ionic model, with our model accurately recapitulating predictions of repolarization times (R2 = 0.99). Conclusions: Simulations based on clinically acquired data can be used to successfully predict complex activation patterns following sequential extrastimuli. Such modeling techniques may be useful as a method of incorporation of clinical data into predictive models. PMID:24027527

  15. Modeling the effects of naturally occurring organic carbon on chlorinated ethene transport to a public supply well

    USGS Publications Warehouse

    Chapelle, Francis H.; Kauffman, Leon J.; Widdowson, Mark A.

    2013-01-01

    The vulnerability of public supply wells to chlorinated ethene (CE) contamination in part depends on the availability of naturally occurring organic carbon to consume dissolved oxygen (DO) and initiate reductive dechlorination. This was quantified by building a mass balance model of the Kirkwood-Cohansey aquifer, which is widely used for public water supply in New Jersey. This model was built by telescoping a calibrated regional three-dimensional (3D) MODFLOW model to the approximate capture zone of a single public supply well that has a history of CE contamination. This local model was then used to compute a mass balance between dissolved organic carbon (DOC), particulate organic carbon (POC), and adsorbed organic carbon (AOC) that act as electron donors and DO, CEs, ferric iron, and sulfate that act as electron acceptors (EAs) using the Sequential Electron Acceptor Model in three dimensions (SEAM3D) code. SEAM3D was constrained by varying concentrations of DO and DOC entering the aquifer via recharge, varying the bioavailable fraction of POC in aquifer sediments, and comparing observed and simulated vertical concentration profiles of DO and DOC. This procedure suggests that approximately 15% of the POC present in aquifer materials is readily bioavailable. Model simulations indicate that transport of perchloroethene (PCE) and its daughter products trichloroethene (TCE), cis-dichloroethene (cis-DCE), and vinyl chloride (VC) to the public supply well is highly sensitive to the assumed bioavailable fraction of POC, concentrations of DO entering the aquifer with recharge, and the position of simulated PCE source areas in the flow field. The results are less sensitive to assumed concentrations of DOC in aquifer recharge. The mass balance approach used in this study also indicates that hydrodynamic processes such as advective mixing, dispersion, and sorption account for a significant amount of the observed natural attenuation in this system.

  16. Modeling the Effects of Naturally Occurring Organic Carbon on Chlorinated Ethene Transport to a Public Supply Well†

    PubMed Central

    Chapelle, Francis H; Kauffman, Leon J; Widdowson, Mark A

    2014-01-01

    The vulnerability of public supply wells to chlorinated ethene (CE) contamination in part depends on the availability of naturally occurring organic carbon to consume dissolved oxygen (DO) and initiate reductive dechlorination. This was quantified by building a mass balance model of the Kirkwood-Cohansey aquifer, which is widely used for public water supply in New Jersey. This model was built by telescoping a calibrated regional three-dimensional (3D) MODFLOW model to the approximate capture zone of a single public supply well that has a history of CE contamination. This local model was then used to compute a mass balance between dissolved organic carbon (DOC), particulate organic carbon (POC), and adsorbed organic carbon (AOC) that act as electron donors and DO, CEs, ferric iron, and sulfate that act as electron acceptors (EAs) using the Sequential Electron Acceptor Model in three dimensions (SEAM3D) code. SEAM3D was constrained by varying concentrations of DO and DOC entering the aquifer via recharge, varying the bioavailable fraction of POC in aquifer sediments, and comparing observed and simulated vertical concentration profiles of DO and DOC. This procedure suggests that approximately 15% of the POC present in aquifer materials is readily bioavailable. Model simulations indicate that transport of perchloroethene (PCE) and its daughter products trichloroethene (TCE), cis-dichloroethene (cis-DCE), and vinyl chloride (VC) to the public supply well is highly sensitive to the assumed bioavailable fraction of POC, concentrations of DO entering the aquifer with recharge, and the position of simulated PCE source areas in the flow field. The results are less sensitive to assumed concentrations of DOC in aquifer recharge. The mass balance approach used in this study also indicates that hydrodynamic processes such as advective mixing, dispersion, and sorption account for a significant amount of the observed natural attenuation in this system. PMID:24372440

  17. Physical and Biological Carbon Isotope Fractionation in Methane During Gas-Push-Pull-Tests

    NASA Astrophysics Data System (ADS)

    Gonzalez-Gil, G.; Schroth, M. H.; Gomez, K.; Zeyer, J.

    2005-12-01

    Stable isotope analyses have become a common tool to assess microbially-mediated processes in subsurface environments. We investigated if stable carbon isotope analysis can be used as a tool to complement gas push-pull tests (GPPTs), a novel technique that was recently developed and tested for the in-situ quantification of CH4 oxidation in soils. During a GPPT a gas mixture containing CH4, O2 and nonreactive tracer gases is injected into the soil, where CH4 is oxidized by indigenous microorganisms. Thereafter, a blend of injected gas mixture and soil air is extracted from the same location, and CH4 oxidation is quantified from an analysis of extracted CH4 and tracer gases. To assess the magnitude of physical isotope fractionation due to molecular diffusion during GPPTs, we conducted laboratory experiments in the absence of microbial activity in a 1m-high, 1m-diameter tank filled with dry sand. During the GPPTs' extraction phase, the isotopic composition of methane was analyzed. Results indicated strong carbon isotope fractionation (>20 per mil) during GPPTs. To assess the combined effect of physical and biological isotope fractionation, numerical simulations of GPPTs were conducted in which microbial CH4 isotope fractionation was simulated using first-order rate constants and microbial kinetic isotope fractionation factors previously reported for methane oxidation in landfill environments. Results of these simulations indicated that for small CH4 oxidation rates, overall isotope fractionation in CH4 is dominated by physical fractionation. Conversely, for high CH4 oxidation rates, overall fractionation is dominated by biological fractionation. Thus, CH4 isotope fractionation data alone from a single GPPT cannot be used to assess microbial CH4 oxidation. However, biological fractionation may be quantified if physical fractionation due to diffusion is known. This can be achieved by conducting two sequential GPPTs, with microbial activity being inhibited in the second test.

  18. Sequential single shot X-ray photon correlation spectroscopy at the SACLA free electron laser

    DOE PAGES

    Lehmkühler, Felix; Kwaśniewski, Paweł; Roseker, Wojciech; ...

    2015-11-27

    In this study, hard X-ray free electron lasers allow for the first time access to the dynamics of condensed matter samples ranging from femtoseconds to several hundred seconds. In particular, the exceptionally large transverse coherence of the X-ray pulses and the high time-averaged flux promise to reach time and length scales that have not been accessible up to now with storage ring based sources. However, due to the fluctuations originating from the stochastic nature of the self-amplified spontaneous emission (SASE) process, the application of well established techniques such as X-ray photon correlation spectroscopy (XPCS) is challenging. Here we demonstrate a single-shot based sequential XPCS study on a colloidal suspension with a relaxation time comparable to the SACLA free-electron laser pulse repetition rate. High quality correlation functions could be extracted without any indications of sample damage. This opens the way for systematic sequential XPCS experiments at FEL sources.
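
    The correlation functions in question are intensity autocorrelations g2(tau); a minimal estimator over a stack of speckle frames for a single q-bin (toy, uncorrelated data) might read:

        import numpy as np

        def g2(intensity):
            # intensity autocorrelation g2(tau) from a sequence of speckle frames;
            # intensity has shape (n_frames, n_pixels) for one q-bin
            n = len(intensity)
            mean_i = intensity.mean(axis=0)
            out = []
            for tau in range(1, n // 2):
                num = (intensity[:-tau] * intensity[tau:]).mean(axis=0)
                out.append((num / mean_i**2).mean())  # average over equivalent pixels
            return np.array(out)

        rng = np.random.default_rng(4)
        frames = rng.exponential(1.0, size=(200, 500))  # toy uncorrelated speckle
        print(g2(frames)[:5])                           # ~1.0 for uncorrelated frames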

  19. Prediction of rat protein subcellular localization with pseudo amino acid composition based on multiple sequential features.

    PubMed

    Shi, Ruijia; Xu, Cunshuan

    2011-06-01

    The study of rat proteins is an indispensable task in experimental medicine and drug development. The function of a rat protein is closely related to its subcellular location. Based on the above concept, we construct a benchmark rat protein dataset and develop a combined approach for predicting the subcellular localization of rat proteins. From the protein primary sequence, multiple sequential features are obtained by using discrete Fourier analysis, a position conservation scoring function and increment of diversity, and these sequential features are selected as input parameters of a support vector machine. By the jackknife test, the overall success rate of prediction is 95.6% on the rat protein dataset. Our method was also tested on the apoptosis protein dataset and the Gram-negative bacterial protein dataset with the jackknife test; the overall success rates are 89.9% and 96.4%, respectively. The above results indicate that our proposed method is quite promising and may play a complementary role to the existing predictors in this area.

  20. Sequential single shot X-ray photon correlation spectroscopy at the SACLA free electron laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehmkühler, Felix; Kwaśniewski, Paweł; Roseker, Wojciech

    In this study, hard X-ray free electron lasers allow for the first time access to the dynamics of condensed matter samples ranging from femtoseconds to several hundred seconds. In particular, the exceptionally large transverse coherence of the X-ray pulses and the high time-averaged flux promise to reach time and length scales that have not been accessible up to now with storage ring based sources. However, due to the fluctuations originating from the stochastic nature of the self-amplified spontaneous emission (SASE) process, the application of well established techniques such as X-ray photon correlation spectroscopy (XPCS) is challenging. Here we demonstrate a single-shot based sequential XPCS study on a colloidal suspension with a relaxation time comparable to the SACLA free-electron laser pulse repetition rate. High quality correlation functions could be extracted without any indications of sample damage. This opens the way for systematic sequential XPCS experiments at FEL sources.

  1. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences in parallelizing the sequential implementations of the NAS benchmarks using compiler directives on the SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high-performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high-performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow users to exploit parallelism. Native compilers on the SGI Origin2000 support multiprocessing directives that allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing the sequential implementations of the NAS benchmarks. The results reported in this paper indicate that, with minimal effort, the performance gain is comparable to that of the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  2. Effects of musical training on sound pattern processing in high-school students.

    PubMed

    Wang, Wenjung; Staffaroni, Laura; Reid, Errold; Steinschneider, Mitchell; Sussman, Elyse

    2009-05-01

    Recognizing melody in music involves detection of both the pitch intervals and the silences between sequentially presented sounds. This study tested the hypothesis that active musical training in adolescents facilitates the ability to passively detect sequential sound patterns compared to musically non-trained age-matched peers. Twenty adolescents, aged 15-18 years, were divided into groups according to their musical training and current experience. A fixed-order tone pattern was presented at various stimulus rates while the electroencephalogram was recorded. The influence of musical training on passive auditory processing of the sound patterns was assessed using components of event-related brain potentials (ERPs). The mismatch negativity (MMN) ERP component was elicited at different stimulus onset asynchrony (SOA) conditions in musicians than in non-musicians, indicating that musically active adolescents were able to detect sound patterns across longer time intervals than their age-matched peers. Musical training thus facilitates detection of auditory patterns, enabling automatic recognition of sequential sound patterns over longer time periods than in non-musician counterparts.

  3. Type I error probability spending for post-market drug and vaccine safety surveillance with binomial data.

    PubMed

    Silva, Ivair R

    2018-01-15

    Type I error probability spending functions are commonly used for designing sequential analysis of binomial data in clinical trials, and they are also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, in clinical trials, it is important to minimize the expected sample size when the null hypothesis is not rejected. In post-market safety surveillance that is not the case: especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to minimize is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is better suited for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
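
    To illustrate the convex-versus-concave distinction, the sketch below uses the standard power family of spending functions, alpha(t) = alpha * t^rho, where rho > 1 gives a convex shape (error spent late) and rho < 1 a concave one (error spent early). The power family is an illustrative assumption, not necessarily the exact functions studied in the paper.

```python
import numpy as np

def power_spending(t, alpha=0.05, rho=2.0):
    """Power-family Type I error spending: alpha(t) = alpha * t**rho.

    rho > 1 gives a convex shape (spends error late, as in many
    clinical-trial designs); rho < 1 gives a concave shape (spends
    error early, favouring quick signal detection in surveillance).
    """
    t = np.clip(t, 0.0, 1.0)
    return alpha * t**rho

# Error spent at each of ten equally spaced interim looks.
looks = np.linspace(0.1, 1.0, 10)
convex = np.diff(power_spending(looks, rho=3.0), prepend=0.0)
concave = np.diff(power_spending(looks, rho=0.5), prepend=0.0)
print("convex increments :", np.round(convex, 4))
print("concave increments:", np.round(concave, 4))
```

    The printed increments make the trade-off concrete: the concave schedule concentrates its Type I error budget in the earliest looks, which is what favours early signal detection.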

  4. A Comparison of Right Unilateral and Sequential Bilateral Repetitive Transcranial Magnetic Stimulation for Major Depression: A Naturalistic Clinical Australian Study.

    PubMed

    Galletly, Cherrie A; Carnell, Benjamin L; Clarke, Patrick; Gill, Shane

    2017-03-01

    A great deal of research has established the efficacy of repetitive transcranial magnetic stimulation (rTMS) in the treatment of depression. However, questions remain about the optimal method to deliver treatment. One area requiring consideration is the difference in efficacy between bilateral and unilateral treatment protocols. This study aimed to compare the effectiveness of sequential bilateral rTMS and right unilateral rTMS. A total of 135 patients participated in the study, receiving either bilateral rTMS (N = 57) or right unilateral rTMS (N = 78). Treatment response was assessed using the Hamilton depression rating scale. Sequential bilateral rTMS had a higher response rate than right unilateral (43.9% vs 30.8%), but this difference was not statistically significant. This was also the case for remission rates (33.3% vs 21.8%, respectively). Controlling for pretreatment severity of depression, the results did not indicate a significant difference between the protocols with regard to posttreatment Hamilton depression rating scale scores. The current study found no statistically significant differences in response and remission rates between sequential bilateral rTMS and right unilateral rTMS. Given the shorter treatment time and the greater safety and tolerability of right unilateral rTMS, this may be a better choice than bilateral treatment in clinical settings.

  5. Sequential disinfection of E. coli O157:H7 on shredded lettuce leaves by aqueous chlorine dioxide, ozonated water, and thyme essential oil

    NASA Astrophysics Data System (ADS)

    Singh, Nepal; Singh, Rakesh K.; Bhunia, Arun K.; Stroshine, Richard L.; Simon, James E.

    2001-03-01

    There have been numerous studies on the effectiveness of different sanitizers for microbial inactivation. However, results obtained from different studies indicate that microorganisms cannot be easily removed from fresh-cut vegetables because of puncture and cut surfaces with varying surface topographies. In this study, a three-step disinfection approach was evaluated for inactivation of E. coli O157:H7 on shredded lettuce leaves. Sequential application of thyme oil, ozonated water, and aqueous chlorine dioxide was evaluated, in which thyme oil was applied first, followed by ozonated water and then aqueous chlorine dioxide. Shredded lettuce leaves inoculated with a cocktail culture of E. coli O157:H7 (C7927, EDL 933 and 204 P) were washed with ozonated water (15 mg/l for 10 min), aqueous chlorine dioxide (10 mg/l for 10 min) and thyme oil suspension (0.1%, v/v, for 5 min). Washing of lettuce leaves with ozonated water, chlorine dioxide and thyme oil suspension resulted in 0.44, 1.20, and 1.46 log reductions (log10 cfu/g), respectively. However, the sequential treatment achieved approximately 3.13 log reductions (log10 cfu/g). These results demonstrate the efficacy of sequential treatments in decontaminating shredded lettuce leaves contaminated with E. coli O157:H7.

  6. Efficacy of ε-polylysine, lauric arginate, or acidic calcium sulfate applied sequentially for Salmonella reduction on membrane filters and chicken carcasses.

    PubMed

    Benli, Hakan; Sanchez-Plata, Marcos X; Keeton, Jimmy T

    2011-05-01

    Salmonella contamination continues to be one of the major concerns for the microbiological safety of raw poultry products. Application of more than one decontamination agent as a multihurdle intervention to carcasses in a processing line might produce greater reductions than one treatment alone due to different modes of action of individual antimicrobials. In this study, all possible two-way combinations and individual applications of ε-polylysine (EPL), lauric arginate (LAE), and acidic calcium sulfate (ACS) solutions were evaluated for their effects against Salmonella enterica serovars, including Enteritidis and Typhimurium, using a sterile membrane filter model system. The combinations that provided higher Salmonella reductions were further evaluated on inoculated chicken carcasses in various concentrations applied in a sequential manner. Sequential spray applications of 300 mg of EPL per liter followed by 30% ACS and of 200 mg of LAE per liter followed by 30% ACS produced the highest Salmonella reductions on inoculated chicken carcasses, by 2.1 and 2.2 log CFU/ml, respectively. Our results indicated that these sequential spray applications of decontamination agents are effective for decreasing Salmonella contamination on poultry carcasses, but further studies are needed to determine the effectiveness of these combinations over a storage period.

  7. Reactive Transport Modeling of Induced Calcite Precipitation Reaction Fronts in Porous Media Using A Parallel, Fully Coupled, Fully Implicit Approach

    NASA Astrophysics Data System (ADS)

    Guo, L.; Huang, H.; Gaston, D.; Redden, G. D.; Fox, D. T.; Fujita, Y.

    2010-12-01

    Inducing mineral precipitation in the subsurface is one potential strategy for immobilizing trace metal and radionuclide contaminants. Generating mineral precipitates in situ can be achieved by manipulating chemical conditions, typically through injection or in situ generation of reactants. How these reactants transport, mix and react within the medium controls the spatial distribution and composition of the resulting mineral phases. Multiple processes, including fluid flow, dispersive/diffusive transport of reactants, biogeochemical reactions and changes in porosity-permeability, are tightly coupled over a number of scales. Numerical modeling can be used to investigate the nonlinear coupling effects of these processes which are quite challenging to explore experimentally. Many subsurface reactive transport simulators employ a de-coupled or operator-splitting approach where transport equations and batch chemistry reactions are solved sequentially. However, such an approach has limited applicability for biogeochemical systems with fast kinetics and strong coupling between chemical reactions and medium properties. A massively parallel, fully coupled, fully implicit Reactive Transport simulator (referred to as “RAT”) based on a parallel multi-physics object-oriented simulation framework (MOOSE) has been developed at the Idaho National Laboratory. Within this simulator, systems of transport and reaction equations can be solved simultaneously in a fully coupled, fully implicit manner using the Jacobian Free Newton-Krylov (JFNK) method with additional advanced computing capabilities such as (1) physics-based preconditioning for solution convergence acceleration, (2) massively parallel computing and scalability, and (3) adaptive mesh refinements for 2D and 3D structured and unstructured mesh. The simulator was first tested against analytical solutions, then applied to simulating induced calcium carbonate mineral precipitation in 1D columns and 2D flow cells as analogs to homogeneous and heterogeneous porous media, respectively. In 1D columns, calcium carbonate mineral precipitation was driven by urea hydrolysis catalyzed by urease enzyme, and in 2D flow cells, calcium carbonate mineral forming reactants were injected sequentially, forming migrating reaction fronts that are typically highly nonuniform. The RAT simulation results for the spatial and temporal distributions of precipitates, reaction rates and major species in the system, and also for changes in porosity and permeability, were compared to both laboratory experimental data and computational results obtained using other reactive transport simulators. The comparisons demonstrate the ability of RAT to simulate complex nonlinear systems and the advantages of fully coupled approaches, over de-coupled methods, for accurate simulation of complex, dynamic processes such as engineered mineral precipitation in subsurface environments.

  8. Accelerating simulation for the multiple-point statistics algorithm using vector quantization

    NASA Astrophysics Data System (ADS)

    Zuo, Chen; Pan, Zhibin; Liang, Hao

    2018-03-01

    Multiple-point statistics (MPS) is a prominent algorithm to simulate categorical variables based on a sequential simulation procedure. Assuming training images (TIs) as prior conceptual models, MPS extracts patterns from TIs using a template and records their occurrences in a database. However, complex patterns increase the size of the database and require considerable time to retrieve the desired elements. In order to speed up simulation and improve simulation quality over state-of-the-art MPS methods, we propose an accelerating simulation for MPS using vector quantization (VQ), called VQ-MPS. First, a variable representation is presented to make categorical variables applicable for vector quantization. Second, we adopt a tree-structured VQ to compress the database so that stationary simulations are realized. Finally, a transformed template and classified VQ are used to address nonstationarity. A two-dimensional (2D) stationary channelized reservoir image is used to validate the proposed VQ-MPS. In comparison with several existing MPS programs, our method exhibits significantly better performance in terms of computational time, pattern reproductions, and spatial uncertainty. Further demonstrations consist of a 2D four facies simulation, two 2D nonstationary channel simulations, and a three-dimensional (3D) rock simulation. The results reveal that our proposed method is also capable of solving multifacies, nonstationarity, and 3D simulations based on 2D TIs.
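
    As a concrete illustration of the database-construction stage that VQ-MPS compresses, the sketch below scans a toy binary training image with a 3x3 template and counts pattern occurrences. It is a minimal sketch of plain MPS pattern extraction under assumed template and image sizes; the tree-structured vector quantization of that database, which is the paper's contribution, is not reproduced here.

```python
import numpy as np
from collections import Counter

def extract_patterns(ti, half=1):
    """Scan a categorical training image (TI) with a square
    (2*half+1) x (2*half+1) template and count how often each pattern
    occurs -- the pattern database that MPS methods query and that
    VQ-MPS compresses with tree-structured vector quantization."""
    db = Counter()
    n, m = ti.shape
    for i in range(half, n - half):
        for j in range(half, m - half):
            patch = ti[i - half:i + half + 1, j - half:j + half + 1]
            db[tuple(patch.ravel())] += 1
    return db

# Toy binary "training image" (random; a real TI would carry structure).
rng = np.random.default_rng(0)
ti = (rng.random((64, 64)) < 0.3).astype(int)
db = extract_patterns(ti)
print(f"{len(db)} distinct 3x3 patterns; the most frequent occurs "
      f"{db.most_common(1)[0][1]} times")
```

    The size of this database grows quickly with template size, which is exactly the retrieval cost that motivates compressing it with vector quantization.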

  9. A Guide for Collecting Seismic, Acoustic, and Magnetic Data for Multiple Uses

    DTIC Science & Technology

    1975-01-01

    time, simultaneously for the analog technique or sequentially for the digital technique. Both methods require that precision timing networks be...with a precise voltage proportional to the sensitivity of the magnetometer. Whenever any electronic equipment affecting calibration has to be replaced...described as precisely as possible, including (but not limited to) the following: a. Name of source b. Continuous or transient c. Distance from geophone

  10. The Modeling, Simulation and Comparison of Interconnection Networks for Parallel Processing.

    DTIC Science & Technology

    1987-12-01

    performs better at a lower hardware cost than do the single-stage cube and mesh networks. As a result, the designer of a parallel processing system is...attempted, and in most cases succeeded, in designing and implementing faster, more powerful systems. Due to design innovations and technological advances...largely to the computational complexity of the algorithms executed. In the von Neumann machine, instructions must be executed in a sequential manner. Design

  11. Reaching Higher Gamma in Ultracold Neutral Plasmas Through Disorder-Induced Heating Control

    DTIC Science & Technology

    2016-06-27

    shielding,” Phys. Rev. E 87, 033101 (2013)...Sequential ionization of ultracold plasma ions: a simulation published in 2007 by Michael Murillo showed...AFRL-AFOSR-VA-TR-2017-0031, Reaching Higher Gamma in Ultracold Neutral Plasmas through Disorder-Induced Heating Control, Scott Bergeson, Brigham Young...Final Report, 01 June 2012 - 31 May 2016

  12. Sequence of deep-focus earthquakes beneath the Bonin Islands identified by the NIED nationwide dense seismic networks Hi-net and F-net

    NASA Astrophysics Data System (ADS)

    Takemura, Shunsuke; Saito, Tatsuhiko; Shiomi, Katsuhiko

    2017-03-01

    An M 6.8 (Mw 6.5) deep-focus earthquake occurred beneath the Bonin Islands at 21:18 (JST) on June 23, 2015. Observed high-frequency (>1 Hz) seismograms across Japan, which contain several sets of P- and S-wave arrivals in the 10 min after the origin time, indicate that moderate-to-large earthquakes occurred sequentially around Japan. Snapshots of the seismic energy propagation illustrate that after one deep-focus earthquake occurred beneath the Sea of Japan, two deep-focus earthquakes occurred sequentially beneath the Bonin Islands in the 4 min following the first (Mw 6.5) event. The United States Geological Survey catalog includes three Bonin deep-focus earthquakes with similar hypocenter locations, but their estimated magnitudes are inconsistent with seismograms from across Japan. The maximum-amplitude patterns of the latter two earthquakes were similar to that of the first Bonin earthquake, which indicates similar locations and mechanisms. Furthermore, based on the ratios of their S-wave amplitudes to that of the first event, the magnitudes of the latter events are estimated as M 6.5 ± 0.02 and M 5.8 ± 0.02, respectively. Three magnitude-6-class earthquakes thus occurred sequentially within 4 min in the Pacific slab at 480 km depth, where complex heterogeneities exist within the slab.

  13. Evaluation of Pollutant Leaching Potential of Coal Ashes for Recycling

    NASA Astrophysics Data System (ADS)

    Park, D.; Woo, N. C.; Kim, H.; Yoon, H.; Chung, D.

    2011-12-01

    As of 2009, part of the coal ash produced by coal-fired power plants in Korea has been reused as a cement supplement material; the rest is mostly disposed of in landfills inside the plant properties. Continuous production of coal ash and limited landfill sites call for more recycling of coal ash as a base material, specifically in the construction of roads and large industrial complexes. Previous research has shown that coal ash can contain various metals such as arsenic (As), chromium (Cr), lead (Pb), nickel (Ni), selenium (Se), etc. In this study, we collected four bottom ash and two fly ash samples from four coal-fired power plants. These ash samples were tested with distilled water through a column leaching process under oxidizing conditions. The column test results were compared with those of total digestion, sequential extraction procedures and TCLP. Concentrations of metals in outflows from the columns are generally greater for fly ashes than for bottom ashes, specifically for As, Se, B, Sr and SO4. Only one fly ash (J2-F) shows high concentrations of arsenic and selenium in its leachate. Sequential extraction results indicate that these metals are in readily soluble forms, such as adsorbed, carbonated, and reducible forms. Results of the TCLP analysis indicate no potential contaminants leached from the ashes. In conclusion, recycling of coal combustion ash could be encouraged with proper tests such as sequential extraction and column leaching experiments.

  14. Application of Sequential Extractions and X-ray Absorption Spectroscopy to Determine the Speciation of Chromium in Northern New Jersey Marsh Soils Developed in Chromite ore Processing Residue (COPR)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elzinga, E.; Cirmo, A

    2010-01-01

    The Cr speciation in marsh soils developed in weathering chromite ore processing residue (COPR) was characterized using sequential extractions and synchrotron microbeam and bulk X-ray absorption spectroscopic (XAS) analyses. The sequential extractions suggested substantial Cr associated with reducible and oxidizable soil components, and significant non-extractable residual Cr. Notable differences in the Cr speciation estimates from three extraction schemes underscore the operationally defined nature of the Cr speciation provided by these methods. Micro X-ray fluorescence maps and μ-XAS data indicated the presence of μm-sized chromite particles scattered throughout the weathered COPR matrix. These particles derive from the original COPR material, have relatively high resistance towards weathering, and therefore persist even after prolonged leaching. Bulk XAS data further indicated Cr(III) incorporated in Fe(OH)3, and Cr(III) associated with organic matter. The low Cr contents of the weathered material (200-850 ppm) compared to unweathered COPR (20,000-60,000 ppm) point to substantial Cr leaching during COPR weathering, with partial repartitioning of released Cr into secondary Fe(OH)3 phases and organics. The effects of anoxia on Cr speciation, and the potential of active COPR weathering releasing Cr(VI) deeper in the profile, require further study.

  15. Timed sequential chemotherapy of cytoxan-refractory multiple myeloma with cytoxan and adriamycin based on induced tumor proliferation.

    PubMed

    Karp, J E; Humphrey, R L; Burke, P J

    1981-03-01

    Malignant plasma cell proliferation and induced humoral stimulatory activity (HSA) occur in vivo at a predictable time following drug administration. Sequential sera from 11 patients with poor-risk multiple myeloma (MM) undergoing treatment with Cytoxan (CY) 2400 mg/sq m were assayed for their in vitro effects on malignant bone marrow plasma cell tritiated thymidine (3HTdR) incorporation. Peak HSA was detected on day 9 following CY. Sequential changes in marrow malignant plasma cell 3HTdR-labeling indices (LI) paralleled changes in serum activity, with peak LI occurring at the time of peak HSA. An in vitro model of chemotherapy demonstrated that malignant plasma cell proliferation was enhanced by HSA, as determined by 3HTdR incorporation assay, 3HTdR LI, and tumor cell counts, and that stimulated plasma cells were more sensitive to the cytotoxic effects of adriamycin (ADR) than were cells cultured in autologous pretreatment serum. Based on these studies, we designed a clinical trial in which 12 CY-refractory poor-risk patients with MM were treated with ADR (60 mg/sq m) administered at the time of peak HSA and residual tumor cell LI (day 9) following initial CY, 2400 mg/sq m (CY1ADR9). Eight of 12 (67%) responded to timed sequential chemotherapy with a greater than 50% decrement in monoclonal protein marker and a median survival projected to be greater than 8 mo (range 4-21+ mo). These clinical results using timed sequential CY1ADR9 compare favorably with results obtained using ADR in nonsequential chemotherapeutic regimens.

  16. Effect of sequential isoproturon pulse exposure on Scenedesmus vacuolatus.

    PubMed

    Vallotton, Nathalie; Eggen, Rik Ilda Lambertus; Chèvre, Nathalie

    2009-04-01

    Aquatic organisms are typically exposed to fluctuating concentrations of herbicides in streams. To assess the effects on algae of repeated peak exposure to the herbicide isoproturon, we subjected the alga Scenedesmus vacuolatus to two sequential pulse exposure scenarios. Effects on growth and on the inhibition of the effective quantum yield of photosystem II (PSII) were measured. In the first scenario, algae were exposed to short, 5-h pulses at high isoproturon concentrations (400 and 1000 microg/l), each followed by a recovery period of 18 h, while the second scenario consisted of 22.5-h pulses at lower concentrations (60 and 120 microg/l), alternating with short recovery periods (1.5 h). In addition, any changes in the sensitivity of the algae to isoproturon following sequential pulses were examined by determining the growth rate-EC(50) prior to and following exposure. In both exposure scenarios, we found that algal growth and its effective quantum yield were systematically inhibited during the exposures and that these effects were reversible. Sequential pulses to isoproturon could be considered a sequence of independent events. Nevertheless, a consequence of inhibited growth during the repeated exposures is the cumulative decrease in biomass production. Furthermore, in the second scenario, when the sequence of long pulses began to approach a scenario of continuous exposure, a slight increase in the tolerance of the algae to isoproturon was observed. These findings indicated that sequential pulses do affect algae during each pulse exposure, even if algae recover between the exposures. These observations could support an improved risk assessment of fluctuating exposures to reversibly acting herbicides.

  17. Empirical Identification of Hierarchies.

    ERIC Educational Resources Information Center

    McCormick, Douglas; And Others

    Outlining a cluster procedure which maximizes specific criteria while building scales from binary measures using a sequential, agglomerative, overlapping, non-hierarchic method results in indices giving truer results than exploratory factor analyses or multidimensional scaling. In a series of eleven figures, patterns within cluster histories…

  18. The Neurodevelopmental Evaluation in a Private Pediatric Setting.

    ERIC Educational Resources Information Center

    Fomalont, Robert

    1986-01-01

    A comprehensive neurodevelopmental evaluation technique known as PEERAMID is recommended for pediatricians in the evaluation of learning disabilities. This multifaceted system assesses the learning process individually, analyzing: minor neurological indicators, fine and gross motor function, language ability, temporal-sequential organization,…

  19. Sequential Infection in Ferrets with Antigenically Distinct Seasonal H1N1 Influenza Viruses Boosts Hemagglutinin Stalk-Specific Antibodies

    PubMed Central

    Kirchenbaum, Greg A.; Carter, Donald M.

    2015-01-01

    ABSTRACT Broadly reactive antibodies targeting the conserved hemagglutinin (HA) stalk region are elicited following sequential infection or vaccination with influenza viruses belonging to divergent subtypes and/or expressing antigenically distinct HA globular head domains. Here, we demonstrate, through the use of novel chimeric HA proteins and competitive binding assays, that sequential infection of ferrets with antigenically distinct seasonal H1N1 (sH1N1) influenza virus isolates induced an HA stalk-specific antibody response. Additionally, stalk-specific antibody titers were boosted following sequential infection with antigenically distinct sH1N1 isolates in spite of preexisting, cross-reactive, HA-specific antibody titers. Despite a decline in stalk-specific serum antibody titers, sequential sH1N1 influenza virus-infected ferrets were protected from challenge with a novel H1N1 influenza virus (A/California/07/2009), and these ferrets poorly transmitted the virus to naive contacts. Collectively, these findings indicate that HA stalk-specific antibodies are commonly elicited in ferrets following sequential infection with antigenically distinct sH1N1 influenza virus isolates lacking HA receptor-binding site cross-reactivity and can protect ferrets against a pathogenic novel H1N1 virus. IMPORTANCE The influenza virus hemagglutinin (HA) is a major target of the humoral immune response following infection and/or seasonal vaccination. While antibodies targeting the receptor-binding pocket of HA possess strong neutralization capacities, these antibodies are largely strain specific and do not confer protection against antigenic drift variant or novel HA subtype-expressing viruses. In contrast, antibodies targeting the conserved stalk region of HA exhibit broader reactivity among viruses within and among influenza virus subtypes. Here, we show that sequential infection of ferrets with antigenically distinct seasonal H1N1 influenza viruses boosts the antibody responses directed at the HA stalk region. Moreover, ferrets possessing HA stalk-specific antibody were protected against novel H1N1 virus infection and did not transmit the virus to naive contacts. PMID:26559834

  20. Does the process map influence the outcome of quality improvement work? A comparison of a sequential flow diagram and a hierarchical task analysis diagram.

    PubMed

    Colligan, Lacey; Anderson, Janet E; Potts, Henry W W; Berman, Jonathan

    2010-01-07

    Many quality and safety improvement methods in healthcare rely on a complete and accurate map of the process. Process mapping in healthcare is often achieved using a sequential flow diagram, but there is little guidance available in the literature about the most effective type of process map to use. Moreover, there is evidence that the organisation of information in an external representation affects reasoning and decision making. This exploratory study examined whether the type of process map - sequential or hierarchical - affects healthcare practitioners' judgments. A sequential and a hierarchical process map of a community-based anticoagulation clinic were produced based on data obtained from interviews, talk-throughs, attendance at a training session and examination of protocols and policies. Clinic practitioners were asked to specify the parts of the process that they judged to contain quality and safety concerns. The process maps were then shown to them in counterbalanced order and they were asked to circle on the diagrams the parts of the process where they had the greatest quality and safety concerns. A structured interview was then conducted, in which they were asked about various aspects of the diagrams. Quality and safety concerns cited by practitioners differed depending on whether they were or were not looking at a process map, and whether they were looking at a sequential diagram or a hierarchical diagram. More concerns were identified using the hierarchical diagram than the sequential diagram, and more concerns were identified in relation to clinical work than administrative work. Participants' preference for the sequential or hierarchical diagram depended on the context in which they would be using it. The difficulties of determining the boundaries for the analysis and the granularity required were highlighted. The results indicated that the layout of a process map does influence perceptions of quality and safety problems in a process. In quality improvement work it is important to carefully consider the type of process map to be used, and to consider using more than one map to ensure that different aspects of the process are captured.

  1. Exploring first-order phase transitions with population annealing

    NASA Astrophysics Data System (ADS)

    Barash, Lev Yu.; Weigel, Martin; Shchur, Lev N.; Janke, Wolfhard

    2017-03-01

    Population annealing is a hybrid of sequential and Markov chain Monte Carlo methods geared towards the efficient parallel simulation of systems with complex free-energy landscapes. Systems with first-order phase transitions are among the problems in computational physics that are difficult to tackle with standard methods such as local-update simulations in the canonical ensemble, for example with the Metropolis algorithm. It is hence interesting to see whether such transitions can be more easily studied using population annealing. We report here our preliminary observations from population annealing runs for the two-dimensional Potts model with q > 4, where it undergoes a first-order transition.
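
    A minimal sketch of the hybrid the abstract describes is given below: a population of replicas is annealed through a temperature schedule, with a sequential-Monte-Carlo-style resampling step between temperatures and Metropolis sweeps at each temperature. q = 5 matches the q > 4 first-order regime mentioned above; the lattice size, population size, and schedule are toy assumptions chosen to keep the sketch fast.

```python
import numpy as np

rng = np.random.default_rng(0)

def potts_energy(s):
    """Nearest-neighbour 2D Potts energy with periodic boundaries."""
    return -(np.sum(s == np.roll(s, 1, axis=0))
             + np.sum(s == np.roll(s, 1, axis=1)))

def metropolis_sweep(s, q, beta):
    """One sweep of single-site Metropolis updates."""
    L = s.shape[0]
    for _ in range(s.size):
        i, j = rng.integers(L, size=2)
        old, new = s[i, j], rng.integers(q)
        nn = (s[(i + 1) % L, j], s[(i - 1) % L, j],
              s[i, (j + 1) % L], s[i, (j - 1) % L])
        dE = sum(n == old for n in nn) - sum(n == new for n in nn)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i, j] = new
    return s

def population_annealing(q=5, L=12, R=50, betas=np.linspace(0.0, 1.5, 25)):
    pop = [rng.integers(q, size=(L, L)) for _ in range(R)]
    for b0, b1 in zip(betas[:-1], betas[1:]):
        # Resampling step: replicate configurations in proportion to
        # exp(-(b1 - b0) * E), re-equilibrating the population at b1.
        E = np.array([potts_energy(s) for s in pop])
        w = np.exp(-(b1 - b0) * (E - E.min()))
        counts = rng.multinomial(R, w / w.sum())
        pop = [pop[i].copy() for i, c in enumerate(counts) for _ in range(c)]
        pop = [metropolis_sweep(s, q, b1) for s in pop]   # MCMC step
    return pop

pop = population_annealing()
print("mean energy per site at lowest temperature:",
      np.mean([potts_energy(s) for s in pop]) / (12 * 12))
```

    The resampling step is what distinguishes population annealing from plain simulated annealing: replicas stranded in metastable phases are pruned, which is why the method is attractive near first-order transitions.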

  2. Use of EPANET solver to manage water distribution in Smart City

    NASA Astrophysics Data System (ADS)

    Antonowicz, A.; Brodziak, R.; Bylka, J.; Mazurkiewicz, J.; Wojtecki, S.; Zakrzewski, P.

    2018-02-01

    This paper presents a method of using the EPANET solver to support management of the water distribution system in a Smart City. The main task is to develop an application that allows remote access to the simulation model of the water distribution network developed in the EPANET environment. The application allows both single and cyclic simulations to be performed, with a specified step for changing the values of selected process variables. The paper presents the architecture of the application, which supports the selection of the best device control algorithm using optimization methods. Optimization procedures are available with the following methods: brute force, SLSQP (Sequential Least SQuares Programming), and the modified Powell method. The article is supplemented by an example of using the developed computer tool.
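
    A minimal sketch of the optimization loop such an application might run is shown below, using SciPy's SLSQP (one of the methods named above). The hydraulic model is a toy stand-in: a real implementation would call into the EPANET solver (for example through a wrapper library) instead of the synthetic response used here, and the target pressures and pump-speed bounds are made-up values.

```python
import numpy as np
from scipy.optimize import minimize

TARGET_PRESSURES = np.array([50.0, 45.0])   # hypothetical target heads (m)

def run_hydraulic_model(pump_speeds):
    """Toy stand-in for an EPANET hydraulic run: in the real application
    this call would drive the EPANET solver; here a smooth synthetic
    response keeps the sketch self-contained."""
    return np.array([40.0 + 12.0 * pump_speeds[0],
                     35.0 + 11.0 * pump_speeds[1]])

def objective(pump_speeds):
    """Squared deviation of simulated pressures from their targets."""
    return np.sum((run_hydraulic_model(pump_speeds) - TARGET_PRESSURES) ** 2)

res = minimize(objective, x0=[1.0, 1.0], method="SLSQP",
               bounds=[(0.5, 1.5), (0.5, 1.5)])
print("best pump speeds:", res.x, "residual:", res.fun)
```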

  3. The PMHT: solutions for some of its problems

    NASA Astrophysics Data System (ADS)

    Wieneke, Monika; Koch, Wolfgang

    2007-09-01

    Tracking multiple targets in a cluttered environment is a challenging task. Probabilistic Multiple Hypothesis Tracking (PMHT) is an efficient approach for dealing with it. Essentially, PMHT is based on the method of Expectation-Maximization for handling association conflicts. Linearity in the number of targets and measurements is the main motivation for further development and extension of this methodology. Unfortunately, compared with the Probabilistic Data Association Filter (PDAF), PMHT has not yet shown its superiority in terms of track-loss statistics. Furthermore, the problem of track extraction and deletion is apparently not yet satisfactorily solved within this framework. Four properties of PMHT are responsible for its problems in track maintenance: Non-Adaptivity, Hospitality, Narcissism and Local Maxima. 1, 2 In this work we present a solution for each of them and derive an improved PMHT by integrating the solutions into the PMHT formalism. The new PMHT is evaluated by Monte Carlo simulations. A sequential likelihood-ratio (LR) test for track extraction has been developed and already integrated into the framework of traditional Bayesian Multiple Hypothesis Tracking. 3 As a multi-scan approach, the PMHT methodology also has the potential for track extraction. In this paper an analogous integration of a sequential LR test into the PMHT framework is proposed. We present an LR formula for track extraction and deletion using the PMHT update formulae. As PMHT provides all required ingredients for a sequential LR calculation, the LR is a by-product of the PMHT iteration process. The resulting update formula for the sequential LR test therefore affords the development of track-before-detect algorithms for PMHT. The approach is illustrated by a simple example.
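
    The sequential LR idea can be illustrated generically with a Wald-style sequential probability ratio test, sketched below for a scalar measurement stream. This is an assumption-laden illustration of the extraction/deletion logic, not the paper's PMHT-specific LR update: the hypothesis means, noise level, and thresholds are all made-up values.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Wald-style sequential LR test: accumulate per-measurement
# log-likelihood ratios of "target present" (unit mean) versus
# "noise only" (zero mean), stopping at extraction/deletion thresholds.
A, B = np.log(100.0), np.log(0.01)        # extraction / deletion thresholds
llr, n = 0.0, 0
for z in rng.normal(1.0, 1.0, size=50):   # simulated target-present data
    n += 1
    llr += norm.logpdf(z, 1.0, 1.0) - norm.logpdf(z, 0.0, 1.0)
    if llr >= A:
        print(f"track extracted after {n} updates")
        break
    if llr <= B:
        print(f"track deleted after {n} updates")
        break
else:
    print("no decision after 50 updates")
```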

  4. Using environmental tracers and transient hydraulic heads to estimate groundwater recharge and conductivity

    NASA Astrophysics Data System (ADS)

    Erdal, Daniel; Cirpka, Olaf A.

    2017-04-01

    Regional groundwater flow strongly depends on groundwater recharge and hydraulic conductivity. While conductivity is a spatially variable field, recharge can vary in both space and time. Neither of the two fields can be reliably observed on larger scales, and their estimation from other sparse data sets is an open topic. Further, common hydraulic-head observations may not suffice to constrain both fields simultaneously. In the current work we use the Ensemble Kalman filter (EnKF) to estimate spatially variable conductivity, spatiotemporally variable recharge and porosity for a synthetic phreatic aquifer. We use transient hydraulic-head observations and one spatially distributed set of environmental tracer observations to constrain the estimation. As environmental tracers generally reside in an aquifer for a long time, they require long simulation times and carry a long memory, which makes them highly unsuitable for use in a sequential framework. Therefore, in this work we use the environmental tracer information to precondition the initial ensemble of recharge and conductivities before starting the sequential filter. Thereby, we aim at improving the performance of the sequential filter by limiting the range of the recharge to values similar to the long-term annual recharge means and by creating an initial ensemble of conductivities with patterns and values similar to those of the true field. The sequential filter is then used to further improve the parameters and to estimate the short-term temporal behavior as well as the temporally evolving head field needed for short-term predictions within the aquifer. For a virtual reality covering a subsection of the river Neckar, it is shown that the use of environmental tracers can improve the performance of the filter. Results using the EnKF with and without this preconditioned initial ensemble are evaluated and discussed.
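
    The core of the sequential filter is the EnKF analysis step, sketched below in its stochastic (perturbed-observations) form. The state dimensions, observation operator, and error covariances are toy assumptions chosen only to make the sketch self-contained; a real application would propagate the ensemble through the groundwater model between updates.

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step.

    X : (n_state, n_ens) ensemble of state/parameter vectors
    y : (n_obs,) observation vector
    H : (n_obs, n_state) linear observation operator
    R : (n_obs, n_obs) observation-error covariance
    """
    n_ens = X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)           # ensemble anomalies
    P = Xp @ Xp.T / (n_ens - 1)                      # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=n_ens).T           # perturbed observations
    return X + K @ (Y - H @ X)

# Toy example: 3 unknowns (e.g. log-conductivity values), 2 head observations.
X = rng.normal(0.0, 1.0, size=(3, 50))
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]])
R = 0.1 * np.eye(2)
X_a = enkf_update(X, np.array([0.5, -0.2]), H, R)
print("posterior ensemble mean:", X_a.mean(axis=1))
```

    Preconditioning, in this picture, amounts to choosing the initial ensemble X so that it already honours the long-memory tracer information before the first update.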

  5. A Monte Carlo study on the effect of the orbital bone to the radiation dose delivered to the eye lens

    NASA Astrophysics Data System (ADS)

    Stratis, Andreas; Zhang, Guozhi; Jacobs, Reinhilde; Bogaerts, Ria; Bosmans, Hilde

    2015-03-01

    The aim of this work was to investigate the influence of backscatter radiation from the orbital bone and the intraorbital fat on the eye lens dose in the dental CBCT energy range. To this end, we conducted three different yet interrelated studies. A preliminary simulation study was conducted to examine the impact of a bony layer situated underneath a soft tissue layer on the amount of backscatter radiation. We compared the Percentage Depth Dose (PDD) curves in soft tissue with and without the bone layer and estimated the depth in tissue at which the decrease in backscatter caused by the presence of the bone is noticeable. In a supplementary study, an eye voxel phantom was designed with the DOSxyznrc code. Simulations were performed exposing the phantom at different X-ray energies sequentially in air, in fat tissue and in realistic anatomy, with the incident beam perpendicular to the phantom. Finally, a virtual head phantom was implemented into a validated hybrid Monte Carlo (MC) framework to simulate a large field-of-view protocol of a real CBCT scanner and examine the influence of scattered dose to the eye lens during the whole rotation of the paired tube-detector system. The results indicated an increase in the dose to the lens due to the fatty tissue in the surrounding anatomy. There is a noticeable dose reduction close to the bone-tissue interface which weakens with increasing distance from the interface, such that the impact of the orbital bone on the eye lens dose becomes small.

  6. Elucidation of the molecular mechanisms underlying adverse reactions associated with a kinase inhibitor using systems toxicology

    PubMed Central

    Amemiya, Takahiro; Honma, Masashi; Kariya, Yoshiaki; Ghosh, Samik; Kitano, Hiroaki; Kurachi, Yoshihisa; Fujita, Ken-ichi; Sasaki, Yasutsuna; Homma, Yukio; Abernethy, Darrel R; Kume, Haruki; Suzuki, Hiroshi

    2015-01-01

    Background/Objectives: Targeted kinase inhibitors are an important class of agents in anticancer therapeutics, but their limited tolerability hampers their clinical performance. Identification of the molecular mechanisms underlying the development of adverse reactions will be helpful in establishing a rational method for the management of clinically adverse reactions. Here, we selected sunitinib as a model and demonstrated that the molecular mechanisms underlying the adverse reactions associated with kinase inhibitors can efficiently be identified using a systems toxicological approach. Methods: First, toxicological target candidates were short-listed by comparing the human kinase occupancy profiles of sunitinib and sorafenib, and the molecular mechanisms underlying adverse reactions were predicted by sequential simulations using publicly available mathematical models. Next, to evaluate the probability of these predictions, a clinical observation study was conducted in six patients treated with sunitinib. Finally, mouse experiments were performed for detailed confirmation of the hypothesized molecular mechanisms and to evaluate the efficacy of a proposed countermeasure against adverse reactions to sunitinib. Results: In silico simulations indicated the possibility that sunitinib-mediated off-target inhibition of phosphorylase kinase leads to the generation of oxidative stress in various tissues. Clinical observations of patients and mouse experiments confirmed the validity of this prediction. The simulation further suggested that concomitant use of an antioxidant may prevent sunitinib-mediated adverse reactions, which was confirmed in mouse experiments. Conclusions: A systems toxicological approach successfully predicted the molecular mechanisms underlying clinically adverse reactions associated with sunitinib and was used to plan a rational method for the management of these adverse reactions. PMID:28725458

  7. A spatial approach to environmental risk assessment of PAH contamination.

    PubMed

    Bengtsson, Göran; Törneman, Niklas

    2009-01-01

    The extent of remediation of contaminated industrial sites depends on spatial heterogeneity of contaminant concentration and spatially explicit risk characterization. We used sequential Gaussian simulation (SGS) and indicator kriging (IK) to describe the spatial distribution of polycyclic aromatic hydrocarbons (PAHs), pH, electric conductivity, particle aggregate distribution, water holding capacity, and total organic carbon, and quantitative relations among them, in a creosote polluted soil in southern Sweden. The geostatistical analyses were combined with risk analyses, in which the total toxic equivalent concentration of the PAH mixture was calculated from the soil concentrations of individual PAHs and compared with ecotoxicological effect concentrations and regulatory threshold values in block sizes of 1.8 x 1.8 m. Most PAHs were spatially autocorrelated and appeared in several hot spots. The risk calculated by SGS was more confined to specific hot spot areas than the risk calculated by IK, and 40-50% of the site had PAH concentrations exceeding the threshold values with a probability of 80% and higher. The toxic equivalent concentration of the PAH mixture was dependent on the spatial distribution of organic carbon, showing the importance of assessing risk by a combination of measurements of PAH and organic carbon concentrations. Essentially, the same risk distribution pattern was maintained when Monte Carlo simulations were used for implementation of risk in larger (5 x 5 m), economically more feasible remediation blocks, but a smaller area became of great concern for remediation when the simulations included PAH partitioning to two separate sources, creosote and natural, of organic matter, rather than one general.
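
    Since both the spatial analysis and the risk mapping here rest on sequential Gaussian simulation, a minimal unconditional 1D SGS sketch is given below: nodes are visited along a random path, and each is drawn from the simple-kriging distribution given the previously simulated nodes. The exponential covariance model and its parameters are illustrative assumptions; a practical SGS would use a search neighborhood and condition on measured data.

```python
import numpy as np

rng = np.random.default_rng(3)

def cov(h, sill=1.0, a=10.0):
    """Exponential covariance model C(h) = sill * exp(-|h| / a)."""
    return sill * np.exp(-np.abs(h) / a)

def sgs_1d(n=100):
    """Unconditional sequential Gaussian simulation on a 1D grid:
    visit nodes in random order, compute the simple-kriging mean and
    variance from previously simulated nodes, and draw from that normal."""
    x = np.arange(n, dtype=float)
    z = np.full(n, np.nan)
    known = []
    for idx in rng.permutation(n):
        if known:
            xk = x[known]
            C = cov(xk[:, None] - xk[None, :])   # covariances among knowns
            c0 = cov(xk - x[idx])                # covariances to new node
            w = np.linalg.solve(C, c0)           # simple-kriging weights
            mean = w @ z[known]
            var = max(cov(0.0) - w @ c0, 1e-12)
        else:
            mean, var = 0.0, cov(0.0)
        z[idx] = rng.normal(mean, np.sqrt(var))
        known.append(idx)
    return z

realization = sgs_1d()
print("realization mean/std:", realization.mean(), realization.std())
```

    Running the loop many times yields the equiprobable realizations from which exceedance probabilities, such as the 80% threshold maps in the study, are computed.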

  8. Gibbs sampling on large lattice with GMRF

    NASA Astrophysics Data System (ADS)

    Marcotte, Denis; Allard, Denis

    2018-02-01

    Gibbs sampling is routinely used to sample truncated Gaussian distributions. These distributions naturally occur when associating latent Gaussian fields to category fields obtained by discrete simulation methods like multipoint, sequential indicator simulation and object-based simulation. The latent Gaussians are often used in data assimilation and history matching algorithms. When Gibbs sampling is applied on a large lattice, the computing cost can become prohibitive. The usual practice of using local neighborhoods is unsatisfactory, as it can diverge and does not reproduce the desired covariance exactly. A better approach is to use Gaussian Markov Random Fields (GMRF), which make it possible to compute the conditional distributions at any point without having to compute and invert the full covariance matrix. As the GMRF is locally defined, it allows simultaneous updating of all points that do not share neighbors (coding sets). We propose a new simultaneous Gibbs updating strategy on coding sets that can be efficiently computed by convolution and applied with an acceptance/rejection method in the truncated case. We study empirically the speed of convergence and the effects of the choice of boundary conditions, of the correlation range and of GMRF smoothness. We show that convergence is slower in the Gaussian case on the torus than for the finite case studied in the literature. However, in the truncated Gaussian case, we show that short-scale correlation is quickly restored and the conditioning categories at each lattice point imprint the long-scale correlation. Hence our approach makes it realistic to apply Gibbs sampling on large 2D or 3D lattices with the desired GMRF covariance.
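
    The basic construction, a latent Gaussian field Gibbs-sampled under category-dependent truncation with conditionals given by a locally defined GMRF, can be sketched with a plain single-site sampler; the paper's contribution of simultaneous coding-set updates via convolution is omitted for brevity. The first-order neighbour weight, the fixed unit conditional variance, and the two-category thresholds are all simplifying assumptions.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

def gibbs_truncated_gmrf(categories, thresholds, n_iter=50, w=0.24):
    """Single-site Gibbs sampler for a latent Gaussian field truncated to
    category-dependent intervals. A first-order GMRF is assumed: the
    conditional mean of a node is w times the sum of its 4 neighbours
    (periodic boundaries); the conditional variance is fixed at 1."""
    lo = np.take(thresholds[:-1], categories)    # lower bound per node
    hi = np.take(thresholds[1:], categories)     # upper bound per node
    z = np.clip(rng.normal(size=categories.shape), lo, hi)
    n, m = categories.shape
    for _ in range(n_iter):
        for i in range(n):
            for j in range(m):
                mu = w * (z[(i + 1) % n, j] + z[(i - 1) % n, j]
                          + z[i, (j + 1) % m] + z[i, (j - 1) % m])
                a, b = lo[i, j] - mu, hi[i, j] - mu
                z[i, j] = mu + truncnorm.rvs(a, b, random_state=rng)
    return z

cats = rng.integers(0, 2, size=(12, 12))         # two-category field
thr = np.array([-np.inf, 0.0, np.inf])           # truncation thresholds
z = gibbs_truncated_gmrf(cats, thr)
print("latent field consistent with categories:",
      np.all((z >= np.take(thr[:-1], cats)) & (z <= np.take(thr[1:], cats))))
```

    The coding-set idea of the paper replaces the inner double loop with a simultaneous update of all mutually non-neighbouring sites, which is what makes large lattices tractable.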

  9. Impact of defects on percolation in random sequential adsorption of linear k-mers on square lattices.

    PubMed

    Tarasevich, Yuri Yu; Laptev, Valeri V; Vygornitskii, Nikolai V; Lebovka, Nikolai I

    2015-01-01

    The effect of defects on the percolation of linear k-mers (particles occupying k adjacent sites) on a square lattice is studied by means of Monte Carlo simulation. The k-mers are deposited using a random sequential adsorption mechanism. Two models, L(d) and K(d), are analyzed. In the L(d) model it is assumed that the initial square lattice is nonideal and some fraction d of sites is occupied by nonconducting point defects (impurities). In the K(d) model the initial square lattice is perfect; however, it is assumed that some fraction d of the sites in the k-mers consists of defects, i.e., is nonconducting. The length of the k-mers, k, varies from 2 to 256. Periodic boundary conditions are applied to the square lattice. The dependences of the percolation threshold concentration of conducting sites p(c) on the concentration of defects d are analyzed for different values of k. Above some critical concentration of defects d(m), percolation is blocked in both models, even at the jamming concentration of k-mers. For long k-mers, the values of d(m) are well fitted by the functions d(m) ∝ k(m)^(-α) − k^(-α) (α = 1.28 ± 0.01 and k(m) = 5900 ± 500) and d(m) ∝ log10(k(m)/k) (k(m) = 4700 ± 1000) for the L(d) and K(d) models, respectively. Thus, our estimation indicates that the percolation of k-mers on a square lattice is impossible, even for a lattice without any defects, if k ⪆ 6×10^3.
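
    The deposition mechanism itself is easy to sketch: the snippet below performs random sequential adsorption of horizontal or vertical k-mers on a lattice seeded with point defects, in the spirit of the L(d) model. The lattice size, k, defect fraction, attempt budget, and the open (non-periodic) boundaries are all simplifying assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)

def rsa_kmers(L=64, k=4, defect_frac=0.05, max_tries=100_000):
    """Random sequential adsorption of linear k-mers on an L x L lattice
    whose sites are blocked by point defects with probability defect_frac
    (in the spirit of the L(d) model). Returns the k-mer coverage after
    max_tries deposition attempts -- a rough estimate of jamming."""
    lattice = np.zeros((L, L), dtype=int)
    lattice[rng.random((L, L)) < defect_frac] = -1   # -1 marks a defect
    occupied = 0
    for _ in range(max_tries):
        i, j = rng.integers(L, size=2)
        if rng.random() < 0.5:                       # horizontal attempt
            if j + k > L or np.any(lattice[i, j:j + k] != 0):
                continue
            lattice[i, j:j + k] = 1
        else:                                        # vertical attempt
            if i + k > L or np.any(lattice[i:i + k, j] != 0):
                continue
            lattice[i:i + k, j] = 1
        occupied += k
    return occupied / (L * L)

print("k-mer coverage near jamming:", rsa_kmers())
```

    A percolation study would then run a connectivity check (e.g. union-find over conducting clusters) on such jammed configurations for many defect concentrations d.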

  10. Modelling crop yield, soil organic C and P under variable long-term fertilizer management in China

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; Xu, Guang; Xu, Minggang; Balkovič, Juraj; Azevedo, Ligia B.; Skalský, Rastislav; Wang, Jinzhou; Yu, Chaoqing

    2016-04-01

    Phosphorus (P) is a major limiting nutrient for plant growth. P, as a nonrenewable resource and a controlling factor of aquatic eutrophication, is critical for food security and concerns both sustainable resource use and environmental impacts. It is thus essential to find an integrated and effective approach to optimizing phosphorus fertilizer application in agro-ecosystems while maintaining crop yield and minimizing environmental risk. Crop P models have been used to simulate plant-soil interactions but are rarely validated against scattered long-term fertilizer control field experiments. We employed a process-based model, the Environmental Policy Integrated Climate model (EPIC), to simulate grain yield, soil organic carbon (SOC) and soil available P based upon 8 field experiments in China with an 11-year dataset, representing typical Chinese soil types and the agro-ecosystems of different regions. Four treatments were measured and modelled: N, P, and K fertilizer (NPK); no fertilizer (CK); N and K fertilizer (NK); and N, P, K and manure (NPKM). A series of sensitivity tests was conducted to analyze the sensitivity of grain yields and soil available P to sequential fertilizer rates in typical humid, normal and drought years. Our results indicated that the EPIC model showed significant agreement for simulating grain yields, with R2 = 0.72, index of agreement (d) = 0.87, modeling efficiency (EF) = 0.68, p < 0.01, and for SOC, with R2 = 0.70, d = 0.86, EF = 0.59, p < 0.01. EPIC simulated soil available P moderately well and captured the temporal changes in soil P reservoirs. Both crop yields and soil available P were found to be more sensitive to fertilizer P rates in humid years than in drought years, and soil available P is closely linked to concentrated rainfall. This study concludes that the EPIC model has great potential to simulate the P cycle in croplands in China and can be used to explore optimum management practices.
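
    The fit statistics quoted above have standard definitions, sketched below for reference: R2 as the squared Pearson correlation, Willmott's index of agreement d, and the Nash-Sutcliffe modeling efficiency EF. The observed/simulated vectors in the example are invented numbers, not data from the study.

```python
import numpy as np

def model_performance(obs, sim):
    """Goodness-of-fit statistics of the kind reported for EPIC:
    coefficient of determination (R2, squared Pearson correlation),
    Willmott's index of agreement (d) and Nash-Sutcliffe modeling
    efficiency (EF), each following its standard definition."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    d = 1.0 - np.sum((obs - sim) ** 2) / np.sum(
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    ef = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return r2, d, ef

obs = np.array([4.1, 5.3, 6.0, 4.8, 5.5])   # e.g. measured yields (t/ha)
sim = np.array([4.4, 5.0, 6.2, 4.6, 5.8])   # e.g. simulated yields
print("R2 = %.2f, d = %.2f, EF = %.2f" % model_performance(obs, sim))
```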

  11. Tested Demonstrations.

    ERIC Educational Resources Information Center

    Gilbert, George L., Ed.

    1983-01-01

    Presents background information and procedures for producing sequential color reactions. With proper selection of pH indicator dyes, most school colors can be produced after mixing colorless solutions. Procedures for producing white/purple and red/white/blue colors are outlined. Also outlines preparation of 2, 4, 2', 4',…

  12. Pros and cons of immediately sequential bilateral cataract surgery (ISBCS).

    PubMed

    Grzybowski, Andrzej; Wasinska-Borowiec, Weronika; Claoué, Charles

    2016-01-01

    Immediately sequential bilateral cataract surgery (ISBCS) is currently a "hot topic" in ophthalmology. There are well-documented advantages in terms of quicker visual rehabilitation and reduced costs. The risk of bilateral simultaneous endophthalmitis and bilateral blindness is now recognized to be minuscule with the advent of intracameral antibiotics and modern management of endophthalmitis. Refractive surprises are rare for normal eyes and with the use of optical biometry. Where a general anesthetic is indicated for cataract surgery, the risk of death from a second anesthetic is much higher than the risk of blindness. A widely recognized protocol from the International Society of Bilateral Cataract Surgeons needs to be adhered to if surgeons wish to start practicing ISBCS.

  13. Economic Analysis of an Integrated Annatto Seeds-Sugarcane Biorefinery Using Supercritical CO2 Extraction as a First Step

    PubMed Central

    Albarelli, Juliana Q.; Santos, Diego T.; Cocero, María José; Meireles, M. Angela A.

    2016-01-01

    Recently, supercritical fluid extraction (SFE) has been proposed for use as part of a biorefinery, rather than as a stand-alone technology, since besides extracting added-value compounds selectively it has been shown to have a positive effect on the downstream processing of biomass. To this end, this work economically evaluates the encouraging experimental results regarding the use of SFE for annatto seed valorization. Additionally, other features are discussed, such as the benefits of enhancing the bioactive compound concentration through physical processes and of integrating the proposed annatto seed biorefinery with a hypothetical sugarcane biorefinery that produces its essential inputs, e.g., CO2, ethanol, heat and electricity. For this, different configurations were first modeled and simulated using the commercial simulator Aspen Plus® to determine the mass and energy balances. Next, each configuration was economically assessed using MATLAB. SFE proved to be decisive to the economic feasibility of the proposed annatto seed-sugarcane biorefinery concept. SFE pretreatment associated with a sequential fine-particle separation process enabled higher production of a bixin-rich extract by low-pressure solvent extraction with ethanol, while a tocotrienol-rich extract is obtained as a first product. Nevertheless, the economic evaluation showed that increasing tocotrienol-rich extract production has a more pronounced positive impact on the economic viability of the concept. PMID:28773616

  14. Engine With Regression and Neural Network Approximators Designed

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2001-01-01

    At the NASA Glenn Research Center, the NASA engine performance program (NEPP, ref. 1) and the design optimization testbed COMETBOARDS (ref. 2) with regression and neural network analysis-approximators have been coupled to obtain a preliminary engine design methodology. The solution to a high-bypass-ratio subsonic waverotor-topped turbofan engine, which is shown in the preceding figure, was obtained by the simulation depicted in the following figure. This engine is made of 16 components mounted on two shafts with 21 flow stations. The engine is designed for a flight envelope with 47 operating points. The design optimization utilized both neural network and regression approximations, along with the cascade strategy (ref. 3). The cascade used three algorithms in sequence: the method of feasible directions, the sequence of unconstrained minimizations technique, and sequential quadratic programming. The normalized optimum thrusts obtained by the three methods are shown in the following figure: the cascade algorithm with regression approximation is represented by a triangle, a circle is shown for the neural network solution, and a solid line indicates original NEPP results. The solutions obtained from both approximate methods lie within one standard deviation of the benchmark solution for each operating point. The simulation improved the maximum thrust by 5 percent. The performance of the linear regression and neural network methods as alternate engine analyzers was found to be satisfactory for the analysis and operation optimization of air-breathing propulsion engines (ref. 4).
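
    The cascade strategy described above, several optimizers run in sequence with each warm-started from the previous solution, can be sketched with SciPy. SciPy ships neither the method of feasible directions nor the sequence of unconstrained minimizations technique, so Nelder-Mead and Powell stand in for the first two stages and SLSQP plays the role of sequential quadratic programming; the objective is a synthetic stand-in for the NEPP engine analysis, not the real model.

```python
import numpy as np
from scipy.optimize import minimize

def thrust_deficit(x):
    """Toy stand-in for an engine-performance objective; a smooth
    synthetic function keeps the cascade idea self-contained."""
    return (x[0] - 1.2) ** 2 + 0.5 * (x[1] - 0.8) ** 2 + 0.1 * x[0] * x[1]

# Cascade strategy: run optimizers in sequence, each starting from the
# previous stage's solution, so cheap global progress precedes the
# more precise gradient-based final stage.
x = np.array([0.0, 0.0])
for method in ("Nelder-Mead", "Powell", "SLSQP"):
    res = minimize(thrust_deficit, x, method=method)
    x = res.x
    print(f"{method:12s} -> x = {np.round(x, 4)}, f = {res.fun:.6f}")
```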

  15. Modeling long-term trends of chlorinated ethene contamination at a public supply well

    USGS Publications Warehouse

    Chapelle, Francis H.; Kauffman, Leon J.; Widdowson, Mark A.

    2015-01-01

    A mass-balance solute-transport modeling approach was used to investigate the effects of dense nonaqueous phase liquid (DNAPL) volume, composition, and generation of daughter products on simulated and measured long-term trends of chlorinated ethene (CE) concentrations at a public supply well. The model was built by telescoping a calibrated regional three-dimensional MODFLOW model to the capture zone of a public supply well that has a history of CE contamination. The local model was then used to simulate the interactions between naturally occurring organic carbon that acts as an electron donor, and dissolved oxygen (DO), CEs, ferric iron, and sulfate that act as electron acceptors using the Sequential Electron Acceptor Model in three dimensions (SEAM3D) code. The modeling results indicate that asymmetry between rapidly rising and more gradual falling concentration trends over time suggests a DNAPL rather than a dissolved source of CEs. Peak concentrations of CEs are proportional to the volume and composition of the DNAPL source. The persistence of contamination, which can vary from a few years to centuries, is proportional to DNAPL volume, but is unaffected by DNAPL composition. These results show that monitoring CE concentrations in raw water produced by impacted public supply wells over time can provide useful information concerning the nature of contaminant sources and the likely future persistence of contamination.

  16. Sequential extraction of chromium, molybdenum, and vanadium in basic oxygen furnace slags.

    PubMed

    Spanka, Marina; Mansfeldt, Tim; Bialucha, Ruth

    2018-06-02

    Basic oxygen furnace slags (BOS) are by-products of basic oxygen steel production. Whereas the solubility of some elements from these slags has been well investigated, information about the mineralogy and related leaching, i.e., availability, of the environmentally relevant elements chromium (Cr), molybdenum (Mo), and vanadium (V) is still lacking. The aim of this study was to investigate these issues with a modified, four-fraction sequential extraction procedure (F1-F4), combined with X-ray diffraction, on two BOS. Extractants of increasing strength were used (F1 demineralized water, F2 CH3COOH + HCl, F3 Na2EDTA + NH2OH·HCl, and F4 HF + HNO3 + H2O2), and after each fraction, X-ray diffraction was performed. The recovery of Cr was moderate (66.5%) for one BOS, but significantly better (100.2%) for the other. High recoveries were achieved for the other elements (Mo, 100.8-107.9% and V, 87.0-112.6%), indicating that the sequential extraction procedure was reliable when adapted to BOS. The results showed that Cr and Mo primarily occurred in F4, representing rather immobile elements under natural conditions that are strongly bound into/onto Fe minerals (srebrodolskite, magnetite, hematite, or wustite). In contrast, V was more mobile, with proportionally higher findings in F2 and F3, and the X-ray diffraction results reveal that V was not solely bound into Ca minerals (larnite, hatrurite, kirschsteinite, and calcite), but also bound to Fe minerals. The results indicated that total recovery was a poor indicator of the availability of elements and did not correspond to the leaching of elements from BOS.

  17. Selenium speciation in phosphate mine soils and evaluation of a sequential extraction procedure using XAFS.

    PubMed

    Favorito, Jessica E; Luxton, Todd P; Eick, Matthew J; Grossl, Paul R

    2017-10-01

    Selenium is a trace element found in western US soils, where ingestion of Se-accumulating plants has resulted in livestock fatalities. Therefore, a reliable understanding of Se speciation and bioavailability is critical for effective mitigation. Sequential extraction procedures (SEP) are often employed to examine Se phases and speciation in contaminated soils but may be limited by experimental conditions. We examined the validity of a SEP using X-ray absorption spectroscopy (XAS) for both whole and a sequence of extracted soils. The sequence included removal of soluble, PO 4 -extractable, carbonate, amorphous Fe-oxide, crystalline Fe-oxide, organic, and residual Se forms. For whole soils, XANES analyses indicated Se(0) and Se(-II) predominated, with lower amounts of Se(IV) present, related to carbonates and Fe-oxides. Oxidized Se species were more elevated and residual/elemental Se was lower than previous SEP results from ICP-AES suggested. For soils from the SEP sequence, XANES results indicated only partial recovery of carbonate, Fe-oxide and organic Se. This suggests Se was incompletely removed during designated extractions, possibly due to lack of mineral solubilization or reagent specificity. Selenium fractions associated with Fe-oxides were reduced in amount or removed after using hydroxylamine HCl for most soils examined. XANES results indicate partial dissolution of solid-phases may occur during extraction processes. This study demonstrates why precautions should be taken to improve the validity of SEPs. Mineralogical and chemical characterizations should be completed prior to SEP implementation to identify extractable phases or mineral components that may influence extraction effectiveness. Sequential extraction procedures can be appropriately tailored for reliable quantification of speciation in contaminated soils. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Simulation of Sequential Setback and Aerodynamic Drag of Ordnance Projectiles

    DTIC Science & Technology

    1977-06-01


  19. Attractors in complex networks

    NASA Astrophysics Data System (ADS)

    Rodrigues, Alexandre A. P.

    2017-10-01

    In the framework of the generalized Lotka-Volterra model, solutions representing multispecies sequential competition can be predictable with high probability. In this paper, we show that it occurs because the corresponding "heteroclinic channel" forms part of an attractor. We prove that, generically, in an attracting heteroclinic network involving a finite number of hyperbolic and non-resonant saddle-equilibria whose linearization has only real eigenvalues, the connections corresponding to the most positive expanding eigenvalues form part of an attractor (observable in numerical simulations).
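
    The sequential switching described above is easy to reproduce numerically. The following minimal sketch (our illustration, not the paper's construction) integrates a three-species May-Leonard-type Lotka-Volterra system whose attracting heteroclinic cycle makes the dominant species switch in a fixed, predictable order, with residence times near each equilibrium growing along the trajectory.

      import numpy as np
      from scipy.integrate import solve_ivp

      # May-Leonard competition: rows are [1, alpha, beta] cyclically.
      # alpha > 1 > beta with alpha + beta > 2 gives an attracting
      # heteroclinic cycle between the three single-species equilibria.
      alpha, beta = 1.3, 0.8
      A = np.array([[1.0, alpha, beta],
                    [beta, 1.0, alpha],
                    [alpha, beta, 1.0]])

      def glv(t, x):
          # Generalized Lotka-Volterra: dx_i/dt = x_i * (1 - (A x)_i)
          return x * (1.0 - A @ x)

      sol = solve_ivp(glv, (0.0, 300.0), [0.6, 0.3, 0.1],
                      dense_output=True, rtol=1e-10, atol=1e-12)
      t = np.linspace(0.0, 300.0, 3000)
      x = sol.sol(t)
      # Dominant species at regular time intervals: a fixed cyclic order.
      print(np.argmax(x, axis=0)[::150])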

  20. Attractors in complex networks.

    PubMed

    Rodrigues, Alexandre A P

    2017-10-01

    In the framework of the generalized Lotka-Volterra model, solutions representing multispecies sequential competition can be predictable with high probability. In this paper, we show that it occurs because the corresponding "heteroclinic channel" forms part of an attractor. We prove that, generically, in an attracting heteroclinic network involving a finite number of hyperbolic and non-resonant saddle-equilibria whose linearization has only real eigenvalues, the connections corresponding to the most positive expanding eigenvalues form part of an attractor (observable in numerical simulations).

  1. Three-dimensional Simulation and Prediction of Solenoid Valve Failure Mechanism Based on Finite Element Model

    NASA Astrophysics Data System (ADS)

    Li, Jianfeng; Xiao, Mingqing; Liang, Yajun; Tang, Xilang; Li, Chao

    2018-01-01

    The solenoid valve is a basic automation component that is applied widely. Analyzing and predicting its degradation and failure mechanisms is important for improving the reliability of solenoid valves and for research on life extension. In this paper, a three-dimensional finite element analysis model of a solenoid valve is established in the ANSYS Workbench software, and a sequential coupling method for calculating the temperature field and the mechanical stress field of the solenoid valve is put forward. The simulation results show that the sequential coupling method calculates and analyzes the temperature and stress distributions of the solenoid valve accurately, which has been verified through an accelerated life test. A Kalman filtering algorithm is introduced into the data processing, which effectively reduces measurement deviation and recovers more accurate data. Based on different driving currents, a failure mechanism that readily causes the degradation of the coils is identified, and an optimized design scheme for the electro-insulating rubbers is proposed. The high temperature generated by the driving current and the thermal stress resulting from thermal expansion easily degrade the coil wires, which lowers the electrical resistance of the coils and results in the eventual failure of the solenoid valve. The finite element analysis method can be applied to fault diagnosis and prognostics of various solenoid valves and can improve the reliability of solenoid valve health management.
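
    The sequential (one-way) coupling described here passes the result of a thermal solve to a structural solve. The toy calculation below (lumped illustrative parameters, not the paper's ANSYS model) shows the chain: a driving current produces Joule heating, and the resulting temperature rise produces a constrained thermoelastic stress.

      # One-way sequential thermo-mechanical coupling, lumped-parameter toy.
      E_CU = 110e9       # Young's modulus of copper, Pa
      ALPHA_CU = 17e-6   # thermal expansion coefficient of copper, 1/K
      R_COIL = 40.0      # assumed coil resistance, ohm
      H_LOSS = 0.15      # assumed lumped convective loss, W/K

      def steady_temperature_rise(i_drive):
          """Thermal step: Joule heating balanced by convective loss."""
          return i_drive ** 2 * R_COIL / H_LOSS

      def thermal_stress(delta_t):
          """Structural step: fully constrained thermoelastic stress."""
          return E_CU * ALPHA_CU * delta_t

      for i_drive in (0.10, 0.15, 0.20):          # driving currents, A
          dT = steady_temperature_rise(i_drive)   # step 1: temperature field
          sigma = thermal_stress(dT)              # step 2: stress from dT
          print(f"I={i_drive:.2f} A  dT={dT:6.1f} K  sigma={sigma/1e6:6.1f} MPa")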

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soria-Lara, Julio A., E-mail: julio.soria-lara@ouce.ox.ac.uk; Bertolini, Luca, E-mail: l.bertolini@uva.nl; Brömmelstroet, Marco te, E-mail: M.C.G.teBrommelstroet@uva.nl

    The integration of knowledge from stakeholders and the public at large is seen as one of the biggest process-related barriers during the scoping phase of EIA application in transport planning. While the academic literature offers abundant analyses, discussions and suggestions on how to overcome this problem, the proposed solutions are yet to be adequately tested in practice. In order to address this gap, we test the effectiveness of a set of interventions and trigger mechanisms for improving different aspects of knowledge integration. The interventions are tested in an experiential study with two sequential cases, representing “close-to-real-life” conditions, in the context of two cities in Andalusia, Spain. In general terms, the participants perceived that the integration of knowledge improved during the simulation of the EIA scoping phase. Certain shortcomings were also discussed, fundamentally related to how the time spent during the scoping phase was crucial to an effective learning process among the people involved. The study concludes with a reflection on the effectiveness of the tested interventions according to similarities and differences obtained from the two experiential case studies, as well as with a discussion of the potential to generate new knowledge through the use of experiential studies in EIA practice. - Highlights: • It tests a set of interventions and mechanisms to improve the integration of knowledge. • The scoping phase of EIA is simulated to assess the effectiveness of interventions. • Two sequential case studies are used.

  3. Microstructural Characterization of the Heat-Affected Zones in Grade 92 Steel Welds: Double-Pass and Multipass Welds

    NASA Astrophysics Data System (ADS)

    Xu, X.; West, G. D.; Siefert, J. A.; Parker, J. D.; Thomson, R. C.

    2018-04-01

    The microstructure in the heat-affected zone (HAZ) of multipass welds typical of those used in power plants and made from 9 wt pct chromium martensitic Grade 92 steel is complex. Therefore, there is a need for systematic microstructural investigations to define the different regions of the microstructure across the HAZ of Grade 92 steel welds manufactured using the traditional arc welding processes in order to understand possible failure mechanisms after long-term service. In this study, the microstructure in the HAZ of an as-fabricated two-pass bead-on-plate weld on a parent metal of Grade 92 steel has been systematically investigated and compared to a complex, multipass thick section weldment using an extensive range of electron and ion-microscopy-based techniques. A dilatometer has been used to apply controlled thermal cycles to simulate the microstructures in distinctly different regions in a multipass HAZ using sequential thermal cycles. A wide range of microstructural properties in the simulated materials were characterized and compared with the experimental observations from the weld HAZ. It has been found that the microstructure in the HAZ can be categorized by a combination of sequential thermal cycles experienced by the different zones within the complex weld metal, using the terminology developed for these regions based on a simpler, single-pass bead-on-plate weld, categorized as complete transformation, partial transformation, and overtempered.

  4. A reduced order model based on Kalman filtering for sequential data assimilation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Meldi, M.; Poux, A.

    2017-10-01

    A Kalman-filter-based sequential estimator is presented in this work. The estimator is integrated into the structure of segregated solvers for the analysis of incompressible flows. This technique provides an augmented flow state that integrates available observations into the CFD model, naturally preserving a zero-divergence condition for the velocity field. Because of the prohibitive costs associated with a complete Kalman filter application, two model reduction strategies have been proposed and assessed. These strategies dramatically reduce the increase in computational cost of the model, which can be quantified as a 10-15% increase with respect to the classical numerical simulation. In addition, an extended analysis of the behavior of the numerical model covariance Q has been performed. Optimized values are strongly linked to the truncation error of the discretization procedure. The estimator has been applied to the analysis of a number of test cases of increasing complexity, including turbulent flow configurations. The results show that the augmented flow successfully improves the prediction of the physical quantities investigated, even when the observation is provided in a limited region of the physical domain. In addition, the present work suggests that these data assimilation techniques, which are at an embryonic stage of development in CFD, may have the potential to be pushed even further, using the augmented prediction as a powerful tool for the optimization of the free parameters in the numerical simulation.
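
    The idea of the sequential estimator can be conveyed with a scalar toy problem. Below is a minimal sketch (ours, not the authors' solver integration): a biased "model" step is corrected at every iteration by a Kalman analysis step that assimilates a noisy observation, the same forecast/analysis loop the paper embeds in a segregated CFD solver.

      import numpy as np

      rng = np.random.default_rng(0)
      truth = 1.0                 # true steady value of the observed quantity
      q, r = 1e-3, 1e-2           # model-error (Q) and observation-error (R)
      x_est, p = 0.0, 1.0         # augmented state and its error variance

      for step in range(50):
          # Forecast: advance the (biased) model, inflate the variance by Q.
          x_est = x_est + 0.1 * (0.8 - x_est)   # model relaxes to wrong value
          p = p + q
          # Analysis: assimilate a noisy observation of the truth.
          y = truth + rng.normal(0.0, np.sqrt(r))
          k = p / (p + r)                       # Kalman gain
          x_est = x_est + k * (y - x_est)
          p = (1.0 - k) * p

      print(f"estimate = {x_est:.3f} (truth {truth}), variance = {p:.2e}")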

  5. Sequential Multiple Assignment Randomized Trial (SMART) with Adaptive Randomization for Quality Improvement in Depression Treatment Program

    PubMed Central

    Chakraborty, Bibhas; Davidson, Karina W.

    2015-01-01

    Summary Implementation studies are an important tool for deploying state-of-the-art treatments from clinical efficacy studies into a treatment program, with the dual goals of learning about the effectiveness of the treatments and improving the quality of care for patients enrolled in the program. In this article, we deal with the design of a treatment program of dynamic treatment regimens (DTRs) for patients with depression after acute coronary syndrome. We introduce a novel adaptive randomization scheme for a sequential multiple assignment randomized trial of DTRs. Our approach adapts the randomization probabilities to favor treatment sequences having comparatively superior Q-functions, as used in Q-learning. The proposed approach addresses three main concerns of an implementation study: it allows incorporation of historical data or opinions, it includes randomization for learning purposes, and it aims to improve care via adaptation throughout the program. We demonstrate how to apply our method to design a depression treatment program using data from a previous study. By simulation, we illustrate that the inputs from historical data are important for the program performance, measured by the expected outcomes of the enrollees, but we also show that the adaptive randomization scheme is able to compensate for poorly specified historical inputs by improving patient outcomes within a reasonable horizon. The simulation results also confirm that the proposed design allows efficient learning of the treatments by alleviating the curse of dimensionality. PMID:25354029
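
    A minimal sketch of Q-guided adaptive randomization follows (an illustration of the idea with made-up response rates, not the authors' SMART design): the randomization probabilities are tilted toward the arm whose running outcome estimate, standing in for a Q-function, is currently superior, while every arm keeps a nonzero assignment probability for learning purposes.

      import numpy as np

      rng = np.random.default_rng(1)
      true_rate = {"A": 0.55, "B": 0.70}   # hypothetical response rates
      n = {"A": 1.0, "B": 1.0}             # pseudo-counts from historical inputs
      s = {"A": 0.5, "B": 0.5}

      for patient in range(200):
          q = {arm: s[arm] / n[arm] for arm in ("A", "B")}  # running estimates
          w = np.exp(np.array([q["A"], q["B"]]) / 0.1)      # softmax, temp. 0.1
          arm = "A" if rng.random() < w[0] / w.sum() else "B"
          s[arm] += float(rng.random() < true_rate[arm])    # binary outcome
          n[arm] += 1.0

      print({arm: round(s[arm] / n[arm], 3) for arm in ("A", "B")},
            {arm: int(n[arm]) for arm in ("A", "B")})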

  6. Update schemes of multi-velocity floor field cellular automaton for pedestrian dynamics

    NASA Astrophysics Data System (ADS)

    Luo, Lin; Fu, Zhijian; Cheng, Han; Yang, Lizhong

    2018-02-01

    Modeling pedestrian movement is an interesting problem in both statistical and computational physics. Update schemes of cellular automaton (CA) models for pedestrian dynamics govern the schedule of pedestrian movement. Different update schemes usually make the models behave in different ways, which must be carefully recalibrated. Thus, in this paper, we investigated the influence of four different update schemes, namely the parallel/synchronous scheme, the random scheme, the order-sequential scheme and the shuffled scheme, on pedestrian dynamics. A multi-velocity floor field cellular automaton (FFCA) was used that accounts for the changes in pedestrians' moving properties along walking paths and for the heterogeneity of pedestrians' walking abilities. Only the parallel scheme requires collision detection and resolution, which sets it markedly apart from the other update schemes. For pedestrian evacuation under the parallel scheme, the evacuation time is longer and the difference in pedestrians' walking abilities is better reflected. In the face of a bottleneck, for example an exit, the parallel scheme leads to a longer congestion period and a more dispersive density distribution. The exit flow and the space-time distributions of density and velocity show significant discrepancies among the four update schemes when pedestrian flow with a high desired velocity is simulated. Update schemes may not affect simulated pedestrians' tendency to follow others, but the sequential and shuffled update schemes may enhance the effect of pedestrians' familiarity with their environment.
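
    The effect of the update scheme can be seen even in a drastically simplified model. The sketch below (a single-lane loop, not the multi-velocity FFCA of the paper) moves each pedestrian one cell forward when the cell ahead is free, under a parallel and a shuffled scheme; the two schemes already yield different flows because a sequentially updated pedestrian may enter a cell vacated earlier in the same sweep.

      import numpy as np

      rng = np.random.default_rng(2)
      L, N, STEPS = 100, 40, 2000   # ring length, pedestrians, sweeps

      def mean_flow(scheme):
          occ = np.zeros(L, dtype=bool)
          occ[rng.choice(L, N, replace=False)] = True
          moves = 0
          for _ in range(STEPS):
              if scheme == "parallel":
                  ahead_free = ~occ[(np.arange(L) + 1) % L]
                  idx = np.where(occ & ahead_free)[0]  # decide on the old state
                  occ[idx] = False
                  occ[(idx + 1) % L] = True
                  moves += idx.size
              else:  # shuffled sequential
                  for i in rng.permutation(np.where(occ)[0]):
                      j = (i + 1) % L
                      if not occ[j]:
                          occ[i], occ[j] = False, True
                          moves += 1
          return moves / (STEPS * N)

      print("parallel:", round(mean_flow("parallel"), 3),
            " shuffled:", round(mean_flow("shuffled"), 3))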

  7. Space time modelling of air quality for environmental-risk maps: A case study in South Portugal

    NASA Astrophysics Data System (ADS)

    Soares, Amilcar; Pereira, Maria J.

    2007-10-01

    Since the 1960s, there has been strong industrial development in the Sines area, on the southern Atlantic coast of Portugal, including the construction of an important industrial harbour and of, mainly, petrochemical and energy-related industries. These industries are nowadays responsible for substantial emissions of SO2, NOx, particles, VOCs and part of the ozone polluting the atmosphere. The major industries are spatially concentrated in a restricted area, very close to populated areas and to natural resources such as those protected by the European Natura 2000 network. Air quality parameters are measured at the emission sources and at a few monitoring stations. Although air quality parameters are measured on an hourly basis, the lack of spatial representativeness of these non-homogeneous phenomena makes even their representativeness in time questionable. Hence, in this study, the regional spatial dispersion of contaminants is also evaluated using diffusive-sampler (Radiello Passive Sampler) campaigns during given periods. Diffusive samplers cover the entire space extensively, but only for a limited period of time. In the first step of this study, a space-time model of pollutants was built, based on a stochastic simulation (direct sequential simulation) with a local spatial trend. The spatial dispersion of the contaminants for a given period of time, corresponding to the exposure time of the diffusive samplers, was computed by ordinary kriging. Direct sequential simulation was applied to produce equiprobable spatial maps for each day of that period, using the kriged map as a spatial trend and the daily measurements of pollutants from the monitoring stations as hard data. In the second step, the following environmental risk and cost maps were computed from the set of simulated realizations of pollutants: (i) maps of the contribution of each emission to the pollutant concentration at any spatial location; (ii) costs of badly located monitoring stations.
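
    The sequential simulation engine at the heart of this approach can be sketched in a few lines. The 1-D toy below is ours and is Gaussian-type for brevity; direct sequential simulation instead samples the local distribution from the global histogram using the same simple-kriging mean and variance, and production codes add search neighbourhoods, grids and trend handling. The loop visits the unknown nodes along a random path, kriges from all values known so far, draws from the local distribution, and adds the draw to the conditioning set.

      import numpy as np

      rng = np.random.default_rng(3)
      n, a, sill = 60, 10.0, 1.0                    # nodes, range, variance
      cov = lambda h: sill * np.exp(-3.0 * np.abs(h) / a)  # exponential model

      vals = {5: 1.2, 30: -0.8, 55: 0.5}            # hard conditioning data

      for i in rng.permutation([k for k in range(n) if k not in vals]):
          known = np.array(sorted(vals), dtype=float)
          z = np.array([vals[k] for k in sorted(vals)])
          C = cov(known[:, None] - known[None, :])  # data-to-data covariances
          c0 = cov(known - i)                       # data-to-target covariances
          w = np.linalg.solve(C, c0)                # simple-kriging weights
          mean, var = w @ z, max(sill - w @ c0, 0.0)
          vals[i] = rng.normal(mean, np.sqrt(var))  # draw, then treat as data

      realization = np.array([vals[k] for k in range(n)])
      print(realization.round(2)[:10])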

  8. Establishing the minimal number of virtual reality simulator training sessions necessary to develop basic laparoscopic skills competence: evaluation of the learning curve.

    PubMed

    Duarte, Ricardo Jordão; Cury, José; Oliveira, Luis Carlos Neves; Srougi, Miguel

    2013-01-01

    The medical literature offers little information for defining a basic skills training program for laparoscopic surgery (peg transfer, cutting, clipping). The aim of this study was to determine the minimal number of simulator sessions of basic laparoscopic tasks necessary to design an optimal virtual reality training curriculum. Eleven medical students with no previous laparoscopic experience were voluntarily enrolled. They underwent simulator training sessions starting at level 1 (Immersion Lap VR, San Jose, CA), covering, in sequence, camera handling, peg and transfer, clipping and cutting. Each student trained twice a week until 10 sessions were completed, and the score indexes were recorded and analyzed. The total number of errors in the evaluation sequences (camera, peg and transfer, clipping and cutting) was computed and then correlated with the total number of items evaluated in each step, yielding a success percentage for each student in each set of each completed session. From these, the cumulative success rate over the 10 sessions was computed to analyze the learning process. The learning curve was analyzed by non-linear regression, which yielded r2 = 0.73 (p < 0.001): 4.26 (approximately five) sessions were necessary to reach the plateau of 80% of the estimated acquired knowledge, and 100% of the students reached this skill level. From the fifth session to the 10th, the gain in knowledge was not significant, although some students reached 96% of the expected improvement. This study revealed that the students' learning curve reaches a plateau after five sequential simulator training sessions. Further sessions at the same difficulty level do not improve basic laparoscopic surgical skills, and students should instead be introduced to more difficult training tasks.
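
    The learning-curve analysis amounts to fitting a saturating curve and solving for the session at which 80% of the asymptotic gain is reached. A minimal sketch with synthetic scores (not the study's data) is shown below.

      import numpy as np
      from scipy.optimize import curve_fit

      sessions = np.arange(1, 11)
      scores = np.array([0.42, 0.58, 0.66, 0.74, 0.79,
                         0.81, 0.82, 0.83, 0.83, 0.84])  # synthetic ratios

      def learning(x, lo, hi, rate):
          # Saturating exponential: starts near lo, plateaus at hi.
          return lo + (hi - lo) * (1.0 - np.exp(-rate * x))

      (lo, hi, rate), _ = curve_fit(learning, sessions, scores,
                                    p0=(0.3, 0.9, 0.3))
      # 80% of the gain: 1 - exp(-rate * x) = 0.8  =>  x = ln(5) / rate
      x80 = np.log(5.0) / rate
      print(f"sessions to reach 80% of the estimated gain: {x80:.2f}")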

  9. Sequential structural damage diagnosis algorithm using a change point detection method

    NASA Astrophysics Data System (ADS)

    Noh, H.; Rajagopal, R.; Kiremidjian, A. S.

    2013-11-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method. The general change point detection method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori, unless we are looking for a known specific type of damage. Therefore, we introduce an additional algorithm that estimates and updates this distribution as data are collected, using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using a set of experimental data collected from a four-story steel special moment-resisting frame and multiple sets of simulated data. Various features of different dimensions were explored, and the algorithm was able to identify damage, particularly when it used multidimensional damage-sensitive features and lower false alarm rates with a known post-damage feature distribution. For unknown feature distribution cases, the post-damage distribution was consistently estimated and the detection delays were only a few time steps longer than the delays from the general method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.
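
    The core of such a sequential detector is a running log-likelihood-ratio statistic compared against a threshold. The sketch below (illustrative Gaussian features; the post-damage mean is taken as known here, whereas the paper estimates it online) uses the CUSUM recursion and declares damage once the statistic exceeds the threshold.

      import numpy as np

      rng = np.random.default_rng(4)
      mu0, mu1, sigma, h = 0.0, 1.0, 1.0, 8.0  # pre/post means, std, threshold
      data = np.concatenate([rng.normal(mu0, sigma, 200),    # undamaged
                             rng.normal(mu1, sigma, 200)])   # damage at t = 200

      s = 0.0
      for t, x in enumerate(data):
          # Log-likelihood ratio of post- vs pre-damage Gaussians, one sample.
          llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
          s = max(0.0, s + llr)                 # CUSUM recursion
          if s > h:
              print(f"damage declared at t = {t} (true change at t = 200)")
              break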

  10. Damage diagnosis algorithm using a sequential change point detection method with an unknown distribution for damage

    NASA Astrophysics Data System (ADS)

    Noh, Hae Young; Rajagopal, Ram; Kiremidjian, Anne S.

    2012-04-01

    This paper introduces a damage diagnosis algorithm for civil structures that uses a sequential change point detection method for the cases where the post-damage feature distribution is unknown a priori. This algorithm extracts features from structural vibration data using time-series analysis and then declares damage using the change point detection method. The change point detection method asymptotically minimizes detection delay for a given false alarm rate. The conventional method uses the known pre- and post-damage feature distributions to perform a sequential hypothesis test. In practice, however, the post-damage distribution is unlikely to be known a priori. Therefore, our algorithm estimates and updates this distribution as data are collected using the maximum likelihood and the Bayesian methods. We also applied an approximate method to reduce the computation load and memory requirement associated with the estimation. The algorithm is validated using multiple sets of simulated data and a set of experimental data collected from a four-story steel special moment-resisting frame. Our algorithm was able to estimate the post-damage distribution consistently and resulted in detection delays only a few seconds longer than the delays from the conventional method that assumes we know the post-damage feature distribution. We confirmed that the Bayesian method is particularly efficient in declaring damage with minimal memory requirement, but the maximum likelihood method provides an insightful heuristic approach.

  11. A Fast and Precise Indoor Localization Algorithm Based on an Online Sequential Extreme Learning Machine †

    PubMed Central

    Zou, Han; Lu, Xiaoxuan; Jiang, Hao; Xie, Lihua

    2015-01-01

    Nowadays, developing indoor positioning systems (IPSs) has become an attractive research topic due to the increasing demands on location-based service (LBS) in indoor environments. WiFi technology has been studied and explored to provide indoor positioning service for years in view of the wide deployment and availability of existing WiFi infrastructures in indoor environments. A large body of WiFi-based IPSs adopt fingerprinting approaches for localization. However, these IPSs suffer from two major problems: the intensive costs of manpower and time for offline site survey and the inflexibility to environmental dynamics. In this paper, we propose an indoor localization algorithm based on an online sequential extreme learning machine (OS-ELM) to address the above problems accordingly. The fast learning speed of OS-ELM can reduce the time and manpower costs for the offline site survey. Meanwhile, its online sequential learning ability enables the proposed localization algorithm to adapt in a timely manner to environmental dynamics. Experiments under specific environmental changes, such as variations of occupancy distribution and events of opening or closing of doors, are conducted to evaluate the performance of OS-ELM. The simulation and experimental results show that the proposed localization algorithm can provide higher localization accuracy than traditional approaches, due to its fast adaptation to various environmental dynamics. PMID:25599427
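
    The online sequential update that gives OS-ELM its speed is a recursive least-squares step on the output weights of a fixed random hidden layer. A minimal numeric sketch follows (random data standing in for RSS fingerprints and positions; not the paper's implementation).

      import numpy as np

      rng = np.random.default_rng(5)
      d_in, n_hidden, d_out = 6, 40, 2        # e.g. 6 APs -> 2-D position
      W = rng.normal(size=(d_in, n_hidden))   # fixed random input weights
      b = rng.normal(size=n_hidden)
      H = lambda X: 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid hidden layer

      # Initialization phase on a small offline batch (X0, T0).
      X0, T0 = rng.normal(size=(60, d_in)), rng.normal(size=(60, d_out))
      H0 = H(X0)
      P = np.linalg.inv(H0.T @ H0 + 1e-6 * np.eye(n_hidden))
      beta = P @ H0.T @ T0

      # Sequential phase: fold in new samples one at a time.
      for _ in range(100):
          x, t = rng.normal(size=(1, d_in)), rng.normal(size=(1, d_out))
          h = H(x)
          P = P - P @ h.T @ np.linalg.inv(np.eye(1) + h @ P @ h.T) @ h @ P
          beta = beta + P @ h.T @ (t - h @ beta)

      print("output weight matrix:", beta.shape)   # (n_hidden, d_out)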

  12. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

    This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method can take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the design tool uses a composite objective function, in conjunction with weight-ordered design objectives, to take conflicting and multiple design criteria into account. The design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while also handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem in sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using (i) the linear extended interior penalty function method and (ii) Powell's conjugate directions method. Both single- and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as applied to this design problem.
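
    Sequential unconstrained minimization replaces a constrained problem by a sequence of penalized unconstrained solves with an increasing penalty factor. The sketch below applies an exterior quadratic penalty to a two-variable toy problem (not the report's cover-plate model, which uses a linear extended interior penalty).

      import numpy as np
      from scipy.optimize import minimize

      def f(x):              # objective, e.g. a weight-like measure
          return (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2

      def g(x):              # constraint g(x) <= 0, e.g. a stress limit
          return x[0] + x[1] - 4.0

      x, r = np.array([0.0, 0.0]), 1.0
      for _ in range(6):     # sequence of unconstrained minimizations
          phi = lambda x: f(x) + r * max(g(x), 0.0) ** 2
          x = minimize(phi, x, method="Nelder-Mead").x
          r *= 10.0          # tighten the penalty each cycle
      print("x* =", x.round(3), " g(x*) =", round(g(x), 4))  # -> (2.5, 1.5)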

  13. A novel sequential vegetable production facility for life support system in space

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Berkovich, Yuliy A.; Liu, Hong; Fu, Yuming; Shao, Lingzhi; Erokhin, A. N.; Wang, Minjuan

    2012-07-01

    Vegetable cultivation plays a crucial role in dietary supplementation and in the psychosocial benefits to the crew during manned space flight. Onboard vegetable cultivation has generally been proposed as the first step of food regeneration in space life support systems. Here, a novel sequential vegetable production facility was developed, which was able to simulate microgravity conditions and carry out modularized cultivation of leaf vegetables. Its growth chamber (GC) had a conical form and a volume of 0.12 m³. Its planting surface of 0.154 m² comprised six ring-shaped root modules with a fibrous ion-exchange resin substrate. The root modules were fastened to a central porous tube supplying water and moved along it as the plants grew. The total illuminated crop area of 0.567 m² was provided by a combination of red and white light-emitting diodes distributed on the internal surface of the GC cone. In tests with a 24-hr photoperiod, the productivity of the facility at 0.3 kW reached 254.3 g of edible lettuce biomass per week. Compared to lettuce from the market, the quality of the lettuce from the facility did not change significantly during long-term cultivation. Our results demonstrate that the facility is highly efficient in vegetable production and basically meets the application requirements of the space microgravity environment. Keywords: vegetable; modularized cultivation; sequential production; life support system

  14. Optical Tracking Data Validation and Orbit Estimation for Sparse Observations of Satellites by the OWL-Net.

    PubMed

    Choi, Jin; Jo, Jung Hyun; Yim, Hong-Suh; Choi, Eun-Jung; Cho, Sungki; Park, Jang-Hyun

    2018-06-07

    An Optical Wide-field patroL-Network (OWL-Net) has been developed for maintaining the orbital ephemerides of Korean low Earth orbit (LEO) satellites. The OWL-Net consists of five optical tracking stations. Brightness signals of sunlight reflected from the targets were detected by a charge-coupled device (CCD). A chopper system was adopted for fast astrometric data sampling, at a maximum of 50 Hz, within a short observation time. The astrometric accuracy of the optical observation data was validated against precise orbital ephemerides such as Consolidated Prediction File (CPF) data and a precise orbit determination result based on onboard Global Positioning System (GPS) data from the target satellite. In the optical observation simulation of the OWL-Net for 2017, the average observation span for a single arc of 11 LEO observation targets was about 5 min, while the average separation between optical observations was 5 h. We estimated the position and velocity, together with an atmospheric drag coefficient, of the LEO observation targets using a sequential-batch orbit estimation technique applied after multi-arc batch orbit estimation. Post-fit residuals for the multi-arc batch and sequential-batch orbit estimations were analyzed for the optical measurements and the reference orbits (CPF and GPS data). The post-fit residuals with respect to the references show errors of a few tens of meters in the in-track direction for both the multi-arc batch and sequential-batch orbit estimation results.

  15. How to Compress Sequential Memory Patterns into Periodic Oscillations: General Reduction Rules

    PubMed Central

    Zhang, Kechen

    2017-01-01

    A neural network with symmetric reciprocal connections always admits a Lyapunov function, whose minima correspond to the memory states stored in the network. Networks with suitable asymmetric connections can store and retrieve a sequence of memory patterns, but the dynamics of these networks cannot be characterized as readily as that of the symmetric networks due to the lack of established general methods. Here, a reduction method is developed for a class of asymmetric attractor networks that store sequences of activity patterns as associative memories, as in a Hopfield network. The method projects the original activity pattern of the network to a low-dimensional space such that sequential memory retrievals in the original network correspond to periodic oscillations in the reduced system. The reduced system is self-contained and provides quantitative information about the stability and speed of sequential memory retrievals in the original network. The time evolution of the overlaps between the network state and the stored memory patterns can also be determined from extended reduced systems. The reduction procedure can be summarized by a few reduction rules, which are applied to several network models, including coupled networks and networks with time-delayed connections, and the analytical solutions of the reduced systems are confirmed by numerical simulations of the original networks. Finally, a local learning rule that provides an approximation to the connection weights involving the pseudoinverse is also presented. PMID:24877729
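
    The kind of network being reduced can be illustrated directly. In the sketch below (a minimal asymmetric Hopfield-style model, not the paper's reduced system), each stored pattern is wired to drive its successor, so synchronous updates retrieve the patterns cyclically; this is the periodic behaviour the reduction method captures in a low-dimensional system.

      import numpy as np

      rng = np.random.default_rng(6)
      N, P = 400, 4                            # neurons, stored patterns
      xi = rng.choice([-1, 1], size=(P, N))    # random binary patterns
      # Asymmetric Hebbian rule: pattern mu drives pattern mu + 1 (cyclically).
      W = sum(np.outer(xi[(m + 1) % P], xi[m]) for m in range(P)) / N

      s = xi[0].copy()
      for t in range(8):                       # synchronous updates
          s = np.sign(W @ s)
          overlaps = xi @ s / N                # overlap with each pattern
          print(t, "closest pattern:", int(np.argmax(overlaps)))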

  16. The Sequential Probability Ratio Test: An efficient alternative to exact binomial testing for Clean Water Act 303(d) evaluation.

    PubMed

    Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry

    2017-05-01

    The United States' Clean Water Act stipulates in section 303(d) that states must identify impaired water bodies, for which total maximum daily loads (TMDLs) of pollution inputs are developed. Decision-making procedures about how to list, or delist, water bodies as impaired, or not, per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to achieve Type I and Type II error rates comparable to those of the current fixed-sample binomial test. Policymakers might consider efficient alternatives such as the SPRT to the current procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
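
    Wald's SPRT is simple to state in code. The sketch below (illustrative exceedance rates and error targets, not California's regulatory values) accumulates the log-likelihood ratio sample by sample and stops as soon as either boundary is crossed, which is why it needs fewer samples on average than a fixed-sample binomial test.

      import numpy as np

      p0, p1, alpha, beta = 0.05, 0.25, 0.05, 0.05  # H0/H1 rates, error targets
      lower = np.log(beta / (1.0 - alpha))          # accept-H0 boundary
      upper = np.log((1.0 - beta) / alpha)          # declare-impaired boundary

      def sprt(samples):
          llr = 0.0
          for n, exceed in enumerate(samples, start=1):
              llr += np.log(p1 / p0) if exceed else np.log((1 - p1) / (1 - p0))
              if llr >= upper:
                  return "impaired", n
              if llr <= lower:
                  return "not impaired", n
          return "undecided", len(samples)

      rng = np.random.default_rng(7)
      print(sprt(rng.random(100) < 0.30))   # impaired water body: early stop
      print(sprt(rng.random(100) < 0.02))   # clean water body: early stop too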

  17. Implementation of digital equality comparator circuit on memristive memory crossbar array using material implication logic

    NASA Astrophysics Data System (ADS)

    Haron, Adib; Mahdzair, Fazren; Luqman, Anas; Osman, Nazmie; Junid, Syed Abdul Mutalib Al

    2018-03-01

    One of the most significant constraints of the von Neumann architecture is the limited bandwidth between memory and processor. The cost of moving data back and forth between memory and processor is considerably higher than that of the computation in the processor itself. This architecture significantly impacts Big Data and data-intensive applications, such as DNA sequence comparison, which spend most of their processing time moving data. Recently, the in-memory processing concept was proposed, based on the capability to perform logic operations on the physical memory structure using a crossbar topology and non-volatile resistive-switching memristor technology. This paper proposes a scheme to map a digital equality comparator circuit onto a memristive memory crossbar array. The 2-bit, 4-bit, 8-bit, 16-bit, 32-bit, and 64-bit equality comparator circuits are mapped onto the crossbar array using material implication logic, in both sequential and parallel methods. The simulation results show that, for the 64-bit word size, the parallel mapping exhibits 2.8× better total execution time than the sequential mapping but trades off energy consumption and area utilization. Meanwhile, by using the overlapping technique, the total crossbar area can be reduced by 1.2× for the sequential mapping and 1.5× for the parallel mapping.
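
    Material implication (IMPLY) together with FALSE is functionally complete, which is what makes crossbar mapping possible. The gate-level sketch below (a logic simulation, not a device-level memristor model) builds an n-bit equality comparator from IMPLY alone: per-bit XNORs that are AND-reduced, corresponding to the sequential mapping described in the paper.

      def imply(p, q):      # one IMPLY step: p IMP q = (NOT p) OR q
          return (not p) or bool(q)

      def not_(p):          # NOT p = p IMP FALSE
          return imply(p, False)

      def and_(x, y):       # AND(x, y) = NOT(x IMP (NOT y))
          return not_(imply(x, not_(y)))

      def xnor(a, b):       # a == b  =  (a IMP b) AND (b IMP a)
          return and_(imply(a, b), imply(b, a))

      def equal(word_a, word_b):
          # n-bit equality: AND-reduce the per-bit XNORs one bit at a time,
          # mirroring the sequential mapping (the parallel mapping evaluates
          # the per-bit XNORs concurrently before the reduction).
          result = True
          for a, b in zip(word_a, word_b):
              result = and_(result, xnor(a, b))
          return result

      print(equal([1, 0, 1, 1], [1, 0, 1, 1]))   # True
      print(equal([1, 0, 1, 1], [1, 0, 0, 1]))   # False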

  18. Family or Future in the Academy?

    ERIC Educational Resources Information Center

    Ahmad, Seher

    2017-01-01

    This article critically reviews recent literature on the relationship between family formation and academic-career progression, emphasizing obstacles women face seeking a tenured position and beyond. Evidence indicates that the pipeline model is dominated by "ideal worker" norms. These norms impose rigid, tightly coupled, sequential,…

  19. COMOPTEVFOR Acronym and Abbreviation List (CAAL).

    DTIC Science & Technology

    1981-10-01

    KAST Kalman Automatic Sequential …; KIAS Knots Indicated Airspeed; LAAWC Local …; MFON Missile Firing Order Normal; MFP Main Feed Pump; MGG … Guided Glide Bomb; MGS Motor Generator Set; MGU Midcourse Guidance Unit

  20. Using virtual worlds for role play simulation in child and adolescent psychiatry: an evaluation study

    PubMed Central

    Vallance, Aaron K.; Hemani, Ashish; Fernandez, Victoria; Livingstone, Daniel; McCusker, Kerri; Toro-Troconis, Maria

    2014-01-01

    Aims and method To develop and evaluate a novel teaching session on clinical assessment using role play simulation. Teaching and research sessions occurred sequentially in computer laboratories. Ten medical students were divided into two online small-group teaching sessions. Students role-played as clinician avatars and the teacher played a suicidal adolescent avatar. Questionnaire and focus-group methodology evaluated participants’ attitudes to the learning experience. Quantitative data were analysed using SPSS, qualitative data through nominal-group and thematic analyses. Results Participants reported improvements in psychiatric skills/knowledge, expressing less anxiety and more enjoyment than role-playing face to face. Data demonstrated a positive relationship between simulator fidelity and perceived utility. Some participants expressed concern about added value over other learning methods and non-verbal communication. Clinical implications The study shows that virtual worlds can successfully host role play simulation, valued by students as a useful learning method. The potential for distance learning would allow delivery irrespective of geographical distance and boundaries. PMID:25285217
