Sample records for direct sequential simulation

  1. Parallelization of sequential Gaussian, indicator and direct simulation algorithms

    NASA Astrophysics Data System (ADS)

    Nunes, Ruben; Almeida, José A.

    2010-08-01

    Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
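
    The inner loop shared by all three algorithms (and parallelized in the paper) is short enough to sketch. Below is a minimal, illustrative 1D sequential Gaussian simulation in Python, not the modified GSLIB C code described above: nodes are visited along a random path, a small simple-kriging system is solved using previously simulated neighbours, and a value is drawn from the resulting conditional Gaussian. The covariance model, neighbourhood size and all names are assumptions chosen for the sketch.

```python
import numpy as np

def gaussian_cov(h, sill=1.0, prange=10.0):
    """Assumed Gaussian covariance model C(h) with practical range prange."""
    return sill * np.exp(-3.0 * (np.asarray(h, dtype=float) / prange) ** 2)

def sgs_1d(n=100, max_neighbors=8, seed=0):
    """Toy unconditional sequential Gaussian simulation on a 1D grid."""
    rnd = np.random.default_rng(seed)
    x = np.arange(n, dtype=float)
    z = np.full(n, np.nan)
    for node in rnd.permutation(n):          # random simulation path
        known = np.where(~np.isnan(z))[0]    # previously simulated nodes
        if known.size == 0:
            z[node] = rnd.standard_normal()  # first node: marginal N(0, 1)
            continue
        near = known[np.argsort(np.abs(x[known] - x[node]))[:max_neighbors]]
        C = gaussian_cov(np.abs(x[near][:, None] - x[near][None, :]))
        c0 = gaussian_cov(np.abs(x[near] - x[node]))
        w = np.linalg.solve(C + 1e-10 * np.eye(near.size), c0)  # SK weights
        mean = w @ z[near]                   # simple-kriging mean (m = 0)
        var = max(float(gaussian_cov(0.0) - w @ c0), 0.0)
        z[node] = mean + np.sqrt(var) * rnd.standard_normal()
    return z

field = sgs_1d()   # one realization; rerun with other seeds for an ensemble
```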

  2. Multiuser signal detection using sequential decoding

    NASA Astrophysics Data System (ADS)

    Xie, Zhenhua; Rushforth, Craig K.; Short, Robert T.

    1990-05-01

    The application of sequential decoding to the detection of data transmitted over the additive white Gaussian noise channel by K asynchronous transmitters using direct-sequence spread-spectrum multiple access is considered. A modification of Fano's (1963) sequential-decoding metric, allowing the messages from a given user to be safely decoded if that user's Eb/N0 exceeds -1.6 dB, is presented. Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm. In many circumstances, the sequential decoder achieves results comparable to those obtained using the much more complicated optimal receiver.
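
    As a rough illustration of the machinery the abstract refers to, the sketch below implements a stack-algorithm sequential decoder with the Fano metric for a single user over a binary symmetric channel and a toy rate-1/2 convolutional code; the paper's modified metric for K asynchronous users over the Gaussian channel is substantially more involved. The generator polynomials, channel model and all names are assumptions for the sketch.

```python
import heapq
import numpy as np

def encode(bits):
    """Toy rate-1/2 convolutional encoder, generators (7, 5) octal, K = 3."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state
        out += [((reg >> 2) ^ (reg >> 1) ^ reg) & 1,   # generator 111
                ((reg >> 2) ^ reg) & 1]                # generator 101
        state = reg >> 1
    return out

def stack_decode(received, n_bits, p=0.05, rate=0.5):
    """Stack-algorithm search with the Fano metric for a BSC(p):
    per coded bit, metric = log2 P(r|c) + 1 - rate."""
    good, bad = np.log2(1 - p) + 1 - rate, np.log2(p) + 1 - rate
    heap = [(0.0, 0, 0, ())]   # (-metric, depth, encoder state, path)
    while heap:
        neg_m, depth, state, path = heapq.heappop(heap)  # best node first
        if depth == n_bits:
            return list(path)
        for b in (0, 1):       # extend the best path by one branch
            reg = (b << 2) | state
            out = (((reg >> 2) ^ (reg >> 1) ^ reg) & 1, ((reg >> 2) ^ reg) & 1)
            r = received[2 * depth: 2 * depth + 2]
            m = sum(good if o == ri else bad for o, ri in zip(out, r))
            heapq.heappush(heap, (neg_m - m, depth + 1, reg >> 1, path + (b,)))

rng = np.random.default_rng(1)
msg = rng.integers(0, 2, 20).tolist()
rx = [c ^ int(rng.random() < 0.05) for c in encode(msg)]   # BSC noise
# Often recovers msg; no tail bits are appended, so the final bits are the
# least protected.
print(stack_decode(rx, len(msg)) == msg)
```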

  3. Modeling of a Sequential Two-Stage Combustor

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.; Liu, N.-S.; Gallagher, J. R.; Ryder, R. C.; Brankovic, A.; Hendricks, J. A.

    2005-01-01

    A sequential two-stage, natural gas fueled power generation combustion system is modeled to examine the fundamental aerodynamic and combustion characteristics of the system. The modeling methodology includes CAD-based geometry definition, and combustion computational fluid dynamics analysis. Graphical analysis is used to examine the complex vortical patterns in each component, identifying sources of pressure loss. The simulations demonstrate the importance of including the rotating high-pressure turbine blades in the computation, as this results in direct computation of combustion within the first turbine stage, and accurate simulation of the flow in the second combustion stage. The direct computation of hot-streaks through the rotating high-pressure turbine stage leads to improved understanding of the aerodynamic relationships between the primary and secondary combustors and the turbomachinery.

  4. C-learning: A new classification framework to estimate optimal dynamic treatment regimes.

    PubMed

    Zhang, Baqun; Zhang, Min

    2017-12-11

    A dynamic treatment regime is a sequence of decision rules, each corresponding to a decision point, that determines the next treatment based on each individual's available characteristics and treatment history up to that point. We show that identifying the optimal dynamic treatment regime can be recast as a sequential optimization problem and propose a direct sequential optimization method to estimate the optimal treatment regimes. In particular, at each decision point, the optimization is equivalent to sequentially minimizing a weighted expected misclassification error. Based on this classification perspective, we propose a powerful and flexible C-learning algorithm to learn the optimal dynamic treatment regimes backward sequentially from the last stage to the first. C-learning is a direct optimization method that directly targets the decision rules by exploiting powerful optimization/classification techniques, and it allows incorporation of patients' characteristics and treatment history to improve performance, hence enjoying the advantages of both the traditional outcome regression-based methods (Q- and A-learning) and the more recent direct optimization methods. The superior performance and flexibility of the proposed methods are illustrated through extensive simulation studies. © 2017, The International Biometric Society.
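
    The recasting step at the heart of the method, minimizing a weighted expected misclassification error at a decision point, can be sketched compactly. The toy below handles a single stage with threshold ("stump") rules: the sign of an estimated treatment contrast supplies the classification label and its magnitude the weight. In the paper the same step is applied backward from the last stage to the first with flexible classifiers; this single-stage stump version is an assumed illustration, not the authors' implementation.

```python
import numpy as np

def c_learning_stage(X, contrast):
    """Weighted-classification step: contrast[i] estimates the treatment
    effect for subject i; its sign is the label (treat or not) and its
    magnitude the weight. Search axis-aligned stumps for the rule with
    the smallest weighted misclassification error."""
    labels = contrast > 0
    weights = np.abs(contrast)
    best = (np.inf, None, None)
    for j in range(X.shape[1]):                 # feature
        for t in np.unique(X[:, j]):            # threshold
            for sign in (1, -1):                # direction
                pred = sign * (X[:, j] - t) > 0
                err = weights[pred != labels].sum()
                if err < best[0]:
                    best = (err, j, (t, sign))
    return best

# Toy single-stage example; the true optimal rule is 'treat if X0 > 0.2'.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
contrast = X[:, 0] - 0.2 + rng.normal(scale=0.1, size=200)
err, feature, (threshold, sign) = c_learning_stage(X, contrast)
print(feature, round(float(threshold), 2))   # approximately (0, 0.2)
```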

  5. A posteriori model validation for the temporal order of directed functional connectivity maps.

    PubMed

    Beltz, Adriene M; Molenaar, Peter C M

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
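
    A minimal version of the white noise check at the core of this procedure takes only a few lines. The sketch below applies a Ljung-Box portmanteau test to one-step-ahead prediction errors; the published procedure pairs such tests with Lagrange Multiplier tests to decide which lagged connections to add, which is not reproduced here.

```python
import numpy as np
from scipy import stats

def ljung_box(resid, max_lag=10):
    """Ljung-Box white noise test: Q = n(n+2) * sum_k acf_k^2 / (n-k),
    chi-square with max_lag degrees of freedom under the null of
    sequentially independent residuals."""
    e = np.asarray(resid, dtype=float)
    e = e - e.mean()
    n = e.size
    lags = np.arange(1, max_lag + 1)
    acf = np.array([np.sum(e[k:] * e[:-k]) for k in lags]) / np.sum(e ** 2)
    Q = n * (n + 2) * np.sum(acf ** 2 / (n - lags))
    return Q, stats.chi2.sf(Q, df=max_lag)

# AR(1) residuals should fail the test; white noise should pass.
rng = np.random.default_rng(0)
wn = rng.standard_normal(300)
ar = np.empty(300)
ar[0] = wn[0]
for t in range(1, 300):
    ar[t] = 0.5 * ar[t - 1] + wn[t]
print(ljung_box(wn)[1], ljung_box(ar)[1])   # large p-value, then tiny one
```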

  7. A path-level exact parallelization strategy for sequential simulation

    NASA Astrophysics Data System (ADS)

    Peredo, Oscar F.; Baeza, Daniel; Ortiz, Julián M.; Herrero, José R.

    2018-01-01

    Sequential simulation is a well-known method in geostatistical modelling. Following the Bayesian approach for simulation of conditionally dependent random events, the Sequential Indicator Simulation (SIS) method draws simulated values for K categories (categorical case) or for classes defined by K different thresholds (continuous case). Similarly, the Sequential Gaussian Simulation (SGS) method draws simulated values from a multivariate Gaussian field. In this work, a path-level approach to parallelizing the SIS and SGS methods is presented. A first stage re-arranges the simulation path, followed by a second stage of parallel simulation of non-conflicting nodes. A key advantage of the proposed parallelization method is that it generates realizations identical to those of the original non-parallelized methods. Case studies are presented using two sequential simulation codes from GSLIB: SISIM and SGSIM. Execution time and speedup results are shown for large-scale domains, with many categories and maximum kriging neighbours in each case, achieving high speedups in the best scenarios using 16 threads of execution on a single machine.
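
    The essence of the path-level idea can be viewed as level-scheduling a dependency graph: each node depends on the path-predecessors inside its kriging search neighbourhood, and nodes with no mutual dependencies can be simulated concurrently without changing the realization. The sketch below is an assumed simplification of the paper's two-stage re-arrangement, with a fixed isotropic search radius standing in for the full neighbourhood logic.

```python
import numpy as np

def wave_schedule(path, coords, radius):
    """Partition a sequential simulation path into 'waves'. Node n depends
    on every path-predecessor within `radius`; a wave collects pending nodes
    whose dependencies are already simulated, so its members can be computed
    in parallel while each node sees exactly the conditioning data it would
    see in the sequential run."""
    deps = {}
    for i, n in enumerate(path):
        prev = path[:i]
        if prev:
            d = np.linalg.norm(coords[prev] - coords[n], axis=1)
            deps[n] = {m for m, dist in zip(prev, d) if dist <= radius}
        else:
            deps[n] = set()
    waves, done, pending = [], set(), list(path)
    while pending:
        wave = [n for n in pending if deps[n] <= done]
        waves.append(wave)
        done.update(wave)
        pending = [n for n in pending if n not in done]
    return waves

rng = np.random.default_rng(0)
coords = rng.random((200, 2)) * 50
path = list(rng.permutation(200))
waves = wave_schedule(path, coords, radius=3.0)
print(len(waves), [len(w) for w in waves[:3]])   # a few large early waves
```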

  8. Sequential capture of CO2 and SO2 in a pressurized TGA simulating FBC conditions.

    PubMed

    Sun, Ping; Grace, John R; Lim, C Jim; Anthony, Edward J

    2007-04-15

    Four FBC-based processes were investigated as possible means of sequentially capturing SO2 and CO2. Sorbent performance is the key to their technical feasibility. Two sorbents (a limestone and a dolomite) were tested in a pressurized thermogravimetric analyzer (PTGA). The sorbent behaviors were explained based on complex interaction between carbonation, sulfation, and direct sulfation. The best option involved using limestone or dolomite as a SO2-sorbent in a FBC combustor following cyclic CO2 capture. Highly sintered limestone is a good sorbent for SO2 because of the generation of macropores during calcination/carbonation cycling.

  9. SMA texture and reorientation: simulations and neutron diffraction studies

    NASA Astrophysics Data System (ADS)

    Gao, Xiujie; Brown, Donald W.; Brinson, L. Catherine

    2005-05-01

    With the increased usage of shape memory alloys (SMA) for applications in various fields, it is important to understand how the material behavior is affected by factors such as texture, stress state and loading history, especially for complex multiaxial loading states. Using the in-situ neutron diffraction loading facility (SMARTS diffractometer) and the ex-situ inverse pole figure measurement facility (HIPPO diffractometer) at the Los Alamos Neutron Science Center (LANSCE), the macroscopic mechanical behavior and texture evolution of Nickel-Titanium (Nitinol) SMAs under sequential compression in alternating directions were studied. The simplified multivariant model developed at Northwestern University was then used to simulate the macroscopic behavior and the microstructural change of Nitinol under this sequential loading. Pole figures were obtained via post-processing of the multivariant results for volume fraction evolution and compared quantitatively well with the experimental results. The experimental results can also be used to test or verify other SMA constitutive models.

  10. Sequential causal inference: Application to randomized trials of adaptive treatment strategies

    PubMed Central

    Dawson, Ree; Lavori, Philip W.

    2009-01-01

    Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points among the adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the ‘standard' approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal ‘one-step' estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies. PMID:17914714

  11. Sequential slip transfer of mixed-character dislocations across Σ3 coherent twin boundary in FCC metals: a concurrent atomistic-continuum study

    DOE PAGES

    Xu, Shuozhi; Xiong, Liming; Chen, Youping; ...

    2016-01-29

    Sequential slip transfer across grain boundaries (GBs) plays an important role in the size-dependent propagation of plastic deformation in polycrystalline metals. For example, the Hall–Petch effect, which states that a smaller average grain size results in a higher yield stress, can be rationalised in terms of dislocation pile-ups against GBs. In spite of extensive studies in modelling individual phases and grains using atomistic simulations, well-accepted criteria for slip transfer across GBs are still lacking, as are models for predicting irreversible GB structure evolution. Slip transfer is inherently multiscale since both the atomic structure of the boundary and the long-range fields of the dislocation pile-up come into play. In this work, concurrent atomistic-continuum simulations are performed to study sequential slip transfer of a series of curved dislocations from a given pile-up on a Σ3 coherent twin boundary (CTB) in Cu and Al, with dominant leading screw character at the site of interaction. A Frank-Read source is employed to nucleate dislocations continuously. It is found that, subject to a shear stress of 1.2 GPa, screw dislocations transfer into the twinned grain in Cu, but glide on the twin boundary plane in Al. Moreover, four dislocation/CTB interaction modes are identified in Al, which are affected by (1) applied shear stress, (2) dislocation line length, and (3) dislocation line curvature. Our results elucidate the discrepancies between atomistic simulations and experimental observations of dislocation-GB reactions and highlight the importance of directly modeling sequential dislocation slip transfer reactions using fully 3D models.

  13. Suppressing correlations in massively parallel simulations of lattice models

    NASA Astrophysics Data System (ADS)

    Kelling, Jeffrey; Ódor, Géza; Gemming, Sibylle

    2017-11-01

    For lattice Monte Carlo simulations, parallelization is crucial to make studies of large systems and long simulation times feasible, while sequential simulations remain the gold standard for correlation-free dynamics. Here, various domain decomposition schemes are compared, concluding with one that delivers virtually correlation-free simulations on GPUs. Extensive simulations of the octahedron model for 2 + 1 dimensional Kardar-Parisi-Zhang surface growth, which is very sensitive to correlations in the site-selection dynamics, were performed to show self-consistency of the parallel runs and agreement with the sequential algorithm. We present a GPU implementation providing a speedup of about 30× over a parallel CPU implementation on a single socket and at least 180× with respect to the sequential reference.
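
    For orientation, the simplest decomposition of this kind is the two-sublattice checkerboard, sketched below for a 2D Ising model: same-parity sites share no bonds, so each half-lattice can be updated concurrently. The paper targets far more demanding dynamics (the octahedron model of 2+1 dimensional KPZ growth) and more refined schemes, so this block illustrates only the baseline idea.

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep of a 2D Ising model with periodic boundaries
    using a checkerboard decomposition; all sites of one parity are
    updated simultaneously, which is valid because they do not interact."""
    ii, jj = np.indices(spins.shape)
    for parity in (0, 1):
        mask = (ii + jj) % 2 == parity
        nb = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nb                  # energy change if flipped
        accept = rng.random(spins.shape) < np.exp(-beta * np.clip(dE, 0, None))
        spins[mask & accept] *= -1
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(64, 64))
for _ in range(100):
    checkerboard_sweep(spins, beta=0.6, rng=rng)
print(abs(spins.mean()))   # beta > 0.44: magnetisation builds up
```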

  14. CFD simulation of hemodynamics in sequential and individual coronary bypass grafts based on multislice CT scan datasets.

    PubMed

    Hajati, Omid; Zarrabi, Khalil; Karimi, Reza; Hajati, Azadeh

    2012-01-01

    There is still controversy over the differences in the patency rates of the sequential and individual coronary artery bypass grafting (CABG) techniques. The purpose of this paper was to non-invasively evaluate hemodynamic parameters using complete 3D computational fluid dynamics (CFD) simulations of the sequential and the individual methods based on patient-specific data extracted from computed tomography (CT) angiography. For CFD analysis, the geometric model of the coronary arteries was reconstructed using an ECG-gated 64-detector row CT. Modeling the sequential and individual bypass grafting, this study simulates the flow from the aorta to the occluded posterior descending artery (PDA) and the posterior left ventricle (PLV) vessel with six coronary branches, based on the physiologically measured inlet flow as the boundary condition. The maximum calculated wall shear stress (WSS) in the sequential and the individual models was estimated to be 35.1 N/m² and 36.5 N/m², respectively. Compared to the individual bypass method, the sequential graft showed a higher velocity at the proximal segment and a lower spatial wall shear stress gradient (SWSSG) due to the flow splitting caused by the side-to-side anastomosis. The simulation results, combined with surgical benefits including the shorter vein length required and fewer anastomoses, advocate the sequential method as the more favorable CABG method.

  15. Parallel Multi-cycle LES of an Optical Pent-roof DISI Engine Under Motored Operating Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Dam, Noah; Sjöberg, Magnus; Zeng, Wei

    The use of Large-eddy Simulations (LES) has increased due to their ability to resolve the turbulent fluctuations of engine flows and capture the resulting cycle-to-cycle variability. One drawback of LES, however, is the requirement to run multiple engine cycles to obtain the necessary cycle statistics for full validation. The standard method of obtaining the cycles, running a single simulation through many engine cycles sequentially, can take a long time to complete. Recently, a new strategy has been proposed by our research group to reduce the amount of time necessary to simulate the many engine cycles by running individual engine cycle simulations in parallel. With modern large computing systems this has the potential to reduce the amount of time necessary for a full set of simulated engine cycles to finish by up to an order of magnitude. In this paper, the Parallel Perturbation Methodology (PPM) is used to simulate up to 35 engine cycles of an optically accessible, pent-roof Direct-injection Spark-ignition (DISI) engine at two different motored engine operating conditions, one throttled and one un-throttled. Comparisons are made against corresponding sequential-cycle simulations to verify the similarity of results using either methodology. Mean results from the PPM approach are very similar to sequential-cycle results, with less than 0.5% difference in pressure and a magnitude structure index (MSI) of 0.95. Differences in cycle-to-cycle variability (CCV) predictions are larger, but close to the statistical uncertainty in the measurement for the number of cycles simulated. PPM LES results were also compared against experimental data. Mean quantities such as pressure or mean velocities were typically matched to within 5-10%. Pressure CCVs were under-predicted, mostly due to the lack of any perturbations in the pressure boundary conditions between cycles. Velocity CCVs for the simulations had the same average magnitude as experiments, but the experimental data showed greater spatial variation in the root-mean-square (RMS). Conversely, circular standard deviation results showed greater repeatability of the flow directionality and swirl vortex positioning than the simulations.

  16. The VENUS/NWChem software package. Tight coupling between chemical dynamics simulations and electronic structure theory

    NASA Astrophysics Data System (ADS)

    Lourderaj, Upakarasamy; Sun, Rui; Kohale, Swapnil C.; Barnes, George L.; de Jong, Wibe A.; Windus, Theresa L.; Hase, William L.

    2014-03-01

    The interface for VENUS and NWChem, and the resulting software package for direct dynamics simulations are described. The coupling of the two codes is considered to be a tight coupling since the two codes are compiled and linked together and act as one executable with data being passed between the two codes through routine calls. The advantages of this type of coupling are discussed. The interface has been designed to have as little interference as possible with the core codes of both VENUS and NWChem. VENUS is the code that propagates the direct dynamics trajectories and, therefore, is the program that drives the overall execution of VENUS/NWChem. VENUS has remained an essentially sequential code, which uses the highly parallel structure of NWChem. Subroutines of the interface that accomplish the data transmission and communication between the two computer programs are described. Recent examples of the use of VENUS/NWChem for direct dynamics simulations are summarized.

  17. A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL)

    NASA Technical Reports Server (NTRS)

    Carroll, Chester C.; Owen, Jeffrey E.

    1988-01-01

    A direct-execution parallel architecture for the Advanced Continuous Simulation Language (ACSL) is presented which overcomes the traditional disadvantages of simulations executed on a digital computer. The incorporation of parallel processing allows the mapping of simulations into a digital computer to be done in the same inherently parallel manner as they are currently mapped onto an analog computer. The direct-execution format maximizes the efficiency of the executed code since the need for a high level language compiler is eliminated. Resolution is greatly increased over that which is available with an analog computer without the sacrifice in execution speed normally expected with digital computer simulations. Although this report covers all aspects of the new architecture, key emphasis is placed on the processing element configuration and the microprogramming of the ACSL constructs. The execution times for all ACSL constructs are computed using a model of a processing element based on the AMD 29000 CPU and the AMD 29027 FPU. The increase in execution speed provided by parallel processing is exemplified by comparing the derived execution times of two ACSL programs with the execution times for the same programs executed on a similar sequential architecture.

  18. Radiative transport produced by oblique illumination of turbid media with collimated beams

    NASA Astrophysics Data System (ADS)

    Gardner, Adam R.; Kim, Arnold D.; Venugopalan, Vasan

    2013-06-01

    We examine the general problem of light transport initiated by oblique illumination of a turbid medium with a collimated beam. This situation has direct relevance to the analysis of cloudy atmospheres, terrestrial surfaces, soft condensed matter, and biological tissues. We introduce a solution approach to the equation of radiative transfer that governs this problem, and develop a comprehensive spherical harmonics expansion method utilizing Fourier decomposition (SHEFN). The SHEFN approach enables the solution of problems lacking azimuthal symmetry and provides both the spatial and directional dependence of the radiance. We also introduce the method of sequential-order smoothing that enables the calculation of accurate solutions from the results of two sequential low-order approximations. We apply the SHEFN approach to determine the spatial and angular dependence of both internal and boundary radiances from strongly and weakly scattering turbid media. These solutions are validated using more costly Monte Carlo simulations and reveal important insights regarding the evolution of the radiant field generated by oblique collimated beams spanning ballistic and diffusely scattering regimes.

  19. Single step sequential polydimethylsiloxane wet etching to fabricate a microfluidic channel with various cross-sectional geometries

    NASA Astrophysics Data System (ADS)

    Wang, C.-K.; Liao, W.-H.; Wu, H.-M.; Lo, Y.-H.; Lin, T.-R.; Tung, Y.-C.

    2017-11-01

    Polydimethylsiloxane (PDMS) has become a widely used material for constructing microfluidic devices for various biomedical and chemical applications due to its desirable material properties and manufacturability. PDMS microfluidic devices are usually fabricated using soft lithography replica molding methods with master molds made of photolithography-patterned photoresist layers on silicon wafers. The fabricated microfluidic channels often have rectangular cross-sectional geometries with single or multiple heights. In this paper, we develop a single-step sequential PDMS wet etching process that can be used to fabricate microfluidic channels with various cross-sectional geometries from single-layer PDMS microfluidic channels. The cross-sections of the fabricated channel can be non-rectangular and varied along the flow direction. Furthermore, the fabricated cross-sectional geometries can be numerically simulated beforehand. In the experiments, we fabricate microfluidic channels with various cross-sectional geometries using the developed technique. In addition, we fabricate a microfluidic mixer with alternating mirrored cross-sectional geometries along the flow direction to demonstrate the practical usage of the developed technique.

  20. Learning directed acyclic graphs from large-scale genomics data.

    PubMed

    Nikolay, Fabio; Pesavento, Marius; Kritikos, George; Typas, Nassos

    2017-09-20

    In this paper, we consider the problem of learning the genetic interaction map, i.e., the topology of a directed acyclic graph (DAG) of genetic interactions, from noisy double-knockout (DK) data. Based on a set of well-established biological interaction models, we detect and classify the interactions between genes. We propose a novel linear integer optimization program called the Genetic-Interactions-Detector (GENIE) to identify the complex biological dependencies among genes and to compute the DAG topology that best matches the DK measurements. Furthermore, we extend the GENIE program by incorporating genetic interaction profile (GI-profile) data to further enhance the detection performance. In addition, we propose a sequential scalability technique for large sets of genes under study, in order to provide statistically significant results for real measurement data. Finally, we show via numerical simulations that the GENIE program and the GI-profile data extended GENIE (GI-GENIE) program clearly outperform the conventional techniques, and we present real data results for our proposed sequential scalability technique.

  1. Multiple point statistical simulation using uncertain (soft) conditional data

    NASA Astrophysics Data System (ADS)

    Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou

    2018-05-01

    Geostatistical simulation methods have been used to quantify the spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have changed from being based on covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditional to uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not account properly for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited preferentially to less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly to uncertain (soft) data and hence provide a computationally attractive approach for integrating information about a reservoir model.
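
    The first suggested approach, a preferential simulation path, amounts to visiting better-informed nodes first. A minimal sketch under assumed names: rank nodes by the entropy of their soft (probabilistic) datum, breaking ties at random, and use that ranking as the simulation path.

```python
import numpy as np

def binary_entropy(p):
    """Entropy (nats) of a soft probability for a binary category."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def preferential_path(soft_prob, rng):
    """Visit better-informed nodes first: sort by the entropy of the soft
    datum (np.nan where none exists, so uninformed nodes come last), with
    random tie-breaking inside equal-information groups."""
    h = np.where(np.isnan(soft_prob), np.inf, binary_entropy(soft_prob))
    return np.lexsort((rng.random(soft_prob.size), h))  # primary key: h

rng = np.random.default_rng(0)
soft = np.full(100, np.nan)
soft[:10] = rng.random(10)                 # soft data at 10 nodes only
print(preferential_path(soft, rng)[:10])   # the 10 informed nodes come first
```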

  2. STOCHSIMGPU: parallel stochastic simulation for the Systems Biology Toolbox 2 for MATLAB.

    PubMed

    Klingbeil, Guido; Erban, Radek; Giles, Mike; Maini, Philip K

    2011-04-15

    The importance of stochasticity in biological systems is becoming increasingly recognized, and the computational cost of biologically realistic stochastic simulations urgently requires the development of efficient software. We present a new software tool, STOCHSIMGPU, that exploits graphics processing units (GPUs) for parallel stochastic simulations of biological/chemical reaction systems, and show that significant gains in efficiency can be made. It is integrated into MATLAB and works with the Systems Biology Toolbox 2 (SBTOOLBOX2) for MATLAB. The GPU-based parallel implementation of the Gillespie stochastic simulation algorithm (SSA), the logarithmic direct method (LDM) and the next reaction method (NRM) is approximately 85 times faster than the sequential implementation of the NRM on a central processing unit (CPU). Using our software does not require any changes to the user's models, since it acts as a direct replacement of the stochastic simulation software of the SBTOOLBOX2. Availability: the software is open source under the GPL v3 and available at http://www.maths.ox.ac.uk/cmb/STOCHSIMGPU; the web site also contains supplementary information. Contact: klingbeil@maths.ox.ac.uk. Supplementary data are available at Bioinformatics online.
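
    As a single-trajectory reference for what the GPU code runs in parallel across many realizations, here is a minimal Gillespie direct-method SSA in Python; the LDM and NRM named above differ in how the next reaction is located, not in the sampled distribution. The dimerisation example and its rate constants are assumptions for the sketch.

```python
import numpy as np

def gillespie_direct(x0, stoich, rates, t_end, rng):
    """Gillespie's direct method: draw the waiting time from the total
    propensity, pick the firing reaction proportionally to its propensity,
    apply its state-change vector, repeat."""
    t, x = 0.0, np.asarray(x0, dtype=float).copy()
    times, states = [t], [x.copy()]
    while t < t_end:
        a = rates(x)
        a0 = a.sum()
        if a0 <= 0:                           # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)        # time to the next reaction
        j = rng.choice(len(a), p=a / a0)      # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Example: dimerisation A + A -> B (k1 = 0.001), B -> A + A (k2 = 0.1).
rng = np.random.default_rng(0)
stoich = np.array([[-2.0, 1.0], [2.0, -1.0]])
rates = lambda x: np.array([0.001 * x[0] * (x[0] - 1), 0.1 * x[1]])
t, s = gillespie_direct([1000, 0], stoich, rates, 10.0, rng)
print(len(t), s[-1])
```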

  3. Sequential Computerized Mastery Tests--Three Simulation Studies

    ERIC Educational Resources Information Center

    Wiberg, Marie

    2006-01-01

    A simulation study of a sequential computerized mastery test is carried out with items modeled with the 3 parameter logistic item response theory model. The examinees' responses are either identically distributed, not identically distributed, or not identically distributed together with estimation errors in the item characteristics. The…
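
    A sequential computerized mastery test of this kind can be sketched with a 3PL item response model and Wald's sequential probability ratio test between a non-mastery and a mastery ability level. The item parameters and cut points below are assumptions for illustration; the study's actual simulation conditions (non-identical response distributions, estimation error in item characteristics) are richer.

```python
import numpy as np

def p3pl(theta, a, b, c):
    """3-parameter logistic IRT probability of a correct response."""
    return c + (1 - c) / (1 + np.exp(-1.7 * a * (theta - b)))

def sprt_mastery(responses, items, theta0=-0.5, theta1=0.5,
                 alpha=0.05, beta=0.05):
    """SPRT of mastery (theta1) against non-mastery (theta0); items is a
    sequence of (a, b, c) triples. Stop when the log-likelihood ratio
    crosses a boundary."""
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for u, (a, b, c) in zip(responses, items):
        p1, p0 = p3pl(theta1, a, b, c), p3pl(theta0, a, b, c)
        llr += np.log(p1 / p0) if u else np.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "master"
        if llr <= lower:
            return "non-master"
    return "undecided"   # item pool exhausted before a boundary was crossed

rng = np.random.default_rng(0)
items = [(rng.uniform(0.8, 2.0), rng.normal(), 0.2) for _ in range(60)]
responses = [rng.random() < p3pl(0.8, *it) for it in items]   # able examinee
print(sprt_mastery(responses, items))   # most often 'master'
```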

  4. Managing numerical errors in random sequential adsorption

    NASA Astrophysics Data System (ADS)

    Cieśla, Michał; Nowak, Aleksandra

    2016-09-01

    The aim of this study is to examine the influence of a finite surface size and a finite simulation time on the packing fraction estimated using random sequential adsorption simulations. A goal of particular interest is to provide hints on the simulation setup needed to achieve a desired level of accuracy. The analysis is based on the properties of saturated random packings of disks on continuous, flat surfaces of different sizes.
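
    The simulation under analysis is easy to state concretely. A minimal random sequential adsorption of equal disks with periodic boundaries and a fixed attempt budget is sketched below; both the finite surface size L and the finite budget bias the estimated packing fraction, which is precisely the effect the paper quantifies.

```python
import numpy as np

def rsa_disks(L=50.0, radius=1.0, attempts=200_000, seed=0):
    """Random sequential adsorption of equal disks on an L x L periodic
    surface: propose uniform random centers, reject any that overlap an
    accepted disk, and report the packing fraction after the budget."""
    rng = np.random.default_rng(seed)
    centers = np.empty((0, 2))
    for _ in range(attempts):
        p = rng.random(2) * L
        if centers.size:
            d = np.abs(centers - p)
            d = np.minimum(d, L - d)           # periodic minimum image
            if (d[:, 0] ** 2 + d[:, 1] ** 2 < (2 * radius) ** 2).any():
                continue                       # overlap: reject attempt
        centers = np.vstack([centers, p])
    return np.pi * radius ** 2 * len(centers) / L ** 2

# Approaches the saturation coverage of roughly 0.547 from below as the
# attempt budget grows; finite L adds its own bias.
print(rsa_disks())
```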

  5. Numerical Simulation of Rolling-Airframes Using a Multi-Level Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A supersonic rolling missile with two synchronous canard control surfaces is analyzed using an automated, inviscid, Cartesian method. Sequential-static and time-dependent dynamic simulations of the complete motion are computed for canard dither schedules for level flight, pitch, and yaw maneuvers. The dynamic simulations are compared directly against both high-resolution viscous simulations and relevant experimental data, and are also utilized to compute dynamic stability derivatives. The results show that both the body roll rate and the canard dither motion influence the roll-averaged forces and moments on the body. At the relatively low roll rates analyzed in the current work these dynamic effects are modest; however, the dynamic computations are effective in predicting the dynamic stability derivatives, which can be significant for highly-maneuverable missiles.

  6. First-principles simulations of heat transport

    NASA Astrophysics Data System (ADS)

    Puligheddu, Marcello; Gygi, Francois; Galli, Giulia

    2017-11-01

    Advances in understanding heat transport in solids were recently reported by both experiment and theory. However, an efficient and predictive quantum simulation framework to investigate the thermal properties of solids, with the same complexity as classical simulations, has not yet been developed. Here we present a method to compute the thermal conductivity of solids by performing ab initio molecular dynamics at close-to-equilibrium conditions, which only requires calculations of first-principles trajectories and atomic forces, thus avoiding direct computation of heat currents and energy densities. In addition, the method requires much shorter sequential simulation times than ordinary molecular dynamics techniques, making it applicable within density functional theory. We discuss results for a representative oxide, MgO, at different temperatures and for ordered and nanostructured morphologies, showing the performance of the method in different conditions.

  7. Vertical drying of a suspension of sticks: Monte Carlo simulation for continuous two-dimensional problem

    NASA Astrophysics Data System (ADS)

    Lebovka, Nikolai I.; Tarasevich, Yuri Yu.; Vygornitskii, Nikolai V.

    2018-02-01

    The vertical drying of a two-dimensional colloidal film containing zero-thickness sticks (lines) was studied by means of kinetic Monte Carlo (MC) simulations. The continuous two-dimensional problem for both the positions and orientations was considered. The initial state before drying was produced using a model of random sequential adsorption with isotropic orientations of the sticks. During the evaporation, an upper interface falls with a linear velocity in the vertical direction, and the sticks undergo translational and rotational Brownian motions. The MC simulations were run at different initial number concentrations (the number of sticks per unit area), p_i, and solvent evaporation rates, u. For completely dried films, the spatial distributions of the sticks, the order parameters, and the electrical conductivities of the films in both the horizontal, x, and vertical, y, directions were examined. Significant evaporation-driven self-assembly and stratification of the sticks in the vertical direction was observed. The extent of stratification increased with increasing values of u. The anisotropy of the electrical conductivity of the film can be finely regulated by changes in the values of p_i and u.

  8. Research on parallel algorithm for sequential pattern mining

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time period by finding frequent sequences. In recent years, sequential pattern mining has become an important direction in data mining, and its application field is no longer confined to business databases; it has extended to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data in sequential pattern mining are characterized by massive data volume and distributed storage, and most existing sequential pattern mining algorithms have not considered these characteristics jointly. Motivated by these characteristics and drawing on parallel computing theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm follows the principle of pattern reduction and utilizes the divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying the frequency concept and search-space partitioning theory; the second task is to construct frequent sequences at each processor using depth-first search. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and differently designed information structures, the SPP algorithm was simulated in a concrete parallel environment, alongside an implementation of the AprioriAll algorithm. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.
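
    The depth-first, candidate-free pattern growth that SPP distributes across processors can be shown in miniature. The sketch below is a minimal PrefixSpan-style miner for sequences of single items, not the SPP algorithm itself: it grows a prefix, projects every sequence onto its suffix after that prefix, and recurses only on items that remain frequent.

```python
def frequent_sequences(db, min_support):
    """Pattern-growth mining of frequent subsequences. db is a list of
    sequences (lists of items); a pattern's support is the number of
    sequences containing it as a subsequence."""
    results = []

    def mine(prefix, projected):
        counts = {}
        for seq in projected:                  # support of one-item extensions
            for item in set(seq):
                counts[item] = counts.get(item, 0) + 1
        for item in sorted(counts):
            if counts[item] < min_support:
                continue
            pattern = prefix + [item]
            results.append((pattern, counts[item]))
            # project: keep the suffix after the first occurrence of item
            suffixes = [s[s.index(item) + 1:] for s in projected if item in s]
            mine(pattern, [s for s in suffixes if s])

    mine([], db)
    return results

db = [["a", "b", "c"], ["a", "c"], ["b", "a", "c"]]
print(frequent_sequences(db, min_support=2))
# [(['a'], 3), (['a', 'c'], 3), (['b'], 2), (['b', 'c'], 2), (['c'], 3)]
```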

  9. A near-optimal guidance for cooperative docking maneuvers

    NASA Astrophysics Data System (ADS)

    Ciarcià, Marco; Grompone, Alessio; Romano, Marcello

    2014-09-01

    In this work we study the problem of minimum-energy docking maneuvers between two Floating Spacecraft Simulators. The maneuvers are planar and conducted autonomously in a cooperative mode. The proposed guidance strategy is based on the direct method known as Inverse Dynamics in the Virtual Domain and the nonlinear programming solver known as the Sequential Gradient-Restoration Algorithm. The combination of these methods allows for the quick prototyping of near-optimal trajectories and results in an implementable tool for real-time closed-loop maneuvering. The experimental results included in this paper were obtained by exploiting the recently upgraded Floating Spacecraft Simulator Testbed of the Spacecraft Robotics Laboratory at the Naval Postgraduate School. A direct performance comparison, in terms of maneuver energy and propellant mass, between the proposed guidance strategy and an LQR controller demonstrates the effectiveness of the method.

  10. The involvement of immunoglobulin E isotype switch in scleroderma skin tissue.

    PubMed

    Ohtsuka, Tsutomu; Yamazaki, Soji

    2005-08-01

    The involvement of mast cells, which are activated by immunoglobulin E (IgE), has been reported in the formation of systemic sclerosis (SSc) abnormality. IgE is generated through isotype switching. During isotype switching, switch circles resulting from direct mu to epsilon switching, or from sequential mu to epsilon switching via gamma, are created. We studied whether such switching occurs in SSc, using nested polymerase chain reaction to analyze the S fragments from switch circles. Fifty-two patients with SSc and 62 healthy women were studied. None of the 62 normal skin tissues showed direct or sequential switching, and none of seven normal whole-blood samples showed direct or sequential switching. Of the 52 SSc skin tissues, three (5.8%) showed direct switching and two (3.8%) showed sequential switching; in total, five (9.6%) of the SSc skin tissues showed immunoglobulin E class switching. These results were confirmed by DNA sequencing and demonstrate that isotype switching to the epsilon locus, achieved by direct and/or sequential switching, is involved in SSc skin.

  11. Three-dimensional mapping of equiprobable hydrostratigraphic units at the Frenchman Flat Corrective Action Unit, Nevada Test Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shirley, C.; Pohlmann, K.; Andricevic, R.

    1996-09-01

    Geological and geophysical data are used with the sequential indicator simulation algorithm of Gomez-Hernandez and Srivastava to produce multiple, equiprobable, three-dimensional maps of informal hydrostratigraphic units at the Frenchman Flat Corrective Action Unit, Nevada Test Site. The upper 50 percent of the Tertiary volcanic lithostratigraphic column comprises the study volume. Semivariograms are modeled from indicator-transformed geophysical tool signals. Each equiprobable study volume is subdivided into discrete classes using the ISIM3D implementation of the sequential indicator simulation algorithm. Hydraulic conductivity is assigned within each class using the sequential Gaussian simulation method of Deutsch and Journel. The resulting maps show the contiguity of high and low hydraulic conductivity regions.

  12. Experimental Array for Generating Dual Circularly-Polarized Dual-Mode OAM Radio Beams.

    PubMed

    Bai, Xu-Dong; Liang, Xian-Ling; Sun, Yun-Tao; Hu, Peng-Cheng; Yao, Yu; Wang, Kun; Geng, Jun-Ping; Jin, Rong-Hong

    2017-01-10

    Recently, vortex beams carrying orbital angular momentum (OAM) for radio communications have attracted much attention for their potential to transmit multiple signals simultaneously at the same frequency, which can be used to increase channel capacity. However, most methods for generating multi-mode OAM radio beams involve complicated structures and very high cost. This paper provides an effective solution for generating dual circularly-polarized (CP) dual-mode OAM beams. The antenna consists of four dual-CP elements which are sequentially rotated 90 degrees in the clockwise direction. Different from all previously published research on OAM generation by phased arrays, the four elements are fed with the same phase for both left-hand circular polarization (LHCP) and right-hand circular polarization (RHCP). The dual-mode operation for OAM is achieved through the opposite phase differences generated for LHCP and RHCP when the dual-CP elements are sequentially rotated in the clockwise direction. The measured results coincide well with the simulated ones, verifying the effectiveness of the proposed design.

  13. Sequential biases in accumulating evidence

    PubMed Central

    Huggins, Richard; Dogo, Samson Henry

    2015-01-01

    Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
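
    Sequential decision bias is easy to reproduce in a toy Monte Carlo. In the sketch below a second, unbiased study is conducted only when the first estimate is positive, and the studies are then pooled with equal weights; because the continuation decision is correlated with the interim estimate, the pooled estimate is biased even though every individual study is unbiased. The continuation rule, effect size and weighting are assumptions for illustration, and both the direction and size of the bias depend on them.

```python
import numpy as np

def sequential_decision_bias(n_sim=20_000, n=50, mu=0.2, seed=0):
    """Bias of a pooled estimate when running a second study depends on
    the first study's result ('promising' here means a positive mean)."""
    rng = np.random.default_rng(seed)
    est = np.empty(n_sim)
    for i in range(n_sim):
        x1 = rng.normal(mu, 1.0, n).mean()
        if x1 > 0:                            # promising: add a study
            x2 = rng.normal(mu, 1.0, n).mean()
            est[i] = (x1 + x2) / 2            # equal-weight pooling
        else:
            est[i] = x1                       # stop after one study
    return est.mean() - mu

print(sequential_decision_bias())   # clearly nonzero (about -0.01 here)
```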

  14. A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.

    2002-01-01

    In this paper we present a comparison of optimization approaches to the minimum fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), Quasi-Newton, Simplex, Genetic Algorithms, and Simulated Annealing. Each method is applied to a variety of test cases including, circular to circular coplanar orbits, LEO to GEO, and orbit phasing in highly elliptic orbits. We also compare different constrained optimization routines on complex orbit rendezvous problems with complicated, highly nonlinear constraints.

  15. Orphan therapies: making best use of postmarket data.

    PubMed

    Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling

    2014-08-01

    Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.

  16. Airborne Network Data Availability Using Peer to Peer Database Replication on a Distributed Hash Table

    DTIC Science & Technology

    2013-03-01

    Acronyms: DSR (Dynamic Source Routing), DSSS (Direct-sequence spread spectrum), GUID (Globally Unique ID), MANET (Mobile Ad-hoc Network), NS3 (Network Simulator 3), OLSR ... networking schemes for safe maneuvering and data communication. Imagine needing to maintain an operational picture of an overall environment using a ... as simple as O(n), where every node is sequentially queried, to O(log n) or O(1). These schemes will be discussed with each individual DHT. Four of the ...

  17. Simulation of Peptides at Aqueous Interfaces

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew; Wilson, M.; Chipot, C.; DeVincenzi, Donald L. (Technical Monitor)

    2001-01-01

    The behavior of peptides at water-membrane interfaces is of great interest in studies of cellular transport and signaling, membrane fusion, and the action of toxins and antibiotics. Many peptides, which exist in water only as random coils, can form sequence-dependent, ordered structures at aqueous interfaces, incorporate into membranes and self-assemble into functional units, such as simple ion channels. Multi-nanosecond molecular dynamics simulations have been carried out to study the mechanism and energetics of the interfacial folding of both non-polar and amphiphilic peptides, their insertion into membranes and their association into higher-order structures. The simulations indicate that peptides fold non-sequentially, often through a series of amphiphilic intermediates. They further incorporate into the membrane in a preferred direction as folded monomers, and only then aggregate into dimers and, possibly, further into "dimers of dimers".

  18. The subtyping of primary aldosteronism by adrenal vein sampling: sequential blood sampling causes factitious lateralization.

    PubMed

    Rossitto, Giacomo; Battistel, Michele; Barbiero, Giulio; Bisogni, Valeria; Maiolino, Giuseppe; Miotto, Diego; Seccia, Teresa M; Rossi, Gian Paolo

    2018-02-01

    The pulsatile secretion of adrenocortical hormones and the stress reaction occurring when starting adrenal vein sampling (AVS) can affect the selectivity and also the assessment of lateralization when sequential blood sampling is used. We therefore tested the hypothesis that a simulated sequential blood sampling could decrease the diagnostic accuracy of the lateralization index for identification of aldosterone-producing adenoma (APA), as compared with bilaterally simultaneous AVS. In 138 consecutive patients who underwent subtyping of primary aldosteronism, we compared the results obtained simultaneously bilaterally when starting AVS (t-15) and 15 min after (t0) with those gained with a simulated sequential right-to-left AVS technique (R ⇒ L), created by combining hormonal values obtained at t-15 and at t0. The concordance between simultaneously obtained values at t-15 and t0, and between simultaneously obtained values and values gained with the sequential R ⇒ L technique, was also assessed. We found a marked interindividual variability of lateralization index values in the patients with bilaterally selective AVS at both time points. However, overall the lateralization index simultaneously determined at t0 provided a more accurate identification of APA than the simulated sequential lateralization index (R ⇒ L) (P = 0.001). Moreover, regardless of which side was sampled first, the sequential AVS technique induced a sequence-dependent overestimation of the lateralization index. While in APA patients the concordance between simultaneous AVS at t0 and t-15 and between the simultaneous t0 and sequential techniques was moderate-to-good (K = 0.55 and 0.66, respectively), in non-APA patients it was poor (K = 0.12 and 0.13, respectively). Sequential AVS generates factitious between-side gradients, which lower its diagnostic accuracy, likely because of the stress reaction arising upon starting AVS.
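
    For readers outside the field, the two indices at stake are simple ratios, computed for instance as below: a selectivity index that confirms catheter placement and a lateralization index built from cortisol-corrected aldosterone ratios. The formulas are standard, but the numeric cut-offs vary between centres, and the example values here are invented.

```python
def avs_indices(sample):
    """AVS indices from aldosterone (A) and cortisol (C) levels. `sample`
    maps 'left', 'right' and 'ivc' (peripheral) to (A, C) pairs.
    Selectivity index (SI) = C_adrenal / C_peripheral;
    lateralization index (LI) = higher A/C ratio over the lower one."""
    a_l, c_l = sample["left"]
    a_r, c_r = sample["right"]
    a_p, c_p = sample["ivc"]
    si = {"left": c_l / c_p, "right": c_r / c_p}
    ratios = {"left": a_l / c_l, "right": a_r / c_r}
    dominant = max(ratios, key=ratios.get)
    li = ratios[dominant] / min(ratios.values())
    return si, li, dominant

# One snapshot per time point under simultaneous sampling; the study's point
# is that combining values from different times (sequential sampling)
# distorts LI enough to misclassify lateralization.
si, li, side = avs_indices({"left": (900.0, 420.0),
                            "right": (150.0, 380.0),
                            "ivc": (18.0, 14.0)})
print(si, round(li, 1), side)
```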

  19. Intra-individual diagnostic image quality and organ-specific radiation dose comparison between spiral cCT with iterative image reconstruction and z-axis automated tube current modulation and sequential cCT.

    PubMed

    Wenz, Holger; Maros, Máté E; Meyer, Mathias; Gawlitza, Joshua; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O; Groden, Christoph; Henzler, Thomas

    2016-01-01

    To prospectively evaluate the image quality and organ-specific radiation dose of spiral cranial CT (cCT) combined with automated tube current modulation (ATCM) and iterative image reconstruction (IR), in comparison to sequential tilted cCT reconstructed with filtered back projection (FBP) without ATCM. 31 patients with a previously performed tilted non-contrast-enhanced sequential cCT acquisition on a 4-slice CT system with only FBP reconstruction and no ATCM were prospectively enrolled in this study for a clinically indicated cCT scan. All spiral cCT examinations were performed on a 3rd generation dual-source CT system using ATCM in the z-axis direction. Images were reconstructed using both FBP and IR (levels 1-5). A Monte-Carlo-simulation-based analysis was used to compare organ-specific radiation dose. Subjective image quality for various anatomic structures was evaluated using a 4-point Likert scale, and objective image quality was evaluated by comparing signal-to-noise ratios (SNR). Spiral cCT led to a significantly lower (p < 0.05) organ-specific radiation dose in all targets including the eye lens. Subjective image quality of spiral cCT datasets with IR reconstruction level 5 was rated significantly higher than that of the sequential cCT acquisitions (p < 0.0001). Consequently, mean SNR was significantly higher in all spiral datasets (FBP, IR 1-5) when compared to sequential cCT, with a mean SNR improvement of 44.77% (p < 0.0001). Spiral cCT combined with ATCM and IR allows for a significant radiation dose reduction, including a reduced eye-lens organ dose, when compared to tilted sequential cCT, while improving subjective and objective image quality.

  20. Devaluation and sequential decisions: linking goal-directed and model-based behavior

    PubMed Central

    Friedel, Eva; Koch, Stefan P.; Wendt, Jean; Heinz, Andreas; Deserno, Lorenz; Schlagenhauf, Florian

    2014-01-01

    In experimental psychology, different experiments have been developed to assess goal-directed as compared to habitual control over instrumental decisions. Similar to animal studies, selective devaluation procedures have been used. More recently, sequential decision-making tasks have been designed to assess the degree of goal-directed vs. habitual choice behavior in terms of an influential computational theory of model-based as compared to model-free behavioral control. As recently suggested, the different measurements are thought to reflect the same construct. Yet, there has been no attempt to directly assess the construct validity of these different measurements. In the present study, we used a devaluation paradigm and a sequential decision-making task to address this question of construct validity in a sample of 18 healthy male human participants. Correlational analysis revealed a positive association between model-based choices during sequential decisions and goal-directed behavior after devaluation, suggesting a single framework underlying both operationalizations and speaking in favor of the construct validity of both measurement approaches. Up to now, this had been merely assumed but never directly tested in humans. PMID:25136310

  1. Simultaneous sequential monitoring of efficacy and safety led to masking of effects.

    PubMed

    van Eekelen, Rik; de Hoop, Esther; van der Tweel, Ingeborg

    2016-08-01

    Usually, sequential designs for clinical trials are applied to the primary (efficacy) outcome. In practice, other outcomes (e.g., safety) will also be monitored and influence the decision whether to stop a trial early. The implications of simultaneous monitoring for trial decision making are as yet unclear. This study examines what happens to the type I error, power, and required sample sizes when one efficacy outcome and one correlated safety outcome are monitored simultaneously using sequential designs. We conducted a simulation study in the framework of a two-arm parallel clinical trial. Interim analyses on two outcomes were performed independently and simultaneously on the same data sets using four sequential monitoring designs, including O'Brien-Fleming and Triangular Test boundaries. Simulations differed in the values for correlations and true effect sizes. When an effect was present in both outcomes, competition was introduced, which decreased power (e.g., from 80% to 60%). Futility boundaries for the efficacy outcome reduced the overall type I error as well as the power for the safety outcome. Monitoring two correlated outcomes, given that both are essential for early trial termination, leads to masking of true effects. Careful consideration of scenarios must be taken into account when designing sequential trials. Simulation results can help guide trial design.
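
    The masking mechanism described above can be illustrated with a short simulation: when two correlated outcomes are monitored against the same group sequential boundaries and either can terminate the trial, stopping on one outcome censors the evidence for the other. The following Python sketch is illustrative only; the boundary values, effect sizes, correlation, and the rule that safety is checked first are assumptions, not the paper's settings.

      import numpy as np

      rng = np.random.default_rng(1)
      n_trials, n_per_look, n_looks, rho = 2000, 25, 4, 0.5
      cov = [[1.0, rho], [rho, 1.0]]
      z_bound = [4.05, 2.86, 2.34, 2.02]      # O'Brien-Fleming-type bounds (illustrative)

      stop_eff = stop_saf = 0
      for _ in range(n_trials):
          # a true standardized effect of 0.5 on both outcomes
          treat = rng.multivariate_normal([0.5, 0.5], cov, n_per_look * n_looks)
          ctrl = rng.multivariate_normal([0.0, 0.0], cov, n_per_look * n_looks)
          for k in range(1, n_looks + 1):
              n = k * n_per_look
              z = (treat[:n].mean(0) - ctrl[:n].mean(0)) / np.sqrt(2.0 / n)
              if z[1] > z_bound[k - 1]:       # safety crossing checked first (assumption)
                  stop_saf += 1
                  break
              if z[0] > z_bound[k - 1]:       # efficacy crossing
                  stop_eff += 1
                  break

      # With both effects present the outcomes compete for the stop, so neither
      # reaches the power it would have if monitored alone.
      print(stop_eff / n_trials, stop_saf / n_trials)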

  2. A Bayesian sequential design using alpha spending function to control type I error.

    PubMed

    Zhu, Han; Yu, Qingzhao

    2017-10-01

    We propose in this article a Bayesian sequential design that uses alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate the critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. Compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that, at the same sample size, the null hypothesis is least likely to be rejected at an early stage of the trial. Finally, we show that adding a futility stopping rule to the Bayesian sequential design can reduce the overall type I error and the actual sample sizes.
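
    For readers unfamiliar with alpha spending, the Lan-DeMets O'Brien-Fleming-type spending function allocates almost no error to early looks and the remainder at the final analysis. A minimal sketch of the standard textbook formula (not code from the paper):

      from scipy.stats import norm

      def obf_spend(t, alpha=0.05):
          # cumulative two-sided type I error spent by information fraction t, 0 < t <= 1
          return 2.0 * (1.0 - norm.cdf(norm.ppf(1.0 - alpha / 2.0) / t ** 0.5))

      looks = [0.25, 0.5, 0.75, 1.0]                # equally spaced looks
      spent = [obf_spend(t) for t in looks]         # cumulative; reaches alpha at t = 1
      per_look = [spent[0]] + [b - a for a, b in zip(spent, spent[1:])]
      print([round(s, 5) for s in spent])
      print([round(s, 5) for s in per_look])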

  3. Multidimensional oriented solid-state NMR experiments enable the sequential assignment of uniformly 15N labeled integral membrane proteins in magnetically aligned lipid bilayers.

    PubMed

    Mote, Kaustubh R; Gopinath, T; Traaseth, Nathaniel J; Kitchen, Jason; Gor'kov, Peter L; Brey, William W; Veglia, Gianluigi

    2011-11-01

    Oriented solid-state NMR is the most direct methodology to obtain the orientation of membrane proteins with respect to the lipid bilayer. The method consists of measuring (1)H-(15)N dipolar couplings (DC) and (15)N anisotropic chemical shifts (CSA) for membrane proteins that are uniformly aligned with respect to the membrane bilayer. A significant advantage of this approach is that the tilt and azimuthal (rotational) angles of the protein domains can be directly derived from analytical expressions of the DC and CSA values or, alternatively, obtained by refining protein structures using these values as harmonic restraints in simulated annealing calculations. The Achilles' heel of this approach is the lack of suitable experiments for the sequential assignment of the amide resonances. In this article, we present a new pulse sequence that integrates proton-driven spin diffusion (PDSD) with sensitivity-enhanced PISEMA in a 3D experiment ([(1)H,(15)N]-SE-PISEMA-PDSD). The incorporation of 2D (15)N/(15)N spin diffusion experiments into this new 3D experiment leads to the complete and unambiguous assignment of the (15)N resonances. The feasibility of this approach is demonstrated for the membrane protein sarcolipin reconstituted in magnetically aligned lipid bicelles. Taken together with low electric field probe technology, this approach will propel the determination of the sequential assignment as well as the structure and topology of larger integral membrane proteins in aligned lipid bilayers.

  4. Space time modelling of air quality for environmental-risk maps: A case study in South Portugal

    NASA Astrophysics Data System (ADS)

    Soares, Amilcar; Pereira, Maria J.

    2007-10-01

    Since the 1960s, there has been strong industrial development in the Sines area, on the southern Atlantic coast of Portugal, including the construction of an important industrial harbour and of, mainly, petrochemical and energy-related industries. These industries are nowadays responsible for substantial emissions of SO2, NOx, particles, VOCs and part of the ozone polluting the atmosphere. The major industries are spatially concentrated in a restricted area, very close to populated areas and natural resources such as those protected by the European Natura 2000 network. Air quality parameters are measured at the emission sources and at a few monitoring stations. Although air quality parameters are measured on an hourly basis, the lack of spatial representativeness of these non-homogeneous phenomena makes even their representativeness in time questionable. Hence, in this study, the regional spatial dispersion of contaminants is also evaluated using diffusive-sampler (Radiello Passive Sampler) campaigns during given periods. Diffusive samplers cover the entire space extensively, but only for a limited period of time. In the first step of this study, a space-time model of pollutants was built, based on a stochastic simulation (direct sequential simulation) with a local spatial trend. The spatial dispersion of the contaminants for a given period of time (corresponding to the exposure time of the diffusive samplers) was computed by ordinary kriging. Direct sequential simulation was applied to produce equiprobable spatial maps for each day of that period, using the kriged map as a spatial trend and the daily measurements of pollutants from the monitoring stations as hard data. In the second step, the following environmental-risk and cost maps were computed from the set of simulated realizations of pollutants: (i) maps of the contribution of each emission to the pollutant concentration at any spatial location; (ii) costs of badly located monitoring stations.
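
    The kriged trend map mentioned above is the workhorse of the first step. As a point of reference, a minimal ordinary kriging estimator at a single target location looks like the following Python sketch; the covariance model and toy station values are illustrative assumptions, not the study's data or code.

      import numpy as np

      def exp_cov(h, sill=1.0, rng=500.0):
          # assumed exponential covariance model (practical range 500 m)
          return sill * np.exp(-3.0 * h / rng)

      def ordinary_krige(xy, z, x0, sill=1.0, rng=500.0):
          n = len(z)
          d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
          A = np.ones((n + 1, n + 1))
          A[:n, :n] = exp_cov(d, sill, rng)
          A[n, n] = 0.0                       # Lagrange-multiplier row/column
          b = np.ones(n + 1)
          b[:n] = exp_cov(np.linalg.norm(xy - x0, axis=1), sill, rng)
          w = np.linalg.solve(A, b)
          est = w[:n] @ z                     # kriging estimate
          var = sill - w @ b                  # kriging variance (multiplier included)
          return est, var

      # Toy example: three hypothetical SO2 readings and one unsampled location.
      xy = np.array([[0.0, 0.0], [400.0, 0.0], [0.0, 300.0]])
      z = np.array([12.0, 20.0, 16.0])
      print(ordinary_krige(xy, z, np.array([150.0, 100.0])))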

  5. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    NASA Technical Reports Server (NTRS)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  6. Three-dimensional Stochastic Estimation of Porosity Distribution: Benefits of Using Ground-penetrating Radar Velocity Tomograms in Simulated-annealing-based or Bayesian Sequential Simulation Approaches

    DTIC Science & Technology

    2012-05-30

    annealing-based or Bayesian sequential simulation approaches B. Dafflon1,2 and W. Barrash1 Received 13 May 2011; revised 12 March 2012; accepted 17 April 2012...the withheld porosity log are also withheld for this estimation process. For both cases we do this for two wells having locally variable stratigraphy ...borehole location is given at the bottom of each log comparison panel. For comparison with stratigraphy at the BHRS, contacts between Units 1 to 4

  7. Predictive Place-Cell Sequences for Goal-Finding Emerge from Goal Memory and the Cognitive Map: A Computational Model

    PubMed Central

    Gönner, Lorenz; Vitay, Julien; Hamker, Fred H.

    2017-01-01

    Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions. PMID:29075187

  8. Development, Characterization, and Resultant Properties of a Carbon, Boron, and Chromium Ternary Diffusion System

    NASA Astrophysics Data System (ADS)

    Domec, Brennan S.

    In today's industry, engineering materials are continuously pushed to their limits. Often, the application demands high-specification properties only in a narrowly defined region of the material, such as the outermost surface. This, in combination with the economic benefits, makes case hardening an attractive solution to meet industry demands. While case hardening has been in use for decades, applications demanding high hardness, deep case depth, and high corrosion resistance are often under-served by this process. Instead, new solutions are required. The goal of this study is to develop and characterize a new borochromizing process applied to a pre-carburized AISI 8620 alloy steel. The process was successfully developed using a combination of computational simulations, calculations, and experimental testing. Process kinetics were studied by fitting case depth measurement data to Fick's Second Law of Diffusion and an Arrhenius equation. Results indicate that the kinetics of the co-diffusion method are unaffected by the addition of chromium to the powder pack. The results also show that significant structural degradation of the case occurs when chromizing is applied sequentially to an existing boronized case. The amount of degradation is proportional to the chromizing parameters. Microstructural evolution was studied using metallographic methods, simulation and computational calculations, and analytical techniques. While the co-diffusion process failed to enrich the substrate with chromium, significant enrichment is obtained with the sequential diffusion process. The amount of enrichment is directly proportional to the chromizing parameters, with higher parameters resulting in more enrichment. The case consists of M7C3 and M23C6 carbides nearest the surface, minor amounts of CrB, and a balance of M2B. Corrosion resistance was measured with salt spray and electrochemical methods. These methods confirm the benefit of surface enrichment by chromium in the sequential diffusion method, with corrosion resistance increasing directly with chromium concentration. The results also confirm the deleterious effect of surface-breaking case defects and the need to reduce or eliminate them. The best combination of microstructural integrity, mean surface hardness, effective case depth, and corrosion resistance is obtained in samples sequentially boronized and chromized at 870°C for 6 hrs. Additional work is required to further optimize the process parameters and case properties.
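
    The kinetics analysis described above (parabolic case growth from Fick's second law, with an Arrhenius-type rate constant) can be made concrete in a few lines of Python. The sketch below fits d = sqrt(k0 exp(-Q/RT) t) to synthetic case-depth data; the data points, initial guesses, and parameter values are illustrative assumptions, not the dissertation's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      R = 8.314  # gas constant, J/(mol K)

      def case_depth(tT, k0, Q):
          # parabolic growth law: d = sqrt(K t), with K = k0 exp(-Q / (R T))
          t, T = tT
          return np.sqrt(k0 * np.exp(-Q / (R * T)) * t)

      t = np.array([2.0, 4.0, 6.0, 2.0, 4.0, 6.0]) * 3600.0            # s
      T = np.array([1093.0, 1093.0, 1093.0, 1143.0, 1143.0, 1143.0])   # K
      d_true = case_depth((t, T), 1e-4, 180e3)                         # synthetic data
      d_obs = d_true * (1 + 0.03 * np.random.default_rng(0).standard_normal(6))

      (k0, Q), _ = curve_fit(case_depth, (t, T), d_obs, p0=(1e-4, 150e3))
      print(f"k0 = {k0:.2e} m^2/s, Q = {Q / 1e3:.0f} kJ/mol")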

  9. Modelling Geomechanical Heterogeneity of Rock Masses Using Direct and Indirect Geostatistical Conditional Simulation Methods

    NASA Astrophysics Data System (ADS)

    Eivazy, Hesameddin; Esmaieli, Kamran; Jean, Raynald

    2017-12-01

    An accurate characterization and modelling of rock mass geomechanical heterogeneity can lead to more efficient mine planning and design. Using deterministic approaches and random field methods for modelling rock mass heterogeneity is known to be limited in simulating the spatial variation and spatial pattern of the geomechanical properties. Although the application of geostatistical techniques has demonstrated improvements in modelling the heterogeneity of geomechanical properties, geostatistical estimation methods such as kriging result in estimates of geomechanical variables that are not fully representative of field observations. This paper reports on the development of 3D models for the spatial variability of rock mass geomechanical properties using a geostatistical conditional simulation method based on sequential Gaussian simulation. A methodology to simulate the heterogeneity of rock mass quality based on the rock mass rating is proposed and applied to a large open-pit mine in Canada. Using geomechanical core logging data collected from the mine site, a direct and an indirect approach were used to model the spatial variability of rock mass quality. The results of the two modelling approaches were validated against collected field data. The study aims to quantify the risks of pit slope failure and provides a measure of the uncertainties in the spatial variability of rock mass properties in different areas of the pit.
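
    The core loop of sequential Gaussian simulation is compact enough to sketch. The 1D Python fragment below (illustrative variogram model and conditioning values; real workflows operate on normal-score-transformed 3D data) visits grid nodes in random order, simple-kriges a local mean and variance from everything simulated so far, and draws the node value from that local Gaussian.

      import numpy as np

      rng = np.random.default_rng(42)

      def cov(h, sill=1.0, vrange=10.0):
          return sill * np.exp(-3.0 * np.abs(h) / vrange)   # exponential model

      xs, zs = [0.0, 30.0], [1.2, -0.8]          # conditioning data (normal scores)
      for xg in rng.permutation(np.arange(1.0, 30.0)):
          d = np.array(xs)
          C = cov(d[:, None] - d[None, :]) + 1e-9 * np.eye(len(d))
          c0 = cov(d - xg)
          w = np.linalg.solve(C, c0)
          mean = w @ np.array(zs)                # simple kriging mean (global mean 0)
          var = max(cov(0.0) - w @ c0, 0.0)      # simple kriging variance
          xs.append(xg)
          zs.append(mean + np.sqrt(var) * rng.standard_normal())

      order = np.argsort(xs)
      print(np.round(np.array(zs)[order], 2))    # one conditional realization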

  10. Sequential Bayesian Geostatistical Inversion and Evaluation of Combined Data Worth for Aquifer Characterization at the Hanford 300 Area

    NASA Astrophysics Data System (ADS)

    Murakami, H.; Chen, X.; Hahn, M. S.; Over, M. W.; Rockhold, M. L.; Vermeul, V.; Hammond, G. E.; Zachara, J. M.; Rubin, Y.

    2010-12-01

    Subsurface characterization for predicting groundwater flow and contaminant transport requires us to integrate large and diverse datasets in a consistent manner and to quantify the associated uncertainty. In this study, we sequentially assimilated multiple types of datasets for characterizing a three-dimensional heterogeneous hydraulic conductivity field at the Hanford 300 Area. The datasets included constant-rate injection tests, electromagnetic borehole flowmeter tests, lithology profiles and tracer tests. We used the method of anchored distributions (MAD), which is a modular-structured Bayesian geostatistical inversion method. MAD has two major advantages over other inversion methods. First, it can directly infer a joint distribution of parameters, which can be used as an input in stochastic simulations for prediction. In MAD, in addition to typical geostatistical structural parameters, the parameter vector includes multiple point values of the heterogeneous field, called anchors, which capture local trends and reduce uncertainty in the prediction. Second, MAD allows us to integrate the datasets sequentially in a Bayesian framework such that it updates the posterior distribution as each new dataset is included. The sequential assimilation can decrease the computational burden significantly. We applied MAD to assimilate different combinations of the datasets and then compared the inversion results. For the injection and tracer test assimilation, we calculated temporal moments of the pressure build-up and breakthrough curves, respectively, to reduce the data dimension. The massively parallel flow and transport code PFLOTRAN was used for simulating the tracer test. For comparison, we used different metrics based on breakthrough curves not used in the inversion, such as mean arrival time, peak concentration and early arrival time. This comparison is intended to yield the combined data worth, i.e., which combination of the datasets is most effective for a given metric, which will be useful for guiding further characterization efforts at the site as well as future characterization projects at other sites.
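
    The data reduction step mentioned above, collapsing each pressure build-up or breakthrough curve into its low-order temporal moments, is simple to make concrete. A hedged Python illustration (the curve below is synthetic, not site data):

      import numpy as np
      from scipy.integrate import trapezoid

      def temporal_moments(t, c):
          m0 = trapezoid(c, t)                                # zeroth moment (mass)
          t_mean = trapezoid(t * c, t) / m0                   # normalized first moment
          spread = trapezoid((t - t_mean) ** 2 * c, t) / m0   # second central moment
          return m0, t_mean, spread

      t = np.linspace(0.0, 48.0, 200)                         # hours
      c = np.exp(-0.5 * ((t - 12.0) / 3.0) ** 2)              # synthetic breakthrough curve
      print(temporal_moments(t, c))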

  11. A computationally efficient Bayesian sequential simulation approach for the assimilation of vast and diverse hydrogeophysical datasets

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Gloaguen, Erwan; Mariéthoz, Grégoire; Holliger, Klaus

    2016-04-01

    Bayesian sequential simulation (BSS) is a powerful geostatistical technique, which has notably shown significant potential for the assimilation of datasets that are diverse with regard to their spatial resolution and their relationships. However, these types of applications of BSS require a large number of realizations to adequately explore the solution space and to assess the corresponding uncertainties. Moreover, such simulations generally need to be performed on very fine grids in order to adequately exploit the technique's potential for characterizing heterogeneous environments. Correspondingly, the computational cost of BSS algorithms in their classical form is very high, which so far has limited an effective application of this method to large models and/or vast datasets. In this context, it is also important to note that the inherent assumption regarding the independence of the considered datasets is generally regarded as being too strong in the context of sequential simulation. To alleviate these problems, we have revisited the classical implementation of BSS and incorporated two key features to increase the computational efficiency. The first feature is a combined quadrant-spiral and superblock search, which targets run-time savings on large grids and adds flexibility with regard to the selection of neighbouring points, using equal directional sampling and treating hard data and previously simulated points separately. The second feature is a constant path of simulation, which enhances the efficiency for multiple realizations. We have also modified the aggregation operator to be more flexible with regard to the assumption of independence of the considered datasets. This is achieved through log-linear pooling, which essentially allows for attributing weights to the various data components. Finally, a multi-grid simulation path was created to enforce the large-scale variance and to allow parameters, such as the log-linear weights or the type of simulation path, to be adapted at various scales. The newly implemented search method for kriging reduces the computational cost from an exponential dependence on the grid size in the original algorithm to a linear relationship, as each neighbourhood search becomes independent of the grid size. For the considered examples, our results show a sevenfold reduction in run time for each additional realization when a constant simulation path is used. The traditional criticism that constant-path techniques introduce a bias into the simulations was explored, and our findings do indeed reveal a minor reduction in the diversity of the simulations. This bias can, however, be largely eliminated by changing the path type at different scales through the use of the multi-grid approach. Finally, we show that adapting the aggregation weight at each scale considered in our multi-grid approach allows for reproducing both the variogram and histogram, as well as the spatial trend of the underlying data.
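
    The log-linear pooling operator referred to above combines the data components as a weighted product of densities, p(x) proportional to the product of p_i(x)^(w_i). A minimal discretized Python illustration (the two densities and the weights are placeholders, not the paper's components):

      import numpy as np

      def log_linear_pool(pdfs, weights):
          # pool discretized pdfs as a weighted product, then renormalize
          logp = sum(w * np.log(p + 1e-300) for p, w in zip(pdfs, weights))
          p = np.exp(logp - logp.max())
          return p / p.sum()

      x = np.linspace(-4.0, 4.0, 401)
      p1 = np.exp(-0.5 * (x - 0.5) ** 2); p1 /= p1.sum()          # component 1
      p2 = np.exp(-0.5 * ((x + 0.5) / 0.5) ** 2); p2 /= p2.sum()  # component 2
      pooled = log_linear_pool([p1, p2], [0.3, 0.7])              # assumed weights
      print(x[np.argmax(pooled)])                                 # pooled mode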

  12. Monte Carlo simulation of evaporation-driven self-assembly in suspensions of colloidal rods

    NASA Astrophysics Data System (ADS)

    Lebovka, Nikolai I.; Vygornitskii, Nikolai V.; Gigiberiya, Volodymyr A.; Tarasevich, Yuri Yu.

    2016-12-01

    The vertical drying of a colloidal film containing rodlike particles was studied by means of kinetic Monte Carlo (MC) simulation. The problem was approached using a two-dimensional square lattice, and the rods were represented as linear k-mers (i.e., particles occupying k adjacent sites). The initial state before drying was produced using a model of random sequential adsorption (RSA) with isotropic orientations of the k-mers (orientations of the k-mers along the horizontal x and vertical y directions are equiprobable). In the RSA model, overlapping of the k-mers is forbidden. During the evaporation, an upper interface falls with a linear velocity u in the vertical direction, and the k-mers undergo translational Brownian motion. The MC simulations were run at different initial concentrations pi (pi ∈ [0, pj], where pj is the jamming concentration), lengths of the k-mers (k ∈ [1, 12]), and solvent evaporation rates u. For completely dried films, the spatial distributions of the k-mers and their electrical conductivities in both the x and y directions were examined. Significant evaporation-driven self-assembly and orientation stratification of the k-mers oriented along the x and y directions were observed. The extent of stratification increased with increasing values of k. The anisotropy of the electrical conductivity of the film can be finely regulated by changing the values of pi, k, and u.
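
    The RSA initial state described above is straightforward to reproduce. The Python sketch below deposits horizontal and vertical k-mers with equal probability on a square lattice, rejecting overlaps; the lattice size, open boundaries, and attempt count are illustrative choices.

      import numpy as np

      rng = np.random.default_rng(0)
      L, k, attempts = 64, 4, 200_000
      lattice = np.zeros((L, L), dtype=bool)

      for _ in range(attempts):
          x, y = rng.integers(L, size=2)
          if rng.random() < 0.5:                                # horizontal k-mer
              if x + k <= L and not lattice[y, x:x + k].any():
                  lattice[y, x:x + k] = True
          else:                                                 # vertical k-mer
              if y + k <= L and not lattice[y:y + k, x].any():
                  lattice[y:y + k, x] = True

      print("coverage p =", lattice.mean())   # approaches the jamming limit p_j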

  13. On the Lulejian-I Combat Model

    DTIC Science & Technology

    1976-08-01

    possible initial massing of the attacking side's resources, the model tries to represent in a game-theoretic context the adversary nature of the... sequential game, as outlined in [A]. In principle, it is necessary to run the combat simulation once for each possible set of sequentially chosen... sequential game, in which the evaluative portion of the model (i.e., the combat assessment) serves to compute intermediate and terminal payoffs for the

  14. Lifelong Transfer Learning for Heterogeneous Teams of Agents in Sequential Decision Processes

    DTIC Science & Technology

    2016-06-01

    making (SDM) tasks in dynamic environments with simulated and physical robots. Subject terms: sequential decision making, lifelong learning, transfer... sequential decision-making (SDM) tasks in dynamic environments with both simple benchmark tasks and more complex aerial and ground robot tasks. Our work... and ground robots in the presence of disturbances: We applied our methods to the problem of learning controllers for robots with novel disturbances in

  15. Increasing efficiency of preclinical research by group sequential designs

    PubMed Central

    Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich

    2017-01-01

    Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulated data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to savings of resources of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness of this research domain. PMID:28282371

  16. Three-body effects in the Hoyle-state decay

    NASA Astrophysics Data System (ADS)

    Refsgaard, J.; Fynbo, H. O. U.; Kirsebom, O. S.; Riisager, K.

    2018-04-01

    We use a sequential R-matrix model to describe the breakup of the Hoyle state into three α particles via the ground state of 8Be. It is shown that even in a sequential picture, features resembling a direct breakup branch appear in the phase-space distribution of the α particles. We construct a toy model to describe the Coulomb interaction in the three-body final state and its effects on the decay spectrum are investigated. The framework is also used to predict the phase-space distribution of the α particles emitted in a direct breakup of the Hoyle state and the possibility of interference between a direct and sequential branch is discussed. Our numerical results are compared to the current upper limit on the direct decay branch determined in recent experiments.

  17. Fictitious domain method for fully resolved reacting gas-solid flow simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Longhui; Liu, Kai; You, Changfu

    2015-10-01

    Fully resolved simulation (FRS) of gas-solid multiphase flow considers solid objects as finite-sized regions in the flow field, and their behaviour is predicted by solving the governing equations in both the fluid and solid regions directly. Fixed-mesh numerical methods, such as the fictitious domain method, are preferred for solving FRS problems and have been widely researched. However, for reacting gas-solid flows no suitable fictitious domain numerical method has been developed. This work presents a new fictitious domain finite element method for FRS of reacting particulate flows. The low-Mach-number reacting flow governing equations are solved sequentially on a regular background mesh. Particles are immersed in the mesh and driven by the surface forces and torques integrated over the immersed interfaces. Additional treatments of the energy equation and surface reactions are developed. Several numerical test cases validate the method, and the simulation of a falling array of burning carbon particles demonstrates its capability for solving problems with moving, reacting particle clusters.

  18. Coarse-Grained Simulation of Solvated Cellulose Iβ Microfibril

    NASA Astrophysics Data System (ADS)

    Fan, Bingxin; Maranas, Janna; Zhong, Linghao; Zhen Zhao Collaboration

    2013-03-01

    We construct a coarse-grained (CG) model of cellulose microfibrils in water. The force field is derived from an atomistic simulation of a 40-glucose-unit-long microfibril by requiring consistency between the chain configuration, intermolecular packing and hydrogen bonding of the two levels of modeling. Intermolecular interactions such as hydrogen bonding are added sequentially until the force field holds the microfibril crystal structure. This stepwise process enables us to evaluate the importance of each potential and provides insight into the ordered and disordered regions. We simulate cellulose microfibrils with 100 to 400 residues, comparable to the smallest observed microfibrils. Microfibrils longer than 100 nm form a bending region along their longitudinal direction. Multiple bends are observed in the microfibril containing 400 residues. Although the cause is not clear, the bending regions may provide insights into the periodicity and the behavior of the disordered regions in the microfibril.

  19. Hybrid parallelization of the XTOR-2F code for the simulation of two-fluid MHD instabilities in tokamaks

    NASA Astrophysics Data System (ADS)

    Marx, Alain; Lütjens, Hinrich

    2017-03-01

    A hybrid MPI/OpenMP parallel version of the XTOR-2F code [Lütjens and Luciani, J. Comput. Phys. 229 (2010) 8130], which solves the two-fluid MHD equations in full tokamak geometry by means of an iterative Newton-Krylov matrix-free method, has been developed. The present work shows that the code has been parallelized significantly despite the numerical profile of the problem solved by XTOR-2F, i.e. a discretization with pseudo-spectral representations in all angular directions, the stiffness of the two-fluid stability problem in tokamaks, and the use of a direct LU decomposition to invert the physical pre-conditioner at every Krylov iteration of the solver. The execution time of the parallelized version is an order of magnitude smaller than that of the sequential one for low-resolution cases, with an increasing speedup as the discretization mesh is refined. Moreover, it allows simulations with higher resolutions, previously out of reach because of memory limitations.

  20. Krebs cycle metabolon formation: metabolite concentration gradient enhanced compartmentation of sequential enzymes.

    PubMed

    Wu, Fei; Pelster, Lindsey N; Minteer, Shelley D

    2015-01-25

    The dynamics of metabolon formation in mitochondria were probed by studying the diffusional motion of two sequential Krebs cycle enzymes in a microfluidic channel. Enhanced directional co-diffusion of both enzymes against a substrate concentration gradient was observed in the presence of intermediate generation. This reveals a metabolite-directed compartmentation of metabolic pathways.

  1. A Sequential Monte Carlo Approach for Streamflow Forecasting

    NASA Astrophysics Data System (ADS)

    Hsu, K.; Sorooshian, S.

    2008-12-01

    As alternatives to traditional physically-based models, Artificial Neural Network (ANN) models offer some advantages with respect to the flexibility of not requiring a precise quantitative mechanism of the process and the ability to train themselves from the data directly. In this study, an ANN model was used to generate one-day-ahead streamflow forecasts from the precipitation input over a catchment. Meanwhile, the ANN model parameters were trained using a Sequential Monte Carlo (SMC) approach, namely the Regularized Particle Filter (RPF). SMC approaches are known for their ability to track the states and parameters of a nonlinear dynamic process based on Bayes' rule and effective sampling and resampling strategies. In this study, five years of daily rainfall and streamflow measurements were used for model training. Variable sample sizes for the RPF, from 200 to 2000, were tested. The results show that, beyond 1000 RPF samples, the simulation statistics, in terms of correlation coefficient, root mean square error, and bias, were stabilized. It is also shown that the forecasted daily flows fit the observations very well, with a correlation coefficient higher than 0.95. The results of the RPF simulations were also compared with those from the popular back-propagation ANN training approach. The pros and cons of using the SMC approach and the traditional back-propagation approach will be discussed.
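
    The SMC idea is easiest to see in a bootstrap particle filter, the plain ancestor of the regularized variant used in the study. The Python sketch below tracks the single parameter of a toy linear-reservoir streamflow model; the model structure, noise levels, and jitter step are illustrative assumptions, not the study's configuration.

      import numpy as np

      rng = np.random.default_rng(3)
      T, N, k_true = 100, 1000, 0.7
      rain = rng.gamma(2.0, 1.0, T)                 # synthetic daily rainfall
      q = np.zeros(T)
      for t in range(1, T):                         # synthetic "observed" streamflow
          q[t] = k_true * q[t - 1] + (1 - k_true) * rain[t] + 0.05 * rng.standard_normal()

      k = rng.uniform(0.0, 1.0, N)                  # particles for the unknown parameter
      qp = np.zeros(N)
      for t in range(1, T):
          k = np.clip(k + 0.01 * rng.standard_normal(N), 0.0, 1.0)  # jitter (regularization-like)
          qp = k * qp + (1 - k) * rain[t]           # propagate each particle's flow
          w = np.exp(-0.5 * ((q[t] - qp) / 0.05) ** 2) + 1e-300     # likelihood weights
          w /= w.sum()
          idx = rng.choice(N, N, p=w)               # multinomial resampling
          k, qp = k[idx], qp[idx]

      print(f"posterior mean k = {k.mean():.2f} (true value {k_true})")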

  2. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently assign newly recruited patients to the different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence of different prior distributions on the design. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with those of the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size.
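
    The variance-minimizing randomization rate has a classical closed form worth recalling: for outcome standard deviations s1 and s2, allocating the fraction r = s1/(s1 + s2) of patients to arm 1 minimizes the variance of the difference-in-means statistic (Neyman allocation; the paper's Bayesian rate plays the analogous role). A quick check in Python:

      def optimal_rate(s1, s2):
          # Neyman allocation: fraction of patients assigned to arm 1
          return s1 / (s1 + s2)

      def var_diff(r, s1, s2, n):
          # variance of the difference in means with allocation fraction r
          return s1 ** 2 / (r * n) + s2 ** 2 / ((1 - r) * n)

      s1, s2, n = 2.0, 1.0, 120
      r = optimal_rate(s1, s2)
      print(r, var_diff(r, s1, s2, n), var_diff(0.5, s1, s2, n))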

  3. Using a water-confined carbon nanotube to probe the electricity of sequential charged segments of macromolecules

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Zhao, Yan-Jiao; Huang, Ji-Ping

    2012-07-01

    The detection of macromolecular conformation is particularly important in many physical and biological applications. Here we theoretically explore a method for achieving this detection by probing the electricity of sequential charged segments of macromolecules. Our analysis is based on molecular dynamics simulations, and we investigate a single file of water molecules confined in a half-capped single-walled carbon nanotube (SWCNT) with an external electric charge of +e or -e (e is the elementary charge). The charge is located in the vicinity of the cap of the SWCNT and along the centerline of the SWCNT. We reveal the picosecond timescale of the re-orientation of the water molecules (namely, from one unidirectional ordering to the other) in response to a switch in the charge signal, -e → +e or +e → -e. Our results are well understood by taking into account the electrical interactions between the water molecules and between the water molecules and the external charge. Because such signals of re-orientation can be magnified and transported according to Tu et al. [2009 Proc. Natl. Acad. Sci. USA 106 18120], it becomes possible to record fingerprints of electric signals arising from the sequential charged segments of a macromolecule, which are expected to be useful for recognizing the conformations of some particular macromolecules.

  4. Multiscale Modeling of Damage Processes in fcc Aluminum: From Atoms to Grains

    NASA Technical Reports Server (NTRS)

    Glaessgen, E. H.; Saether, E.; Yamakov, V.

    2008-01-01

    Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, current analysis is limited to small domains and increasing the size of the MD domain quickly presents intractable computational demands. A preferred approach to surmount this computational limitation has been to combine continuum mechanics-based modeling procedures, such as the finite element method (FEM), with MD analyses thereby reducing the region of atomic scale refinement. Such multiscale modeling strategies can be divided into two broad classifications: concurrent multiscale methods that directly incorporate an atomistic domain within a continuum domain and sequential multiscale methods that extract an averaged response from the atomistic simulation for later use as a constitutive model in a continuum analysis.

  5. Inhomogeneities detection in annual precipitation time series in Portugal using direct sequential simulation

    NASA Astrophysics Data System (ADS)

    Caineta, Júlio; Ribeiro, Sara; Costa, Ana Cristina; Henriques, Roberto; Soares, Amílcar

    2014-05-01

    Climate data homogenisation is of major importance in monitoring climate change, the validation of weather forecasting, general circulation and regional atmospheric models, the modelling of erosion, and drought monitoring, among other studies of hydrological and environmental impacts. This is because non-climatic factors can cause time series discontinuities which may hide the true climatic signal and patterns, thus potentially biasing the conclusions of those studies. In the last two decades, many methods have been developed to identify and remove these inhomogeneities. One of these is based on geostatistical simulation (DSS, direct sequential simulation), where local probability density functions (pdf) are calculated at candidate monitoring stations, using spatial and temporal neighbouring observations, and are then used for the detection of inhomogeneities. This approach was previously applied to detect inhomogeneities in four precipitation series (wet day count) from a network with 66 monitoring stations located in the southern region of Portugal (1980-2001). That study revealed promising results and the potential advantages of geostatistical techniques for inhomogeneity detection in climate time series. This work extends the previous case study and investigates the application of the geostatistical stochastic approach to ten precipitation series that were previously classified as inhomogeneous by one of six absolute homogeneity tests (Mann-Kendall test, Wald-Wolfowitz runs test, Von Neumann ratio test, Standard normal homogeneity test (SNHT) for a single break, Pettitt test, and Buishand range test). Moreover, a sensitivity analysis is implemented to investigate the number of simulated realisations that should be used to accurately infer the local pdfs. Accordingly, the number of simulations per iteration was increased from 50 to 500, which resulted in a more representative local pdf. A set of default and recommended settings is provided, which will help other users to implement this method. The need for user intervention is reduced to a minimum through the use of a cross-platform script. Finally, as in the previous study, the results are compared with those from the SNHT, Pettitt and Buishand range tests, which were applied to composite (ratio) reference series. Acknowledgements: The authors gratefully acknowledge the financial support of "Fundação para a Ciência e Tecnologia" (FCT), Portugal, through the research project PTDC/GEO-MET/4026/2012 ("GSIMCLI - Geostatistical simulation with local distributions for the homogenization and interpolation of climate data").
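
    The detection rule sketched above reduces, in essence, to an outlier test against the simulated local pdf: a station-year is suspect when the observed value falls in the tails of the distribution of the DSS realizations. A hedged Python illustration (synthetic numbers; the percentile threshold is an assumed setting, not the project's calibrated value):

      import numpy as np

      def flag_inhomogeneous(simulated, observed, prob=0.95):
          # simulated: (n_realizations, n_years); observed: (n_years,)
          lo = np.percentile(simulated, 100 * (1 - prob) / 2, axis=0)
          hi = np.percentile(simulated, 100 * (1 + prob) / 2, axis=0)
          return (observed < lo) | (observed > hi)

      rng = np.random.default_rng(7)
      sims = rng.normal(700.0, 80.0, size=(500, 22))    # 500 realizations, 22 years
      obs = rng.normal(700.0, 80.0, size=22)
      obs[10] = 1100.0                                  # synthetic break in year 10
      print(np.where(flag_inhomogeneous(sims, obs))[0])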

  6. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.

    1976-01-01

    The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.

  7. A Comparison of Traditional, Step-Path, and Geostatistical Techniques in the Stability Analysis of a Large Open Pit

    NASA Astrophysics Data System (ADS)

    Mayer, J. M.; Stead, D.

    2017-04-01

    With the increased drive towards deeper and more complex mine designs, geotechnical engineers are often forced to reconsider traditional deterministic design techniques in favour of probabilistic methods. These alternative techniques allow for the direct quantification of uncertainties within a risk and/or decision analysis framework. However, conventional probabilistic practices typically discretize geological materials into discrete, homogeneous domains, with attributes defined by spatially constant random variables, despite the fact that geological media display inherent heterogeneous spatial characteristics. This research directly simulates this phenomenon using a geostatistical approach, known as sequential Gaussian simulation. The method utilizes the variogram which imposes a degree of controlled spatial heterogeneity on the system. Simulations are constrained using data from the Ok Tedi mine site in Papua New Guinea and designed to randomly vary the geological strength index and uniaxial compressive strength using Monte Carlo techniques. Results suggest that conventional probabilistic techniques have a fundamental limitation compared to geostatistical approaches, as they fail to account for the spatial dependencies inherent to geotechnical datasets. This can result in erroneous model predictions, which are overly conservative when compared to the geostatistical results.

  8. Sequential Dependencies in Driving

    ERIC Educational Resources Information Center

    Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.

    2012-01-01

    The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…

  9. J-adaptive estimation with estimated noise statistics

    NASA Technical Reports Server (NTRS)

    Jazwinski, A. H.; Hipkins, C.

    1973-01-01

    The J-adaptive sequential estimator is extended to include simultaneous estimation of the noise statistics in a model for system dynamics. This extension completely automates the estimator, eliminating the requirement of an analyst in the loop. Simulations in satellite orbit determination demonstrate the efficacy of the sequential estimation algorithm.

  10. Parallelization and automatic data distribution for nuclear reactor simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liebrock, L.M.

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine cannot run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.

  11. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design the most informative experiments so that the correct model equation can be determined with as little experimentation as possible. The discussion includes: the structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
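
    The central quantity in such a procedure is the Kullback-Leibler information between the predictive densities of rival models at a candidate experiment. The Python sketch below scores design points by a symmetrized KL criterion over Gaussian predictions with known noise; this is a simplification for illustration, not a transcription of the thesis procedure.

      import numpy as np

      def kl_gauss(m1, v1, m2, v2):
          # KL divergence between two univariate Gaussians
          return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

      def discrimination_score(x, models, noise=1.0):
          preds = [b @ x for b in models]        # each rival model's prediction at x
          return sum(kl_gauss(mi, noise, mj, noise) + kl_gauss(mj, noise, mi, noise)
                     for i, mi in enumerate(preds) for mj in preds[i + 1:])

      candidates = [np.array([1.0, t]) for t in np.linspace(-1.0, 1.0, 21)]
      models = [np.array([0.0, 1.0]), np.array([0.0, -1.0]), np.array([0.5, 0.0])]
      print(max(candidates, key=lambda x: discrimination_score(x, models)))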

  12. High data rate coding for the space station telemetry links.

    NASA Technical Reports Server (NTRS)

    Lumb, D. R.; Viterbi, A. J.

    1971-01-01

    Coding systems for high data rates were examined from the standpoint of potential application in space-station telemetry links. Approaches considered included convolutional codes with sequential, Viterbi, and cascaded-Viterbi decoding. It was concluded that a high-speed (40 Mbps) sequential decoding system best satisfies the requirements for the assumed growth potential and specified constraints. Trade-off studies leading to this conclusion are reviewed, and some sequential (Fano) algorithm improvements are discussed, together with real-time simulation results.

  13. Concurrent processing simulation of the space station

    NASA Technical Reports Server (NTRS)

    Gluck, R.; Hale, A. L.; Sunkel, John W.

    1989-01-01

    The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, each of which required significant advancement of the state of the art: (1) the development of an explicit mathematical model, via symbol manipulation, of a flexible multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent versus sequential digital computation will grow substantially as the computational load is increased. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full-scale testing, which has become impractical.

  14. Building merger trees from cosmological N-body simulations. Towards improving galaxy formation models using subhaloes

    NASA Astrophysics Data System (ADS)

    Tweed, D.; Devriendt, J.; Blaizot, J.; Colombi, S.; Slyz, A.

    2009-11-01

    Context: In the past decade or so, using numerical N-body simulations to describe the gravitational clustering of dark matter (DM) in an expanding universe has become the tool of choice for tackling the issue of hierarchical galaxy formation. As mass resolution increases with the power of supercomputers, one is able to grasp finer and finer details of this process, resolving more and more of the inner structure of collapsed objects. This begs one to revisit time and again the post-processing tools with which one transforms particles into “invisible” dark matter haloes and from thereon into luminous galaxies. Aims: Although a fair amount of work has been devoted to growing Monte-Carlo merger trees that resemble those built from an N-body simulation, comparatively little effort has been invested in quantifying the caveats one necessarily encounters when one extracts trees directly from such a simulation. To somewhat revert the tide, this paper seeks to provide its reader with a comprehensive study of the problems one faces when following this route. Methods: The first step in building merger histories of dark matter haloes and their subhaloes is to identify these structures in each of the time outputs (snapshots) produced by the simulation. Even though we discuss a particular implementation of such an algorithm (called AdaptaHOP) in this paper, we believe that our results do not depend on the exact details of the implementation but instead extend to most if not all (sub)structure finders. To illustrate this point in the appendix we compare AdaptaHOP's results to the standard friend-of-friend (FOF) algorithm, widely utilised in the astrophysical community. We then highlight different ways of building merger histories from AdaptaHOP haloes and subhaloes, contrasting their various advantages and drawbacks. Results: We find that the best approach to (sub)halo merging histories is through an analysis that goes back and forth between identification and tree building rather than one that conducts a straightforward sequential treatment of these two steps. This is rooted in the complexity of the merging trees that have to depict an inherently dynamical process from the partial temporal information contained in the collection of instantaneous snapshots available from the N-body simulation. However, we also propose a simpler sequential “Most massive Substructure Method” (MSM) whose trees approximate those obtained via the more complicated non sequential method. Appendices are only available in electronic form at: http://www.aanda.org
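
    A concrete flavour of the tree-building step discussed here: the simplest link between snapshots assigns each halo the descendant that inherits the largest share of its particles. The toy Python sketch below illustrates that shared-particle criterion only; it is not AdaptaHOP's actual algorithm, and the halo particle sets are invented.

      from collections import Counter

      def descendants(halos_t, halos_t1):
          # halos: {halo_id: set of particle ids}; link each halo at t to the
          # halo at t+1 that receives most of its particles
          links = {}
          for hid, parts in halos_t.items():
              votes = Counter()
              for did, dparts in halos_t1.items():
                  votes[did] = len(parts & dparts)
              best, score = votes.most_common(1)[0]
              links[hid] = best if score > 0 else None
          return links

      halos_t = {0: {1, 2, 3, 4}, 1: {5, 6, 7}}
      halos_t1 = {0: {1, 2, 3, 5}, 1: {4, 6, 7, 8}}     # some particles exchanged
      print(descendants(halos_t, halos_t1))             # -> {0: 0, 1: 1}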

  15. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations.

    PubMed

    Qin, Fangjun; Chang, Lubin; Jiang, Sai; Zha, Feng

    2018-05-03

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms.
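
    For contrast with the proposed SMEKF, the traditional sequential EKF update that the abstract mentions, in which both the state and the covariance are refreshed after every individual measurement, looks like the following generic linear-Gaussian Python sketch (not the authors' attitude-specific implementation):

      import numpy as np

      def sequential_update(x, P, obs, H_rows, r):
          # fold measurements in one at a time instead of as one stacked vector
          for z, h in zip(obs, H_rows):
              h = np.atleast_2d(h)                      # 1 x n measurement row
              S = (h @ P @ h.T).item() + r              # innovation variance
              K = (P @ h.T) / S                         # n x 1 Kalman gain
              x = x + (K * (z - (h @ x).item())).ravel()
              P = P - K @ h @ P                         # covariance after each obs
          return x, P

      x, P = np.zeros(2), np.eye(2)
      obs = [1.0, 2.0]
      H_rows = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
      print(sequential_update(x, P, obs, H_rows, r=0.1))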

  16. A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations

    PubMed Central

    Qin, Fangjun; Jiang, Sai; Zha, Feng

    2018-01-01

    In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538

  17. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  18. Sequential use of simulation and optimization in analysis and planning

    Treesearch

    Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones

    2000-01-01

    Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...

  19. Comparison of Statistical Approaches Dealing with Time-dependent Confounding in Drug Effectiveness Studies

    PubMed Central

    Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W.; Tremlett, Helen

    2017-01-01

    In longitudinal studies, if the time-dependent covariates are affected by past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models (MSCMs) are frequently used to deal with such confounding. To avoid some of the problems of fitting MSCMs, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as MSCMs in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008). PMID:27659168

  20. Comparison of statistical approaches dealing with time-dependent confounding in drug effectiveness studies.

    PubMed

    Karim, Mohammad Ehsanul; Petkau, John; Gustafson, Paul; Platt, Robert W; Tremlett, Helen

    2018-06-01

    In longitudinal studies, if the time-dependent covariates are affected by the past treatment, time-dependent confounding may be present. For a time-to-event response, marginal structural Cox models are frequently used to deal with such confounding. To avoid some of the problems of fitting a marginal structural Cox model, the sequential Cox approach has been suggested as an alternative. Although the estimation mechanisms are different, both approaches claim to estimate the causal effect of treatment by appropriately adjusting for time-dependent confounding. We carry out simulation studies to assess the suitability of the sequential Cox approach for analyzing time-to-event data in the presence of a time-dependent covariate that may or may not be a time-dependent confounder. Results from these simulations revealed that the sequential Cox approach is not as effective as the marginal structural Cox model in addressing the time-dependent confounding. The sequential Cox approach was also found to be inadequate in the presence of a time-dependent covariate. We propose a modified version of the sequential Cox approach that correctly estimates the treatment effect in both of the above scenarios. All approaches are applied to investigate the impact of beta-interferon treatment in delaying disability progression in the British Columbia Multiple Sclerosis cohort (1995-2008).

  1. Using Zebra-speech to study sequential and simultaneous speech segregation in a cochlear-implant simulation.

    PubMed

    Gaudrain, Etienne; Carlyon, Robert P

    2013-01-01

    Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish the target and the masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed.

  2. Using Zebra-speech to study sequential and simultaneous speech segregation in a cochlear-implant simulation

    PubMed Central

    Gaudrain, Etienne; Carlyon, Robert P.

    2013-01-01

    Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish target and masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed. PMID:23297922

  3. Simulations of 6-DOF Motion with a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Murman, Scott M.; Aftosmis, Michael J.; Berger, Marsha J.; Kwak, Dochan (Technical Monitor)

    2003-01-01

    Coupled 6-DOF/CFD trajectory predictions using an automated Cartesian method are demonstrated by simulating a GBU-32/JDAM store separating from an F-18C aircraft. Numerical simulations are performed at two Mach numbers near the sonic speed, and compared with flight-test telemetry and photographically derived data. Simulation results obtained with a sequential-static series of flow solutions are contrasted with results using a time-dependent flow solver. Both numerical methods show good agreement with the flight-test data through the first half of the simulations. The sequential-static and time-dependent methods diverge over the last half of the trajectory prediction, after the store produces peak angular rates. A cost comparison for the Cartesian method is included, in terms of absolute cost and relative to computing uncoupled 6-DOF trajectories. A detailed description of the 6-DOF method, as well as a verification of its accuracy, is provided in an appendix.

  4. Monte Carlo Simulation of Sudden Death Bearing Testing

    NASA Technical Reports Server (NTRS)

    Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.

    2003-01-01

    Monte Carlo simulations combined with sudden death testing were used to compare resultant bearing lives to the calculated bearing life and the cumulative test time and calendar time relative to sequential and censored sequential testing. A total of 30 960 virtual 50-mm bore deep-groove ball bearings were evaluated in 33 different sudden death test configurations comprising 36, 72, and 144 bearings each. Variations in both life and Weibull slope were a function of the number of bearings failed, independent of the test method used, and not of the total number of bearings tested. Variations in L10 life as a function of the number of bearings failed were similar to variations in life obtained from sequentially failed real bearings and from Monte Carlo (virtual) testing of entire populations. Reductions of up to 40 percent in bearing test time and calendar time can be achieved by testing to failure or the L50 life and terminating all testing when the last of the predetermined bearing failures has occurred. Sudden death testing is not a more efficient method to reduce bearing test time or calendar time when compared to censored sequential testing.

  5. Multi-Level Sequential Pattern Mining Based on Prime Encoding

    NASA Astrophysics Data System (ADS)

    Lianglei, Sun; Yun, Li; Jiang, Yin

    Encoding serves not only to express the hierarchical relationship but also to facilitate identifying the relationship between different levels, which directly affects the efficiency of algorithms for mining multi-level sequential patterns. In this paper, we prove that a single division operation can decide the parent-child relationship between different levels under prime encoding, and we present the PMSM and CROSS-PMSM algorithms, based on prime encoding, for mining multi-level and cross-level sequential patterns, respectively. Experimental results show that the algorithms can effectively extract multi-level and cross-level sequential patterns from the sequence database.
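
    The divisibility idea is easy to see in a toy sketch (Python; the tags and helper below are illustrative, not the paper's PMSM implementation): give each hierarchy node a distinct prime, encode a node as the product of the primes along its path from the root, and a single modulo/division then decides ancestry.

```python
from math import prod

# Each node in the concept hierarchy is tagged with a distinct prime;
# a node's code is the product of the primes on its path from the root.
# Then "a is an ancestor of b" reduces to one division: b_code % a_code == 0.

def path_code(prime_tags):
    return prod(prime_tags)

root  = path_code([2])        # root tagged with 2
child = path_code([2, 3])     # root -> child
grand = path_code([2, 3, 5])  # root -> child -> grandchild
other = path_code([2, 7])     # a sibling branch

assert grand % child == 0     # child is an ancestor of the grandchild
assert grand % other != 0     # 'other' is not on the grandchild's path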

  6. Dry minor mergers and size evolution of high-z compact massive early-type galaxies

    NASA Astrophysics Data System (ADS)

    Oogi, Taira; Habe, Asao

    2012-09-01

    Recent observations show evidence that high-z (z ~ 2 - 3) early-type galaxies (ETGs) are considerably more compact than those with comparable mass at z ~ 0. The dry merger scenario is one of the most plausible explanations for such size evolution. However, previous studies based on this scenario have not succeeded in consistently explaining the properties of both high-z compact massive ETGs and local ETGs. We investigate the effects of sequential, multiple dry minor (stellar mass ratio M2/M1<1/4) mergers on the size evolution of compact massive ETGs. We perform N-body simulations of the sequential minor mergers with parabolic and head-on orbits, including a dark matter component and a stellar component. We show that sequential minor mergers of compact satellite galaxies are the most efficient at promoting size growth and decreasing the velocity dispersion of the compact massive ETGs. The change of stellar size and density of the merger remnant is consistent with the recent observations. Furthermore, we construct the merger histories of candidates for high-z compact massive ETGs using the Millennium Simulation Database, and estimate the size growth of the galaxies by dry minor mergers. We can reproduce the mean size growth factor between z = 2 and z = 0, assuming the most efficient size growth obtained in the case of the sequential minor mergers in our simulations.

  7. Multiple Point Statistics algorithm based on direct sampling and multi-resolution images

    NASA Astrophysics Data System (ADS)

    Julien, S.; Renard, P.; Chugunova, T.

    2017-12-01

    Multiple Point Statistics (MPS) has been popular for more than a decade in the Earth Sciences, because these methods can generate random fields that reproduce the highly complex spatial features given in a conceptual model, the training image, where classical geostatistics techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists in borrowing patterns from the training image to populate a simulation grid. The grid is filled sequentially by visiting its nodes in random order; the patterns, whose number of nodes is fixed, become spatially narrower as the simulation proceeds and the grid becomes more densely informed. Hence, large-scale structures are captured at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics that are distinguishable at different scales in the training image, and thereby lose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel to the training image (convolution) yields a lower-resolution image; iterating this process builds a pyramid of images depicting fewer details at each level, as is done in image processing, for example, to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest-resolution level, and then each level up to the finest resolution, conditioned on the level one rank coarser. This scheme helps reproduce the spatial structures of the training image at every scale and thus generates more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures; such images often display typical structures at different scales and are well suited for MPS simulation techniques.
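
    A minimal sketch of the multi-resolution decomposition step (Python with NumPy/SciPy; the function name and parameters are ours, and this covers only the pyramid construction, not the direct-sampling simulation itself):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, levels, sigma=1.0):
    """Build a multi-resolution pyramid by repeated smoothing and decimation.

    Level 0 is the original training image; each coarser level keeps fewer
    details, mimicking the decomposition described above."""
    pyramid = [image]
    for _ in range(levels - 1):
        smoothed = gaussian_filter(pyramid[-1], sigma)  # convolution with a Gaussian kernel
        pyramid.append(smoothed[::2, ::2])              # downsample by a factor of 2
    return pyramid  # coarsest level last; simulate it first, then refine

ti = np.random.rand(256, 256)   # stand-in for a training image
levels = gaussian_pyramid(ti, 4)
```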

  8. Uncertainty assessment of PM2.5 contamination mapping using spatiotemporal sequential indicator simulations and multi-temporal monitoring data.

    PubMed

    Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang

    2016-04-12

    Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on the number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.
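
    For readers unfamiliar with the indicator formalism underlying SIS and STSIS, the toy sketch below (Python; the thresholds and values are hypothetical, not the paper's data) shows the indicator coding step on which both techniques rest:

```python
import numpy as np

def indicator_transform(values, thresholds):
    """Indicator coding used by (spatiotemporal) sequential indicator simulation.

    For each threshold z_k, i_k = 1 if the datum is <= z_k, else 0. The
    simulation then estimates, at every unsampled space-time node, the
    conditional probability of not exceeding each threshold."""
    values = np.asarray(values, dtype=float)
    return np.stack([(values <= z).astype(int) for z in thresholds], axis=-1)

pm25 = np.array([12.0, 35.4, 80.1, 150.3])   # hypothetical daily PM2.5 values
cuts = [35.4, 75.0, 115.0]                   # hypothetical class thresholds
print(indicator_transform(pm25, cuts))
```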

  9. Uncertainty assessment of PM2.5 contamination mapping using spatiotemporal sequential indicator simulations and multi-temporal monitoring data

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang

    2016-04-01

    Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on the number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.

  10. Structural Optimization for Reliability Using Nonlinear Goal Programming

    NASA Technical Reports Server (NTRS)

    El-Sayed, Mohamed E.

    1999-01-01

    This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Multiple design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as they apply to this design problem.
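
    The sequential-unconstrained-minimization idea can be sketched compactly (Python/SciPy; the toy one-variable "cover plate" problem, its constants, and the exterior quadratic penalty are our illustrative stand-ins, whereas the report itself uses a linear extended interior penalty, though Powell's conjugate-directions method is available directly in SciPy):

```python
import numpy as np
from scipy.optimize import minimize

# Toy pressure-vessel-style problem (names and numbers are illustrative only):
# minimize weight(t) subject to the stress constraint g(t) <= 0.
weight = lambda t: 4.0 * t[0]                        # plate thickness drives weight
stress = lambda t: 200.0 / max(t[0], 1e-9) - 150.0   # require g(t) <= 0

def sumt(x0, r0=1.0, growth=10.0, iters=6):
    """Sequential unconstrained minimization: solve a series of penalized
    subproblems with an increasing penalty parameter, each by Powell's
    conjugate-directions method (no gradients needed)."""
    x, r = np.asarray(x0, float), r0
    for _ in range(iters):
        penalized = lambda t: weight(t) + r * max(stress(t), 0.0) ** 2
        x = minimize(penalized, x, method="Powell").x
        r *= growth                                  # tighten the penalty each cycle
    return x

print(sumt([0.5]))   # converges toward the constraint boundary t = 4/3
```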

  11. Optical Tracking Data Validation and Orbit Estimation for Sparse Observations of Satellites by the OWL-Net.

    PubMed

    Choi, Jin; Jo, Jung Hyun; Yim, Hong-Suh; Choi, Eun-Jung; Cho, Sungki; Park, Jang-Hyun

    2018-06-07

    An Optical Wide-field patroL-Network (OWL-Net) has been developed for maintaining the orbital ephemerides of Korean low Earth orbit (LEO) satellites. The OWL-Net consists of five optical tracking stations. Brightness signals of sunlight reflected from the targets were detected by a charge-coupled device (CCD). A chopper system was adopted for fast astrometric data sampling, at a maximum of 50 Hz, within a short observation time. The astrometric accuracy of the optical observation data was validated against precise orbital ephemerides such as Consolidated Prediction File (CPF) data and precise orbit determination results from onboard Global Positioning System (GPS) data from the target satellite. In the optical observation simulation of the OWL-Net for 2017, the average observation span for a single arc of the 11 LEO observation targets was about 5 min, while the average separation between optical observations was 5 h. We estimated the position and velocity, along with an atmospheric drag coefficient, of the LEO observation targets using a sequential-batch orbit estimation technique after multi-arc batch orbit estimation. Post-fit residuals for the multi-arc batch and sequential-batch orbit estimations were analyzed for the optical measurements and the reference orbits (CPF and GPS data). The post-fit residuals with respect to the reference orbits show errors of a few tens of meters in the in-track direction for both the multi-arc batch and sequential-batch orbit estimation results.

  12. Group-sequential three-arm noninferiority clinical trial designs

    PubMed Central

    Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko

    2016-01-01

    We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481

  13. Improved coverage of cDNA-AFLP by sequential digestion of immobilized cDNA.

    PubMed

    Weiberg, Arne; Pöhler, Dirk; Morgenstern, Burkhard; Karlovsky, Petr

    2008-10-13

    cDNA-AFLP is a transcriptomics technique which does not require prior sequence information and can therefore be used as a gene discovery tool. The method is based on selective amplification of cDNA fragments generated by restriction endonucleases, electrophoretic separation of the products and comparison of the band patterns between treated samples and controls. Unequal distribution of restriction sites used to generate cDNA fragments negatively affects the performance of cDNA-AFLP. Some transcripts are represented by more than one fragment while others escape detection, causing redundancy and reducing the coverage of the analysis, respectively. With the goal of improving the coverage of cDNA-AFLP without increasing its redundancy, we designed a modified cDNA-AFLP protocol. Immobilized cDNA is sequentially digested with several restriction endonucleases and the released DNA fragments are collected in mutually exclusive pools. To investigate the performance of the protocol, the software tool MECS (Multiple Enzyme cDNA-AFLP Simulation) was written in Perl. cDNA-AFLP protocols described in the literature and the new sequential digestion protocol were simulated on sets of cDNA sequences from mouse, human and Arabidopsis thaliana. The redundancy and coverage, the total number of PCR reactions, and the average fragment length were calculated for each protocol and cDNA set. The simulation revealed that sequential digestion of immobilized cDNA followed by partitioning of the released fragments into mutually exclusive pools outperformed other cDNA-AFLP protocols in terms of coverage, redundancy, fragment length, and the total number of PCRs. Primers generating 30 to 70 amplicons per PCR provided the highest fraction of electrophoretically distinguishable fragments suitable for normalization. For the A. thaliana, human and mouse transcriptomes, the use of two marking enzymes and three sequentially applied releasing enzymes for each of the marking enzymes is recommended.

  14. A Simulation Approach to Assessing Sampling Strategies for Insect Pests: An Example with the Balsam Gall Midge

    PubMed Central

    Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.

    2013-01-01

    Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
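
    A miniature version of the pre-sampling simulation idea (Python; the density and clumping values are invented for illustration) is to draw clumped counts and watch the running sample mean converge as trees are added to a transect:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate per-tree gall counts with a clumped (negative binomial) distribution,
# then track how quickly the running sample mean approaches the true density.
true_mean, k = 3.0, 0.8                 # hypothetical density and clumping parameter
p = k / (k + true_mean)                 # NumPy's negative binomial parameterization
counts = rng.negative_binomial(k, p, size=200)

running_mean = np.cumsum(counts) / np.arange(1, len(counts) + 1)
for n in (10, 25, 40, 100):
    print(f"n={n:3d}  mean={running_mean[n - 1]:.2f}  (true {true_mean})")
```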

  15. Propagating probability distributions of stand variables using sequential Monte Carlo methods

    Treesearch

    Jeffrey H. Gove

    2009-01-01

    A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
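
    A minimal bootstrap SIR filter is sketched below (Python; the random-walk state model and noise levels are our stand-ins, not Gove's yield model), showing the predictor (propagate), corrector (weight), and resampling steps:

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_filter(obs_seq, n=1000):
    """Minimal sampling importance resampling (SIR) particle filter for a toy
    random-walk state observed with noise (a stand-in for a stand-yield
    state equation)."""
    particles = rng.normal(0.0, 2.0, n)                   # initial particle cloud
    means = []
    for y in obs_seq:
        particles = particles + rng.normal(0.0, 0.2, n)   # predictor: propagate the state
        w = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)   # corrector: likelihood weights
        w /= w.sum()
        particles = particles[rng.choice(n, n, p=w)]      # importance resampling
        means.append(particles.mean())
    return np.array(means)

truth = 5.0
y = truth + rng.normal(0.0, 0.5, 30)   # synthetic noisy measurements
print(sir_filter(y)[-1])               # posterior mean settles near 5
```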

  16. Migration of formaldehyde from melamine-ware: UK 2008 survey results.

    PubMed

    Potter, E L J; Bradley, E L; Davies, C R; Barnes, K A; Castle, L

    2010-06-01

    Fifty melamine-ware articles were tested for the migration of formaldehyde - with hexamethylenetetramine (HMTA) expressed as formaldehyde - to see whether the total specific migration limit (SML(T)) was being observed. The SML(T), given in European Commission Directive 2002/72/EC as amended, is 15 mg/kg. Fourier transform-infrared (FT-IR) spectroscopy was carried out on the articles to confirm the plastic type. Articles were exposed to the food simulant 3% (w/v) aqueous acetic acid under conditions representing their worst foreseeable use. Formaldehyde and HMTA in food simulants were determined by a spectrophotometric derivatization procedure. Positive samples were confirmed by a second spectrophotometric procedure using an alternative derivatization agent. As all products purchased were intended for repeat use, three sequential exposures to the simulant were carried out. Formaldehyde was detected in the simulant exposed to 43 samples. Most of the levels found were well below the limits set in law, such that 84% of the samples tested were compliant. However, eight samples had formaldehyde levels that were clearly above the legal maximum at six to 65 times the SML(T).

  17. A hybrid fuzzy logic/constraint satisfaction problem approach to automatic decision making in simulation game models.

    PubMed

    Braathen, Sverre; Sendstad, Ole Jakob

    2004-08-01

    Possible techniques for representing automatic decision-making behavior approximating human experts in complex simulation model experiments are of interest. Here, fuzzy logic (FL) and constraint satisfaction problem (CSP) methods are applied in a hybrid design of automatic decision making in simulation game models. The decision processes of a military headquarters are used as a model for the FL/CSP decision agents' choice of variables and rulebases. The hybrid decision agent design is applied in two different types of simulation games to test the general applicability of the design. The first application is a two-sided zero-sum sequential resource allocation game with imperfect information interpreted as an air campaign game. The second example is a network flow stochastic board game designed to capture important aspects of land manoeuvre operations. The proposed design is shown to perform well also in this complex game with a very large (billion-size) action set. Training of the automatic FL/CSP decision agents against selected performance measures is also shown and results are presented together with directions for future research.

  18. Posterior error probability in the Mu-2 Sequential Ranging System

    NASA Technical Reports Server (NTRS)

    Coyle, C. W.

    1981-01-01

    An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and falsely indicated errors on 0.2% of the acquisitions.

  19. Biaxially mechanical tuning of 2-D reversible and irreversible surface topologies through simultaneous and sequential wrinkling.

    PubMed

    Yin, Jie; Yagüe, Jose Luis; Boyce, Mary C; Gleason, Karen K

    2014-02-26

    Controlled buckling is a facile means of structuring surfaces. The resulting ordered wrinkling topologies provide surface properties and features desired for multifunctional applications. Here, we study the biaxially dynamic tuning of two-dimensional wrinkled micropatterns under cyclic mechanical stretching/releasing/restretching applied simultaneously or sequentially. A biaxially prestretched PDMS substrate is coated with a stiff polymer deposited by initiated chemical vapor deposition (iCVD). Applying a mechanical release/restretch cycle in two directions, loaded simultaneously or sequentially, to the wrinkled system results in a variety of dynamic and tunable wrinkled geometries, the evolution of which is investigated using in situ optical profilometry, numerical simulations, and theoretical modeling. Results show that restretching ordered herringbone micropatterns, created through sequential release of biaxial prestrain, leads to reversible and repeatable surface topography: the initial flat surface and the same wrinkled herringbone pattern are obtained alternately after cyclic release/restretch processes, because the highly ordered structure leaves no avenue for trapping irregular topological regions during cycling, as further evidenced by the uniformity of the strain distributions and the negligible residual strain. Conversely, restretching disordered labyrinth micropatterns created through simultaneous release shows an irreversible surface topology, whether restretching is sequential or simultaneous: the labyrinth forms with irregular topologies and regions of highly concentrated strain, which lead to residual strains and trapped topologies upon cycling, and these trapped topologies depend on the subsequent strain histories as well as on the cycle. The disordered labyrinth pattern varies after each cyclic release/restretch process, presenting residual shallow patterns instead of achieving a flat state. The ability to dynamically tune the highly ordered herringbone patterning through mechanical stretching or other actuation makes these wrinkles excellent candidates for tunable multifunctional surface properties such as reflectivity, friction, anisotropic liquid flow, or boundary-layer control.

  20. Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates

    PubMed Central

    Bartroff, Jay; Song, Jinlin

    2014-01-01

    This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
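
    For reference, the fixed-sample Holm procedure that inspires the sequential version can be sketched in a few lines (Python; the p-values are illustrative):

```python
def holm(p_values, alpha=0.05):
    """Holm's (1979) fixed-sample step-down procedure: sort the p-values,
    compare the i-th smallest against alpha / (m - i), and stop rejecting
    at the first failure."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break   # step-down: all remaining hypotheses are retained
    return rejected

print(holm([0.001, 0.04, 0.03, 0.005]))   # rejects only the two smallest p-values
```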

  1. Group Sequential Testing of the Predictive Accuracy of a Continuous Biomarker with Unknown Prevalence

    PubMed Central

    Koopmeiners, Joseph S.; Feng, Ziding

    2015-01-01

    Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value (NPV) curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence cannot be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated, rather than assumed known, in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curves. The small-sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation, and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180

  2. Auctions with Dynamic Populations: Efficiency and Revenue Maximization

    NASA Astrophysics Data System (ADS)

    Said, Maher

    We study a stochastic sequential allocation problem with a dynamic population of privately-informed buyers. We characterize the set of efficient allocation rules and show that a dynamic VCG mechanism is both efficient and periodic ex post incentive compatible; we also show that the revenue-maximizing direct mechanism is a pivot mechanism with a reserve price. We then consider sequential ascending auctions in this setting, both with and without a reserve price. We construct equilibrium bidding strategies in this indirect mechanism where bidders reveal their private information in every period, yielding the same outcomes as the direct mechanisms. Thus, the sequential ascending auction is a natural institution for achieving either efficient or optimal outcomes.

  3. SIM_EXPLORE: Software for Directed Exploration of Complex Systems

    NASA Technical Reports Server (NTRS)

    Burl, Michael; Wang, Esther; Enke, Brian; Merline, William J.

    2013-01-01

    Physics-based numerical simulation codes are widely used in science and engineering to model complex systems that would be infeasible to study otherwise. While such codes may provide the highest- fidelity representation of system behavior, they are often so slow to run that insight into the system is limited. Trying to understand the effects of inputs on outputs by conducting an exhaustive grid-based sweep over the input parameter space is simply too time-consuming. An alternative approach called "directed exploration" has been developed to harvest information from numerical simulators more efficiently. The basic idea is to employ active learning and supervised machine learning to choose cleverly at each step which simulation trials to run next based on the results of previous trials. SIM_EXPLORE is a new computer program that uses directed exploration to explore efficiently complex systems represented by numerical simulations. The software sequentially identifies and runs simulation trials that it believes will be most informative given the results of previous trials. The results of new trials are incorporated into the software's model of the system behavior. The updated model is then used to pick the next round of new trials. This process, implemented as a closed-loop system wrapped around existing simulation code, provides a means to improve the speed and efficiency with which a set of simulations can yield scientifically useful results. The software focuses on the case in which the feedback from the simulation trials is binary-valued, i.e., the learner is only informed of the success or failure of the simulation trial to produce a desired output. The software offers a number of choices for the supervised learning algorithm (the method used to model the system behavior given the results so far) and a number of choices for the active learning strategy (the method used to choose which new simulation trials to run given the current behavior model). The software also makes use of the LEGION distributed computing framework to leverage the power of a set of compute nodes. The approach has been demonstrated on a planetary science application in which numerical simulations are used to study the formation of asteroid families.
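
    A toy version of the directed-exploration loop (Python with scikit-learn; the "simulator", candidate pool, and success region are invented for illustration, not SIM_EXPLORE's internals) makes the closed-loop structure concrete: fit a behavior model on the trials run so far, then run the candidate trial whose predicted outcome is most uncertain.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Stand-in for an expensive simulator with binary (success/failure) feedback.
def run_simulation(x):
    return int(x[0] + 0.5 * x[1] > 0.3)   # unknown success region

pool = rng.uniform(-2, 2, size=(500, 2))  # candidate input settings
tried = list(rng.choice(len(pool), 16, replace=False))
labels = [run_simulation(pool[i]) for i in tried]
while len(set(labels)) < 2:               # ensure both outcomes are present
    i = int(rng.integers(len(pool)))
    if i not in tried:
        tried.append(i); labels.append(run_simulation(pool[i]))

for _ in range(20):                       # directed-exploration loop
    model = LogisticRegression().fit(pool[tried], labels)     # behavior model
    probs = model.predict_proba(pool)[:, 1]
    untried = [i for i in range(len(pool)) if i not in tried]
    nxt = min(untried, key=lambda i: abs(probs[i] - 0.5))     # most informative trial
    tried.append(nxt)
    labels.append(run_simulation(pool[nxt]))

print(f"ran {len(tried)} trials, {sum(labels)} successes")
```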

  4. Using a signal cancellation technique to assess adaptive directivity of hearing aids.

    PubMed

    Wu, Yu-Hsiang; Bentler, Ruth A

    2007-07-01

    The directivity of an adaptive directional microphone hearing aid (DMHA) cannot be assessed by the method that calls for presenting a "probe" signal from a single loudspeaker to the DMHA that moves to different angles. This method is invalid because the probe signal itself changes the polar pattern. This paper proposes a method for assessing the adaptive DMHA using a "jammer" signal, presented from a second loudspeaker rotating with the DMHA, that simulates a noise source and freezes the polar pattern. Measurement at each angle is obtained by two sequential recordings from the DMHA, one using an input of a probe and a jammer, and the other with an input of the same probe and a phase-inverted jammer. After canceling out the jammer, the remaining response to the probe signal can be used to assess the directivity. In this paper, the new method is evaluated by comparing responses from five adaptive DMHAs to different jammer intensities and locations. This method was shown to be an accurate and reliable way to assess the directivity of the adaptive DMHA in a high-intensity-jammer condition.
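
    The cancellation step itself is simple arithmetic, as the idealized sketch below shows (Python; it ignores the hearing aid's processing between the two recordings, which the real method assumes is identical across the pair):

```python
import numpy as np

t = np.linspace(0, 1, 16000)
probe  = np.sin(2 * np.pi * 440 * t)          # probe signal
jammer = np.sin(2 * np.pi * 1000 * t + 0.3)   # noise source rotating with the aid

# Two sequential recordings: same probe, jammer phase-inverted in the second.
rec1 = probe + jammer
rec2 = probe - jammer

recovered = 0.5 * (rec1 + rec2)               # jammer cancels, probe remains
print(np.max(np.abs(recovered - probe)))      # ~0 up to numerical error
```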

  5. Changes in thermo-tolerance and survival under simulated gastrointestinal conditions of Salmonella Enteritidis PT4 and Salmonella Typhimurium PT4 in chicken breast meat after exposure to sequential stresses.

    PubMed

    Melo, Adma Nadja Ferreira de; Souza, Geany Targino de; Schaffner, Donald; Oliveira, Tereza C Moreira de; Maciel, Janeeyre Ferreira; Souza, Evandro Leite de; Magnani, Marciane

    2017-06-19

    This study assessed changes in thermo-tolerance and the capability to survive simulated gastrointestinal conditions of Salmonella Enteritidis PT4 and Salmonella Typhimurium PT4 inoculated in chicken breast meat following exposure to stresses (cold, acid and osmotic) commonly imposed during food processing. The effects of the stress imposed by exposure to oregano (Origanum vulgare L.) essential oil (OVEO) on thermo-tolerance were also assessed. After exposure to cold stress (5°C for 5 h) in chicken breast meat, the test strains were sequentially exposed to the different stress agents (lactic acid, NaCl or OVEO) at sub-lethal amounts, which were defined considering previously determined minimum inhibitory concentrations, and finally to thermal treatment (55°C for 30 min). Resistant cells from distinct sequential treatments were exposed to simulated gastrointestinal conditions. The exposure to cold stress did not result in increased tolerance to acid stress (lactic acid: 5 and 2.5 μL/g) for either strain. Cells of S. Typhimurium PT4 and S. Enteritidis PT4 previously exposed to acid stress showed higher (p<0.05) tolerance to osmotic stress (NaCl: 75 or 37.5 mg/g) compared to non-acid-exposed cells. Exposure to osmotic stress without previous exposure to acid stress caused a salt-concentration-dependent decrease in counts for both strains. Exposure to OVEO (1.25 and 0.62 μL/g) decreased the acid and osmotic tolerance of both S. Enteritidis PT4 and S. Typhimurium PT4. Sequential exposure to acid and osmotic stress conditions after cold exposure increased (p<0.05) the thermo-tolerance of both strains. The cells that survived the sequential stress exposure (resistant) showed higher tolerance (p<0.05) to acidic conditions during continuous exposure (182 min) to simulated gastrointestinal conditions. Resistant cells of S. Enteritidis PT4 and S. Typhimurium PT4 showed higher survival rates (p<0.05) than control cells at the end of the in vitro digestion. These results show that sequential exposure to multiple sub-lethal stresses may increase the thermo-tolerance and enhance the survival under gastrointestinal conditions of S. Enteritidis PT4 and S. Typhimurium PT4. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Direct Associations or Internal Transformations? Exploring the Mechanisms Underlying Sequential Learning Behavior

    PubMed Central

    Gureckis, Todd M.; Love, Bradley C.

    2009-01-01

    We evaluate two broad classes of cognitive mechanisms that might support the learning of sequential patterns. According to the first, learning is based on the gradual accumulation of direct associations between events based on simple conditioning principles. The other view describes learning as the process of inducing the transformational structure that defines the material. Each of these learning mechanisms predicts differences in the rate of acquisition for differently organized sequences. Across a set of empirical studies, we compare the predictions of each class of model with the behavior of human subjects. We find that learning mechanisms based on transformations of an internal state, such as recurrent network architectures (e.g., Elman, 1990), have difficulty accounting for the pattern of human results relative to a simpler (but more limited) learning mechanism based on learning direct associations. Our results suggest new constraints on the cognitive mechanisms supporting sequential learning behavior. PMID:20396653

  7. Hierarchical Chunking of Sequential Memory on Neuromorphic Architecture with Reduced Synaptic Plasticity

    PubMed Central

    Li, Guoqi; Deng, Lei; Wang, Dong; Wang, Wei; Zeng, Fei; Zhang, Ziyang; Li, Huanglong; Song, Sen; Pei, Jing; Shi, Luping

    2016-01-01

    Chunking refers to a phenomenon whereby individuals group items together when performing a memory task to improve the performance of sequential memory. In this work, we build a bio-plausible hierarchical chunking of sequential memory (HCSM) model to explain why such improvement happens. We address this issue by linking hierarchical chunking with synaptic plasticity and neuromorphic engineering. We uncover that a chunking mechanism reduces the requirements of synaptic plasticity since it allows applying synapses with narrow dynamic range and low precision to perform a memory task. We validate a hardware version of the model through simulation, based on measured memristor behavior with narrow dynamic range in neuromorphic circuits, which reveals how chunking works and what role it plays in encoding sequential memory. Our work deepens the understanding of sequential memory and enables incorporating it for the investigation of the brain-inspired computing on neuromorphic architecture. PMID:28066223

  8. Recent advances in lossless coding techniques

    NASA Astrophysics Data System (ADS)

    Yovanof, Gregory S.

    Current lossless techniques are reviewed with reference to both sequential data files and still images. Two major groups of sequential algorithms, dictionary and statistical techniques, are discussed. In particular, attention is given to Lempel-Ziv coding, Huffman coding, and arithmetic coding. The subject of lossless compression of imagery is briefly discussed. Finally, examples of practical implementations of lossless algorithms and some simulation results are given.
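
    As an example of the statistical group, a compact Huffman coder can be written in a few lines (Python; the sample text is illustrative only):

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Classic Huffman coding: repeatedly merge the two least-frequent
    nodes; each symbol's code is its path of 0s and 1s from the root."""
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)                       # tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, tie, merged]); tie += 1
    return heap[0][2]

codes = huffman_code("abracadabra")
print(codes)                                        # shorter codes for frequent symbols
print("".join(codes[ch] for ch in "abracadabra"))   # the compressed bitstring
```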

  9. Analyzing multicomponent receptive fields from neural responses to natural stimuli

    PubMed Central

    Rowekamp, Ryan; Sharpee, Tatyana O

    2011-01-01

    The challenge of building increasingly better models of neural responses to natural stimuli is to accurately estimate the multiple stimulus features that may jointly affect the neural spike probability. The selectivity for combinations of features is thought to be crucial for achieving classical properties of neural responses such as contrast invariance. The joint search for these multiple stimulus features is difficult because estimating spike probability as a multidimensional function of stimulus projections onto candidate relevant dimensions is subject to the curse of dimensionality. An attractive alternative is to search for relevant dimensions sequentially, as in projection pursuit regression. Here we demonstrate using analytic arguments and simulations of model cells that different types of sequential search strategies exhibit systematic biases when used with natural stimuli. Simulations show that joint optimization is feasible for up to three dimensions with current algorithms. When applied to the responses of V1 neurons to natural scenes, models based on three jointly optimized dimensions had better predictive power in a majority of cases compared to dimensions optimized sequentially, with different sequential methods yielding comparable results. Thus, although the curse of dimensionality remains, at least several relevant dimensions can be estimated by joint information maximization. PMID:21780916

  10. Robust Electrical Transfer System (RETS) for Solar Array Drive Mechanism SlipRing Assembly

    NASA Astrophysics Data System (ADS)

    Bommottet, Daniel; Bossoney, Luc; Schnyder, Ralph; Howling, Alan; Hollenstein, Christoph

    2013-09-01

    Demands for robust and reliable power transmission systems for sliprings for SADMs (Solar Array Drive Mechanisms) are increasing steadily. As a consequence, their performance with respect to the voltage breakdown limit must be known. An understanding of the overall shape of the breakdown voltage versus pressure curve is established, based on experimental measurements of DC (Direct Current) gas breakdown in complex geometries compared with a numerical simulation model. In addition, a detailed study was made of the functional behaviour of an entire satellite wing in a like-operational mode, comprising the solar cells, the power transmission lines, the SRA (SlipRing Assembly), the power S3R (Sequential Serial/shunt Switching Regulators), and the satellite load to simulate the electrical power consumption. A test bench able to measure automatically (a) the breakdown voltage versus pressure curve and (b) the functional switching performance was developed and validated.

  11. Development of a standardized sequential extraction protocol for simultaneous extraction of multiple actinide elements

    DOE PAGES

    Faye, Sherry A.; Richards, Jason M.; Gallardo, Athena M.; ...

    2017-02-07

    Sequential extraction is a useful technique for assessing the potential to leach actinides from soils; however, the current literature lacks uniformity in experimental details, making direct comparison of results impossible. This work continued development toward a standardized five-step sequential extraction protocol by analyzing the extraction behaviors of 232Th, 238U, 239,240Pu and 241Am from lake and ocean sediment reference materials. The results yielded a standardized procedure with more tightly defined reaction conditions, improving method repeatability. A NaOH fusion procedure is recommended following sequential leaching for the complete dissolution of insoluble species.

  12. Sequential EMT-MET induces neuronal conversion through Sox2

    PubMed Central

    He, Songwei; Chen, Jinlong; Zhang, Yixin; Zhang, Mengdan; Yang, Xiao; Li, Yuan; Sun, Hao; Lin, Lilong; Fan, Ke; Liang, Lining; Feng, Chengqian; Wang, Fuhui; Zhang, Xiao; Guo, Yiping; Pei, Duanqing; Zheng, Hui

    2017-01-01

    Direct neuronal conversion can be achieved with combinations of small-molecule compounds and growth factors. Here, by studying the first or induction phase of the neuronal conversion induced by defined 5C medium, we show that the Sox2-mediated switch from early epithelial–mesenchymal transition (EMT) to late mesenchymal–epithelial transition (MET) within a high proliferation context is essential and sufficient for the conversion from mouse embryonic fibroblasts (MEFs) to TuJ+ cells. At the early stage, insulin and basic fibroblast growth factor (bFGF) induced cell proliferation, early EMT, the up-regulation of Stat3 and Sox2, and the subsequent activation of neuron projection. Up-regulated Sox2 then induced MET and directed cells towards a neuronal fate at the late stage. Inhibiting either stage of this sequential EMT-MET impaired the conversion. In addition, Sox2 could replace sequential EMT-MET to induce a similar conversion within a high proliferation context, and its functions were confirmed with other neuronal conversion protocols and MEFs reprogramming. Therefore, the critical roles of the sequential EMT-MET were implicated in direct cell fate conversion in addition to reprogramming, embryonic development and cancer progression. PMID:28580167

  13. Program For Parallel Discrete-Event Simulation

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.

    1991-01-01

    User does not have to add any special logic to aid in synchronization. Time Warp Operating System (TWOS) computer program is special-purpose operating system designed to support parallel discrete-event simulation. Complete implementation of Time Warp mechanism. Supports only simulations and other computations designed for virtual time. Time Warp Simulator (TWSIM) subdirectory contains sequential simulation engine interface-compatible with TWOS. TWOS and TWSIM written in, and support simulations in, C programming language.

  14. Dry minor mergers and size evolution of high-z compact massive early-type galaxies

    NASA Astrophysics Data System (ADS)

    Oogi, Taira; Habe, Asao

    2013-01-01

    Recent observations show evidence that high-z (z ˜ 2-3) early-type galaxies (ETGs) are more compact than those with comparable mass at z ˜ 0. Such size evolution is most likely explained by the `dry merger scenario'. However, previous studies based on this scenario cannot consistently explain the properties of both high-z compact massive ETGs and local ETGs. We investigate the effect of multiple sequential dry minor mergers on the size evolution of compact massive ETGs. From an analysis of the Millennium Simulation Data Base, we show that such minor (stellar mass ratio M2/M1 < 1/4) mergers are extremely common during hierarchical structure formation. We perform N-body simulations of sequential minor mergers with parabolic and head-on orbits, including a dark matter component and a stellar component. Typical mass ratios of these minor mergers are 1/20 < M2/M1 ≤ 1/10. We show that sequential minor mergers of compact satellite galaxies are the most efficient at promoting size growth and decreasing the velocity dispersion of compact massive ETGs in our simulations. The change of stellar size and density of the merger remnants is consistent with recent observations. Furthermore, we construct the merger histories of candidates for high-z compact massive ETGs using the Millennium Simulation Data Base and estimate the size growth of the galaxies through the dry minor merger scenario. We can reproduce the mean size growth factor between z = 2 and z = 0, assuming the most efficient size growth obtained during sequential minor mergers in our simulations. However, we note that our numerical result is only valid for merger histories with typical mass ratios between 1/20 and 1/10 with parabolic and head-on orbits and that our most efficient size-growth efficiency is likely an upper limit.

  15. The use of sequential indicator simulation to characterize geostatistical uncertainty; Yucca Mountain Site Characterization Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, K.M.

    1992-10-01

    Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled, can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.

  16. Mapping of compositional properties of coal using isometric log-ratio transformation and sequential Gaussian simulation - A comparative study for spatial ultimate analyses data.

    PubMed

    Karacan, C Özgen; Olea, Ricardo A

    2018-03-01

    Chemical properties of coal largely determine coal handling, processing, beneficiation methods, and design of coal-fired power plants. Furthermore, these properties impact coal strength, coal blending during mining, as well as coal's gas content, which is important for mining safety. For these processes and quantitative predictions to be successful, safe, and economically feasible, it is important to determine and map the chemical properties of coals accurately, so that they can be inferred prior to mining. Ultimate analysis quantifies the principal chemical elements in coal. These elements are C, H, N, S, O, and, depending on the basis, ash and/or moisture. The basis for the data is determined by the condition of the sample at the time of analysis, with an "as-received" basis being the closest to sampling conditions and thus to the in-situ conditions of the coal. The parts determined or calculated as the result of ultimate analyses are compositions, reported in weight percent, and they pose the challenges of statistical analysis of compositional data. The treatment of parts using proper compositional methods may be even more important in mapping them, as most mapping methods carry uncertainty due to partial sampling as well. In this work, we map the ultimate-analysis parts of the Springfield coal from an Indiana section of the Illinois basin, USA, using sequential Gaussian simulation of isometric log-ratio transformed compositions. We compare the results with those of direct simulation of the compositional parts. We also compare the implications of these approaches in calculating other properties using correlations, to identify the differences and consequences. Although the study here is for coal, the methods described in the paper are applicable to any situation involving compositional data and its mapping.
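
    The isometric log-ratio step can be sketched with so-called pivot coordinates (Python; the coal composition shown is invented for illustration): a D-part composition maps to D-1 unconstrained real coordinates that are safe to simulate with standard Gaussian geostatistics.

```python
import numpy as np

def ilr_pivot(comp):
    """Isometric log-ratio (pivot-coordinate) transform of a composition.

    A D-part composition (weight fractions summing to 1) maps to D-1
    unconstrained real coordinates, which can then be simulated with
    sequential Gaussian simulation and back-transformed afterwards."""
    x = np.asarray(comp, dtype=float)
    x = x / x.sum()                                  # closure to unit sum
    D = len(x)
    z = np.empty(D - 1)
    for i in range(D - 1):
        gmean = np.exp(np.mean(np.log(x[i + 1:])))   # geometric mean of remaining parts
        z[i] = np.sqrt((D - i - 1) / (D - i)) * np.log(x[i] / gmean)
    return z

# Hypothetical ultimate analysis (C, H, N, S, O, ash), in weight fraction:
print(ilr_pivot([0.70, 0.05, 0.015, 0.01, 0.075, 0.15]))
```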

  17. An Investigation of University Students' Collaborative Inquiry Learning Behaviors in an Augmented Reality Simulation and a Traditional Simulation

    ERIC Educational Resources Information Center

    Wang, Hung-Yuan; Duh, Henry Been-Lirn; Li, Nai; Lin, Tzung-Jin; Tsai, Chin-Chung

    2014-01-01

    The purpose of this study is to investigate and compare students' collaborative inquiry learning behaviors and their behavior patterns in an augmented reality (AR) simulation system and a traditional 2D simulation system. Their inquiry and discussion processes were analyzed by content analysis and lag sequential analysis (LSA). Forty…

  18. An Extension of a Parallel-Distributed Processing Framework of Reading Aloud in Japanese: Human Nonword Reading Accuracy Does Not Require a Sequential Mechanism

    ERIC Educational Resources Information Center

    Ikeda, Kenji; Ueno, Taiji; Ito, Yuichi; Kitagami, Shinji; Kawaguchi, Jun

    2017-01-01

    Humans can pronounce a nonword (e.g., rint). Some researchers have interpreted this behavior as requiring a sequential mechanism by which a grapheme-phoneme correspondence rule is applied to each grapheme in turn. However, several parallel-distributed processing (PDP) models in English have simulated human nonword reading accuracy without a…

  19. Time-dependent Data System (TDDS); an interactive program to assemble, manage, and appraise input data and numerical output of flow/transport simulation models

    USGS Publications Warehouse

    Regan, R.S.; Schaffranek, R.W.; Baltzer, R.A.

    1996-01-01

    A system of functional utilities and computer routines, collectively identified as the Time-Dependent Data System (TDDS), has been developed and documented by the U.S. Geological Survey. The TDDS is designed for processing time sequences of discrete, fixed-interval, time-varying geophysical data--in particular, hydrologic data. Such data include various dependent variables and related parameters typically needed as input for execution of one-, two-, and three-dimensional hydrodynamic/transport and associated water-quality simulation models. Such data can also include time sequences of results generated by numerical simulation models. Specifically, TDDS provides the functional capabilities to process, store, retrieve, and compile data in a Time-Dependent Data Base (TDDB) in response to interactive user commands or pre-programmed directives. Thus, the TDDS, in conjunction with a companion TDDB, provides a ready means for processing, preparation, and assembly of time sequences of data for input to models; collection, categorization, and storage of simulation results from models; and intercomparison of field data and simulation results. The TDDS can be used to edit and verify prototype, time-dependent data to affirm that selected sequences of data are accurate, contiguous, and appropriate for numerical simulation modeling. It can be used to prepare time-varying data in a variety of formats, such as tabular lists, sequential files, arrays, graphical displays, as well as line-printer plots of single or multiparameter data sets. The TDDB is organized and maintained as a direct-access data base by the TDDS, thus providing simple, yet efficient, data management and access. A single, easily used, program interface that provides all access to and from a particular TDDB is available for use directly within models, other user-provided programs, and other data systems. This interface, together with each major functional utility of the TDDS, is described and documented in this report.

  20. Sprocket-Chain Simulation: Modelling and Simulation of a Multiphysics problem by sequentially coupling MotionSolve and nanoFluidX

    NASA Astrophysics Data System (ADS)

    Jayanthi, Aditya; Coker, Christopher

    2016-11-01

    In the last decade, CFD simulations have transitioned from a tool used to validate final designs to a mainstream driver of product development. However, there are still niche application areas, such as oiling simulations, where traditional CFD simulation times are prohibitive for use in product development, forcing reliance on expensive experimental methods. In this paper a unique example of a sprocket-chain simulation is presented using nanoFluidX, a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The gridless nature of the SPH method has inherent advantages in applications with complex moving geometries, which pose severe challenges to classical finite-volume CFD methods due to moving meshes and high resolution requirements that lead to long simulation times. Simulation times using nanoFluidX can be reduced from weeks to days, allowing the flexibility to run more simulations and to use them in mainstream product development. The example problem under consideration is a classical multiphysics problem, and a sequentially coupled solution of MotionSolve and nanoFluidX is presented. This abstract replaces DFD16-2016-000045.

  1. Performance Modeling and Measurement of Parallelized Code for Distributed Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry

    1998-01-01

    This paper presents a model to evaluate the performance and overhead of parallelizing sequential code using compiler directives for multiprocessing on distributed shared memory (DSM) systems. With the increasing popularity of shared address space architectures, it is essential to understand their performance impact on programs that benefit from shared memory multiprocessing. We present a simple model to characterize the performance of programs that are parallelized using compiler directives for shared memory multiprocessing. We parallelized the sequential implementation of the NAS benchmarks using native Fortran77 compiler directives for an Origin2000, which is a DSM system based on a cache-coherent Non-Uniform Memory Access (ccNUMA) architecture. We report measurement-based performance of these parallelized benchmarks from four perspectives: efficacy of the parallelization process; scalability; parallelization overhead; and comparison with hand-parallelized and -optimized versions of the same benchmarks. Our results indicate that sequential programs can conveniently be parallelized for DSM systems using compiler directives, but realizing performance gains as predicted by the performance model depends primarily on minimizing architecture-specific data locality overhead.

  2. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.

  3. Time scale of random sequential adsorption.

    PubMed

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. Process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule, provided that the molecule hits the surface, is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
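
    The RSA step described in this abstract is straightforward to sketch. The toy Python example below (an assumed geometry of hard disks on a rectangle; the diffusion coupling that maps RSA steps to physical time is omitted) makes one placement attempt per simulation time step and rejects any attempt that overlaps a previously adsorbed disk:

```python
import numpy as np

rng = np.random.default_rng(0)

def rsa_disks(width, height, radius, attempts):
    """Random sequential adsorption of hard disks on a rectangle.

    One placement attempt is made per RSA time step; an attempt
    succeeds only if the new disk overlaps no previously adsorbed disk.
    Returns accepted centers and the attempt index of each success.
    """
    centers, success_times = [], []
    for t in range(attempts):
        p = rng.uniform([0.0, 0.0], [width, height])
        if all(np.hypot(*(p - q)) >= 2 * radius for q in centers):
            centers.append(p)
            success_times.append(t)
    return np.array(centers), success_times

centers, times = rsa_disks(width=50.0, height=50.0, radius=1.0, attempts=20000)
coverage = len(centers) * np.pi / (50.0 * 50.0)   # area fraction (r = 1)
print(f"{len(centers)} disks adsorbed, surface coverage = {coverage:.3f}")
```

    The recorded success times are the quantity that the paper's diffusion coupling would convert into physical adsorption times.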

  4. Understanding and simulating the material behavior during multi-particle irradiations

    PubMed Central

    Mir, Anamul H.; Toulemonde, M.; Jegou, C.; Miro, S.; Serruys, Y.; Bouffard, S.; Peuget, S.

    2016-01-01

    A number of studies have suggested that the irradiation behavior and damage processes occurring during sequential and simultaneous particle irradiations can significantly differ. Currently, there is no definite answer as to why and when such differences are seen. Additionally, the conventional multi-particle irradiation facilities cannot correctly reproduce the complex irradiation scenarios experienced in a number of environments like space and nuclear reactors. Therefore, a better understanding of multi-particle irradiation problems and possible alternatives are needed. This study shows ionization induced thermal spike and defect recovery during sequential and simultaneous ion irradiation of amorphous silica. The simultaneous irradiation scenario is shown to be equivalent to multiple small sequential irradiation scenarios containing latent damage formation and recovery mechanisms. The results highlight the absence of any new damage mechanism and time-space correlation between various damage events during simultaneous irradiation of amorphous silica. This offers a new and convenient way to simulate and understand complex multi-particle irradiation problems. PMID:27466040

  5. Direct quantum process tomography via measuring sequential weak values of incompatible observables.

    PubMed

    Kim, Yosep; Kim, Yong-Su; Lee, Sang-Yun; Han, Sang-Wook; Moon, Sung; Kim, Yoon-Ho; Cho, Young-Wook

    2018-01-15

    The weak value concept has enabled fundamental studies of quantum measurement and, recently, found potential applications in quantum and classical metrology. However, most weak value experiments reported to date do not require quantum mechanical descriptions, as they only exploit the classical wave nature of the physical systems. In this work, we demonstrate measurement of the sequential weak value of two incompatible observables by making use of two-photon quantum interference so that the results can only be explained quantum physically. We then demonstrate that the sequential weak value measurement can be used to perform direct quantum process tomography of a qubit channel. Our work not only demonstrates the quantum nature of weak values but also presents potential new applications of weak values in analyzing quantum channels and operations.

  6. Solar wind interaction with Venus and Mars in a parallel hybrid code

    NASA Astrophysics Data System (ADS)

    Jarvinen, Riku; Sandroos, Arto

    2013-04-01

    We discuss the development and applications of a new parallel hybrid simulation, where ions are treated as particles and electrons as a charge-neutralizing fluid, for the interaction between the solar wind and Venus and Mars. The new simulation code under construction is based on the algorithm of the sequential global planetary hybrid model developed at the Finnish Meteorological Institute (FMI) and on the Corsair parallel simulation platform also developed at the FMI. The FMI's sequential hybrid model has been used for studies of plasma interactions of several unmagnetized and weakly magnetized celestial bodies for more than a decade. Especially, the model has been used to interpret in situ particle and magnetic field observations from plasma environments of Mars, Venus and Titan. Further, Corsair is an open source MPI (Message Passing Interface) particle and mesh simulation platform, mainly aimed for simulations of diffusive shock acceleration in solar corona and interplanetary space, but which is now also being extended for global planetary hybrid simulations. In this presentation we discuss challenges and strategies of parallelizing a legacy simulation code as well as possible applications and prospects of a scalable parallel hybrid model for the solar wind interactions of Venus and Mars.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faye, Sherry A.; Richards, Jason M.; Gallardo, Athena M.

    Sequential extraction is a useful technique for assessing the potential to leach actinides from soils; however, the current literature lacks uniformity in experimental details, making direct comparison of results impossible. This work continued development toward a standardized five-step sequential extraction protocol by analyzing the extraction behaviors of 232Th, 238U, 239,240Pu and 241Am from lake and ocean sediment reference materials. The results yielded a standardized procedure with more tightly defined reaction conditions to improve method repeatability. A NaOH fusion procedure is recommended following sequential leaching for the complete dissolution of insoluble species.

  8. Analyzing Communication Architectures Using Commercial Off-The-Shelf (COTS) Modeling and Simulation Tools

    DTIC Science & Technology

    1998-06-01

    By 2010, we should be able to change how we conduct the most intense joint operations. Instead of relying on massed forces and sequential ... not independent, sequential steps. Data probes to support the analysis phase were required to complete the logical models. This generated a need ... Networks) Identify Granularity (System Level) - Establish Physical Bounds or Limits to Systems • Determine System Test Configuration and Lineup

  9. Spatial interpolation of forest conditions using co-conditional geostatistical simulation

    Treesearch

    H. Todd Mowrer

    2000-01-01

    In recent work the author used the geostatistical Monte Carlo technique of sequential Gaussian simulation (s.G.s.) to investigate uncertainty in a GIS analysis of potential old-growth forest areas. The current study compares this earlier technique to that of co-conditional simulation, wherein the spatial cross-correlations between variables are included. As in the...

  10. Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damiani, Rick; Wendt, Fabian; Musial, Walter

    The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.

  11. Comparative study of lesions created by high-intensity focused ultrasound using sequential discrete and continuous scanning strategies.

    PubMed

    Fan, Tingbo; Liu, Zhenbo; Zhang, Dong; Tang, Mengxing

    2013-03-01

    Lesion formation and temperature distribution induced by high-intensity focused ultrasound (HIFU) were investigated both numerically and experimentally via two energy-delivering strategies, i.e., sequential discrete and continuous scanning modes. Simulations were based on the combination of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and the bioheat equation. Measurements were performed on tissue-mimicking phantoms sonicated by a 1.12-MHz single-element focused transducer working at an acoustic power of 75 W. Both the simulated and experimental results show that, in the sequential discrete mode, obvious saw-tooth-like contours could be observed in the peak temperature distribution and the lesion boundaries as the spacing between adjacent exposure points increased. In the continuous scanning mode, more uniform peak temperature distributions and lesion boundaries were produced, and the peak temperature values decreased significantly with increasing scanning speed. In addition, compared to the sequential discrete mode, the continuous scanning mode could achieve higher treatment efficiency (lesion area generated per second) with a lower peak temperature. The present studies suggest that the peak temperature and tissue lesion resulting from HIFU exposure could be controlled by adjusting the transducer scanning speed, which is important for improving HIFU treatment efficiency.

  12. Parallelization of NAS Benchmarks for Shared Memory Multiprocessors

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)

    1998-01-01

    This paper presents our experiences of parallelizing the sequential implementation of NAS benchmarks using compiler directives on SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high performance computing systems to parallelization tools and compilers. Due to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow the users to exploit parallelism. Native compilers on SGI Origin2000 support multiprocessing directives to allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing sequential implementation of NAS benchmarks. Results reported in this paper indicate that with minimal effort, the performance gain is comparable with the hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.

  13. Multitarget tracking in cluttered environment for a multistatic passive radar system under the DAB/DVB network

    NASA Astrophysics Data System (ADS)

    Shi, Yi Fang; Park, Seung Hyo; Song, Taek Lyul

    2017-12-01

    Target tracking using multistatic passive radar in a digital audio/video broadcast (DAB/DVB) network with illuminators of opportunity faces two main challenges. First, one must resolve the measurement-to-illuminator association ambiguity in addition to the conventional association ambiguity between measurements and targets, which introduces a significantly complex three-dimensional (3-D) data association problem among targets, measurements, and illuminators; this arises because all the illuminators transmit signals at the same carrier frequency, so signals transmitted by different illuminators but reflected via the same target become indistinguishable. Second, only bistatic range and range-rate measurements are available, while angle information is unavailable or of very poor quality. In this paper, the authors propose a new target tracking algorithm that operates directly in 3-D Cartesian coordinates with the capability of track management using the probability of target existence as a track quality measure. The proposed algorithm is termed sequential processing-joint integrated probabilistic data association (SP-JIPDA); it applies a modified sequential processing technique to resolve the additional association ambiguity between measurements and illuminators. The SP-JIPDA algorithm sequentially operates the JIPDA tracker to update each track for each illuminator with all the measurements in the common measurement set at each time. For a fair comparison, the existing modified joint probabilistic data association (MJPDA) algorithm, which addresses the 3-D data association problem via "supertargets" using gate grouping and provides tracks directly in 3-D Cartesian coordinates, is enhanced by incorporating the probability of target existence as an effective track quality measure for track management. Both algorithms handle nonlinear observations using extended Kalman filtering. A simulation study is performed to verify the superiority of the proposed SP-JIPDA algorithm over the MJPDA in this multistatic passive radar system.

  14. An energy function for dynamics simulations of polypeptides in torsion angle space

    NASA Astrophysics Data System (ADS)

    Sartori, F.; Melchers, B.; Böttcher, H.; Knapp, E. W.

    1998-05-01

    Conventional simulation techniques to model the dynamics of proteins in atomic detail are restricted to short time scales. A simplified molecular description, in which high frequency motions with small amplitudes are ignored, can overcome this problem. In this protein model only the backbone dihedrals φ and ψ and the χi of the side chains serve as degrees of freedom. Bond angles and lengths are fixed at ideal geometry values provided by the standard molecular dynamics (MD) energy function CHARMM. In this work a Monte Carlo (MC) algorithm is used, whose elementary moves employ cooperative rotations in a small window of consecutive amide planes, leaving the polypeptide conformation outside of this window invariant. A single window MC move generates only local conformational changes, but the application of many such moves at different parts of the polypeptide backbone leads to global conformational changes. To account for the lack of flexibility in the protein model employed, the energy function used to evaluate conformational energies is split into sequentially neighbored and sequentially distant contributions. The sequentially neighbored part is represented by an effective (φ,ψ)-torsion potential. It is derived from MD simulations of a flexible model dipeptide using a conventional MD energy function. To avoid exaggeration of hydrogen bonding strengths, the electrostatic interactions involving hydrogen atoms are scaled down at short distances. With these adjustments of the energy function, the rigid polypeptide model exhibits the same equilibrium distributions as obtained by conventional MD simulation with a fully flexible molecular model. The same temperature dependence of the stability and build-up of α helices of 18-alanine as found in MD simulations is also observed using the adapted energy function for MC simulations. Analyses of transition frequencies demonstrate that dynamical aspects of MD trajectories are also faithfully reproduced. Finally, it is demonstrated that even for high temperature unfolded polypeptides the MC simulation is more efficient by a factor of 10 than conventional MD simulations.
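
    As a rough illustration of the Metropolis Monte Carlo machinery this abstract relies on, here is a generic Python sketch. It is not the authors' algorithm: their window moves are cooperative rotations that leave the chain outside the window exactly invariant, and their energy function is the adapted CHARMM-derived potential; the naive window perturbation and toy harmonic energy below are stand-ins.

```python
import numpy as np

rng = np.random.default_rng(11)

def metropolis_step(torsions, energy, kT, max_rot=0.2):
    """One MC move: perturb torsions in a small window of the chain and
    accept with the Metropolis criterion, leaving the rest untouched.

    `energy` is any callable returning the conformational energy; a split
    neighbored/distant energy function would slot in here unchanged.
    """
    trial = torsions.copy()
    i = rng.integers(len(trial) - 3)
    trial[i : i + 3] += rng.uniform(-max_rot, max_rot, 3)   # window move
    dE = energy(trial) - energy(torsions)
    if dE <= 0 or rng.random() < np.exp(-dE / kT):
        return trial, True
    return torsions, False

# Toy energy: harmonic well at an illustrative helix-like torsion value
energy = lambda x: np.sum((x + 1.05) ** 2)
conf = rng.uniform(-np.pi, np.pi, 36)       # 18-mer: phi, psi per residue
accepted = 0
for _ in range(5000):
    conf, ok = metropolis_step(conf, energy, kT=0.5)
    accepted += ok
print(f"acceptance rate {accepted / 5000:.2f}, final energy {energy(conf):.2f}")
```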

  15. Novel high-fidelity realistic explosion damage simulation for urban environments

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoqing; Yadegar, Jacob; Zhu, Youding; Raju, Chaitanya; Bhagavathula, Jaya

    2010-04-01

    Realistic building damage simulation has a significant impact on modern modeling and simulation systems, especially in the diverse panoply of military and civil applications where these systems are widely used for personnel training, critical mission planning, disaster management, etc. Realistic building damage simulation should incorporate accurate physics-based explosion models, rubble generation, rubble flyout, and interactions between flying rubble and surrounding entities. However, none of the existing building damage simulation systems achieves the degree of realism required for effective military applications. In this paper, we present a novel physics-based, high-fidelity and runtime-efficient explosion simulation system that realistically simulates destruction to buildings. In the proposed system, a family of novel blast models is applied to accurately and realistically simulate explosions based on static and/or dynamic detonation conditions. The system also accounts for rubble pile formation and applies a generic and scalable multi-component object representation to describe scene entities, together with a highly scalable agent-subsumption architecture and scheduler to schedule clusters of sequential and parallel events. The proposed system utilizes a highly efficient and scalable tetrahedral decomposition approach to realistically simulate rubble formation. Experimental results demonstrate that the proposed system has the capability to realistically simulate rubble generation, rubble flyout and their primary and secondary impacts on surrounding objects including buildings, constructions, vehicles and pedestrians in clusters of sequential and parallel damage events.

  16. Gstat: a program for geostatistical modelling, prediction and simulation

    NASA Astrophysics Data System (ADS)

    Pebesma, Edzer J.; Wesseling, Cees G.

    1998-01-01

    Gstat is a computer program for variogram modelling, and geostatistical prediction and simulation. It provides a generic implementation of the multivariable linear model with trends modelled as a linear function of coordinate polynomials or of user-defined base functions, and independent or dependent, geostatistically modelled, residuals. Simulation in gstat comprises conditional or unconditional (multi-) Gaussian sequential simulation of point values or block averages, or (multi-) indicator sequential simulation. Besides many of the popular options found in other geostatistical software packages, gstat offers the unique combination of (i) an interactive user interface for modelling variograms and generalized covariances (residual variograms), that uses the device-independent plotting program gnuplot for graphical display, (ii) support for several ascii and binary data and map file formats for input and output, (iii) a concise, intuitive and flexible command language, (iv) user customization of program defaults, (v) no built-in limits, and (vi) free, portable ANSI-C source code. This paper describes the class of problems gstat can solve, and addresses aspects of efficiency and implementation, managing geostatistical projects, and relevant technical details.
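
    For readers unfamiliar with sequential simulation, the toy Python sketch below illustrates the core loop of sequential Gaussian simulation on a 1-D grid: simple kriging with zero mean, an assumed exponential covariance, and a small diagonal jitter for numerical stability. It is a minimal illustration, not gstat's implementation, and it kriges from all previously known values (production codes restrict this to a search neighborhood):

```python
import numpy as np

rng = np.random.default_rng(1)

def cov(h, sill=1.0, a=10.0):
    """Exponential covariance model C(h) = sill * exp(-3|h|/a)."""
    return sill * np.exp(-3.0 * np.abs(h) / a)

def sgs_1d(grid_x, cond_x, cond_z):
    """Toy sequential Gaussian simulation on a 1-D grid.

    Visit nodes in random order; at each node compute the simple-kriging
    mean and variance from all previously known values, draw from the
    resulting conditional Gaussian, and add the result to the data.
    """
    known_x, known_z = list(cond_x), list(cond_z)
    sim = np.empty_like(grid_x)
    for i in rng.permutation(len(grid_x)):
        xs, zs = np.array(known_x), np.array(known_z)
        K = cov(xs[:, None] - xs[None, :])
        K[np.diag_indices_from(K)] += 1e-8     # jitter for stability
        k = cov(xs - grid_x[i])
        w = np.linalg.solve(K, k)              # simple-kriging weights (zero mean)
        mean, var = w @ zs, max(cov(0.0) - w @ k, 1e-12)
        sim[i] = rng.normal(mean, np.sqrt(var))
        known_x.append(grid_x[i])
        known_z.append(sim[i])
    return sim

grid = np.linspace(0.0, 100.0, 101)
realization = sgs_1d(grid, cond_x=[20.3, 69.7], cond_z=[1.2, -0.8])
print(realization[:5])
```

    Repeating the loop with different random seeds yields the equally probable realizations that conditional simulation is used for.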

  17. MaMiCo: Software design for parallel molecular-continuum flow simulations

    NASA Astrophysics Data System (ADS)

    Neumann, Philipp; Flohr, Hanno; Arora, Rahul; Jarmatz, Piet; Tchipev, Nikola; Bungartz, Hans-Joachim

    2016-03-01

    The macro-micro-coupling tool (MaMiCo) was developed to ease the development of and modularize molecular-continuum simulations, retaining sequential and parallel performance. We demonstrate the functionality and performance of MaMiCo by coupling the spatially adaptive Lattice Boltzmann framework waLBerla with four molecular dynamics (MD) codes: the light-weight Lennard-Jones-based implementation SimpleMD, the node-level optimized software ls1 mardyn, and the community codes ESPResSo and LAMMPS. We detail interface implementations to connect each solver with MaMiCo. The coupling for each waLBerla-MD setup is validated in three-dimensional channel flow simulations which are solved by means of a state-based coupling method. We provide sequential and strong scaling measurements for the four molecular-continuum simulations. The overhead of MaMiCo is found to come at 10%-20% of the total (MD) runtime. The measurements further show that scalability of the hybrid simulations is reached on up to 500 Intel SandyBridge, and more than 1000 AMD Bulldozer compute cores.

  18. Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method

    NASA Technical Reports Server (NTRS)

    Kowal, Michael T.

    1997-01-01

    The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.

  19. Sequential lineups: shift in criterion or decision strategy?

    PubMed

    Gronlund, Scott D

    2004-04-01

    R. C. L. Lindsay and G. L. Wells (1985) argued that a sequential lineup enhanced discriminability because it elicited use of an absolute decision strategy. E. B. Ebbesen and H. D. Flowe (2002) argued that a sequential lineup led witnesses to adopt a more conservative response criterion, thereby affecting bias, not discriminability. Height was encoded as absolute (e.g., 6 ft [1.83 m] tall) or relative (e.g., taller than). If a sequential lineup elicited an absolute decision strategy, the principle of transfer-appropriate processing predicted that performance should be best when height was encoded absolutely. Conversely, if a simultaneous lineup elicited a relative decision strategy, performance should be best when height was encoded relatively. The predicted interaction was observed, providing direct evidence for the decision strategies explanation of what happens when witnesses view a sequential lineup.

  20. Cost-effectiveness of simultaneous versus sequential surgery in head and neck reconstruction.

    PubMed

    Wong, Kevin K; Enepekides, Danny J; Higgins, Kevin M

    2011-02-01

    To determine whether simultaneous head and neck reconstruction (ablation and reconstruction overlapping, performed by two teams) is cost effective compared to sequential surgery (ablation followed by reconstruction). Case-controlled study. Tertiary care hospital. Oncology patients undergoing free flap reconstruction of the head and neck. A matched-pair comparison study was performed with a retrospective chart review examining the total time of surgery for sequential and simultaneous surgery. Nine patients were selected for both the sequential and simultaneous groups. Sequential head and neck reconstruction patients were pair-matched with patients who had undergone similar oncologic ablative or reconstructive procedures performed in a simultaneous fashion. A detailed cost analysis using the microcosting method was then undertaken, looking at the direct costs of the surgeons, anesthesiologist, operating room, and nursing. On average, simultaneous surgery required 3 hours 15 minutes less operating time, leading to a cost savings of approximately $1200/case when compared to sequential surgery. This represents approximately a 15% reduction in the cost of the entire operation. Simultaneous head and neck reconstruction is more cost effective when compared to sequential surgery.

  1. Performance analysis and comparison of a minimum interconnections direct storage model with traditional neural bidirectional memories.

    PubMed

    Bhatti, A Aziz

    2009-12-01

    This study proposes an efficient and improved model of a direct storage bidirectional memory, the improved bidirectional associative memory (IBAM), and emphasises the use of nanotechnology for efficient implementation of such large-scale neural network structures at a considerably lower cost, reduced complexity, and less area required for implementation. This memory model directly stores the X and Y associated sets of M bipolar binary vectors in the form of (MxN(x)) and (MxN(y)) memory matrices, requires O(N) or about 30% of interconnections with weight strength ranging between +/-1, and is computationally very efficient as compared to sequential, intraconnected and other bidirectional associative memory (BAM) models of outer-product type that require O(N(2)) complex interconnections with weight strength ranging between +/-M. It is shown that it is functionally equivalent to and possesses all attributes of a BAM of outer-product type, and yet it is a simple and robust, very large scale integration (VLSI), optical and nanotechnology realisable, modular and expandable neural network bidirectional associative memory model in which the addition or deletion of a pair of vectors does not require changes in the strength of interconnections of the entire memory matrix. The analysis of the retrieval process, signal-to-noise ratio, storage capacity and stability of the proposed model as well as of the traditional BAM has been carried out. Constraints on and characteristics of unipolar and bipolar binaries for improved storage and retrieval are discussed. The simulation results show that it has log(e) N times higher storage capacity, superior performance, and faster convergence and retrieval time when compared to traditional sequential and intraconnected bidirectional memories.

  2. Spacecraft Data Simulator for the test of level zero processing systems

    NASA Technical Reports Server (NTRS)

    Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem

    1994-01-01

    The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS is capable of generating large test data sets up to 5 Gigabytes and outputting serial test data at rates up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data System (CCSDS) Version 1 & 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial to test LZP systems. This paper describes the system architecture, hardware and software designs, and test data designs. Examples of test data designs are included to illustrate the application of the SDS.

  3. A parallel computational model for GATE simulations.

    PubMed

    Rannou, F R; Vega-Acevedo, N; El Bitar, Z

    2013-12-01

    GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  4. A sequential coalescent algorithm for chromosomal inversions

    PubMed Central

    Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M

    2013-01-01

    Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent (SMC), an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894

  5. Simulations and experiments of aperiodic and multiplexed gratings in volume holographic imaging systems

    PubMed Central

    Luo, Yuan; Castro, Jose; Barton, Jennifer K.; Kostuk, Raymond K.; Barbastathis, George

    2010-01-01

    A new methodology describing the effects of aperiodic and multiplexed gratings in volume holographic imaging systems (VHIS) is presented. The aperiodic gratings are treated as an ensemble of localized planar gratings using coupled wave methods in conjunction with sequential and non-sequential ray-tracing techniques to accurately predict volumetric diffraction effects in VHIS. Our approach can be applied to aperiodic, multiplexed gratings and used to theoretically predict the performance of multiplexed volume holographic gratings within a volume hologram for VHIS. We present simulation and experimental results for the aperiodic and multiplexed imaging gratings formed in PQ-PMMA at 488 nm and probed with a spherical wave at 633 nm. Simulation results based on our approach, which can be easily implemented in ray-tracing packages such as Zemax®, are confirmed with experiments and show the consistency and usefulness of the proposed models. PMID:20940823

  6. A geochemical transport model for redox-controlled movement of mineral fronts in groundwater flow systems: A case of nitrate removal by oxidation of pyrite

    USGS Publications Warehouse

    Engesgaard, Peter; Kipp, Kenneth L.

    1992-01-01

    A one-dimensional prototype geochemical transport model was developed in order to handle simultaneous precipitation-dissolution and oxidation-reduction reactions governed by chemical equilibria. Total aqueous component concentrations are the primary dependent variables, and a sequential iterative approach is used for the calculation. The model was verified by analytical and numerical comparisons and is able to simulate sharp mineral fronts. At a site in Denmark, denitrification has been observed by oxidation of pyrite. Simulation of nitrate movement at this site showed a redox front movement rate of 0.58 m yr−1, which agreed with calculations of others. It appears that the sequential iterative approach is the most practical for extension to multidimensional simulation and for handling large numbers of components and reactions. However, slow convergence may limit the size of redox systems that can be handled.
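
    A minimal sketch of the operator-splitting idea (a transport step followed by an equilibrium chemistry step) is shown below for a nitrate-pyrite column. All rates, stoichiometry, and geometry are illustrative rather than the paper's values, and the paper's sequential iterative scheme additionally iterates transport and chemistry to convergence within each time step:

```python
import numpy as np

def advect_upwind(c, v, dx, dt):
    """Explicit first-order upwind advection (stable for v*dt/dx <= 1)."""
    cn = c.copy()
    cn[1:] -= v * dt / dx * (c[1:] - c[:-1])
    return cn

def react(no3, fes2, stoich=0.2):
    """Instantaneous redox step: nitrate oxidizes pyrite until one is exhausted.

    stoich = moles of pyrite consumed per mole of nitrate reduced
    (an illustrative value, not the true FeS2/NO3- stoichiometry).
    """
    consumed = np.minimum(no3, fes2 / stoich)
    return no3 - consumed, fes2 - stoich * consumed

nx, dx, v, dt = 200, 0.5, 0.4, 1.0     # hypothetical column and flow
no3 = np.zeros(nx)                     # aqueous nitrate
fes2 = np.full(nx, 1.0)                # pyrite in the sediment
fes2[:20] = 0.0                        # already-oxidized zone near the inlet
for step in range(400):
    no3[0] = 1.0                       # constant nitrate at the inlet
    no3 = advect_upwind(no3, v, dx, dt)
    no3, fes2 = react(no3, fes2)
front = np.argmax(fes2 > 0) * dx
print(f"redox front after {400 * dt:g} time units: {front:g} m")
```

    Because the chemistry step consumes nitrate completely wherever pyrite remains, the pyrite depletion front stays sharp as it migrates, which is the behavior the model was built to capture.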

  7. Use of Computer Simulation in Designing and Evaluating a Proposed Rough Mill for Furniture Interior Parts

    Treesearch

    Philip A. Araman

    1977-01-01

    The design of a rough mill for the production of interior furniture parts is used to illustrate a simulation technique for analyzing and evaluating established and proposed sequential production systems. Distributions representing the real-world random characteristics of lumber, equipment feed speeds and delay times are programmed into the simulation. An example is...

  8. Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity

    DOE PAGES

    Gordiz, Kiarash; Singh, David J.; Henry, Asegun

    2015-01-29

    In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.

  9. An analog scrambler for speech based on sequential permutations in time and frequency

    NASA Astrophysics Data System (ADS)

    Cox, R. V.; Jayant, N. S.; McDermott, B. J.

    Permutation of speech segments is an operation that is frequently used in the design of scramblers for analog speech privacy. In this paper, a sequential procedure for segment permutation is considered. This procedure can be extended to two dimensional permutation of time segments and frequency bands. By subjective testing it is shown that this combination gives a residual intelligibility for spoken digits of 20 percent with a delay of 256 ms. (A lower bound for this test would be 10 percent). The complexity of implementing such a system is considered and the issues of synchronization and channel equalization are addressed. The computer simulation results for the system using both real and simulated channels are examined.
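
    The basic time-segment permutation is simple to sketch. The toy Python example below scrambles a signal by permuting fixed-length blocks drawn from a keyed random generator; it is a simplified block permutation, not the paper's sequential permutation procedure or its two-dimensional time-frequency extension, and the 256-sample segment length is an arbitrary choice:

```python
import numpy as np

def scramble(signal, seg_len, key_rng):
    """Scramble speech by permuting fixed-length time segments.

    The permutation comes from a keyed RNG; a receiver holding the same
    key regenerates the permutation to descramble.
    """
    n_seg = len(signal) // seg_len
    perm = key_rng.permutation(n_seg)
    segs = signal[: n_seg * seg_len].reshape(n_seg, seg_len)
    return segs[perm].ravel(), perm

def descramble(scrambled, seg_len, perm):
    segs = scrambled.reshape(len(perm), seg_len)
    out = np.empty_like(segs)
    out[perm] = segs                  # invert the permutation
    return out.ravel()

fs = 8000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)      # 1 s test tone
tx, perm = scramble(x, seg_len=256, key_rng=np.random.default_rng(42))
rx = descramble(tx, seg_len=256, perm=perm)
print("exact recovery:", np.allclose(rx, x[: len(rx)]))
```

    The residual intelligibility and delay figures quoted in the abstract are properties of how long the segments are and how far the permutation moves them, which is exactly the trade-off such a scrambler must tune.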

  10. Performance evaluation of an asynchronous multisensor track fusion filter

    NASA Astrophysics Data System (ADS)

    Alouani, Ali T.; Gray, John E.; McCabe, D. H.

    2003-08-01

    Recently the authors developed a new filter that uses data generated by asynchronous sensors to produce a state estimate that is optimal in the minimum mean square sense. The solution accounts for communication delays between the sensor platforms and the fusion center. It also deals with out-of-sequence data as well as latent data by processing the information in a batch-like manner. This paper compares, using simulated targets and Monte Carlo simulations, the performance of the filter to the optimal sequential processing approach. It was found that the performance of the new asynchronous multisensor track fusion filter (AMSTFF) is identical to that of the extended sequential Kalman filter (SEKF), while the new filter updates its track at a much lower rate than the SEKF.

  11. Reversible logic gates on Physarum Polycephalum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumann, Andrew

    2015-03-10

    In this paper, we consider possibilities how to implement asynchronous sequential logic gates and quantum-style reversible logic gates on Physarum polycephalum motions. We show that in asynchronous sequential logic gates we can erase information because of uncertainty in the direction of plasmodium propagation. Therefore quantum-style reversible logic gates are more preferable for designing logic circuits on Physarum polycephalum.

  12. Simultaneous and Sequential Feature Negative Discriminations: Elemental Learning and Occasion Setting in Human Pavlovian Conditioning

    ERIC Educational Resources Information Center

    Baeyens, Frank; Vervliet, Bram; Vansteenwegen, Debora; Beckers, Tom; Hermans, Dirk; Eelen, Paul

    2004-01-01

    Using a conditioned suppression task, we investigated simultaneous (XA-/A+) vs. sequential (X→A-/A+) Feature Negative (FN) discrimination learning in humans. We expected the simultaneous discrimination to result in X (or alternatively the XA configuration) becoming an inhibitor acting directly on the US, and the sequential…

  13. Geostatistical conditional simulation for the assessment of contaminated land by abandoned heavy metal mining.

    PubMed

    Ersoy, Adem; Yunsel, Tayfun Yusuf; Atici, Umit

    2008-02-01

    Abandoned mine workings have caused varying degrees of soil contamination with heavy metals such as lead and zinc on a global scale. Exposure to these elements may harm human health and the environment. In this study, a total of 269 soil samples were collected at 1, 5, and 10 m regular grid intervals within a 100 x 100 m area of Carsington Pasture in the UK. The cell declustering technique was applied to the data set because the samples were not statistically representative. Directional experimental semivariograms of the elements for the transformed data showed that both geometric and zonal anisotropy exist in the data. The most evident spatial dependence structures of the continuity for the directional experimental semivariograms of Pb and Zn, characterized by spherical and exponential models, were obtained. This study reports the spatial distribution and uncertainty of Pb and Zn concentrations in soil at the study site using a probabilistic approach. The approach was based on geostatistical sequential Gaussian simulation (SGS), which is used to yield a series of conditional images characterized by equally probable spatial distributions of the heavy element concentrations across the area. Postprocessing of many simulations allowed the mapping of contaminated and uncontaminated areas, and provided a model for the uncertainty in the spatial distribution of element concentrations. Maps of the simulated Pb and Zn concentrations revealed the extent and severity of contamination. SGS was validated by statistics, histogram and variogram reproduction, and simulation errors. The maps of the elements might be used in remediation studies and help decision-makers and others involved with abandoned heavy metal mining sites worldwide.

  14. Simulation modeling analysis of sequential relations among therapeutic alliance, symptoms, and adherence to child-centered play therapy between a child with autism spectrum disorder and two therapists.

    PubMed

    Goodman, Geoff; Chung, Hyewon; Fischel, Leah; Athey-Lloyd, Laura

    2017-07-01

    This study examined the sequential relations among three pertinent variables in child psychotherapy: therapeutic alliance (TA) (including ruptures and repairs), autism symptoms, and adherence to child-centered play therapy (CCPT) process. A 2-year CCPT of a 6-year-old Caucasian boy diagnosed with autism spectrum disorder was conducted weekly with two doctoral-student therapists, working consecutively for 1 year each, in a university-based community mental-health clinic. Sessions were video-recorded and coded using the Child Psychotherapy Process Q-Set (CPQ), a measure of the TA, and an autism symptom measure. Sequential relations among these variables were examined using simulation modeling analysis (SMA). In Therapist 1's treatment, unexpectedly, autism symptoms decreased three sessions after a rupture occurred in the therapeutic dyad. In Therapist 2's treatment, adherence to CCPT process increased 2 weeks after a repair occurred in the therapeutic dyad. The TA decreased 1 week after autism symptoms increased. Finally, adherence to CCPT process decreased 1 week after autism symptoms increased. The authors concluded that (1) sequential relations differ by therapist even though the child remains constant, (2) therapeutic ruptures can have an unexpected effect on autism symptoms, and (3) changes in autism symptoms can precede as well as follow changes in process variables.

  15. A novel method for the sequential removal and separation of multiple heavy metals from wastewater.

    PubMed

    Fang, Li; Li, Liang; Qu, Zan; Xu, Haomiao; Xu, Jianfang; Yan, Naiqiang

    2018-01-15

    A novel method was developed and applied for the treatment of simulated wastewater containing multiple heavy metals. A sorbent of ZnS nanocrystals (NCs) was synthesized and showed extraordinary performance for the removal of Hg2+, Cu2+, Pb2+ and Cd2+. The removal efficiencies of Hg2+, Cu2+, Pb2+ and Cd2+ were 99.9%, 99.9%, 90.8% and 66.3%, respectively. Meanwhile, it was determined that the solubility product (Ksp) of heavy metal sulfides was closely related to adsorption selectivity of various heavy metals on the sorbent. The removal efficiency of Hg2+ was higher than that of Cd2+, while the Ksp of HgS was lower than that of CdS. It indicated that preferential adsorption of heavy metals occurred when the Ksp of the heavy metal sulfide was lower. In addition, the differences in the Ksp of heavy metal sulfides allowed for the exchange of heavy metals, indicating the potential application for the sequential removal and separation of heavy metals from wastewater. According to the cumulative adsorption experimental results, multiple heavy metals were sequentially adsorbed and separated from the simulated wastewater in the order of the Ksp of their sulfides. This method holds the promise of sequentially removing and separating multiple heavy metals from wastewater. Copyright © 2017 Elsevier B.V. All rights reserved.
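
    The selectivity argument reduces to ordering the metals by the solubility product of their sulfides, as the small Python sketch below illustrates. The Ksp values are order-of-magnitude literature figures inserted for illustration, not values from the paper, and should be checked against a reference before quantitative use:

```python
# Illustrative solubility products of the metal sulfides (order-of-magnitude
# literature values; verify before quantitative use).
ksp_sulfide = {
    "Hg2+": 1e-52,   # HgS
    "Cu2+": 6e-36,   # CuS
    "Pb2+": 3e-28,   # PbS
    "Cd2+": 8e-27,   # CdS
    "Zn2+": 2e-24,   # ZnS (the sorbent itself)
}

# A lower Ksp for the metal sulfide means a stronger preference for the
# ZnS surface, so metals exchange onto the sorbent in ascending-Ksp order.
order = sorted(ksp_sulfide, key=ksp_sulfide.get)
print("predicted removal/separation order:",
      [m for m in order if m != "Zn2+"])
```

    The predicted order (Hg2+, Cu2+, Pb2+, Cd2+) matches the ranking of removal efficiencies reported in the abstract.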

  16. Elasticity and photoelasticity relationships for polyethylene terephthalate fiber networks by molecular simulation

    NASA Astrophysics Data System (ADS)

    Nayak, Kapileswar; Das, Sushanta; Nanavati, Hemant

    2008-01-01

    We present a framework for the development of elasticity and photoelasticity relationships for polyethylene terephthalate fiber networks, incorporating aspects of the primary molecular structure. Semicrystalline polymeric fiber networks are modeled as sequentially arranged crystalline and amorphous regions. Rotational isomeric states-Monte Carlo simulations of amorphous chains of up to 360 bonds (degree of polymerization, DP =60), confined between and bridging infinite impenetrable crystalline walls, have been characterized by Ω, the probability density of the intercrystal separation h, and Δβ, the polarizability anisotropy. lnΩ and Δβ have been modeled as functions of h, yielding the chain deformation relationships. The development has been extended to the fiber network to yield the photoelasticity relationships. We execute our framework by fitting to experimental stress-elongation data and employing the single fitted parameter to directly predict the birefringence-elongation behavior, without any further fitting. Incorporating the effect of strain-induced crystallization into the framework makes it physically more meaningful and yields accurate predictions of the birefringence-elongation behavior.

  17. Formation and emission mechanisms of Ag nanoclusters in the Ar matrix assembly cluster source

    NASA Astrophysics Data System (ADS)

    Zhao, Junlei; Cao, Lu; Palmer, Richard E.; Nordlund, Kai; Djurabekova, Flyura

    2017-11-01

    In this paper, we study the mechanisms of growth of Ag nanoclusters in a solid Ar matrix and the emission of these nanoclusters from the matrix by a combination of experimental and theoretical methods. The molecular dynamics simulations show that the cluster growth mechanism can be described as "thermal spike-enhanced clustering" in multiple sequential ion impact events. We further show that experimentally observed large sputtered metal clusters cannot be formed by direct sputtering of Ag mixed in the Ar. Instead, we describe the mechanism of emission of the metal nanocluster that, at first, is formed in the cryogenic matrix due to multiple ion impacts, and then is emitted as a result of the simultaneous effects of interface boiling and spring force. We also develop an analytical model describing this size-dependent cluster emission. The model bridges the atomistic simulations and experimental time and length scales, and allows increasing the controllability of fast generation of nanoclusters in experiments with a high production rate.

  18. Adaptive decision making in a dynamic environment: a test of a sequential sampling model of relative judgment.

    PubMed

    Vuckovic, Anita; Kwantes, Peter J; Neal, Andrew

    2013-09-01

    Research has identified a wide range of factors that influence performance in relative judgment tasks. However, the findings from this research have been inconsistent. Studies have varied with respect to the identification of causal variables and the perceptual and decision-making mechanisms underlying performance. Drawing on the ecological rationality approach, we present a theory of the judgment and decision-making processes involved in a relative judgment task that explains how people judge a stimulus and adapt their decision process to accommodate their own uncertainty associated with those judgments. Undergraduate participants performed a simulated air traffic control conflict detection task. Across two experiments, we systematically manipulated variables known to affect performance. In the first experiment, we manipulated the relative distances of aircraft to a common destination while holding aircraft speeds constant. In a follow-up experiment, we introduced a direct manipulation of relative speed. We then fit a sequential sampling model to the data, and used the best fitting parameters to infer the decision-making processes responsible for performance. Findings were consistent with the theory that people adapt to their own uncertainty by adjusting their criterion and the amount of time they take to collect evidence in order to make a more accurate decision. From a practical perspective, the paper demonstrates that one can use a sequential sampling model to understand performance in a dynamic environment, allowing one to make sense of and interpret complex patterns of empirical findings that would otherwise be difficult to interpret using standard statistical analyses. PsycINFO Database Record (c) 2013 APA, all rights reserved.
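
    A minimal sketch of the kind of sequential sampling (drift-diffusion) model fitted in the paper is shown below, with assumed drift, noise, and threshold parameters rather than the authors' fitted values. Evidence about the relative judgment accumulates until a decision threshold is crossed; raising the threshold models the cautious observer who trades speed for accuracy:

```python
import numpy as np

rng = np.random.default_rng(7)

def relative_judgment_trial(drift, threshold, noise=1.0, dt=0.01, max_t=10.0):
    """One sequential-sampling trial: accumulate noisy evidence about which
    of two alternatives is larger until a decision threshold is hit.

    Returns (choice, response_time); choice 0 covers the lower boundary
    and the rare timeout.
    """
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()   # Euler-Maruyama
        t += dt
    return (1 if x >= threshold else 0), t

trials = [relative_judgment_trial(drift=0.8, threshold=1.5) for _ in range(2000)]
acc = np.mean([c for c, _ in trials])
rt = np.mean([t for _, t in trials])
print(f"accuracy ~ {acc:.2f}, mean decision time ~ {rt:.2f} s")
```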

  19. The simulation model of growth and cell divisions for the root apex with an apical cell in application to Azolla pinnata.

    PubMed

    Piekarska-Stachowiak, Anna; Nakielski, Jerzy

    2013-12-01

    In contrast to seed plants, the roots of most ferns have a single apical cell which is the ultimate source of all cells in the root. The apical cell has a tetrahedral shape and divides asymmetrically. The root cap derives from the distal division face, while merophytes derived from three proximal division faces contribute to the root proper. The merophytes are produced sequentially forming three sectors along a helix around the root axis. During development, they divide and differentiate in a predictable pattern. Such growth causes cell pattern of the root apex to be remarkably regular and self-perpetuating. The nature of this regularity remains unknown. This paper shows the 2D simulation model for growth of the root apex with the apical cell in application to Azolla pinnata. The field of growth rates of the organ, prescribed by the model, is of a tensor type (symplastic growth) and cells divide taking principal growth directions into account. The simulations show how the cell pattern in a longitudinal section of the apex develops in time. The virtual root apex grows realistically and its cell pattern is similar to that observed in anatomical sections. The simulations indicate that the cell pattern regularity results from cell divisions which are oriented with respect to principal growth directions. Such divisions are essential for maintenance of peri-anticlinal arrangement of cell walls and coordinated growth of merophytes during the development. The highly specific division program that takes place in merophytes prior to differentiation seems to be regulated at the cellular level.

  20. Numerical study on the sequential Bayesian approach for radioactive materials detection

    NASA Astrophysics Data System (ADS)

    Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng

    2013-01-01

    A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research in radioactive material detection. Compared with commonly adopted detection methods incorporating statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times when analyzing spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation experiment platform implementing the methodology of the sequential Bayesian approach was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were obtained from an event-sequence generator based on Monte Carlo sampling theory to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
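
    Candy's processor works event by event on photon arrivals; the Python sketch below is a deliberately simplified interval-count version that sequentially updates the posterior probability that a source is present from Poisson-distributed counts. The background and source rates are assumed values, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def sequential_bayes_detection(counts, bkg_rate, src_rate, prior=0.5):
    """Sequentially update P(source present) from a stream of counts.

    Each interval count is Poisson(bkg) under H0 and Poisson(bkg + src)
    under H1; the posterior after one interval is the prior for the next.
    The factorials cancel in the Poisson likelihood ratio, which reduces
    to exp(-src) * ((bkg + src) / bkg) ** n.
    """
    p, history = prior, []
    for n in counts:
        lr = np.exp(-src_rate) * ((bkg_rate + src_rate) / bkg_rate) ** n
        p = p * lr / (p * lr + (1 - p))
        history.append(p)
    return history

# Simulated measurement: a weak source on top of background counts
counts = rng.poisson(lam=5.0 + 1.5, size=30)
post = sequential_bayes_detection(counts, bkg_rate=5.0, src_rate=1.5)
print(np.round(post[:10], 3))
```

    The appeal of the sequential formulation is visible directly: the posterior climbs toward one as soon as the accumulated evidence suffices, rather than after a fixed counting time.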

  1. A Robust Real Time Direction-of-Arrival Estimation Method for Sequential Movement Events of Vehicles.

    PubMed

    Liu, Huawei; Li, Baoqing; Yuan, Xiaobing; Zhou, Qianwei; Huang, Jingchang

    2018-03-27

    Parameter estimation for sequential movement events of vehicles faces the challenges of noise interference and the demands of portable implementation. In this paper, we propose a robust direction-of-arrival (DOA) estimation method for sequential movement events of vehicles based on a small Micro-Electro-Mechanical System (MEMS) microphone array system. Inspired by the incoherent signal-subspace method (ISM), the proposed method employs multiple sub-bands, selected from the wideband signals for high magnitude-squared coherence, to track moving vehicles in the presence of wind noise. The field test results demonstrate that the proposed method performs better at estimating the DOA of a moving vehicle, even in the case of severe wind interference, than the narrowband multiple signal classification (MUSIC) method, the sub-band DOA estimation method, and the classical two-sided correlation transformation (TCT) method.
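
    For orientation, the narrowband MUSIC baseline that the proposed method is compared against can be sketched in a few lines of Python for a hypothetical uniform linear array (half-wavelength spacing, one source). This is the baseline only, not the proposed sub-band ISM-style method:

```python
import numpy as np

rng = np.random.default_rng(5)

def music_spectrum(X, n_src, d_over_lambda=0.5,
                   grid=np.linspace(-90, 90, 361)):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array.

    X: (n_mics, n_snapshots) complex baseband snapshots in one sub-band.
    """
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]              # sample covariance
    eigval, eigvec = np.linalg.eigh(R)           # ascending eigenvalues
    En = eigvec[:, : m - n_src]                  # noise subspace
    k = 2 * np.pi * d_over_lambda
    p = []
    for theta in grid:
        a = np.exp(1j * k * np.arange(m) * np.sin(np.radians(theta)))
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return grid, np.array(p)

# Synthetic snapshots: one source at 25 degrees plus sensor noise
m, snaps, theta0 = 6, 200, 25.0
a0 = np.exp(1j * np.pi * np.arange(m) * np.sin(np.radians(theta0)))
s = (rng.normal(size=snaps) + 1j * rng.normal(size=snaps)) / np.sqrt(2)
X = np.outer(a0, s) + 0.1 * (rng.normal(size=(m, snaps))
                             + 1j * rng.normal(size=(m, snaps)))
grid, p = music_spectrum(X, n_src=1)
print("estimated DOA:", grid[np.argmax(p)], "deg")
```

    The paper's method would run an estimator of this kind in several coherence-selected sub-bands and fuse the results, which is what buys robustness to wind noise.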

  2. Directional and Spectral Irradiance in Ocean Models: Effects on Simulated Global Phytoplankton, Nutrients, and Primary Production

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.; Rousseaux, Cecile S.

    2016-01-01

    The importance of including directional and spectral light in simulations of ocean radiative transfer was investigated using a coupled biogeochemical-circulation-radiative model of the global oceans. The effort focused on phytoplankton abundances, nutrient concentrations and vertically-integrated net primary production. The question was approached by sequentially removing directional (i.e., direct vs. diffuse) and spectral irradiance and comparing results for the above variables to a fully directionally and spectrally resolved model. In each case the total irradiance was kept constant; only the pathways and spectral nature were changed. Assuming all irradiance was diffuse had a negligible effect on global ocean primary production. Global nitrate and total chlorophyll concentrations declined by about 20% each. The largest changes occurred in the tropics and sub-tropics rather than the high latitudes, where most of the irradiance is already diffuse. Disregarding spectral irradiance had effects that depended upon the choice of attenuation wavelength. The wavelength closest to the spectrally-resolved model, 500 nm, produced lower nitrate (19%) and chlorophyll (8%) and higher primary production (2%) than the spectral model. Phytoplankton relative abundances were very sensitive to the choice of non-spectral wavelength transmittance. The combined effects of neglecting both directional and spectral irradiance exacerbated the differences, despite using attenuation at 500 nm. Global nitrate decreased 33% and chlorophyll decreased 24%. Changes in phytoplankton community structure were considerable, representing a change from chlorophytes to cyanobacteria and coccolithophores. This suggested a shift in community function, from light limitation to nutrient limitation: the lower nutrient demands of cyanobacteria and coccolithophores favored them over the more nutrient-demanding chlorophytes. Although diatoms have the highest nutrient demands in the model, their relative abundances were generally unaffected because they only prosper in nutrient-rich regions, such as the high latitudes and upwelling regions, which showed the fewest effects from the changes in radiative simulations. The results showed that including directional and spectral irradiance when simulating the ocean light field can be important for ocean biology, but the magnitude varies across variables and regions. The quantitative results are intended to assist ocean modelers when weighing improved irradiance representations against other processes or variables associated with the issues of interest.

  3. Sequential interactions, in which one player plays first and another responds, promote cooperation in evolutionary-dynamical simulations of single-shot Prisoner's Dilemma and Snowdrift games.

    PubMed

    Laird, Robert A

    2018-09-07

    Cooperation is a central topic in evolutionary biology because (a) it is difficult to reconcile why individuals would act in a way that benefits others if such action is costly to themselves, and (b) it underpins many of the 'major transitions of evolution', making it essential for explaining the origins of successively higher levels of biological organization. Within evolutionary game theory, the Prisoner's Dilemma and Snowdrift games are the main theoretical constructs used to study the evolution of cooperation in dyadic interactions. In single-shot versions of these games, wherein individuals play each other only once, players typically act simultaneously rather than sequentially. Allowing one player to respond to the actions of its co-player-in the absence of any possibility of the responder being rewarded for cooperation or punished for defection, as in simultaneous or sequential iterated games-may seem to invite more incentive for exploitation and retaliation in single-shot games, compared to when interactions occur simultaneously, thereby reducing the likelihood that cooperative strategies can thrive. To the contrary, I use lattice-based, evolutionary-dynamical simulation models of single-shot games to demonstrate that under many conditions, sequential interactions have the potential to enhance unilaterally or mutually cooperative outcomes and increase the average payoff of populations, relative to simultaneous interactions-benefits that are especially prevalent in a spatially explicit context. This surprising result is attributable to the presence of conditional strategies that emerge in sequential games that can't occur in the corresponding simultaneous versions. Copyright © 2018 Elsevier Ltd. All rights reserved.
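
    The role of conditional responder strategies is easy to see in a toy (non-lattice) enumeration of a single-shot Snowdrift game; the payoff values b > c > 0 below are hypothetical:

        # Sequential vs simultaneous single-shot Snowdrift, strategy enumeration
        b, c = 4.0, 2.0                       # benefit and shared cooperation cost

        def snowdrift(a1, a2):
            """Return (payoff1, payoff2); action 1 = cooperate, 0 = defect."""
            if a1 and a2:
                return b - c / 2, b - c / 2
            if a1 and not a2:
                return b - c, b
            if a2 and not a1:
                return b, b - c
            return 0.0, 0.0

        # Simultaneous players choose C or D outright; a sequential responder
        # can instead condition on the first mover's action:
        responders = {
            "always_C": lambda first: 1,
            "always_D": lambda first: 0,
            "copy":     lambda first: first,       # conditional strategy
            "oppose":   lambda first: 1 - first,   # conditional strategy
        }
        for first in (1, 0):
            for name, rule in responders.items():
                p1, p2 = snowdrift(first, rule(first))
                print(f"first={'C' if first else 'D'}, responder={name}: {p1}, {p2}")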

  4. Developing and Implementing a Framework of Participatory Simulation for Mobile Learning Using Scaffolding

    ERIC Educational Resources Information Center

    Yin, Chengjiu; Song, Yanjie; Tabata, Yoshiyuki; Ogata, Hiroaki; Hwang, Gwo-Jen

    2013-01-01

    This paper proposes a conceptual framework, scaffolding participatory simulation for mobile learning (SPSML), used on mobile devices for helping students learn conceptual knowledge in the classroom. As the pedagogical design, the framework adopts an experiential learning model, which consists of five sequential but cyclic steps: the initial stage,…

  5. Computer simulation of a space SAR using a range-sequential processor for soil moisture mapping

    NASA Technical Reports Server (NTRS)

    Fujita, M.; Ulaby, F. (Principal Investigator)

    1982-01-01

    The ability of a spaceborne synthetic aperture radar (SAR) to detect soil moisture was evaluated by means of a computer simulation technique. The computer simulation package includes coherent processing of the SAR data using a range-sequential processor, which can be set up through hardware implementations, thereby reducing the amount of telemetry involved. With such a processing approach, it is possible to monitor the earth's surface on a continuous basis, since data storage requirements can be easily met with currently available technology. The development of the simulation package is described, followed by an examination of the application of the technique to actual environments. The results indicate that in estimating soil moisture content with a four-look processor, the difference between the assumed and estimated values of soil moisture is within ±20% of field capacity for 62% of the pixels for agricultural terrain and for 53% of the pixels for hilly terrain. The estimation accuracy for soil moisture may be improved by reducing the effect of fading through non-coherent averaging.

  6. Using sequential self-calibration method to identify conductivity distribution: Conditioning on tracer test data

    USGS Publications Warehouse

    Hu, B.X.; He, C.

    2008-01-01

    An iterative inverse method, the sequential self-calibration method, is developed for mapping the spatial distribution of a hydraulic conductivity field by conditioning on nonreactive tracer breakthrough curves. A streamline-based, semi-analytical simulator is adopted to simulate solute transport in a heterogeneous aquifer and serves as the forward modeling step. In this study, the hydraulic conductivity is assumed to be a deterministic or random variable. Within the framework of the streamline-based simulator, an efficient semi-analytical method is used to calculate the sensitivity coefficients of the solute concentration with respect to variations in hydraulic conductivity. The calculated sensitivities account for spatial correlations between the solute concentration and the parameters. The performance of the inverse method is assessed by two synthetic tracer tests conducted in an aquifer with a distinct spatial pattern of heterogeneity. The study results indicate that the developed iterative inverse method is able to identify and reproduce the large-scale heterogeneity pattern of the aquifer, given appropriate observation wells, in these synthetic cases. © International Association for Mathematical Geology 2008.
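
    The core loop of a sensitivity-based iterative inversion of this kind can be sketched generically (a damped Gauss-Newton update; the forward model and sensitivity routine below are placeholders for the streamline-based simulator and its semi-analytical sensitivities):

        import numpy as np

        def calibrate(k0, forward, sensitivity, d_obs, n_iter=20, lam=1e-2):
            """Iteratively update a conductivity vector k so that forward(k)
            matches observed breakthrough data d_obs; sensitivity(k) returns
            the Jacobian J = d(concentration)/d(k)."""
            k = k0.copy()
            for _ in range(n_iter):
                r = d_obs - forward(k)                       # data residual
                J = sensitivity(k)                           # (n_data, n_cells)
                dk = np.linalg.solve(J.T @ J + lam * np.eye(k.size), J.T @ r)
                k = k + dk                                   # damped GN step
            return k

        # Toy usage with a linear forward model G k = d (hypothetical)
        rng = np.random.default_rng(1)
        G = rng.normal(size=(30, 10))
        k_true = rng.normal(size=10)
        k_est = calibrate(np.zeros(10), lambda k: G @ k, lambda k: G, G @ k_true)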

  7. A Prospective Sequential Analysis of the Relation between Physical Aggression and Peer Rejection Acts in a High-Risk Preschool Sample

    ERIC Educational Resources Information Center

    Chen, Chin-Chih; McComas, Jennifer J.; Hartman, Ellie; Symons, Frank J.

    2011-01-01

    Research Findings: In early childhood education, the social ecology of the child is considered critical for healthy behavioral development. There is, however, relatively little information based on directly observing what children do that describes the moment-by-moment (i.e., sequential) relation between physical aggression and peer rejection acts…

  8. Article and method of forming an article

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lacy, Benjamin Paul; Kottilingam, Srikanth Chandrudu; Dutta, Sandip

    Provided are an article and a method of forming an article. The method includes providing a metallic powder, heating the metallic powder to a temperature sufficient to join at least a portion of the metallic powder to form an initial layer, sequentially forming additional layers in a build direction by providing a distributed layer of the metallic powder over the initial layer and heating the distributed layer of the metallic powder, repeating the steps of sequentially forming the additional layers in the build direction to form a portion of the article having a hollow space formed in the build direction, and forming an overhang feature extending into the hollow space. The article includes an article formed by the method described herein.

  9. Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmalz, Mark S

    2011-07-24

    Statement of Problem - The Department of Energy has many legacy codes for computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for mainly-sequential processors is represented as a graph G. Mathematical transformations applied to G produce a graph representation G' suited to a high-performance architecture. Key computational and data-movement kernels of the application were analyzed and optimized for parallel execution using the mapping G → G', which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed and profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in the efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors; the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., tomographic image reconstruction). Keywords - High-performance Computing, Graphics Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - The Department of Energy has many simulation codes that must compute faster to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion for high-performance computing systems.

  10. Sequential accelerated tests: Improving the correlation of accelerated tests to module performance in the field

    NASA Astrophysics Data System (ADS)

    Felder, Thomas; Gambogi, William; Stika, Katherine; Yu, Bao-Ling; Bradley, Alex; Hu, Hongjie; Garreau-Iles, Lucie; Trout, T. John

    2016-09-01

    DuPont has been working steadily to develop accelerated backsheet tests that correlate with solar panel observations in the field. This report updates efforts in sequential testing. Single-exposure tests are more commonly used and can be completed more quickly, and certain tests provide helpful predictions of certain backsheet failure modes. DuPont recommendations for single-exposure tests are based on 25-year exposure levels for UV and humidity/temperature, and form a good basis for sequential test development. We recommend a sequential exposure of damp heat followed by UV, then repetitions of thermal cycling and UVA. This sequence preserves 25-year exposure levels for humidity/temperature and UV, and correlates well with a large body of field observations. Measurements can be taken at intervals during the test, although the full test runs 10 months. A second, shorter sequential test based on damp heat and thermal cycling tests mechanical durability and correlates with the loss of mechanical properties seen in the field. Ongoing work is directed toward shorter sequential tests that preserve good correlation to field data.

  11. Numerical simulation of double‐diffusive finger convection

    USGS Publications Warehouse

    Hughes, Joseph D.; Sanford, Ward E.; Vacher, H. Leonard

    2005-01-01

    A hybrid finite element, integrated finite difference numerical model is developed for the simulation of double‐diffusive and multicomponent flow in two and three dimensions. The model is based on a multidimensional, density‐dependent, saturated‐unsaturated transport model (SUTRA), which uses one governing equation for fluid flow and another for solute transport. The solute‐transport equation is applied sequentially to each simulated species. Density coupling of the flow and solute‐transport equations is accounted for and handled using a sequential implicit Picard iterative scheme. High‐resolution data from a double‐diffusive Hele‐Shaw experiment, initially in a density‐stable configuration, are used to verify the numerical model. The temporal and spatial evolution of simulated double‐diffusive convection is in good agreement with experimental results. Numerical results are very sensitive to discretization and correspond most closely to experimental results when element sizes adequately define the spatial resolution of the observed fingering. Numerical results also indicate that differences in the molecular diffusivity of sodium chloride and of the dye used to visualize experimental sodium chloride concentrations are significant and cause inaccurate mapping of sodium chloride concentrations by the dye, especially at late times. As a result of reduced diffusion, simulated dye fingers are better defined than simulated sodium chloride fingers and exhibit more vertical mass transfer.
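
    The sequential implicit Picard coupling can be outlined as follows; the flow and transport solvers below are stand-ins for the SUTRA-based model, and the toy usage is purely illustrative:

        import numpy as np

        def picard_coupled_step(c_species, solve_flow, solve_transport,
                                tol=1e-8, max_iter=50):
            """One time step: iterate flow (density depends on concentrations)
            and species-by-species transport until the iterates stop changing."""
            c_old = [c.copy() for c in c_species]          # old time level
            for _ in range(max_iter):
                v = solve_flow(c_species)                  # flow with latest density
                c_new = [solve_transport(v, c) for c in c_old]  # each species in turn
                change = max(float(np.max(np.abs(cn - cc)))
                             for cn, cc in zip(c_new, c_species))
                c_species = c_new
                if change < tol:                           # Picard convergence
                    break
            return v, c_species

        # Toy usage: scalar "flow" with density feedback, linear "transport"
        v, c = picard_coupled_step(
            [np.array([1.0, 0.5])],
            solve_flow=lambda cs: 1.0 / (1.0 + cs[0].mean()),
            solve_transport=lambda v, c: c + 0.1 * v * (1.0 - c))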

  12. Comparison of rate one-half, equivalent constraint length 24, binary convolutional codes for use with sequential decoding on the deep-space channel

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Virtually all previously suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given, and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code and for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.

  13. Acceleration of discrete stochastic biochemical simulation using GPGPU.

    PubMed

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130.
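
    For reference, the sequential kernel that such GPU implementations replicate many times in parallel is Gillespie's direct method; below is a minimal sketch for a hypothetical birth-death model:

        import numpy as np

        def ssa_direct(x0, stoich, rates, t_end, rng):
            """Gillespie direct-method SSA. stoich: (n_reactions, n_species)
            state-change matrix; rates(x): propensity of each reaction."""
            t, x = 0.0, np.array(x0, dtype=float)
            times, states = [t], [x.copy()]
            while t < t_end:
                a = rates(x)
                a0 = a.sum()
                if a0 <= 0:
                    break                                 # no reaction can fire
                t += rng.exponential(1.0 / a0)            # time to next event
                j = rng.choice(len(a), p=a / a0)          # which reaction fires
                x += stoich[j]
                times.append(t); states.append(x.copy())
            return np.array(times), np.array(states)

        # Toy birth-death process: 0 -> X at rate 10, X -> 0 at rate 0.5 * X
        stoich = np.array([[1.0], [-1.0]])
        rates = lambda x: np.array([10.0, 0.5 * x[0]])
        t, s = ssa_direct([0], stoich, rates, 20.0, np.random.default_rng(0))
        # A GPU version runs thousands of such realizations concurrently and
        # records each trajectory at every step for later statistics.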

  14. Acceleration of discrete stochastic biochemical simulation using GPGPU

    PubMed Central

    Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira

    2015-01-01

    For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936

  15. A predictive model of asymmetric morphogenesis from 3D reconstructions of mouse heart looping dynamics

    PubMed Central

    Le Garrec, Jean-François; Ivanovitch, Kenzo D; Raphaël, Etienne; Bangham, J Andrew; Torres, Miguel; Coen, Enrico; Mohun, Timothy J

    2017-01-01

    How left-right patterning drives asymmetric morphogenesis is unclear. Here, we have quantified shape changes during mouse heart looping, from 3D reconstructions by HREM. In combination with cell labelling and computer simulations, we propose a novel model of heart looping. Buckling, when the cardiac tube grows between fixed poles, is modulated by the progressive breakdown of the dorsal mesocardium. We have identified sequential left-right asymmetries at the poles, which bias the buckling in opposite directions, thus leading to a helical shape. Our predictive model is useful to explore the parameter space generating shape variations. The role of the dorsal mesocardium was validated in Shh-/- mutants, which recapitulate heart shape changes expected from a persistent dorsal mesocardium. Our computer and quantitative tools provide novel insight into the mechanism of heart looping and the contribution of different factors, beyond the simple description of looping direction. This is relevant to congenital heart defects. PMID:29179813

  16. Comparative efficacy of simultaneous versus sequential multiple health behavior change interventions among adults: A systematic review of randomised trials.

    PubMed

    James, Erica; Freund, Megan; Booth, Angela; Duncan, Mitch J; Johnson, Natalie; Short, Camille E; Wolfenden, Luke; Stacey, Fiona G; Kay-Lambkin, Frances; Vandelanotte, Corneel

    2016-08-01

    Growing evidence points to the benefits of addressing multiple health behaviors rather than single behaviors. This review evaluates the relative effectiveness of simultaneous and sequentially delivered multiple health behavior change (MHBC) interventions. Secondary aims were to identify: a) the most effective spacing of sequentially delivered components; b) differences in efficacy of MHBC interventions for adoption/cessation behaviors and lifestyle/addictive behaviors, and; c) differences in trial retention between simultaneously and sequentially delivered interventions. MHBC intervention trials published up to October 2015 were identified through a systematic search. Eligible trials were randomised controlled trials that directly compared simultaneous and sequential delivery of a MHBC intervention. A narrative synthesis was undertaken. Six trials met the inclusion criteria and across these trials the behaviors targeted were smoking, diet, physical activity, and alcohol consumption. Three trials reported a difference in intervention effect between a sequential and simultaneous approach in at least one behavioral outcome. Of these, two trials favoured a sequential approach on smoking. One trial favoured a simultaneous approach on fat intake. There was no difference in retention between sequential and simultaneous approaches. There is limited evidence regarding the relative effectiveness of sequential and simultaneous approaches. Given only three of the six trials observed a difference in intervention effectiveness for one health behavior outcome, and the relatively consistent finding that the sequential and simultaneous approaches were more effective than a usual/minimal care control condition, it appears that both approaches should be considered equally efficacious. PROSPERO registration number: CRD42015027876. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Dissociating hippocampal and striatal contributions to sequential prediction learning

    PubMed Central

    Bornstein, Aaron M.; Daw, Nathaniel D.

    2011-01-01

    Behavior may be generated on the basis of many different kinds of learned contingencies. For instance, responses could be guided by the direct association between a stimulus and response, or by sequential stimulus-stimulus relationships (as in model-based reinforcement learning or goal-directed actions). However, the neural architecture underlying sequential predictive learning is not well-understood, in part because it is difficult to isolate its effect on choice behavior. To track such learning more directly, we examined reaction times (RTs) in a probabilistic sequential picture identification task. We used computational learning models to isolate trial-by-trial effects of two distinct learning processes in behavior, and used these as signatures to analyze the separate neural substrates of each process. RTs were best explained via the combination of two delta rule learning processes with different learning rates. To examine neural manifestations of these learning processes, we used functional magnetic resonance imaging to seek correlates of timeseries related to expectancy or surprise. We observed such correlates in two regions, hippocampus and striatum. By estimating the learning rates best explaining each signal, we verified that they were uniquely associated with one of the two distinct processes identified behaviorally. These differential correlates suggest that complementary anticipatory functions drive each region's effect on behavior. Our results provide novel insights as to the quantitative computational distinctions between medial temporal and basal ganglia learning networks and enable experiments that exploit trial-by-trial measurement of the unique contributions of both hippocampus and striatum to response behavior. PMID:22487032
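
    The behavioral model described, two delta-rule learners with different learning rates whose combined expectancy predicts trial-by-trial RTs, can be sketched as follows (the parameter values and the linear RT mapping are hypothetical):

        import numpy as np

        def dual_rate_expectancy(outcomes, alpha_fast=0.5, alpha_slow=0.05):
            """Run two delta-rule processes side by side and return their
            averaged expectancy before each trial's outcome is observed."""
            v_fast = v_slow = 0.5
            expect = []
            for o in outcomes:
                expect.append(0.5 * (v_fast + v_slow))
                v_fast += alpha_fast * (o - v_fast)   # fast learning process
                v_slow += alpha_slow * (o - v_slow)   # slow learning process
            return np.array(expect)

        rng = np.random.default_rng(0)
        outcomes = (rng.random(100) < 0.8).astype(float)  # p(stimulus) = 0.8
        e = dual_rate_expectancy(outcomes)
        rt = 600.0 - 150.0 * e    # higher expectancy -> faster hypothetical RT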

  18. Accounting for aquifer heterogeneity from geological data to management tools.

    PubMed

    Blouin, Martin; Martel, Richard; Gloaguen, Erwan

    2013-01-01

    A nested workflow of multiple-point geostatistics (MPG) and sequential Gaussian simulation (SGS) was tested on a study area of 6 km² located about 20 km northwest of Quebec City, Canada. In order to assess the heterogeneity of its geological and hydrogeological parameters and to provide tools for evaluating uncertainties in aquifer management, direct and indirect field measurements are used as inputs to the geostatistical simulations to reproduce large- and small-scale heterogeneities. To do so, the lithological information is first associated with equivalent hydrogeological facies (hydrofacies) according to hydraulic properties measured at several wells. Then, heterogeneous hydrofacies (HF) realizations are generated with the MPG algorithm, using a prior geological model as the training image (TI). The hydraulic conductivity (K) heterogeneity within each HF is finally modeled using the SGS algorithm. Different K models are integrated into a finite-element hydrogeological model to run multiple transport simulations. The different scenarios exhibit variations in mass transport path and dispersion associated with the large- and small-scale heterogeneity, respectively. Three-dimensional maps showing the probability of exceeding different thresholds are presented as examples of management tools. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.
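
    At the heart of the workflow's second stage is sequential Gaussian simulation. A minimal 1D sketch (simple kriging with zero mean on normal-score values; the covariance model and conditioning data are hypothetical) shows the defining loop: visit nodes on a random path and draw each value from its local conditional distribution:

        import numpy as np

        def sgs_1d(n, cond, cov, rng):
            """Sequential Gaussian simulation on an n-node 1D grid.
            cond: {grid index: value} conditioning data; cov(h): covariance."""
            values = dict(cond)
            path = rng.permutation([i for i in range(n) if i not in cond])
            for i in path:
                known = sorted(values)          # data + previously simulated
                C = np.array([[cov(abs(a - b)) for b in known] for a in known])
                k = np.array([cov(abs(i - a)) for a in known])
                w = np.linalg.solve(C + 1e-10 * np.eye(len(known)), k)
                mean = w @ np.array([values[a] for a in known])  # SK mean
                var = max(cov(0) - w @ k, 0.0)                   # SK variance
                values[i] = rng.normal(mean, np.sqrt(var))       # draw, then
            return np.array([values[i] for i in range(n)])       # condition on it

        cov = lambda h: np.exp(-h / 10.0)       # exponential covariance model
        field = sgs_1d(100, {0: 1.2, 99: -0.8}, cov, np.random.default_rng(0))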

  19. THC-MP: High performance numerical simulation of reactive transport and multiphase flow in porous media

    NASA Astrophysics Data System (ADS)

    Wei, Xiaohui; Li, Weishan; Tian, Hailong; Li, Hongliang; Xu, Haixiao; Xu, Tianfu

    2015-07-01

    The numerical simulation of multiphase flow and reactive transport in porous media for complex subsurface problems is a computationally intensive application. To meet increasing computational requirements, this paper presents a parallel computing method and architecture. Derived from TOUGHREACT, a well-established code for simulating subsurface multiphase flow and reactive transport problems, we developed THC-MP, a high-performance code for massively parallel computers that greatly extends the computational capability of the original code. The domain decomposition method was applied to the coupled numerical computing procedure in THC-MP. We designed the distributed data structure, and implemented data initialization and exchange between the computing nodes and the core solving module using hybrid parallel iterative and direct solvers. Numerical accuracy of THC-MP was verified on a CO2 injection-induced reactive transport problem by comparing the results obtained from parallel computing with those from sequential computing (the original code). Execution efficiency and code scalability were examined through field-scale carbon sequestration applications on a multicore cluster. The results demonstrate the enhanced performance achieved by THC-MP on parallel computing facilities.

  20. An integrated simulator of structure and anisotropic flow in gas diffusion layers with hydrophobic additives

    NASA Astrophysics Data System (ADS)

    Burganos, Vasilis N.; Skouras, Eugene D.; Kalarakis, Alexandros N.

    2017-10-01

    The lattice-Boltzmann (LB) method is used in this work to reproduce the controlled addition of binder and hydrophobicity-promoting agents, like polytetrafluoroethylene (PTFE), into gas diffusion layers (GDLs) and to predict flow permeabilities in the through- and in-plane directions. The present simulator manages to reproduce spreading of binder and hydrophobic additives, sequentially, into the neat fibrous layer using a two-phase flow model. Gas flow simulation is achieved by the same code, sidestepping the need for a post-processing flow code and avoiding the usual input/output and data interface problems that arise in other techniques. Compression effects on flow anisotropy of the impregnated GDL are also studied. The permeability predictions for different compression levels and for different binder or PTFE loadings are found to compare well with experimental data for commercial GDL products and with computational fluid dynamics (CFD) predictions. Alternatively, the PTFE-impregnated structure is reproduced from Scanning Electron Microscopy (SEM) images using an independent, purely geometrical approach. A comparison of the two approaches is made regarding their adequacy to reproduce correctly the main structural features of the GDL and to predict anisotropic flow permeabilities at different volume fractions of binder and hydrophobic additives.

  1. Some theoretical issues on computer simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, C.L.; Reidys, C.M.

    1998-02-01

    The subject of this paper is the development of mathematical foundations for a theory of simulation. Sequentially updated cellular automata (sCA) over arbitrary graphs are employed as a paradigmatic framework. In the development of the theory, the authors focus on the properties of causal dependencies among local mappings in a simulation. The main object of study is the mapping between a graph representing the dependencies among entities of a simulation and a graph representing the equivalence classes of systems obtained by all possible updates.
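
    The dependence of sCA dynamics on the update order, which is what makes the dependency graph and its equivalence classes the natural objects of study, is easy to demonstrate; the rule and graph below are toy choices:

        import numpy as np

        def sweep(state, order, rule):
            """Sequentially update each vertex of a cycle graph in the given
            order; rule maps (left, self, right) -> new value."""
            s = state.copy()
            n = len(s)
            for i in order:
                s[i] = rule(s[(i - 1) % n], s[i], s[(i + 1) % n])
            return s

        rule = lambda l, x, r: (l + x + r) % 2       # toy additive rule on {0,1}
        s0 = np.array([1, 0, 0, 1, 0])
        a = sweep(s0, [0, 1, 2, 3, 4], rule)         # one update order
        b = sweep(s0, [4, 3, 2, 1, 0], rule)         # the reverse order
        print(a, b)   # [1 1 0 1 0] vs [1 0 1 1 0]: the order matters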

  2. Enantioselective Cobalt-Catalyzed Sequential Nazarov Cyclization/Electrophilic Fluorination: Access to Chiral α-Fluorocyclopentenones.

    PubMed

    Zhang, Heyi; Cheng, Biao; Lu, Zhan

    2018-06-20

    A newly designed thiazoline iminopyridine ligand for enantioselective cobalt-catalyzed sequential Nazarov cyclization/electrophilic fluorination was developed. Various chiral α-fluorocyclopentenones were prepared with good yields and diastereo- and enantioselectivities. Further derivatizations could easily be carried out to provide chiral cyclopentenols with three contiguous stereocenters. Furthermore, direct deesterification of the fluorinated products could afford chiral cyclopentenones bearing a single α-fluorine substituent.

  3. Sequential detection of learning in cognitive diagnosis.

    PubMed

    Ye, Sangbeak; Fellouris, Georgios; Culpepper, Steven; Douglas, Jeff

    2016-05-01

    In order to look more closely at the many particular skills examinees utilize to answer items, cognitive diagnosis models have received much attention, and perhaps are preferable to item response models that ordinarily involve just one or a few broadly defined skills, when the objective is to hasten learning. If these fine-grained skills can be identified, a sharpened focus on learning and remediation can be achieved. The focus here is on how to detect when learning has taken place for a particular attribute and efficiently guide a student through a sequence of items to ultimately attain mastery of all attributes while administering as few items as possible. This can be seen as a problem in sequential change-point detection for which there is a long history and a well-developed literature. Though some ad hoc rules for determining learning may be used, such as stopping after M consecutive items have been successfully answered, more efficient methods that are optimal under various conditions are available. The CUSUM, Shiryaev-Roberts and Shiryaev procedures can dramatically reduce the time required to detect learning while maintaining rigorous Type I error control, and they are studied in this context through simulation. Future directions for modelling and detection of learning are discussed. © 2016 The British Psychological Society.
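
    As a concrete instance of the change-point machinery discussed, the following is a standard CUSUM detector for a shift in success probability on a stream of 0/1 item responses (the rates and threshold are hypothetical, not calibrated values from the paper):

        import numpy as np

        def cusum_bernoulli(responses, p0, p1, threshold):
            """CUSUM statistic for a change from success rate p0 (pre-learning)
            to p1 (post-learning); returns the index where learning is declared,
            or None. Reflection at zero gives the classic one-sided CUSUM."""
            llr_pos = np.log(p1 / p0)                 # log-LR for a correct answer
            llr_neg = np.log((1 - p1) / (1 - p0))     # log-LR for an error
            s = 0.0
            for i, x in enumerate(responses):
                s = max(0.0, s + (llr_pos if x else llr_neg))
                if s >= threshold:
                    return i
            return None

        rng = np.random.default_rng(0)
        r = np.concatenate([rng.random(15) < 0.3,     # attribute not yet mastered
                            rng.random(25) < 0.85])   # mastered from item 15 on
        print(cusum_bernoulli(r, p0=0.3, p1=0.85, threshold=3.0))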

  4. Blocking for Sequential Political Experiments

    PubMed Central

    Moore, Sally A.

    2013-01-01

    In typical political experiments, researchers randomize a set of households, precincts, or individuals to treatments all at once, and characteristics of all units are known at the time of randomization. However, in many other experiments, subjects “trickle in” to be randomized to treatment conditions, usually via complete randomization. To take advantage of the rich background data that researchers often have (but underutilize) in these experiments, we develop methods that use continuous covariates to assign treatments sequentially. We build on biased coin and minimization procedures for discrete covariates and demonstrate that our methods outperform complete randomization, producing better covariate balance in simulated data. We then describe how we selected and deployed a sequential blocking method in a clinical trial and demonstrate the advantages of our having done so. Further, we show how that method would have performed in two larger sequential political trials. Finally, we compare causal effect estimates from differences in means, augmented inverse propensity weighted estimators, and randomization test inversion. PMID:24143061
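
    A minimization-style sequential assignment rule with continuous covariates can be sketched as a biased coin that favors the arm reducing the current imbalance in covariate means (a generic illustration, not the authors' exact procedure):

        import numpy as np

        def assign_sequentially(covariates, rng, bias=0.8):
            """Assign units (arriving one at a time) to two arms; each coin is
            biased toward the arm that better balances covariate means."""
            arms = {0: [], 1: []}
            labels = []
            for x in covariates:
                if not arms[0] or not arms[1]:
                    a = int(rng.integers(2))          # seed both arms randomly
                else:
                    imb = [np.abs(np.mean(arms[k] + [x], axis=0)
                                  - np.mean(arms[1 - k], axis=0)).sum()
                           for k in (0, 1)]
                    best = int(np.argmin(imb))        # arm minimizing imbalance
                    a = best if rng.random() < bias else 1 - best
                arms[a].append(x)
                labels.append(a)
            return labels

        rng = np.random.default_rng(0)
        units = [rng.normal(size=3) for _ in range(40)]  # 3 continuous covariates
        labels = assign_sequentially(units, rng)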

  5. On the origin of reproducible sequential activity in neural circuits

    NASA Astrophysics Data System (ADS)

    Afraimovich, V. S.; Zhigulin, V. P.; Rabinovich, M. I.

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. The SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in the presence of noise, in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.
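
    A standard special case exhibiting such a stable heteroclinic sequence is the three-species May-Leonard system, a generalized Lotka-Volterra model with asymmetric inhibition; the parameter choices below satisfy the usual heteroclinic-cycle conditions and are otherwise arbitrary:

        import numpy as np

        def may_leonard(x0, alpha=0.8, beta=1.9, dt=1e-3, steps=100000,
                        noise=1e-8, rng=None):
            """dx_i/dt = x_i (1 - x_i - alpha x_{i+1} - beta x_{i+2}); a stable
            heteroclinic cycle exists for 0 < alpha < 1 < beta, alpha + beta > 2.
            Small additive noise keeps trajectories from sticking at the saddles."""
            rng = rng or np.random.default_rng(0)
            x = np.array(x0, dtype=float)
            traj = np.empty((steps, 3))
            for t in range(steps):
                growth = 1.0 - x - alpha * np.roll(x, -1) - beta * np.roll(x, 1)
                x = np.clip(x + dt * x * growth + noise * rng.random(3), 0.0, None)
                traj[t] = x
            return traj    # activity peaks visit the three units in sequence

        traj = may_leonard([0.6, 0.3, 0.1])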

  6. On the origin of reproducible sequential activity in neural circuits.

    PubMed

    Afraimovich, V S; Zhigulin, V P; Rabinovich, M I

    2004-12-01

    Robustness and reproducibility of sequential spatio-temporal responses is an essential feature of many neural circuits in sensory and motor systems of animals. The most common mathematical images of dynamical regimes in neural systems are fixed points, limit cycles, chaotic attractors, and continuous attractors (attractive manifolds of neutrally stable fixed points). These are not suitable for the description of reproducible transient sequential neural dynamics. In this paper we present the concept of a stable heteroclinic sequence (SHS), which is not an attractor. The SHS opens the way for understanding and modeling of transient sequential activity in neural circuits. We show that this new mathematical object can be used to describe robust and reproducible sequential neural dynamics. Using the framework of a generalized high-dimensional Lotka-Volterra model that describes the dynamics of firing rates in an inhibitory network, we present analytical results on the existence of the SHS in the phase space of the network. With the help of numerical simulations we confirm its robustness in the presence of noise, in spite of the transient nature of the corresponding trajectories. Finally, by referring to several recent neurobiological experiments, we discuss possible applications of this new concept to several problems in neuroscience.

  7. Decay modes of the Hoyle state in 12C

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Bonasera, A.; Huang, M.; Zhang, S.

    2018-04-01

    Recent experimental results give an upper limit of less than 0.043% (95% C.L.) on the direct decay of the Hoyle state into 3α relative to the sequential decay into 8Be + α. We performed one- and two-dimensional tunneling calculations to estimate this ratio and found it to be more than one order of magnitude smaller than the experimental limit, depending on the range of the nuclear force. This is within high-statistics experimental capabilities. Our results can also be tested by measuring the decay modes of higher excitation energy states of 12C, where the ratio of direct to sequential decay might reach 10% at E*(12C) = 10.3 MeV. The link between a Bose-Einstein condensate (BEC) and the direct decay of the Hoyle state is also addressed. We discuss a hypothetical 'Efimov state' at E*(12C) = 7.458 MeV, which would decay mainly sequentially with 3α of equal energies: a counterintuitive result of tunneling. Such a state, if it exists, is at least 8 orders of magnitude less probable than the Hoyle state, and thus below the sensitivity of recent and past experiments.

  8. 3D hybrid tectono-stochastic modeling of naturally fractured reservoir: Application of finite element method and stochastic simulation technique

    NASA Astrophysics Data System (ADS)

    Gholizadeh Doonechaly, N.; Rahman, S. S.

    2012-05-01

    Simulation of naturally fractured reservoirs offers significant challenges due to the lack of a methodology that can fully utilize field data. To date, several methods have been proposed to characterize naturally fractured reservoirs. Among them is the unfolding/folding method, which offers some degree of accuracy in estimating the probability of the existence of fractures in a reservoir. There are also statistical approaches that integrate all levels of field data to simulate the fracture network. This approach, however, depends on the availability of data sources, such as seismic attributes, core descriptions, and well logs, which are often difficult to obtain field-wide. In this study a hybrid tectono-stochastic simulation is proposed to characterize a naturally fractured reservoir. A finite element based model is used to simulate the tectonic event of folding and unfolding of a geological structure. A nested neuro-stochastic technique is used to develop the inter-relationships among the data, and at the same time it utilizes the sequential Gaussian approach to analyze field data along with fracture probability data. This approach has the ability to overcome the commonly experienced discontinuity of data in both horizontal and vertical directions. The hybrid technique is used to generate a discrete fracture network for a specific Australian gas reservoir, Palm Valley in the Northern Territory. The results of this study have significant benefits for accurate fluid flow simulation and for well placement aimed at maximal hydrocarbon recovery.

  9. Molecular dynamics studies of InGaN growth on nonpolar (11-20) GaN surfaces

    NASA Astrophysics Data System (ADS)

    Chu, K.; Gruber, J.; Zhou, X. W.; Jones, R. E.; Lee, S. R.; Tucker, G. J.

    2018-01-01

    We have performed direct molecular dynamics (MD) simulations of heteroepitaxial vapor deposition of InxGa1-xN films on nonpolar (11-20) wurtzite-GaN surfaces to investigate strain relaxation by misfit-dislocation formation. The simulated growth is conducted on an atypically large scale by sequentially injecting nearly a million individual vapor-phase atoms towards a fixed GaN substrate. We apply time-and-position-dependent boundary constraints to affect the appropriate environments for the vapor phase, the near-surface solid phase, and the bulklike regions of the growing layer. The simulations employ a newly optimized Stillinger-Weber In-Ga-N system interatomic potential wherein multiple binary and ternary structures are included in the underlying density-functional theory and experimental training sets to improve the treatment of the In-Ga-N related interactions. To examine the effect of growth conditions, we study a matrix of 63 different MD-growth simulations spanning seven InxGa1-xN alloy compositions ranging from x = 0.0 to x = 0.8 and nine growth temperatures above half the simulated melt temperature. We found a composition-dependent temperature range where all kinetically trapped defects were eliminated, leaving only quasiequilibrium misfit and threading dislocations present in the simulated films. Based on the MD results obtained in this temperature range, we observe the formation of interfacial misfit and threading dislocation arrays with morphologies strikingly close to those seen in experiments. In addition, we compare the MD-observed thickness-dependent onset of misfit-dislocation formation to continuum-elasticity-theory models of the critical thickness and find reasonably good agreement. Finally, we use the three-dimensional atomistic details uniquely available in the MD-growth histories to directly observe the nucleation of dislocations at surface pits in the evolving free surface.

  10. Simulation of radiation damage in minerals by sequential ion irradiations

    NASA Astrophysics Data System (ADS)

    Nakasuga, W. M.; Li, W.; Ewing, R. C.

    2015-12-01

    Radiation effects due to α-decay of U and Th and spontaneous fission of 238U control the production and recovery of the radiation-induced structure of minerals, as well as the diffusion of elements through the mineral host. However, details of how the damage microstructure is produced and annealed remain unknown. Our recent ion beam experiments demonstrate that ionizing radiation from the α-particle recovers the damage structure. Thus, the damage structure is not only the result of the thermal history of the sample, but also of the complex interaction between ionizing and ballistic damage mechanisms. By combining ion irradiations with transmission electron microscopy (TEM), we have simulated the damage produced by α-decay and fission. The α-particle induced annealing has been simulated by in situ TEM observation of consecutive ion irradiations: i) 1 MeV Kr2+ (simulating damage induced by 70 keV α-recoils), ii) followed by 400 keV He+ (simulating annealing induced by 4.5 MeV α-particles). Thus, in addition to the well-established effects of thermal annealing, the α-particle annealing effects, as evidenced by partial recrystallization of the originally fully-amorphous apatite upon the α-particle irradiations, should also be considered when evaluating diffusion and release of elements, such as He. In addition, fission track annealing has been simulated by a new sample preparation method that allows for direct observation of radiation damage recovery at each point along the length of latent tracks created by 80 MeV Xe ions (a typical fission fragment). The initial, rapid reduction in etched track length during isothermal annealing is explained by the rapid annealing of those sections of the track with smaller diameters, as observed directly by in situ TEM. In summary, the atomic-scale investigation of radiation damage in minerals is critical to understanding the influence of radiation damage on diffusion and kinetics that are fundamental to geochronology.

  11. Different propagation speeds of recalled sequences in plastic spiking neural networks

    NASA Astrophysics Data System (ADS)

    Huang, Xuhui; Zheng, Zhigang; Hu, Gang; Wu, Si; Rasch, Malte J.

    2015-03-01

    Neural networks can generate spatiotemporal patterns of spike activity. Sequential activity learning and retrieval have been observed in many brain areas and are crucial, e.g., for coding of episodic memory in the hippocampus or for generating temporal patterns during song production in birds. In a recent study, a sequential activity pattern was directly entrained onto the neural activity of the primary visual cortex (V1) of rats and subsequently successfully recalled by a local and transient trigger. It was observed that the speed of activity propagation, in coordinates of the retinotopically organized neural tissue, was constant during retrieval regardless of how the speed of the light stimulation sweeping across the visual field during training was varied. It is well known that spike-timing dependent plasticity (STDP) is a potential mechanism for embedding temporal sequences into neural network activity. How training and retrieval speeds relate to each other, and how network and learning parameters influence retrieval speeds, however, is not well described. We here theoretically analyze sequential activity learning and retrieval in a recurrent neural network with realistic synaptic short-term dynamics and STDP. Testing multiple STDP rules, we confirm that sequence learning can be achieved by STDP. However, we found that a multiplicative nearest-neighbor (NN) weight update rule generated weight distributions and recall activities that best matched the experiments in V1. Using network simulations and mean-field analysis, we further investigated the learning mechanisms and the influence of network parameters on recall speeds. Our analysis suggests that a multiplicative STDP rule with dominant NN spike interaction might be implemented in V1, since recall speed was almost constant in an NMDA-dominant regime. Interestingly, in an AMPA-dominant regime, neural circuits might exhibit recall speeds that instead follow changes in stimulus speed. This prediction could be tested in experiments.
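
    The favored learning rule, multiplicative STDP with nearest-neighbor spike pairing, can be sketched for a single synapse as follows (this is one common multiplicative variant, assumed here; the parameters are hypothetical):

        import numpy as np

        def stdp_nn_multiplicative(pre_times, post_times, w0=0.5, w_max=1.0,
                                   a_plus=0.01, a_minus=0.012, tau=20.0):
            """Each post spike pairs only with the nearest preceding pre spike
            (potentiation, scaled by w_max - w) and the nearest following pre
            spike (depression, scaled by w). Times in ms."""
            w = w0
            for t_post in post_times:
                prev_pre = pre_times[pre_times < t_post]
                if prev_pre.size:                        # pre -> post: potentiate
                    w += a_plus * (w_max - w) * np.exp(-(t_post - prev_pre[-1]) / tau)
                next_pre = pre_times[pre_times > t_post]
                if next_pre.size:                        # post -> pre: depress
                    w -= a_minus * w * np.exp(-(next_pre[0] - t_post) / tau)
            return float(np.clip(w, 0.0, w_max))

        pre = np.arange(10.0, 500.0, 50.0)
        post = pre + 5.0                                 # pre leads post by 5 ms
        print(stdp_nn_multiplicative(pre, post))         # net potentiation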

  12. Domain-general neural correlates of dependency formation: Using complex tones to simulate language.

    PubMed

    Brilmayer, Ingmar; Sassenhagen, Jona; Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias

    2017-08-01

    There is an ongoing debate whether the P600 event-related potential component following syntactic anomalies reflects syntactic processes per se, or if it is an instance of the P300, a domain-general ERP component associated with attention and cognitive reorientation. A direct comparison of both components is challenging because of the huge discrepancy in experimental designs and stimulus choice between language and 'classic' P300 experiments. In the present study, we develop a new approach to mimic the interplay of sequential position as well as categorical and relational information in natural language syntax (word category and agreement) in a non-linguistic target detection paradigm using musical instruments. Participants were instructed to (covertly) detect target tones which were defined by instrument change and pitch rise between subsequent tones at the last two positions of four-tone sequences. We analysed the EEG using event-related averaging and time-frequency decomposition. Our results show striking similarities to results obtained from linguistic experiments. We found a P300 that showed sensitivity to sequential position and a late positivity sensitive to stimulus type and position. A time-frequency decomposition revealed significant effects of sequential position on the theta band and a significant influence of stimulus type on the delta band. Our results suggest that the detection of non-linguistic targets defined via complex feature conjunctions in the present study and the detection of syntactic anomalies share the same underlying processes: attentional shift and memory based matching processes that act upon multi-feature conjunctions. We discuss the results as supporting domain-general accounts of the P600 during natural language comprehension. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Sequential processing of GNSS-R delay-Doppler maps (DDM's) for ocean wind retrieval

    NASA Astrophysics Data System (ADS)

    Garrison, J. L.; Rodriguez-Alvarez, N.; Hoffman, R.; Annane, B.; Leidner, M.; Kaitie, S.

    2016-12-01

    The delay-Doppler map (DDM) is the fundamental data product from GNSS-Reflectometry (GNSS-R), generated by cross-correlating the scattered signal with a local signal model over a range of delays and Doppler frequencies. Delay and Doppler form a set of coordinates on the ocean surface and the shape of the DDM is related to the distribution of ocean slopes. Wind speed can thus be estimated by fitting a scattering model to the shape of the observed DDM or defining an observable (e.g. average power or leading edge slope) which characterizes the change in DDM shape. For spaceborne measurements, the DDM is composed of signals scattered from a glistening zone, which can extend for up to 100 km or more. Setting a reasonable resolution requirement (25 km or less) will limit the usable portion of the DDM at each observation to only a small region near the specular point. Cyclone-GNSS (CYGNSS) is a NASA mission to study developing tropical cyclones using GNSS-R. CYGNSS science requirements call for wind retrieval with an accuracy of 10 percent above 20 m/s within a 25 km resolution. This requirement can be met using an observable defined for DDM samples between +/- 0.25 chips in delay and +/- 1 kHz in Doppler, with some filtering of the observations using a minimum threshold for range corrected gain (RCG). An improved approach, to be reviewed in this presentation, sequentially processes multiple DDM's, to combine observations generated from different "looks" at the same points on the surface. Applying this sequential process to synthetic data indicates a significant improvement in wind retrieval accuracy over a 10 km grid covering a region around the specular point. The attached figure illustrates this improvement, using simulated CYGNSS DDM's generated using the wind fields from hurricanes Earl and Danielle (left). The middle plots show wind retrievals using only an observable defined within the 25 km resolution cell. The plots on the right side show the retrievals from sequential processing of multiple DDM's. Recently, the assimilation of GNSS-R retrievals into weather forecast models has been studied. The authors have begun to investigate the direct assimilation of other data products, such as the DDM itself, or the results of sequential processing.

  14. [Using sequential indicator simulation method to define risk areas of soil heavy metals in farmland].

    PubMed

    Yang, Hao; Song, Ying Qiang; Hu, Yue Ming; Chen, Fei Xiang; Zhang, Rui

    2018-05-01

    Heavy metals in soil have serious impacts on safety, the ecological environment, and human health due to their toxicity and accumulation. It is necessary to efficiently identify risk areas of heavy metals in farmland soil, which is of great significance for environmental protection, pollution warning, and farmland risk control. We collected 204 samples and analyzed the contents of seven heavy metals (Cu, Zn, Pb, Cd, Cr, As, Hg) in Zengcheng District of Guangzhou, China. In order to overcome the problems posed by the data, including outliers and skewed distributions, as well as the smoothing effect of traditional kriging methods, we used the sequential indicator simulation method (SISIM) to map the spatial distribution of heavy metals, combined with the Hakanson index method to identify potential ecological risk areas of heavy metals in farmland. The results showed that: (1) With similar spatial prediction accuracy for soil heavy metals, the SISIM reproduced fine-scale detail better than ordinary kriging in small-scale areas. Compared to indicator kriging, the SISIM had a lower error rate (4.9%-17.1%) in the uncertainty evaluation of heavy-metal risk identification. The SISIM had less smoothing effect and was more suitable for simulating the spatial uncertainty of soil heavy metals and for risk identification. (2) There was no pollution in Zengcheng's farmland. Moderate potential ecological risk was found in the southern part of the study area due to enterprise production, human activities, and river sediments. This study combined sequential indicator simulation with the Hakanson risk index method, and effectively overcame the information loss from outliers and the smoothing effect of traditional kriging methods. It provides a new way to identify soil heavy metal risk areas of farmland under uneven sampling.
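
    The Hakanson index used above combines contamination factors with metal-specific toxic-response weights; below is a minimal computation for one sample (the weights are the commonly used Hakanson values, and the concentrations are hypothetical):

        # Hakanson potential ecological risk index RI = sum_i Tr_i * (C_i / C0_i)
        TR = {"Hg": 40, "Cd": 30, "As": 10, "Cu": 5, "Pb": 5, "Cr": 2, "Zn": 1}

        def hakanson_ri(conc, background):
            """conc and background in the same units (e.g. mg/kg); a larger RI
            means higher potential ecological risk."""
            return sum(TR[m] * conc[m] / background[m] for m in conc)

        conc = {"Hg": 0.15, "Cd": 0.20, "As": 9.0, "Cu": 25.0,
                "Pb": 40.0, "Cr": 60.0, "Zn": 80.0}
        background = {"Hg": 0.08, "Cd": 0.09, "As": 10.0, "Cu": 20.0,
                      "Pb": 35.0, "Cr": 65.0, "Zn": 90.0}
        print(hakanson_ri(conc, background))   # compare against risk thresholds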

  15. Super-resolution imaging using multi- electrode CMUTs: theoretical design and simulation using point targets.

    PubMed

    You, Wei; Cretu, Edmond; Rohling, Robert

    2013-11-01

    This paper investigates a low computational cost, super-resolution ultrasound imaging method that leverages the asymmetric vibration mode of CMUTs. Instead of focusing on the broadband received signal on the entire CMUT membrane, we utilize the differential signal received on the left and right part of the membrane obtained by a multi-electrode CMUT structure. The differential signal reflects the asymmetric vibration mode of the CMUT cell excited by the nonuniform acoustic pressure field impinging on the membrane, and has a resonant component in immersion. To improve the resolution, we propose an imaging method as follows: a set of manifold matrices of CMUT responses for multiple focal directions are constructed off-line with a grid of hypothetical point targets. During the subsequent imaging process, the array sequentially steers to multiple angles, and the amplitudes (weights) of all hypothetical targets at each angle are estimated in a maximum a posteriori (MAP) process with the manifold matrix corresponding to that angle. Then, the weight vector undergoes a directional pruning process to remove the false estimation at other angles caused by the side lobe energy. Ultrasound imaging simulation is performed on ring and linear arrays with a simulation program adapted with a multi-electrode CMUT structure capable of obtaining both average and differential received signals. Because the differential signals from all receiving channels form a more distinctive temporal pattern than the average signals, better MAP estimation results are expected than using the average signals. The imaging simulation shows that using differential signals alone or in combination with the average signals produces better lateral resolution than the traditional phased array or using the average signals alone. This study is an exploration into the potential benefits of asymmetric CMUT responses for super-resolution imaging.
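
    Under Gaussian noise and prior assumptions, the per-angle MAP step reduces to regularized least squares against the off-line manifold matrix; a toy sketch with synthetic data (all sizes and values hypothetical):

        import numpy as np

        def map_weights(A, y, sigma_n=0.1, sigma_w=1.0):
            """MAP estimate of point-target weights w from received data y,
            assuming y = A w + Gaussian noise and a zero-mean Gaussian prior:
            w = (A^T A + lam I)^{-1} A^T y with lam = sigma_n^2 / sigma_w^2."""
            lam = (sigma_n / sigma_w) ** 2
            return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

        rng = np.random.default_rng(0)
        A = rng.normal(size=(256, 40))            # manifold: 40 hypothetical targets
        w_true = np.zeros(40); w_true[7] = 1.0    # one bright target on the grid
        y = A @ w_true + 0.05 * rng.normal(size=256)
        w_hat = map_weights(A, y)                 # peaks near index 7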

  16. The BUMP model of response planning: intermittent predictive control accounts for 10 Hz physiological tremor.

    PubMed

    Bye, Robin T; Neilson, Peter D

    2010-10-01

    Physiological tremor during movement is characterized by ∼10 Hz oscillation observed both in the electromyogram activity and in the velocity profile. We propose that this particular rhythm occurs as the direct consequence of a movement response planning system that acts as an intermittent predictive controller operating at discrete intervals of ∼100 ms. The BUMP model of response planning describes such a system. It forms the kernel of Adaptive Model Theory which defines, in computational terms, a basic unit of motor production or BUMP. Each BUMP consists of three processes: (1) analyzing sensory information, (2) planning a desired optimal response, and (3) execution of that response. These processes operate in parallel across successive sequential BUMPs. The response planning process requires a discrete-time interval in which to generate a minimum acceleration trajectory to connect the actual response with the predicted future state of the target and compensate for executional error. We have shown previously that a response planning time of 100 ms accounts for the intermittency observed experimentally in visual tracking studies and for the psychological refractory period observed in double stimulation reaction time studies. We have also shown that simulations of aimed movement, using this same planning interval, reproduce experimentally observed speed-accuracy tradeoffs and movement velocity profiles. Here we show, by means of a simulation study of constant velocity tracking movements, that employing a 100 ms planning interval closely reproduces the measurement discontinuities and power spectra of electromyograms, joint-angles, and angular velocities of physiological tremor reported experimentally. We conclude that intermittent predictive control through sequential operation of BUMPs is a fundamental mechanism of 10 Hz physiological tremor in movement. Copyright © 2010 Elsevier B.V. All rights reserved.
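
    A minimal sketch of the planning step, under stated assumptions: minimizing integrated squared acceleration subject to position and velocity constraints at both ends of a 100 ms interval yields a cubic trajectory, whose coefficients follow from a 4 × 4 linear system. The boundary values below are hypothetical.

```python
import numpy as np

def min_accel_traj(x0, v0, xT, vT, T=0.1, n=11):
    """Cubic (minimum integrated squared acceleration) trajectory over one
    ~100 ms planning interval, matching position and velocity at both ends."""
    # x(t) = a0 + a1 t + a2 t^2 + a3 t^3, constrained at t = 0 and t = T.
    M = np.array([[1, 0, 0,    0],
                  [0, 1, 0,    0],
                  [1, T, T**2, T**3],
                  [0, 1, 2*T,  3*T**2]], dtype=float)
    a = np.linalg.solve(M, [x0, v0, xT, vT])
    t = np.linspace(0.0, T, n)
    return t, a[0] + a[1]*t + a[2]*t**2 + a[3]*t**3

# Intermittent control: re-plan every 100 ms toward the predicted target state.
t, x = min_accel_traj(x0=0.0, v0=0.0, xT=0.01, vT=0.2)
print(x.round(4))
```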

  17. A high accuracy sequential solver for simulation and active control of a longitudinal combustion instability

    NASA Technical Reports Server (NTRS)

    Shyy, W.; Thakur, S.; Udaykumar, H. S.

    1993-01-01

    A high accuracy convection scheme using a sequential solution technique has been developed and applied to simulate the longitudinal combustion instability and its active control. The scheme has been devised in the spirit of the Total Variation Diminishing (TVD) concept with special source term treatment. Due to the substantial heat release effect, a clear delineation of the key elements employed by the scheme, i.e., the adjustable damping factor and the source term treatment has been made. By comparing with the first-order upwind scheme previously utilized, the present results exhibit less damping and are free from spurious oscillations, offering improved quantitative accuracy while confirming the spectral analysis reported earlier. A simple feedback type of active control has been found to be capable of enhancing or attenuating the magnitude of the combustion instability.
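
    The paper's scheme couples the TVD concept with a special source-term treatment for reacting flow; the sketch below shows only the generic flux-limiting idea, a minmod-limited upwind update for 1D linear advection that stays free of spurious oscillations.

```python
import numpy as np

def tvd_advect(u, nu, steps):
    """1D linear advection (a > 0, periodic) with a minmod-limited upwind
    flux. nu = a*dt/dx is the Courant number; TVD requires 0 < nu <= 1."""
    for _ in range(steps):
        du = np.roll(u, -1) - u                     # u[i+1] - u[i]
        du_m = u - np.roll(u, 1)                    # u[i] - u[i-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            r = np.where(du != 0, du_m / du, 0.0)   # smoothness ratio
        phi = np.maximum(0.0, np.minimum(1.0, r))   # minmod limiter
        corr = 0.5 * nu * (1.0 - nu) * phi * du     # antidiffusive correction
        u = u - nu * du_m - (corr - np.roll(corr, 1))
    return u

x = np.linspace(0, 1, 200, endpoint=False)
u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)      # square pulse
print(tvd_advect(u0.copy(), nu=0.5, steps=100).max())  # stays <= 1: no overshoot
```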

  18. Field-scale multi-phase LNAPL remediation: Validating a new computational framework against sequential field pilot trials.

    PubMed

    Sookhak Lari, Kaveh; Johnston, Colin D; Rayner, John L; Davis, Greg B

    2018-03-05

    Remediation of subsurface systems, including groundwater, soil and soil gas, contaminated with light non-aqueous phase liquids (LNAPLs) is challenging. Field-scale pilot trials of multi-phase remediation were undertaken at a site to determine the effectiveness of recovery options. Sequential LNAPL skimming and vacuum-enhanced skimming, with and without water table drawdown, were trialled over 78 days, in total extracting over 5 m³ of LNAPL. For the first time, a multi-component simulation framework (including the multi-phase multi-component code TMVOC-MP and processing codes) was developed and applied to simulate the broad range of multi-phase remediation and recovery methods used in the field trials. This framework was validated against the sequential pilot trials by comparing predicted and measured LNAPL mass removal rates and compositional changes. The framework was tested on both a Cray supercomputer and a cluster. Simulations mimicked trends in LNAPL recovery rates (from 0.14 to 3 mL/s) across all remediation techniques, each operating over periods of 4-14 days within the 78-day trial. The code also approximated order-of-magnitude compositional changes of hazardous chemical concentrations in extracted gas during vacuum-enhanced recovery. The verified framework enables longer term prediction of the effectiveness of remediation approaches, allowing better determination of remediation endpoints and long-term risks. Copyright © 2017 Commonwealth Scientific and Industrial Research Organisation. Published by Elsevier B.V. All rights reserved.

  19. Remote sensing data with the conditional latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    PubMed

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

    This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The variography of the NDVI images demonstrates that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from 62,500 grids in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced spatial patterns of NDVI images. Overall, the proposed approach, which integrates conditional Latin hypercube sampling, variograms, kriging and sequential Gaussian simulation of remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on spatial characteristics of landscape changes, including spatial variability and heterogeneity.
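
    A minimal sketch of the variography step, assuming simple isotropic binning: the experimental semivariogram averages half the squared value differences over point pairs grouped by separation distance. The coordinates and values below are synthetic placeholders.

```python
import numpy as np

def empirical_variogram(coords, values, lags, tol):
    """Isotropic experimental semivariogram: gamma(h) = mean of
    0.5*(z_i - z_j)^2 over point pairs whose separation falls in each bin."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # count each pair once
    d, sq = d[iu], sq[iu]
    return np.array([sq[np.abs(d - h) <= tol].mean() for h in lags])

rng = np.random.default_rng(1)
coords = rng.uniform(0, 250, size=(300, 2))         # e.g. 300 NDVI sample sites
values = np.sin(coords[:, 0] / 40.0) + 0.1 * rng.standard_normal(300)
lags = np.arange(10, 100, 10)
print(empirical_variogram(coords, values, lags, tol=5.0).round(3))
```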

  20. Parallel discrete event simulation using shared memory

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1988-01-01

    With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments, using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues, is presented. Parameters of the study include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.
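
    For contrast with the parallel approach, the sketch below shows the traditional technique being replaced: a sequential simulator of an M/M/1 queue driven by a single global event list. The queueing parameters are chosen arbitrarily.

```python
import heapq, random

def mm1_event_list(lam=0.8, mu=1.0, horizon=10_000.0, seed=0):
    """Sequential discrete-event simulation of an M/M/1 queue driven by a
    single global event list (the structure parallel simulation eliminates)."""
    rng = random.Random(seed)
    events = [(rng.expovariate(lam), "arrival")]    # (time, kind) min-heap
    t = busy_area = 0.0
    n_in_system = 0
    while events:
        t_new, kind = heapq.heappop(events)
        if t_new > horizon:
            break
        busy_area += n_in_system * (t_new - t)      # time-average accumulator
        t = t_new
        if kind == "arrival":
            n_in_system += 1
            heapq.heappush(events, (t + rng.expovariate(lam), "arrival"))
            if n_in_system == 1:                    # server was idle: start service
                heapq.heappush(events, (t + rng.expovariate(mu), "departure"))
        else:
            n_in_system -= 1
            if n_in_system > 0:                     # next customer enters service
                heapq.heappush(events, (t + rng.expovariate(mu), "departure"))
    return busy_area / t                            # mean number in system

print(round(mm1_event_list(), 2))   # ~ rho/(1-rho) = 4.0 for rho = 0.8
```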

  1. Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks

    PubMed Central

    Zhao, Rui; Yan, Ruqiang; Wang, Jinjiang; Mao, Kezhi

    2017-01-01

    In modern manufacturing systems and industries, more and more research efforts have been made in developing effective machine health monitoring systems. Among various machine health monitoring approaches, data-driven methods are gaining in popularity due to the development of advanced sensing and data analytic techniques. However, considering the noise, varying length and irregular sampling behind sensory data, this kind of sequential data cannot be fed into classification and regression models directly. Therefore, previous work focuses on feature extraction/fusion methods requiring expensive human labor and high quality expert knowledge. With the development of deep learning methods in the last few years, which redefine representation learning from raw data, a deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data. CBLSTM firstly uses CNN to extract local features that are robust and informative from the sequential input. Then, bi-directional LSTM is introduced to encode temporal information. Long Short-Term Memory networks (LSTMs) are able to capture long-term dependencies and model sequential data, and the bi-directional structure enables the capture of past and future contexts. Stacked, fully-connected layers and the linear regression layer are built on top of bi-directional LSTMs to predict the target value. Here, a real-life tool wear test is introduced, and our proposed CBLSTM is able to predict the actual tool wear based on raw sensory data. The experimental results have shown that our model is able to outperform several state-of-the-art baseline methods. PMID:28146106
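
    A minimal PyTorch sketch of the described architecture, with layer sizes chosen arbitrarily rather than taken from the paper: a 1D convolution extracts local features, a bi-directional LSTM encodes temporal context, and stacked fully-connected layers regress the target value.

```python
import torch
import torch.nn as nn

class CBLSTM(nn.Module):
    """Sketch of a Convolutional Bi-directional LSTM regressor: a 1D CNN
    extracts local features per time step, a bi-LSTM encodes temporal context
    in both directions, and stacked fully-connected layers predict tool wear."""
    def __init__(self, n_channels=7, conv_dim=32, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, conv_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(conv_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                       # x: (batch, time, channels)
        h = self.conv(x.transpose(1, 2))        # -> (batch, conv_dim, time/2)
        h, _ = self.bilstm(h.transpose(1, 2))   # -> (batch, time/2, 2*hidden)
        return self.head(h[:, -1])              # regress from last time step

y = CBLSTM()(torch.randn(4, 100, 7))    # 4 sequences, 100 steps, 7 sensors
print(y.shape)                          # torch.Size([4, 1])
```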

  2. Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks.

    PubMed

    Zhao, Rui; Yan, Ruqiang; Wang, Jinjiang; Mao, Kezhi

    2017-01-30

    In modern manufacturing systems and industries, more and more research efforts have been made in developing effective machine health monitoring systems. Among various machine health monitoring approaches, data-driven methods are gaining in popularity due to the development of advanced sensing and data analytic techniques. However, considering the noise, varying length and irregular sampling behind sensory data, this kind of sequential data cannot be fed into classification and regression models directly. Therefore, previous work focuses on feature extraction/fusion methods requiring expensive human labor and high quality expert knowledge. With the development of deep learning methods in the last few years, which redefine representation learning from raw data, a deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data. CBLSTM firstly uses CNN to extract local features that are robust and informative from the sequential input. Then, bi-directional LSTM is introduced to encode temporal information. Long Short-Term Memory networks (LSTMs) are able to capture long-term dependencies and model sequential data, and the bi-directional structure enables the capture of past and future contexts. Stacked, fully-connected layers and the linear regression layer are built on top of bi-directional LSTMs to predict the target value. Here, a real-life tool wear test is introduced, and our proposed CBLSTM is able to predict the actual tool wear based on raw sensory data. The experimental results have shown that our model is able to outperform several state-of-the-art baseline methods.

  3. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.

    PubMed

    Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L

    2016-03-01

    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015 Cognitive Science Society, Inc.

  4. Food matrix effects on in vitro digestion of microencapsulated tuna oil powder.

    PubMed

    Shen, Zhiping; Apriani, Christina; Weerakkody, Rangika; Sanguansri, Luz; Augustin, Mary Ann

    2011-08-10

    Tuna oil, containing 53 mg of eicosapentaenoic acid (EPA) and 241 mg of docosahexaenoic acid (DHA) per gram of oil, delivered as a neat microencapsulated tuna oil powder (25% oil loading) or in food matrices (orange juice, yogurt, or cereal bar) fortified with microencapsulated tuna oil powder was digested in simulated gastric fluid or sequentially in simulated gastric fluid and simulated intestinal fluid. The level of fortification was equivalent to 1 g of tuna oil per recommended serving size (i.e., per 200 g of orange juice or yogurt or 60 g of cereal bar). The changes in particle size of oil droplets during digestion were influenced by the method of delivery of the microencapsulated tuna oil powder. Lipolysis in simulated gastric fluid was low, with only 4.4-6.1% EPA and ≤1.5% DHA released after digestion (as a % of total fatty acids present). After sequential exposure to simulated gastric and intestinal fluids, much higher extents of lipolysis of both glycerol-bound EPA and DHA were obtained (73.2-78.6% for the neat powder, fortified orange juice, and yogurt; 60.3-64.0% for the fortified cereal bar). This research demonstrates that the choice of food matrix may influence the lipolysis of microencapsulated tuna oil.

  5. Effects of sequential and discrete rapid naming on reading in Japanese children with reading difficulty.

    PubMed

    Wakamiya, Eiji; Okumura, Tomohito; Nakanishi, Makoto; Takeshita, Takashi; Mizuta, Mekumi; Kurimoto, Naoko; Tamai, Hiroshi

    2011-06-01

    To clarify whether rapid naming ability itself is a main underpinning factor of rapid automatized naming tests (RAN) and how deep an influence the discrete decoding process has on reading, we performed discrete naming tasks and discrete hiragana reading tasks, as well as sequential naming tasks and sequential hiragana reading tasks, with 38 Japanese schoolchildren with reading difficulty. There were high correlations between both discrete and sequential hiragana reading and sentence reading, suggesting that some mechanism which automatizes hiragana reading makes sentence reading fluent. In object and color tasks, there were moderate correlations between sentence reading and sequential naming, and between sequential naming and discrete naming. But no correlation was found between reading tasks and discrete naming tasks. The influence of rapid naming ability of objects and colors upon reading seemed relatively small, and multi-item processing may work in relation to these. In contrast, in the digit naming task there was moderate correlation between sentence reading and discrete naming, while no correlation was seen between sequential naming and discrete naming. There was moderate correlation between reading tasks and sequential digit naming tasks. Digit rapid naming ability has a more direct effect on reading, while its effect on RAN is relatively limited. The ratio of how much rapid naming ability influences RAN and reading seems to vary according to the kind of stimuli used. An assumption about the components of RAN which influence reading is discussed in the context of both sequential processing and discrete naming speed. Copyright © 2010 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  6. Disentangling beat perception from sequential learning and examining the influence of attention and musical abilities on ERP responses to rhythm.

    PubMed

    Bouwer, Fleur L; Werner, Carola M; Knetemann, Myrthe; Honing, Henkjan

    2016-05-01

    Beat perception is the ability to perceive temporal regularity in musical rhythm. When a beat is perceived, predictions about upcoming events can be generated. These predictions can influence processing of subsequent rhythmic events. However, statistical learning of the order of sounds in a sequence can also affect processing of rhythmic events and must be differentiated from beat perception. In the current study, using EEG, we examined the effects of attention and musical abilities on beat perception. To ensure we measured beat perception and not absolute perception of temporal intervals, we used alternating loud and soft tones to create a rhythm with two hierarchical metrical levels. To control for sequential learning of the order of the different sounds, we used temporally regular (isochronous) and jittered rhythmic sequences. The order of sounds was identical in both conditions, but only the regular condition allowed for the perception of a beat. Unexpected intensity decrements were introduced on the beat and offbeat. In the regular condition, both beat perception and sequential learning were expected to enhance detection of these deviants on the beat. In the jittered condition, only sequential learning was expected to affect processing of the deviants. ERP responses to deviants were larger on the beat than offbeat in both conditions. Importantly, this difference was larger in the regular condition than in the jittered condition, suggesting that beat perception influenced responses to rhythmic events in addition to sequential learning. The influence of beat perception was present both with and without attention directed at the rhythm. Moreover, beat perception as measured with ERPs correlated with musical abilities, but only when attention was directed at the stimuli. Our study shows that beat perception is possible when attention is not directed at a rhythm. In addition, our results suggest that attention may mediate the influence of musical abilities on beat perception. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  7. Probing finite coarse-grained virtual Feynman histories with sequential weak values

    NASA Astrophysics Data System (ADS)

    Georgiev, Danko; Cohen, Eliahu

    2018-05-01

    Feynman's sum-over-histories formulation of quantum mechanics has been considered a useful calculational tool in which virtual Feynman histories entering into a coherent quantum superposition cannot be individually measured. Here we show that sequential weak values, inferred by consecutive weak measurements of projectors, allow direct experimental probing of individual virtual Feynman histories, thereby revealing the exact nature of quantum interference of coherently superposed histories. Because the total sum of sequential weak values of multitime projection operators for a complete set of orthogonal quantum histories is unity, complete sets of weak values could be interpreted in agreement with the standard quantum mechanical picture. We also elucidate the relationship between sequential weak values of quantum histories with different coarse graining in time and establish the incompatibility of weak values for nonorthogonal quantum histories in history Hilbert space. Bridging theory and experiment, the presented results may enhance our understanding of both weak values and quantum histories.
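
    A minimal numerical sketch, assuming the common definition of a two-time sequential weak value, ⟨φ|P_j P_i|ψ⟩/⟨φ|ψ⟩, with trivial intermediate evolution: summing over a complete set of orthogonal projector histories gives exactly 1, as stated above.

```python
import numpy as np

# Sequential weak value of a two-time projector string, for pre-selected |psi>
# and post-selected |phi> (identity evolution between times, for simplicity):
#   W(i, j) = <phi| P_j P_i |psi> / <phi|psi>
psi = np.array([1.0, 0.0])                      # pre-selected state |0>
phi = np.array([1.0, 1.0]) / np.sqrt(2)         # post-selected state |+>
P = [np.array([[1, 0], [0, 0]], dtype=float),   # projectors onto |0>, |1>
     np.array([[0, 0], [0, 1]], dtype=float)]

W = np.array([[phi.conj() @ P[j] @ P[i] @ psi / (phi.conj() @ psi)
               for j in range(2)] for i in range(2)])
print(W)
print(W.sum())   # complete set of orthogonal histories sums to exactly 1
```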

  8. An adaptive two-stage sequential design for sampling rare and clustered populations

    USGS Publications Warehouse

    Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.

    2008-01-01

    How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.

  9. 3D fold growth rates in transpressional tectonic settings

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel

    2015-04-01

    Geological folds are inherently three-dimensional (3D) structures; hence, they also grow in 3D. In this study, fold growth in all three dimensions is quantified numerically using a finite-element algorithm for simulating deformation of Newtonian media in 3D. The presented study is an extension and generalization of the work presented in Frehner (2014), which only considered unidirectional layer-parallel compression. In contrast, the full range from strike-slip settings (i.e., simple shear) to unidirectional layer-parallel compression is considered here by varying the convergence angle of the boundary conditions; hence the results are applicable to general transpressional tectonic settings. Only upright symmetrical single-layer fold structures are considered. The horizontal higher-viscous layer exhibits an initial point-like perturbation. Due to the mixed pure- and simple-shear boundary conditions, a mechanical buckling instability grows from this perturbation in all three dimensions, described by three growth components. Fold amplification (vertical growth) describes the growth from a fold shape with a low limb-dip angle to a shape with a higher limb-dip angle. Fold elongation (growth parallel to the fold axis) describes the growth from a dome-shaped (3D) structure to a more cylindrical fold (2D). Sequential fold growth (growth perpendicular to the fold axial plane) describes the growth of secondary (and further) folds adjacent to the initial isolated fold. The term 'lateral fold growth' is used as an umbrella term for both fold elongation and sequential fold growth. In addition, the orientation of the fold axis is tracked as a function of the convergence angle. Even though the absolute values of all three growth rates are markedly reduced with increasing simple-shear component at the boundaries, the general pattern of the quantified fold growth under the studied general-shear boundary conditions is surprisingly similar to the end-member case of unidirectional layer-parallel compression (Frehner, 2014). Fold growth rates in the two lateral directions are almost identical, resulting in bulk fold structures with aspect ratios in map view close to 1. Fold elongation is continuous with increasing bulk deformation, while sequential fold growth exhibits jumps whenever a new sequential fold appears. Compared with the two lateral growth directions, fold amplification exhibits a slightly higher growth rate. The fold axis forms an angle equal to half of (90° minus the convergence angle) with the boundaries, and this orientation is stable with increasing bulk deformation, i.e., the fold axis does not rotate with increasing general-shear deformation. For example, for simple-shear boundary conditions (convergence angle 0°) the fold axis is stable at an angle of 45° to the boundaries; for a convergence angle of 45° the fold axis is stable at an angle of 22.5° to the boundaries. REFERENCE: Frehner M., 2014: 3D fold growth rates, Terra Nova 26, 417-424, doi:10.1111/ter.12116.

  10. GeNeDA: An Open-Source Workflow for Design Automation of Gene Regulatory Networks Inspired from Microelectronics.

    PubMed

    Madec, Morgan; Pecheux, François; Gendrault, Yves; Rosati, Elise; Lallement, Christophe; Haiech, Jacques

    2016-10-01

    The topic of this article is the development of an open-source automated design framework for synthetic biology, specifically for the design of artificial gene regulatory networks based on a digital approach. Unlike other tools, GeNeDA is an open-source online software based on existing tools used in microelectronics that have proven their efficiency over the last 30 years. The complete framework is composed of a computation core directly adapted from an Electronic Design Automation tool, input and output interfaces, a library of elementary parts that can be achieved with gene regulatory networks, and an interface with an electrical circuit simulator. Each of these modules is an extension of microelectronics tools and concepts: ODIN II, ABC, the Verilog language, the SPICE simulator, and SystemC-AMS. GeNeDA is first validated on a benchmark of several combinatorial circuits. The results highlight the importance of the part library. Then, this framework is used for the design of a sequential circuit including a biological state machine.

  11. Navigation strategy and filter design for solar electric missions

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Hagar, H., Jr.

    1972-01-01

    Methods which have been proposed to improve the navigation accuracy for low-thrust space vehicles include modifications to the standard sequential- and batch-type orbit determination procedures and the use of inertial measuring units (IMUs), which measure directly the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications to the orbit determination procedures is compared with a combined IMU-Standard Orbit Determination approach. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC (dynamic model compensation) algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithm will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro-platform alignment of 0.01 deg and accelerometer signal-to-noise ratio of 0.07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.
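
    A minimal sketch of the acceleration-error model: discrete sampling of a first-order Gauss-Markov (exponentially correlated) process. The correlation time and standard deviation below are placeholders, not mission values.

```python
import numpy as np

def gauss_markov(tau, sigma, dt, n, seed=0):
    """Discrete samples of a first-order Gauss-Markov (exponentially
    correlated) acceleration error: dx = -x/tau dt + white noise."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)                  # state transition factor
    q = sigma**2 * (1.0 - phi**2)            # keeps steady-state var = sigma^2
    x = np.empty(n)
    x[0] = rng.normal(0.0, sigma)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + rng.normal(0.0, np.sqrt(q))
    return x

acc_err = gauss_markov(tau=600.0, sigma=1e-7, dt=60.0, n=100)  # placeholders
print(acc_err.std())
```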

  12. First Observation of Three-Neutron Sequential Emission from 25O

    NASA Astrophysics Data System (ADS)

    Sword, C.; Brett, J.; Deyoung, P. A.; Frank, N.; Karrick, H.; Kuchera, A. N.; MoNA Collaboration

    2017-09-01

    An active area of nuclear physics research is to evaluate models of the nuclear force by studying the structure of neutron-rich isotopes. In this experiment, a 101.3 MeV/u 27Ne beam from the National Superconducting Cyclotron Laboratory collided with a liquid deuterium target. The collision resulted in two-proton removal from the 27Ne beam which created excited 25O that decayed into three neutrons and an 22O fragment. The neutrons were detected by arrays of scintillating plastic bars, while a 4-Tesla dipole magnet placed directly after the target redirected charged fragments to a series of charged-particle detectors. From measured velocities of the neutrons and 22O fragments, the decay energy of 25O was calculated on an event-by-event basis with invariant mass spectroscopy. Using GEANT4, we simulated the decay of all nuclei that could have been created by the beam collision. By successfully fitting simulated decay processes to experimental data, we determined the decay processes present in the experiment. This work is supported by the National Science Foundation under Grants No. PHY-1306074 and No. PHY-1613188.
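
    A minimal sketch of the invariant-mass step: the decay energy is the invariant mass of the measured four-vectors minus the sum of the rest masses. The momenta and the ²²O mass below are illustrative numbers, not experimental values.

```python
import numpy as np

def decay_energy(four_vectors, rest_masses):
    """Invariant-mass decay energy, per event:
    E_decay = sqrt((sum E)^2 - |sum p|^2) - sum(m_rest), natural units (MeV)."""
    total = four_vectors.sum(axis=0)                 # (E, px, py, pz)
    m_inv = np.sqrt(total[0]**2 - np.dot(total[1:], total[1:]))
    return m_inv - sum(rest_masses)

m_n, m_O22 = 939.565, 20499.0                        # rest masses in MeV (illustrative)

def fv(m, p):                                        # four-vector from 3-momentum
    p = np.asarray(p, dtype=float)
    return np.array([np.hypot(m, np.linalg.norm(p)), *p])

# Hypothetical event: three neutrons plus an 22O fragment.
event = np.array([fv(m_n, [5, 0, 40]), fv(m_n, [-3, 4, 38]),
                  fv(m_n, [1, -2, 42]), fv(m_O22, [-3, -2, 900])])
print(decay_energy(event, [m_n, m_n, m_n, m_O22]))   # MeV above threshold
```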

  13. Preliminary results of sequential monitoring of simulated clandestine graves in Colombia, South America, using ground penetrating radar and botany.

    PubMed

    Molina, Carlos Martin; Pringle, Jamie K; Saumett, Miguel; Hernández, Orlando

    2015-03-01

    In most Latin American countries there are significant numbers of missing people and forced disappearances, 68,000 alone currently in Colombia. Successful detection of shallow buried human remains by forensic search teams is difficult in varying terrain and climates. This research created three simulated clandestine burial styles at two different depths commonly encountered in Latin America to gain knowledge of optimum forensic geophysics detection techniques. Repeated monitoring of the graves post-burial was undertaken by ground penetrating radar. Radar 2D profile results show reasonable detection of half-clothed pig cadavers up to 19 weeks after burial, with decreasing confidence after this time. Simulated burials using skeletonized human remains could not be imaged after 19 weeks of burial, while beheaded and burnt human remains could not be detected throughout the survey period. Horizontal radar time slices showed good early results up to 19 weeks after burial, as more area was covered and bi-directional surveys were collected, but these decreased in amplitude over time. Deeper burials were all harder to image than shallower ones. Analysis of excavated soil found soil moisture content almost double that reported from temperate climate studies. Vegetation variations over the simulated graves were also noted, which would provide promising indicators for grave detection. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  14. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis.
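
    A minimal Monte Carlo sketch in the spirit of the study, under assumed (not the study's) parameters: cluster-level prevalences drawn from a beta distribution induce intracluster correlation, and the LQAS rule classifies an area as high prevalence when total cases reach a decision threshold.

```python
import numpy as np

def lqas_high_rate(p_true, n_clusters=67, m=3, icc_beta=(2, 38), d=19,
                   n_sim=10_000, seed=0):
    """Monte Carlo rate of a 'high prevalence' classification for a clustered
    LQAS design: each cluster's GAM prevalence is drawn from a beta
    distribution rescaled to mean p_true (inducing intracluster correlation),
    m children are sampled per cluster, and the area is classified 'high'
    when total cases >= decision rule d. All parameters are illustrative."""
    rng = np.random.default_rng(seed)
    a, b = icc_beta
    scale = (a + b) * p_true / a                     # rescale beta mean to p_true
    p_clusters = np.clip(rng.beta(a, b, (n_sim, n_clusters)) * scale, 0, 1)
    cases = rng.binomial(m, p_clusters).sum(axis=1)
    return (cases >= d).mean()

# Probability of a 'high' classification at low vs. high true prevalence:
print(lqas_high_rate(0.05), lqas_high_rate(0.15))
```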

  15. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  16. Sequential Polarity-Reversing Circuit

    NASA Technical Reports Server (NTRS)

    Labaw, Clayton C.

    1994-01-01

    Proposed circuit reverses polarity of electric power supplied to bidirectional dc motor, reversible electro-mechanical actuator, or other device operating in direction depending on polarity. Circuit reverses polarity each time power turned on, without need for additional polarity-reversing or direction signals and circuitry to process them.

  17. Dispersion Analysis Using Particle Tracking Simulations Through Heterogeneity Based on Outcrop Lidar Imagery

    NASA Astrophysics Data System (ADS)

    Klise, K. A.; Weissmann, G. S.; McKenna, S. A.; Tidwell, V. C.; Frechette, J. D.; Wawrzyniec, T. F.

    2007-12-01

    Solute plumes are believed to disperse in a non-Fickian manner due to small-scale heterogeneity and variable velocities that create preferential pathways. In order to accurately predict dispersion in naturally complex geologic media, the connection between heterogeneity and dispersion must be better understood. Since aquifer properties cannot be measured at every location, it is common to simulate small-scale heterogeneity with random field generators based on a two-point covariance (e.g., through use of sequential simulation algorithms). While these random fields can produce preferential flow pathways, it is unknown how well the results simulate solute dispersion through natural heterogeneous media. To evaluate the influence that complex heterogeneity has on dispersion, we utilize high-resolution terrestrial lidar to identify and model lithofacies from outcrop for application in particle tracking solute transport simulations using RWHet. The lidar scan data are used to produce a lab (meter) scale two-dimensional model that captures 2-8 mm scale natural heterogeneity. Numerical simulations utilize various methods to populate the outcrop structure captured by the lidar-based image with reasonable hydraulic conductivity values. The particle tracking simulations result in residence time distributions used to evaluate the nature of dispersion through complex media. Particle tracking simulations through conductivity fields produced from the lidar images are then compared to particle tracking simulations through hydraulic conductivity fields produced from sequential simulation algorithms. Based on this comparison, the study aims to quantify the difference in dispersion when using realistic and simplified representations of aquifer heterogeneity. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  18. Decision Making in Kidney Paired Donation Programs with Altruistic Donors*

    PubMed Central

    Li, Yijiang; Song, Peter X.-K.; Leichtman, Alan B.; Rees, Michael A.; Kalbfleisch, John D.

    2014-01-01

    In recent years, kidney paired donation (KPD) has been extended to include living non-directed or altruistic donors, in which an altruistic donor donates to the candidate of an incompatible donor-candidate pair with the understanding that the donor in that pair will further donate to the candidate of a second pair, and so on; such a process continues and thus forms an altruistic donor-initiated chain. In this paper, we propose a novel strategy to sequentially allocate the altruistic donor (or bridge donor) so as to maximize the expected utility; analogous to the way a computer plays chess, the idea is to evaluate different allocations for each altruistic donor (or bridge donor) by looking several moves ahead in a derived look-ahead search tree. Simulation studies are provided to illustrate and evaluate our proposed method. PMID:25309603

  19. Design, Synthesis and Affinity Properties of Biologically Active Peptide and Protein Conjugates of Cotton Cellulose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, J. V.; Goheen, Steven C.

    The formation of peptide and protein conjugates of cellulose on cotton fabrics provides promising leads for the development of wound healing, antibacterial, and decontaminating textiles. An approach to the design, synthesis, and analysis of bioconjugates containing cellulose peptide and protein conjugates includes: 1) computer graphic modeling for a rationally designed structure; 2) attachment of the peptide or protein to cotton cellulose through a linker amino acid, and 3) characterization of the resulting bioconjugate. Computer graphic simulation of protein and peptide cellulose conjugates gives a rationally designed biopolymer to target synthetic modifications to the cotton cellulose. Techniques for preparing these types of conjugates involve both sequential assembly of the peptide on the fabric and direct crosslinking of the peptide or protein as cellulose bound esters or carboxymethylcellulose amides.

  20. Parallel discrete event simulation: A shared memory approach

    NASA Technical Reports Server (NTRS)

    Reed, Daniel A.; Malony, Allen D.; Mccredie, Bradley D.

    1987-01-01

    With traditional event-list techniques, evaluating a detailed discrete-event simulation model can often require hours or even days of computation time. Parallel simulation mimics the interacting servers and queues of a real system by assigning each simulated entity to a processor. By eliminating the event list and maintaining only sufficient synchronization to ensure causality, parallel simulation can potentially provide speedups that are linear in the number of processors. A set of shared-memory experiments is presented using the Chandy-Misra distributed-simulation algorithm to simulate networks of queues. Parameters include queueing network topology and routing probabilities, number of processors, and assignment of network nodes to processors. These experiments show that Chandy-Misra distributed simulation is a questionable alternative to sequential simulation of most queueing network models.

  1. The Mechanism for Processing Random-Dot Motion at Various Speeds in Early Visual Cortices

    PubMed Central

    An, Xu; Gong, Hongliang; McLoughlin, Niall; Yang, Yupeng; Wang, Wei

    2014-01-01

    All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory is perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences of population responses recorded by optical imaging can be revealed using the vector maximum but not the vector summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing. PMID:24682033

  2. Structural profiling of individual glycosphingolipids in a single thin-layer chromatogram by multiple sequential immunodetection matched with Direct IR-MALDI-o-TOF mass spectrometry.

    PubMed

    Souady, Jamal; Soltwisch, Jens; Dreisewerd, Klaus; Haier, Jörg; Peter-Katalinić, Jasna; Müthing, Johannes

    2009-11-15

    The thin-layer chromatography (TLC) immunoenzyme overlay assay is a widely used tool for antibody-mediated identification of glycosphingolipids (GSLs) in mixtures. However, because the majority of GSLs is left unexamined in a chromatogram of a single assay, we developed a novel method that permits detection of various GSLs by sequential multiple immunostaining combined with individual coloring of GSLs in the same chromatogram. Specific staining was achieved by means of primary anti-GSL antibodies, directed against lactosylceramide, globotriaosylceramide, and globotetraosylceramide, in conjunction with alkaline phosphatase (AP)- or horseradish peroxidase (HRP)-conjugated secondary antibodies together with the appropriate chromogenic substrates. Triple coloring with 5-bromo-4-chloro-3-indolyl phosphate (BCIP)-AP, Fast Red-AP, and 3,3'-diaminobenzidine (DAB)-HRP resulted in blue, red, and black precipitates, respectively, following three sequential immunostaining rounds. Structures of antibody-detected GSLs were determined by direct coupling of TLC with infrared matrix-assisted laser desorption/ionization orthogonal time-of-flight mass spectrometry. This combinatorial technique was used to demonstrate structural GSL profiling of crude lipid extracts from human hepatocellular cancer. This powerful technology allows efficient structural characterization of GSLs in small tissue samples and marks a further step forward in the emerging field of glycosphingolipidomics.

  3. Multi-objective design optimization of antenna structures using sequential domain patching with automated patch size determination

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2018-02-01

    In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.

  4. Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation

    NASA Astrophysics Data System (ADS)

    Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab

    2015-05-01

    3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from FDM is solved iteratively using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods with respect to different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using a parallel Jacobi (PJ) method is examined in relation to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may require excessive processing time to converge. Yet, the PJ method reduces the computational time to some extent for large grid sizes.
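
    A minimal sketch of the iterative kernel, assuming a unit cube with homogeneous Dirichlet boundaries and a placeholder source term: each Jacobi sweep rebuilds the potential from the six-point neighbour average. A Gauss-Seidel variant would instead update the same array in place, which is what makes it inherently sequential but typically faster per iteration.

```python
import numpy as np

def jacobi_step(u, f, h):
    """One Jacobi sweep for the 3D Poisson equation laplacian(u) = f on a
    unit cube with Dirichlet boundaries (boundary values of u held fixed)."""
    un = u.copy()
    un[1:-1, 1:-1, 1:-1] = (u[2:, 1:-1, 1:-1] + u[:-2, 1:-1, 1:-1] +
                            u[1:-1, 2:, 1:-1] + u[1:-1, :-2, 1:-1] +
                            u[1:-1, 1:-1, 2:] + u[1:-1, 1:-1, :-2] -
                            h * h * f[1:-1, 1:-1, 1:-1]) / 6.0
    return un

n = 24
h = 1.0 / (n - 1)
f = np.full((n, n, n), -1.0)          # uniform source (placeholder charge term)
u = np.zeros((n, n, n))               # boundaries held at 0 V
for it in range(2000):
    u_new = jacobi_step(u, f, h)
    if np.abs(u_new - u).max() < 1e-8:
        break
    u = u_new
print(it, u[n // 2, n // 2, n // 2])  # iterations used, centre potential
```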

  5. Sequential and simultaneous choices: testing the diet selection and sequential choice models.

    PubMed

    Freidin, Esteban; Aw, Justine; Kacelnik, Alex

    2009-03-01

    We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.

  6. Breast conserving treatment for breast cancer: dosimetric comparison of sequential versus simultaneous integrated photon boost.

    PubMed

    Van Parijs, Hilde; Reynders, Truus; Heuninckx, Karina; Verellen, Dirk; Storme, Guy; De Ridder, Mark

    2014-01-01

    Breast conserving surgery followed by whole breast irradiation is widely accepted as standard of care for early breast cancer. Addition of a boost dose to the initial tumor area further reduces local recurrences. We investigated the dosimetric benefits of a simultaneously integrated boost (SIB) compared to a sequential boost to hypofractionate the boost volume, while maintaining normofractionation on the breast. For 10 patients 4 treatment plans were deployed, 1 with a sequential photon boost, and 3 with different SIB techniques: on a conventional linear accelerator, helical TomoTherapy, and static TomoDirect. Dosimetric comparison was performed. PTV-coverage was good in all techniques. Conformity was better with all SIB techniques compared to sequential boost (P = 0.0001). There was less dose spilling to the ipsilateral breast outside the PTVboost (P = 0.04). The dose to the organs at risk (OAR) was not influenced by SIB compared to sequential boost. Helical TomoTherapy showed a higher mean dose to the contralateral breast, but less than 5 Gy for each patient. SIB showed less dose spilling within the breast and equal dose to OAR compared to sequential boost. Both helical TomoTherapy and the conventional technique delivered acceptable dosimetry. SIB seems a safe alternative and can be implemented in clinical routine.

  7. Breast Conserving Treatment for Breast Cancer: Dosimetric Comparison of Sequential versus Simultaneous Integrated Photon Boost

    PubMed Central

    Reynders, Truus; Heuninckx, Karina; Verellen, Dirk; Storme, Guy; De Ridder, Mark

    2014-01-01

    Background. Breast conserving surgery followed by whole breast irradiation is widely accepted as standard of care for early breast cancer. Addition of a boost dose to the initial tumor area further reduces local recurrences. We investigated the dosimetric benefits of a simultaneously integrated boost (SIB) compared to a sequential boost to hypofractionate the boost volume, while maintaining normofractionation on the breast. Methods. For 10 patients 4 treatment plans were deployed, 1 with a sequential photon boost, and 3 with different SIB techniques: on a conventional linear accelerator, helical TomoTherapy, and static TomoDirect. Dosimetric comparison was performed. Results. PTV-coverage was good in all techniques. Conformity was better with all SIB techniques compared to sequential boost (P = 0.0001). There was less dose spilling to the ipsilateral breast outside the PTVboost (P = 0.04). The dose to the organs at risk (OAR) was not influenced by SIB compared to sequential boost. Helical TomoTherapy showed a higher mean dose to the contralateral breast, but less than 5 Gy for each patient. Conclusions. SIB showed less dose spilling within the breast and equal dose to OAR compared to sequential boost. Both helical TomoTherapy and the conventional technique delivered acceptable dosimetry. SIB seems a safe alternative and can be implemented in clinical routine. PMID:25162031

  8. Exploiting neurovascular coupling: a Bayesian sequential Monte Carlo approach applied to simulated EEG fNIRS data

    NASA Astrophysics Data System (ADS)

    Croce, Pierpaolo; Zappasodi, Filippo; Merla, Arcangelo; Chiarelli, Antonio Maria

    2017-08-01

    Objective. Electrical and hemodynamic brain activity are linked through the neurovascular coupling process and they can be simultaneously measured through integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS). Thanks to the lack of electro-optical interference, the two procedures can be easily combined and, whereas EEG provides electrophysiological information, fNIRS can provide measurements of two hemodynamic variables, such as oxygenated and deoxygenated hemoglobin. A Bayesian sequential Monte Carlo approach (particle filter, PF) was applied to simulated recordings of electrical and neurovascular mediated hemodynamic activity, and the advantages of a unified framework were shown. Approach. Multiple neural activities and hemodynamic responses were simulated in the primary motor cortex of a subject brain. EEG and fNIRS recordings were obtained by means of forward models of volume conduction and light propagation through the head. A state space model of combined EEG and fNIRS data was built and its dynamic evolution was estimated through a Bayesian sequential Monte Carlo approach (PF). Main results. We showed the feasibility of the procedure and the improvements in both electrical and hemodynamic brain activity reconstruction when using the PF on combined EEG and fNIRS measurements. Significance. The investigated procedure allows one to combine the information provided by the two methodologies, and, by taking advantage of a physical model of the coupling between electrical and hemodynamic response, to obtain a better estimate of brain activity evolution. Despite the high computational demand, application of such an approach to in vivo recordings could fully exploit the advantages of this combined brain imaging technology.
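
    A minimal sketch of the sequential Monte Carlo mechanics on a toy state space model (a Gaussian random walk observed in noise), far simpler than the paper's neurovascular model: propagate particles, weight by the likelihood, estimate, and resample.

```python
import numpy as np

def bootstrap_pf(y, n_particles=500, q=0.1, r=0.5, seed=0):
    """Minimal bootstrap particle filter for a random-walk latent state
    observed in Gaussian noise; shows only the PF mechanics, not the
    EEG-fNIRS state space model of the paper."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)
    est = np.empty(len(y))
    for t, yt in enumerate(y):
        x = x + q * rng.standard_normal(n_particles)        # propagate
        w = np.exp(-0.5 * ((yt - x) / r) ** 2)              # likelihood weights
        w /= w.sum()
        est[t] = w @ x                                      # posterior mean
        x = rng.choice(x, size=n_particles, p=w)            # resample
    return est

rng = np.random.default_rng(1)
truth = np.cumsum(0.1 * rng.standard_normal(200))
obs = truth + 0.5 * rng.standard_normal(200)
print(np.abs(bootstrap_pf(obs) - truth).mean())             # tracking error
```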

  9. Development of a software framework for data assimilation and its applications for streamflow forecasting in Japan

    NASA Astrophysics Data System (ADS)

    Noh, S. J.; Tachikawa, Y.; Shiiba, M.; Yorozu, K.; Kim, S.

    2012-04-01

    Data assimilation methods have received increased attention as means of uncertainty assessment and of enhancing forecasting capability in various areas. Despite their potential, software frameworks applicable to probabilistic approaches and data assimilation are still limited because most hydrologic modeling software is based on a deterministic approach. In this study, we developed a hydrological modeling framework for sequential data assimilation, called MPI-OHyMoS. MPI-OHyMoS allows users to develop their own element models and to easily build a total simulation system model for hydrological simulations. Unlike a process-based modeling framework, this software framework benefits from its object-oriented design to flexibly represent hydrological processes without any change to the main library. Sequential data assimilation based on particle filters is available for any hydrologic model built on MPI-OHyMoS, considering various sources of uncertainty originating from input forcing, parameters and observations. The particle filters implement a Bayesian learning process in which the propagation of all uncertainties is carried out by a suitable selection of randomly generated particles without any assumptions about the nature of the distributions. In MPI-OHyMoS, ensemble simulations are parallelized, which can take advantage of a high-performance computing (HPC) system. We applied this software framework to short-term streamflow forecasting for several catchments in Japan using a distributed hydrologic model. Uncertainty in model parameters and in remotely sensed rainfall data, such as X-band or C-band radar, is estimated and mitigated in the sequential data assimilation.

  10. Two-IMU FDI performance of the sequential probability ratio test during shuttle entry

    NASA Technical Reports Server (NTRS)

    Rich, T. M.

    1976-01-01

    Performance data for the sequential probability ratio test (SPRT) during shuttle entry are presented. Current modeling constants and failure thresholds are included for the full mission 3B from entry through landing trajectory. Minimum 100 percent detection/isolation failure levels and a discussion of the effects of failure direction are presented. Finally, a limited comparison of failures introduced at trajectory initiation shows that the SPRT algorithm performs slightly worse than the data tracking test.
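
    A minimal sketch of Wald's SPRT for a shift in a Gaussian mean, which is the generic form of the test; the means, noise level and error rates below are placeholders, not the shuttle-entry modeling constants.

```python
import numpy as np

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test between two Gaussian means.
    Returns ('H1'|'H0', n_used), or ('undecided', n) if no threshold is
    crossed. Failure-detection reading: H1 = 'failed' (mean shifted to mu1)."""
    upper = np.log((1 - beta) / alpha)          # accept H1 above this
    lower = np.log(beta / (1 - alpha))          # accept H0 below this
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

rng = np.random.default_rng(0)
print(sprt(rng.normal(0.3, 1.0, 200), mu0=0.0, mu1=0.3, sigma=1.0))
```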

  11. Multiplexed Holographic Optical Data Storage In Thick Bacteriorhodopsin Films

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Timucin, Dogan A.; Gary, Charles K.; Ozcan, Meric; Smithey, Daniel T.; Crew, Marshall

    1998-01-01

    The optical data storage capacity of photochromic bacteriorhodopsin films is investigated by means of theoretical calculations, numerical simulations, and experimental measurements on sequential recording of angularly multiplexed diffraction gratings inside a thick D85N BR film.

  12. REML/BLUP and sequential path analysis in estimating genotypic values and interrelationships among simple maize grain yield-related traits.

    PubMed

    Olivoto, T; Nardino, M; Carvalho, I R; Follmann, D N; Ferrari, M; Szareski, V J; de Pelegrin, A J; de Souza, V Q

    2017-03-22

    Methodologies using restricted maximum likelihood/best linear unbiased prediction (REML/BLUP) in combination with sequential path analysis in maize are still limited in the literature. Therefore, the aims of this study were: i) to use REML/BLUP-based procedures to estimate variance components, genetic parameters, and genotypic values of simple maize hybrids, and ii) to fit stepwise regressions considering genotypic values to form a path diagram with multi-order predictors and minimum multicollinearity that explains the cause-and-effect relationships among grain yield-related traits. Fifteen commercial simple maize hybrids were evaluated in multi-environment trials in a randomized complete block design with four replications. The environmental variance (78.80%) and genotype-by-environment variance (20.83%) accounted for more than 99% of the phenotypic variance of grain yield, which hampers direct selection for this trait by breeders. The sequential path analysis model allowed the selection of traits with high explanatory power and minimum multicollinearity, resulting in models with elevated fit (R² > 0.9 and ε < 0.3). The number of kernels per ear (NKE) and thousand-kernel weight (TKW) are the traits with the largest direct effects on grain yield (r = 0.66 and 0.73, respectively). The high selection accuracy (0.86 and 0.89) associated with the high heritability on a mean basis (0.732 and 0.794) for NKE and TKW, respectively, indicated good reliability and prospects of success in the indirect selection of hybrids with high-yield potential through these traits. The negative direct effect of NKE on TKW (r = -0.856), however, must be considered. The joint use of mixed models and sequential path analysis is effective in the evaluation of maize-breeding trials.

  13. Statistical Emulator for Expensive Classification Simulators

    NASA Technical Reports Server (NTRS)

    Ross, Jerret; Samareh, Jamshid A.

    2016-01-01

    Expensive simulators prevent any kind of meaningful analysis from being performed on the phenomena they model. To get around this problem, the concept of using a statistical emulator as a surrogate representation of the simulator was introduced in the 1980s. Simulators have since become more and more complex; as a result, a single run can be very expensive and can take days, weeks or even months. Many new techniques, termed criteria, have been introduced that sequentially select the next best (most informative to the emulator) point to run on the simulator. These criteria allow for the creation of an emulator with only a small number of simulator runs. We follow and extend this framework to expensive classification simulators.
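
    A minimal sketch of this sequential-design idea, assuming a plain Gaussian-process emulator and a maximum-predictive-variance criterion (one of many possible criteria); the one-dimensional expensive_simulator stand-in, kernel length scale, and candidate grid are all hypothetical.

```python
import numpy as np

def expensive_simulator(x):           # placeholder for the real code
    return np.sin(3.0 * x) + 0.5 * x

def rbf(a, b, ls=0.3):
    # squared-exponential kernel with unit variance and length scale ls
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ls**2)

rng = np.random.default_rng(2)
X = rng.uniform(0.0, 2.0, size=3)     # small initial design
y = expensive_simulator(X)
cand = np.linspace(0.0, 2.0, 200)     # candidate inputs

for _ in range(10):                   # add ten points sequentially
    K = rbf(X, X) + 1e-8 * np.eye(len(X))
    Ks = rbf(cand, X)
    Kinv = np.linalg.inv(K)
    var = 1.0 - np.sum((Ks @ Kinv) * Ks, axis=1)   # GP predictive variance
    x_next = cand[np.argmax(var)]                  # most informative point
    X = np.append(X, x_next)
    y = np.append(y, expensive_simulator(x_next))

print("design points:", np.round(np.sort(X), 3))
```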

  14. Accelerating Sequential Gaussian Simulation with a constant path

    NASA Astrophysics Data System (ADS)

    Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus

    2018-03-01

    Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
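
    The following one-dimensional sketch illustrates the constant-path idea under simple kriging with a unit-sill exponential covariance: the kriging systems are solved once for a fixed random path and the cached weights are reused across all realizations. Grid size, neighbourhood size, and covariance range are arbitrary illustrative values, and the multi-grid path and parallelization discussed in the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n, n_real, n_nb = 200, 50, 8               # grid nodes, realizations, neighbours
cov = lambda h: np.exp(-np.abs(h) / 20.0)  # unit-sill exponential covariance

path = rng.permutation(n)                  # one path shared by all realizations
neighbours, weights = [], []
for step, node in enumerate(path):
    nb = path[:step]                                  # already-simulated nodes
    nb = nb[np.argsort(np.abs(nb - node))][:n_nb]     # closest n_nb of them
    neighbours.append(nb)
    if len(nb) == 0:
        weights.append(np.array([]))
        continue
    K = cov(nb[:, None] - nb[None, :])                # kriging system, solved
    weights.append(np.linalg.solve(K, cov(nb - node)))  # ... only once per node

real = np.zeros((n_real, n))
for r in range(n_real):                    # extra realizations reuse the weights
    for step, node in enumerate(path):
        nb, w = neighbours[step], weights[step]
        mean = w @ real[r, nb] if len(nb) else 0.0
        var = 1.0 - (w @ cov(nb - node) if len(nb) else 0.0)
        real[r, node] = mean + np.sqrt(max(var, 0.0)) * rng.standard_normal()

print(real.shape)  # (50, 200): fifty realizations from one set of kriging solves
```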

  15. Hemodynamic analysis of sequential graft from right coronary system to left coronary system.

    PubMed

    Wang, Wenxin; Mao, Boyan; Wang, Haoran; Geng, Xueying; Zhao, Xi; Zhang, Huixia; Xie, Jinsheng; Zhao, Zhou; Lian, Bo; Liu, Youjun

    2016-12-28

    Sequential and single grafting are two surgical procedures in coronary artery bypass grafting. However, it remains unclear whether a sequential graft can be used between the right and left coronary artery systems. The purpose of this paper is to clarify the possibility of anastomosing the right coronary artery system to the left coronary system. A patient-specific 3D model was first reconstructed based on coronary computed tomography angiography (CCTA) images. Two different grafting schemes, the conventional multi-graft (Model 1) and the novel multi-graft (Model 2), were then implemented on this patient-specific model using virtual surgery techniques. In Model 1, the single graft was anastomosed to the right coronary artery (RCA) and the sequential graft was adopted to anastomose the left anterior descending (LAD) and left circumflex artery (LCX). In Model 2, the single graft was anastomosed to the LAD and the sequential graft was adopted to anastomose the RCA and LCX. A zero-dimensional/three-dimensional (0D/3D) coupling method was used to realize the multi-scale simulation of both the pre-operative and the two post-operative models. Flow rates in the coronary arteries and grafts were obtained. Hemodynamic parameters, including wall shear stress (WSS) and oscillatory shear index (OSI), were also computed. The area of low WSS and OSI in Model 1 was much smaller than that in Model 2. Model 1 thus shows favorable hemodynamic modifications which may enhance the long-term patency of grafts. The anterior segments of a sequential graft have better long-term patency than the posterior segments. With a rational spatial position of the heart vessels, the last anastomosis of a sequential graft should be connected to the main branch.

  16. Magnetometer-only attitude and angular velocity filtering estimation for attitude changing spacecraft

    NASA Astrophysics Data System (ADS)

    Ma, Hongliang; Xu, Shijie

    2014-09-01

    This paper presents an improved real-time sequential filter (IRTSF) for magnetometer-only attitude and angular velocity estimation of a spacecraft during attitude changes (including fast, large-angle attitude maneuvers, rapid spin, or uncontrolled tumbling). In this new magnetometer-only attitude determination technique, both the attitude dynamics equation and the first time derivative of the measured magnetic field vector are directly incorporated into the filtering equations, building on the traditional gyroless single-vector attitude determination method and the real-time sequential filter (RTSF) for magnetometer-only attitude estimation. The process noise model of the IRTSF includes the attitude kinematics and dynamics equations, and its measurement model consists of the magnetic field vector and its first time derivative. The observability of the IRTSF for spacecraft with small or large angular velocity changes is evaluated by an improved Lie differentiation, and the degrees of observability of the IRTSF for different initial estimation errors are analyzed via the condition number and a solved covariance matrix. Numerical simulation results indicate that: (1) the attitude and angular velocity of the spacecraft can be estimated with sufficient accuracy using the IRTSF from magnetometer-only data; (2) compared with the RTSF, the estimation accuracies and observability degrees of attitude and angular velocity using the IRTSF are both improved; and (3) as regards universality, the IRTSF is observable for any initial state estimation error vector.

  17. Framing attention in Japanese and American comics: cross-cultural differences in attentional structure.

    PubMed

    Cohn, Neil; Taylor-Weiner, Amaro; Grossman, Suzanne

    2012-01-01

    Research on visual attention has shown that Americans tend to focus more on focal objects of a scene while Asians attend to the surrounding environment. The panels of comic books - the narrative frames in sequential images - highlight aspects of a scene comparably to how attention becomes focused on parts of a spatial array. Thus, we compared panels from American and Japanese comics to explore cross-cultural cognition beyond behavioral experimentation by looking at the expressive mediums produced by individuals from these cultures. This study compared the panels of two genres of American comics (Independent and Mainstream comics) with mainstream Japanese "manga" to examine how different cultures and genres direct attention through the framing of figures and scenes in comic panels. Both genres of American comics focused on whole scenes as much as individual characters, while Japanese manga individuated characters and parts of scenes. We argue that this framing of space in American and Japanese comic books simulates a viewer's integration of a visual scene, and is consistent with research showing cross-cultural differences in the direction of attention.

  18. Quantum switching of π-electron rotations in a nonplanar chiral molecule by using linearly polarized UV laser pulses.

    PubMed

    Mineo, Hirobumi; Yamaki, Masahiro; Teranishi, Yoshiaki; Hayashi, Michitoshi; Lin, Sheng Hsien; Fujimura, Yuichi

    2012-09-05

    Nonplanar chiral aromatic molecules are candidates for use as building blocks of multidimensional switching devices because their π electrons can generate ring currents with a variety of directions. We employed (P)-2,2'-biphenol, because four patterns of π-electron rotations along the two phenol rings are possible, and theoretically determined how quantum switching of the π-electron rotations can be realized. We found that each rotational pattern can be driven by a coherent excitation of two electronic states under two conditions: one is the symmetry of the electronic states and the other is their relative phase. On the basis of the results of quantum dynamics simulations, we propose a quantum control method for sequential switching among the four rotational patterns that can be performed by using ultrashort overlapped pump and dump pulses with properly selected relative phases and photon polarization directions. The results serve as a theoretical basis for the design of confined ultrafast switching of ring currents in nonplanar molecules and further of current-induced magnetic fluxes in more sophisticated systems.

  20. Movement plans for posture selection do not transfer across hands

    PubMed Central

    Schütz, Christoph; Schack, Thomas

    2015-01-01

    In a sequential task, the grasp postures people select depend on their movement history. This motor hysteresis effect results from the reuse of former movement plans and reduces the cognitive cost of movement planning. Movement plans for hand trajectories not only transfer across successive trials, but also across hands. We therefore asked whether such a transfer would also be found in movement plans for hand postures. To this end, we designed a sequential, continuous posture selection task. Participants had to open a column of drawers with cylindrical knobs in ascending and descending sequences. A hand switch was required in each sequence. Hand pro/supination was analyzed directly before and after the hand switch. Results showed that hysteresis effects were present directly before, but absent directly after the hand switch. This indicates that, in the current study, movement plans for hand postures only transfer across trials, but not across hands. PMID:26441734

  1. PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.

    PubMed

    Xia, Jing; Wang, Michelle Yongmei

    Analyzing the blood oxygenation level dependent (BOLD) effect in functional magnetic resonance imaging (fMRI) typically builds on recent time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models. It is important for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filter based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning to take full advantage of the dynamic information in the BOLD signals. Third, during the learning of the unknown static parameters, we employ low-dimensional sufficient statistics for efficiency and to avoid potential degeneracy of the parameters. The performance of the proposed method is validated using both simulated data and real BOLD fMRI data.

  2. Sequential Monte Carlo for inference of latent ARMA time-series with innovations correlated in time

    NASA Astrophysics Data System (ADS)

    Urteaga, Iñigo; Bugallo, Mónica F.; Djurić, Petar M.

    2017-12-01

    We consider the problem of sequential inference of latent time-series with innovations correlated in time and observed via nonlinear functions. We accommodate time-varying phenomena with diverse properties by means of a flexible mathematical representation of the data. We characterize statistically such time-series by a Bayesian analysis of their densities. The density that describes the transition of the state from time t to the next time instant t+1 is used for implementation of novel sequential Monte Carlo (SMC) methods. We present a set of SMC methods for inference of latent ARMA time-series with innovations correlated in time for different assumptions in knowledge of parameters. The methods operate in a unified and consistent manner for data with diverse memory properties. We show the validity of the proposed approach by comprehensive simulations of the challenging stochastic volatility model.

  3. Cardiac conduction velocity estimation from sequential mapping assuming known Gaussian distribution for activation time estimation error.

    PubMed

    Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian

    2016-08-01

    In this paper, we study the problem of the cardiac conduction velocity (CCV) estimation for the sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used here for the CCV estimation. We assume that the extracted ATs include some estimation errors, which we model with zero-mean white Gaussian noise values with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator, when the synchronization times between various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
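
    Under the stated planar-wavefront and known-variance Gaussian-noise assumptions, the maximum likelihood fit reduces to ordinary least squares on the slowness vector, as the simplified sketch below shows for synthetic electrode data (the unknown inter-site synchronization times handled in the paper are ignored here).

```python
import numpy as np

rng = np.random.default_rng(4)
pos = rng.uniform(0.0, 20.0, size=(12, 2))    # electrode xy positions (mm)
s_true = np.array([1.0 / 0.6, 0.0])           # slowness for 0.6 mm/ms along x
t = pos @ s_true + 5.0 + rng.normal(0.0, 0.1, 12)  # noisy activation times (ms)

# planar wavefront: t_i ~ s . x_i + t0, fit [s_x, s_y, t0] by least squares
A = np.hstack([pos, np.ones((12, 1))])
coef, *_ = np.linalg.lstsq(A, t, rcond=None)  # ML estimate = least squares here
speed = 1.0 / np.linalg.norm(coef[:2])        # CV is inverse slowness magnitude
print(f"estimated conduction velocity: {speed:.3f} mm/ms")
```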

  4. Influence of Multidimensionality on Convergence of Sampling in Protein Simulation

    NASA Astrophysics Data System (ADS)

    Metsugi, Shoichi

    2005-06-01

    We study the problem of convergence of sampling in protein simulation originating in the multidimensionality of protein’s conformational space. Since several important physical quantities are given by second moments of dynamical variables, we attempt to obtain the time of simulation necessary for their sufficient convergence. We perform a molecular dynamics simulation of a protein and the subsequent principal component (PC) analysis as a function of simulation time T. As T increases, PC vectors with smaller amplitude of variations are identified and their amplitudes are equilibrated before identifying and equilibrating vectors with larger amplitude of variations. This sequential identification and equilibration mechanism makes protein simulation a useful method although it has an intrinsic multidimensional nature.
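
    A schematic version of this convergence check, assuming a synthetic correlated trajectory in place of real MD coordinates: PCA is repeated on windows of increasing length T and the leading principal-component variances are tracked as they equilibrate.

```python
import numpy as np

rng = np.random.default_rng(5)
n_frames, n_dof = 5000, 30
traj = np.zeros((n_frames, n_dof))
for t in range(1, n_frames):          # correlated AR(1) toy dynamics
    traj[t] = 0.99 * traj[t - 1] + rng.standard_normal(n_dof)

for T in (500, 1000, 2000, 5000):     # growing simulation-time windows
    window = traj[:T] - traj[:T].mean(axis=0)
    # principal components = eigenvectors of the coordinate covariance
    evals = np.linalg.eigvalsh(np.cov(window.T))[::-1]
    print(f"T={T:5d}  top-3 PC variances: {np.round(evals[:3], 1)}")
```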

  5. Dark sequential Z' portal: Collider and direct detection experiments

    NASA Astrophysics Data System (ADS)

    Arcadi, Giorgio; Campos, Miguel D.; Lindner, Manfred; Masiero, Antonio; Queiroz, Farinaldo S.

    2018-02-01

    We revisit the status of a Majorana fermion as a dark matter candidate when a sequential Z' gauge boson dictates the dark matter phenomenology. Direct dark matter detection signatures arise from dark matter-nucleus scatterings at bubble chamber and liquid xenon detectors, and from the flux of neutrinos from the Sun measured by the IceCube experiment, which is governed by the spin-dependent dark matter-nucleus scattering. On the collider side, LHC searches for dilepton and monojet + missing energy signals play an important role. The relic density and perturbativity requirements are also addressed. By exploiting the dark matter complementarity we outline the region of parameter space where one can successfully have a Majorana dark matter particle in light of current and planned experimental sensitivities.

  6. Formation of iron nanoparticles and increase in iron reactivity in mineral dust during simulated cloud processing.

    PubMed

    Shi, Zongbo; Krom, Michael D; Bonneville, Steeve; Baker, Alex R; Jickells, Timothy D; Benning, Liane G

    2009-09-01

    The formation of iron (Fe) nanoparticles and the increase in Fe reactivity in mineral dust during simulated cloud processing were investigated using high-resolution microscopy and chemical extraction methods. Cloud processing of dust was experimentally simulated via an alternation of acidic (pH 2) and circumneutral conditions (pH 5-6) over periods of 24 h each on presieved (<20 microm) Saharan soil and goethite suspensions. Microscopic analyses of the processed soil and goethite samples reveal the neo-formation of Fe-rich nanoparticle aggregates, which were not present initially. Similar Fe-rich nanoparticles were also observed in wet-deposited Saharan dust from the western Mediterranean but not in dry-deposited dust from the eastern Mediterranean. Sequential Fe extraction of the soil samples indicated an increase in the proportion of chemically reactive Fe extractable by an ascorbate solution after simulated cloud processing. In addition, the sequential extractions on the Mediterranean dust samples revealed a higher content of reactive Fe in the wet-deposited dust compared to that of the dry-deposited dust. These results suggest that the large variations of pH commonly reported in aerosol and cloud waters can trigger the neo-formation of nanosize Fe particles and an increase in Fe reactivity in the dust.

  7. A Molecular Dynamics-Quantum Mechanics Theoretical Study of DNA-Mediated Charge Transport in Hydrated Ionic Liquids.

    PubMed

    Meng, Zhenyu; Kubar, Tomas; Mu, Yuguang; Shao, Fangwei

    2018-05-08

    Charge transport (CT) through biomolecules is of high significance in the research fields of biology, nanotechnology, and molecular devices. Inspired by our previous work showing that the binding of an ionic liquid (IL) facilitates charge transport in duplex DNA, in silico simulation is a useful means to understand the microscopic mechanism of this facilitation. Here, molecular dynamics (MD) simulations of duplex DNA in water and in hydrated ionic liquids were employed to explore the helical parameters. Principal component analysis was further applied to capture the subtle conformational changes of helical DNA under different environmental conditions. Subsequently, CT rates were calculated by a QM/MM simulation of the flickering resonance model based on the MD trajectories. The MD simulations illustrated that the binding of ionic liquids can restrain the dynamic conformation and lower the on-site energy of the DNA bases. Confined movement among adjacent base pairs was highly related to the increase of electronic coupling among base pairs, which may lead DNA to a CT-facilitated state. By sequentially combining MD and QM/MM analysis, the rational correlations among the binding modes, the conformational changes, and the CT rates illustrate the facilitation effects of hydrated ILs on DNA CT and support a conformational-gating mechanism.

  8. Sequential Double Ionization: The Timing of Release

    NASA Astrophysics Data System (ADS)

    Pfeiffer, A.

    2011-05-01

    The timing of electron release in strong field double ionization poses great challenges both for conceptual definition and for experimental measurement. Here we present coincidence momentum measurements of the doubly charged ion and of the two electrons arising from double ionization of argon using elliptically (close to circularly) polarized laser pulses. Based on a semi-classical model, the ionization times are calculated from the measured electron momenta across a large intensity range. Exploiting the attoclock technique we have direct access to timings on a coarse and on a fine scale, similar to the hour and the minute hand of a clock. In our attoclock, the magnitude of the electron momenta follows the envelope of the laser pulse and gives a coarse timing for the electron releases (the hour hand), while the fine timing (the minute hand) is provided by the emission angle of the electrons. The first of our findings is that due to depletion the averaged ionization time moves towards the beginning of the pulse with increasing intensity, confirming the results of Maharjan et al., and that the ion momentum distribution projected onto the minor polarization axis shows a bifurcation from a 3-peak to a 4-peak structure. This effect can be fully understood by modeling the process semi-classically in the independent electron approximation following the simple man's model. The ionization time measurement performed with the attoclock shows that the release time of the first electron is in good agreement with the semi-classical simulation performed on the basis of Sequential Double Ionization (SDI), whereas the ionization of the second electron occurs significantly earlier than predicted. This observation suggests that electron correlation and other Non-Sequential Double Ionization (NSDI) mechanisms may play an important role also in the case of strong field double ionization by close-to-circularly polarized laser pulses. In collaboration with C. Cirelli and M. Smolarski, Physics Department, ETH Zurich, 8093 Zurich, Switzerland; R. Doerner, Institut für Kernphysik, Johann Wolfgang Goethe Universität, 60438 Frankfurt am Main, Germany; and U. Keller, ETH Zurich.

  9. Mechanisms of electron acceptor utilization: Implications for simulating anaerobic biodegradation

    USGS Publications Warehouse

    Schreiber, M.E.; Carey, G.R.; Feinstein, D.T.; Bahr, J.M.

    2004-01-01

    Simulation of biodegradation reactions within a reactive transport framework requires information on mechanisms of terminal electron acceptor processes (TEAPs). In initial modeling efforts, TEAPs were approximated as occurring sequentially, with the highest energy-yielding electron acceptors (e.g., oxygen) consumed before those that yield less energy (e.g., sulfate). Within this framework, in a steady-state plume, sequential electron acceptor utilization would theoretically produce methane at an organic-rich source and Fe(II) further downgradient, resulting in a limited zone of Fe(II) and methane overlap. However, contaminant plumes often display much more extensive zones of overlapping Fe(II) and methane. The extensive overlap could be caused by several abiotic and biotic processes including vertical mixing of byproducts in long-screened monitoring wells, adsorption of Fe(II) onto aquifer solids, or microscale heterogeneity in Fe(III) concentrations. Alternatively, the overlap could be due to simultaneous utilization of terminal electron acceptors. Because biodegradation rates are controlled by TEAPs, evaluating the mechanisms of electron acceptor utilization is critical for improving prediction of contaminant mass losses due to biodegradation. Using BioRedox-MT3DMS, a three-dimensional, multi-species reactive transport code, we simulated the current configurations of a BTEX plume and TEAP zones at a petroleum-contaminated field site in Wisconsin. Simulation results suggest that BTEX mass loss due to biodegradation is greatest under oxygen-reducing conditions, with smaller but similar contributions to mass loss from biodegradation under Fe(III)-reducing, sulfate-reducing, and methanogenic conditions. Results of sensitivity calculations document that BTEX losses due to biodegradation are most sensitive to the age of the plume, while the shape of the BTEX plume is most sensitive to effective porosity and rate constants for biodegradation under Fe(III)-reducing and methanogenic conditions. Using this transport model, we had limited success in simulating overlap of redox products using reasonable ranges of parameters within a strictly sequential electron acceptor utilization framework. Simulation results indicate that overlap of redox products cannot be accurately simulated using the constructed model, suggesting either that Fe(III) reduction and methanogenesis are occurring simultaneously in the source area, or that heterogeneities in Fe(III) concentration and/or mineral type cause the observed overlap. Additional field, experimental, and modeling studies will be needed to address these questions.

  10. AEROSOL TRANSPORT AND DEPOSITION IN SEQUENTIALLY BIFURCATING AIRWAYS

    EPA Science Inventory

    Deposition patterns and efficiencies of a dilute suspension of inhaled particles in three-dimensional double bifurcating airway models for both in-plane and 90 deg out-of-plane configurations have been numerically simulated assuming steady, laminar, constant-property air flow wit...

  11. Actively learning human gaze shifting paths for semantics-aware photo cropping.

    PubMed

    Zhang, Luming; Gao, Yue; Ji, Rongrong; Xia, Yingjie; Dai, Qionghai; Li, Xuelong

    2014-05-01

    Photo cropping is a widely used tool in the printing industry, photography, and cinematography. Conventional cropping models suffer from the following three challenges. First, the deemphasized role of semantic contents, which are many times more important than low-level features in photo aesthetics. Second, the absence of a sequential ordering in the existing models; in contrast, humans look at semantically important regions sequentially when viewing a photo. Third, the difficulty of leveraging inputs from multiple users; experience from multiple users is particularly critical in cropping, as photo assessment is quite a subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving semantically important regions of a photo. We first project the local features (graphlets in this paper) onto the semantic space, which is constructed based on the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo, and the selection process can be interpreted as a path, which simulates how humans actively perceive semantics in a photo. Furthermore, we learn a prior distribution of such active graphlet paths from training photos that are marked as aesthetically pleasing by multiple users. The learned priors enforce the corresponding active graphlet path of a test photo to be maximally similar to those from the training photos. Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and thus is more indicative of photo aesthetics than conventional saliency maps; and 2) the cropped photos produced by our approach outperform those of its competitors in both qualitative and quantitative comparisons.

  12. The use of sequential extraction to evaluate the remediation potential of heavy metals from contaminated harbour sediment

    NASA Astrophysics Data System (ADS)

    Nystrøm, G. M.; Ottosen, L. M.; Villumsen, A.

    2003-05-01

    In this work, sequential extraction is performed on harbour sediment in order to evaluate the potential for electrodialytic remediation of harbour sediments. Sequential extraction was performed on a sample of Norwegian harbour sediment, both on the original sediment and on sediment treated with acid. The results from the sequential extraction show that 75% of Zn and Pb and about 50% of Cu are found in the most mobile phases in the original sediment, and more than 90% of Zn and Pb and 75% of Cu are found in the most mobile phases in the acid-treated sediment. Electrodialytic remediation experiments were also made. The method uses a low direct current as the cleaning agent, moving the heavy metals towards the anode or cathode according to their charge in the electric field. The electrodialytic experiments show that up to 50% of Cu, 85% of Zn and 60% of Pb can be removed after 20 days. Thus, there is still potential for higher removal with some changes in the experimental set-up and a longer remediation time. The experiments show that sequential extraction can be used to predict the electrodialytic remediation potential of harbour sediments.

  13. Metal Big Area Additive Manufacturing: Process Modeling and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simunovic, Srdjan; Nycz, Andrzej; Noakes, Mark W

    Metal Big Area Additive Manufacturing (mBAAM) is a new additive manufacturing (AM) technology for printing large-scale 3D objects. mBAAM is based on the gas metal arc welding process and uses a continuous feed of welding wire to manufacture an object. An electric arc forms between the wire and the substrate, which melts the wire and deposits a bead of molten metal along the predetermined path. In general, the welding process parameters and local conditions determine the shape of the deposited bead. The sequence of the bead deposition and the corresponding thermal history of the manufactured object determine the long-range effects, such as thermally induced distortions and residual stresses. Therefore, the resulting performance or final properties of the manufactured object depend on its geometry and the deposition path, in addition to the basic welding process parameters. Physical testing is critical for gaining the necessary knowledge for quality prints, but traversing the process parameter space in order to develop an optimized build strategy for each new design is impractical by purely experimental means. Computational modeling and optimization may accelerate the development of a build process strategy and save time and resources. Because computational modeling provides these opportunities, we have developed a physics-based Finite Element Method (FEM) simulation framework and numerical models to support the development and design of the mBAAM process. In this paper, we performed a sequentially coupled heat transfer and stress analysis for predicting the final deformation of a small rectangular structure printed using mild steel welding wire. Using the new simulation technologies, material was progressively added into the FEM simulation as the arc weld traversed the build path. In the sequentially coupled heat transfer and stress analysis, the heat transfer was performed first to calculate the temperature evolution, which was then used in a stress analysis to evaluate the residual stresses and distortions. In this formulation, we assume that the physics is directionally coupled, i.e., the effect of the stress of the component on the temperatures is negligible. The experiment instrumentation (measurement types, sensor types, sensor locations, sensor placements, measurement intervals) and the measurements are presented. The temperatures and distortions from the simulations show good correlation with the experimental measurements. Ongoing modeling work is also briefly discussed.

  14. Direct Numerical Simulation of Turbulent Multi-Stage Autoignition Relevant to Engine Conditions

    NASA Astrophysics Data System (ADS)

    Chen, Jacqueline

    2017-11-01

    Owing to the unrivaled energy density of liquid hydrocarbon fuels, combustion will continue to provide over 80% of the world's energy for at least the next fifty years. Hence, combustion needs to be understood and controlled to optimize combustion systems for efficiency, to prevent further climate change, to reduce emissions and to ensure U.S. energy security. In this talk I will discuss recent progress in direct numerical simulations of turbulent combustion focused on providing fundamental insights into key 'turbulence-chemistry' interactions that underpin the development of next-generation fuel-efficient, fuel-flexible engines for transportation and power generation. Petascale direct numerical simulations (DNS) of multi-stage mixed-mode turbulent combustion in canonical configurations have elucidated key physics that govern autoignition and flame stabilization in engines and provide benchmark data for combustion model development under the conditions of advanced engines, which operate near combustion limits to maximize efficiency and minimize emissions. Mixed-mode combustion refers to premixed or partially premixed flames propagating into stratified autoignitive mixtures. Multi-stage ignition refers to hydrocarbon fuels with negative-temperature-coefficient behavior that undergo sequential low- and high-temperature autoignition. Key issues that will be discussed include: 1) the role of mixing in shear-driven turbulence on the dynamics of multi-stage autoignition and cool-flame propagation in diesel environments; 2) the role of thermal and compositional stratification on the evolution of the balance of mixed combustion modes - flame propagation versus spontaneous ignition - which determines the overall combustion rate in autoignition processes; and 3) the role of cool flames in lifted flame stabilization. Finally, prospects for DNS of turbulent combustion at the exascale will be discussed in the context of anticipated heterogeneous machine architectures. Sponsored by the DOE Office of Basic Energy Sciences, with computing resources provided by the Oak Ridge Leadership Computing Facility through the DOE INCITE Program.

  15. specsim: A Fortran-77 program for conditional spectral simulation in 3D

    NASA Astrophysics Data System (ADS)

    Yao, Tingting

    1998-12-01

    A Fortran 77 program, specsim, is presented for conditional spectral simulation in 3D domains. The traditional Fourier integral method allows generating random fields with a given covariance spectrum. Conditioning to local data is achieved by an iterative identification of the conditional phase information. A flowchart of the program is given to illustrate the implementation procedures of the program. A 3D case study is presented to demonstrate application of the program. A comparison with the traditional sequential Gaussian simulation algorithm emphasizes the advantages and drawbacks of the proposed algorithm.
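
    The unconditional core of the Fourier integral method can be sketched in one dimension as follows: the covariance spectrum fixes the amplitudes of random Fourier coefficients, and an inverse FFT yields a field with the target covariance. This complex-Gaussian variant and its parameters are illustrative, and the iterative conditional phase identification performed by specsim is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 512
lags = np.minimum(np.arange(n), n - np.arange(n))     # circular lag distances
cov = np.exp(-lags / 25.0)                            # target covariance model
spectrum = np.maximum(np.real(np.fft.fft(cov)), 0.0)  # covariance spectrum

# complex-Gaussian Fourier coefficients with the prescribed amplitudes
xi = rng.standard_normal(n) + 1j * rng.standard_normal(n)
field = np.real(np.fft.ifft(np.sqrt(spectrum) * xi)) * np.sqrt(n)

print(f"field std = {field.std():.2f} (target 1, the sqrt of cov at lag 0)")
```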

  16. Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators

    PubMed Central

    Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew

    2014-01-01

    Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as the Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved a substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Damiani, Rick R

    This poster summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between two modeling approaches (fully coupled and sequentially coupled) through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.

  18. Modelling language evolution: Examples and predictions

    NASA Astrophysics Data System (ADS)

    Gong, Tao; Shuai, Lan; Zhang, Menghan

    2014-06-01

    We survey recent computer modelling research of language evolution, focusing on a rule-based model simulating the lexicon-syntax coevolution and an equation-based model quantifying the language competition dynamics. We discuss four predictions of these models: (a) correlation between domain-general abilities (e.g. sequential learning) and language-specific mechanisms (e.g. word order processing); (b) coevolution of language and relevant competences (e.g. joint attention); (c) effects of cultural transmission and social structure on linguistic understandability; and (d) commonalities between linguistic, biological, and physical phenomena. All these contribute significantly to our understanding of the evolutions of language structures, individual learning mechanisms, and relevant biological and socio-cultural factors. We conclude the survey by highlighting three future directions of modelling studies of language evolution: (a) adopting experimental approaches for model evaluation; (b) consolidating empirical foundations of models; and (c) multi-disciplinary collaboration among modelling, linguistics, and other relevant disciplines.

  19. Orbit control of a stratospheric satellite with parameter uncertainties

    NASA Astrophysics Data System (ADS)

    Xu, Ming; Huo, Wei

    2016-12-01

    When a stratospheric satellite travels with the prevailing winds in the stratosphere, its cross-track displacement needs to be controlled to maintain constant-latitude orbital flight. To design the orbit control system, a 6-degree-of-freedom (DOF) model of the satellite is established based on the Lagrangian formulation of the second kind; it is proven that input/output feedback linearization theory cannot be directly applied to orbit control with this model, so three subsystem models are derived from the 6-DOF model to develop a sequential nonlinear control strategy. The control strategy includes an adaptive controller for the balloon-tether subsystem with uncertain balloon parameters, a PD controller based on feedback linearization for the tether-sail subsystem, and a sliding mode controller for the sail-rudder subsystem with uncertain sail parameters. Simulation studies demonstrate that the proposed control strategy is robust to uncertainties and satisfies the high-precision requirements for the orbital flight of the satellite.

  20. Nonbolometric bottleneck in electron-phonon relaxation in ultrathin WSi films

    NASA Astrophysics Data System (ADS)

    Sidorova, Mariia V.; Kozorezov, A. G.; Semenov, A. V.; Korneeva, Yu. P.; Mikhailov, M. Yu.; Devizenko, A. Yu.; Korneev, A. A.; Chulkova, G. M.; Goltsman, G. N.

    2018-05-01

    We developed a model of the internal phonon bottleneck to describe the energy exchange between an acoustically soft ultrathin metal film and an acoustically rigid substrate. Discriminating phonons in the film into two groups, escaping and nonescaping, we show that electrons and nonescaping phonons may form a unified subsystem, which is cooled down only through interactions with escaping phonons, either by direct phonon conversion or by indirect sequential interaction with the electronic system. Using an amplitude-modulated absorption of sub-THz radiation technique, we studied electron-phonon relaxation in ultrathin disordered films of tungsten silicide. We found experimental proof of the internal phonon bottleneck. The experiment and the simulation based on the proposed model agree well, yielding τe-ph ≈ 140-190 ps at Tc = 3.4 K, supporting the results of earlier measurements by independent techniques.

  1. Atomic-scale models of early-stage alkali depletion and SiO2-rich gel formation in bioactive glasses.

    PubMed

    Tilocca, Antonio

    2015-01-28

    Molecular dynamics simulations of Na(+)/H(+)-exchanged 45S5 Bioglass® models reveal that a large fraction of the hydroxyl groups introduced into the proton-exchanged, hydrated glass structure do not initially form covalent bonds with Si and P network formers but remain free and stabilised by the modifier metal cations, whereas substantial Si-OH and P-OH bonding is observed only at higher Na(+)/H(+) exchange levels. The strong affinity between free OH groups and modifier cations in the highly fragmented 45S5 glass structure appears to represent the main driving force for this effect. This suggests an alternative direct route for the formation of a repolymerised silica-rich gel in the early stages of the bioactive mechanism, not considered before, which does not require sequential repeated breakings of Si-O-Si bonds and silanol condensations.

  2. Retrospective revaluation in sequential decision making: a tale of two systems.

    PubMed

    Gershman, Samuel J; Markman, Arthur B; Otto, A Ross

    2014-02-01

    Recent computational theories of decision making in humans and animals have portrayed 2 systems locked in a battle for control of behavior. One system--variously termed model-free or habitual--favors actions that have previously led to reward, whereas a second--called the model-based or goal-directed system--favors actions that causally lead to reward according to the agent's internal model of the environment. Some evidence suggests that control can be shifted between these systems using neural or behavioral manipulations, but other evidence suggests that the systems are more intertwined than a competitive account would imply. In 4 behavioral experiments, using a retrospective revaluation design and a cognitive load manipulation, we show that human decisions are more consistent with a cooperative architecture in which the model-free system controls behavior, whereas the model-based system trains the model-free system by replaying and simulating experience.

  3. Contribution of double scattering to structural coloration in quasiordered nanostructures of bird feathers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noh, Heeso; Liew, Seng Fatt; Saranathan, Vinodkumar

    2010-07-28

    We measured the polarization- and angle-resolved optical scattering and reflection spectra of the quasiordered nanostructures in bird feather barbs. In addition to the primary peak that originates from single scattering, we observed a secondary peak which exhibits depolarization and a distinct angular dispersion. We explain the secondary peak in terms of double scattering, i.e., light that is scattered successively twice by the structure. The two sequential single-scattering events are considered uncorrelated. Using the Fourier power spectra of the nanostructures obtained from small-angle x-ray scattering experiments, we calculated the double scattering of light in various directions. The double-scattering spectrum is broader than the single-scattering spectrum, and it splits into two subpeaks at larger scattering angles. The good agreement between the simulation results and the experimental data confirms that double scattering of light makes a significant contribution to the structural color.

  4. High-Order Multioperator Compact Schemes for Numerical Simulation of Unsteady Subsonic Airfoil Flow

    NASA Astrophysics Data System (ADS)

    Savel'ev, A. D.

    2018-02-01

    On the basis of high-order schemes, the viscous gas flow over the NACA2212 airfoil is numerically simulated at a free-stream Mach number of 0.3 and Reynolds numbers ranging from 10^3 to 10^7. Flow regimes sequentially varying due to variations in the free-stream viscosity are considered. Vortex structures developing on the airfoil surface are investigated, and a physical interpretation of this phenomenon is given.

  5. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extrema of the metamodels and the minimum points of a density function. More accurate metamodels are then constructed by this procedure. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples. PMID:25133206
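
    A toy one-dimensional rendition of such a loop, assuming an RBF metamodel from SciPy (requires scipy >= 1.7) and two simple point-addition rules: the current metamodel minimiser and the candidate farthest from existing samples, a crude stand-in for the minimum of a density function. The test function and budgets are hypothetical.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def f(x):                                    # expensive simulation stand-in
    return (x - 0.3)**2 + 0.2 * np.sin(12 * x)

X = np.array([[0.0], [0.5], [1.0]])          # initial design (n, 1)
y = f(X[:, 0])
cand = np.linspace(0.0, 1.0, 201)[:, None]   # candidate pool

for _ in range(8):
    model = RBFInterpolator(X, y, kernel="thin_plate_spline")
    pred = model(cand)
    x_min = cand[np.argmin(pred)]            # metamodel extremum point
    dists = np.min(np.abs(cand - X[:, 0]), axis=1)
    x_far = cand[np.argmax(dists)]           # sparsest (low-density) region
    for x_new in (x_min, x_far):
        # skip near-duplicates, which would make the RBF system singular
        if np.min(np.abs(X[:, 0] - x_new[0])) > 1e-6:
            X = np.vstack([X, x_new[None, :]])
            y = np.append(y, f(x_new[0]))

print(f"{len(X)} samples; best observed value {y.min():.4f}")
```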

  6. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    The error variance of the process, the prior multivariate normal distributions of the parameters of the models, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and an experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
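
    The sketch below shows the experiment-selection step for two candidate linear models with known error variance: the predictive distribution at each design point is normal under either model, so the symmetrised Kullback-Leibler divergence is available in closed form and the maximising point is sampled next. The model forms and parameter values are hypothetical, and the posterior updating and stopping rule are omitted.

```python
import numpy as np

def kl_normal(m0, v0, m1, v1):
    # closed-form KL( N(m0, v0) || N(m1, v1) )
    return 0.5 * (np.log(v1 / v0) + (v0 + (m0 - m1)**2) / v1 - 1.0)

sigma2 = 0.25                         # known error variance of the process
# model A: y = 1.0 + 0.5 x ;  model B: y = 0.8 + 0.9 x^2  (hypothetical)
mean_a = lambda x: 1.0 + 0.5 * x
mean_b = lambda x: 0.8 + 0.9 * x**2

cand = np.linspace(0.0, 2.0, 101)     # admissible experiment settings
scores = [kl_normal(mean_a(x), sigma2, mean_b(x), sigma2)
          + kl_normal(mean_b(x), sigma2, mean_a(x), sigma2) for x in cand]
best = cand[int(np.argmax(scores))]
print(f"next experiment at x = {best:.2f} (J = {max(scores):.3f})")
```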

  7. Numerical modeling of flow and transport in the far-field of a generic nuclear waste repository in fractured crystalline rock using updated fracture continuum model

    NASA Astrophysics Data System (ADS)

    Hadgu, T.; Kalinina, E.; Klise, K. A.; Wang, Y.

    2016-12-01

    Disposal of high-level radioactive waste in a deep geological repository in crystalline host rock is one of the potential options for long-term isolation. Characterization of the natural barrier system is an important component of this disposal option. In this study we present numerical modeling of flow and transport in fractured crystalline rock using an updated fracture continuum model (FCM). The FCM is a stochastic method that maps the permeability of discrete fractures onto a regular grid. The original method by McKenna and Reeves (2005) has been updated to provide capabilities that enhance the representation of fractured rock. As reported in Hadgu et al. (2015), the method was first modified to include fully three-dimensional representations of anisotropic permeability, multiple independent fracture sets, arbitrary fracture dips and orientations, and spatial correlation. More recently the FCM has been extended to include three different methods. (1) The Sequential Gaussian Simulation (SGSIM) method uses spatial correlation to generate fractures and define their properties for the FCM. (2) The ELLIPSIM method randomly generates a specified number of ellipses with properties defined by probability distributions; each ellipse represents a single fracture. (3) Direct conversion of discrete fracture network (DFN) output. Test simulations of flow and transport were conducted using ELLIPSIM and direct conversion of DFN output. The simulations used a 1 km x 1 km x 1 km model domain and a structured grid with blocks of size 10 m x 10 m x 10 m, resulting in a total of 10^6 grid blocks. Distributions of fracture parameters were used to generate a selected number of realizations. For each realization, the different methods were applied to generate representative permeability fields. The PFLOTRAN (Hammond et al., 2014) code was used to simulate flow and transport in the domain. Simulation results and analysis are presented. The results indicate that the FCM approach is a viable method for modeling fractured crystalline rocks. The FCM is a computationally efficient way to generate realistic representations of complex fracture systems. This approach is of interest for nuclear waste disposal models applied over large domains.

  8. Sequential evaporation of water molecules from protonated water clusters: measurement of the velocity distributions of the evaporated molecules and statistical analysis.

    PubMed

    Berthias, F; Feketeová, L; Abdoul-Carime, H; Calvo, F; Farizon, B; Farizon, M; Märk, T D

    2018-06-22

    Velocity distributions of neutral water molecules evaporated after collision induced dissociation of protonated water clusters H+(H2O)n≤10 were measured using the combined correlated ion and neutral fragment time-of-flight (COINTOF) and velocity map imaging (VMI) techniques. As observed previously, all measured velocity distributions exhibit two contributions, with a low velocity part identified by statistical molecular dynamics (SMD) simulations as events obeying the Maxwell-Boltzmann statistics and a high velocity contribution corresponding to non-ergodic events in which energy redistribution is incomplete. In contrast to earlier studies, where the evaporation of a single molecule was probed, the present study is concerned with events involving the evaporation of up to five water molecules. In particular, we discuss here in detail the cases of two and three evaporated molecules. Evaporation of several water molecules after CID can be interpreted in general as a sequential evaporation process. In addition to the SMD calculations, a Monte Carlo (MC) based simulation was developed allowing the reconstruction of the velocity distribution produced by the evaporation of m molecules from H+(H2O)n≤10 cluster ions using the measured velocity distributions for singly evaporated molecules as the input. The observed broadening of the low-velocity part of the distributions for the evaporation of two and three molecules as compared to the width for the evaporation of a single molecule results from the cumulative recoil velocity of the successive ion residues as well as the intrinsically broader distributions for decreasingly smaller parent clusters. Further MC simulations were carried out assuming that a certain proportion of non-ergodic events is responsible for the first evaporation in such a sequential evaporation series, thereby allowing the entire velocity distribution to be modeled.
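
    A schematic Monte Carlo reconstruction in this spirit: the speed distribution after m evaporations is built by summing m independent single-evaporation velocity vectors with isotropic directions, which reproduces the cumulative-recoil broadening qualitatively. A Maxwell-Boltzmann single-molecule distribution is assumed here in place of the measured one, and the non-ergodic high-velocity component is ignored.

```python
import numpy as np

rng = np.random.default_rng(7)

def random_directions(n):
    # isotropic unit vectors from normalised Gaussian triples
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def single_speeds(n, a=1.0):
    # Maxwell-Boltzmann speeds from three Gaussian velocity components
    return np.linalg.norm(rng.normal(0.0, a, (n, 3)), axis=1)

n = 100_000
for m in (1, 2, 3):                   # number of evaporated molecules
    total = np.zeros((n, 3))
    for _ in range(m):                # m successive, independent recoils
        total += single_speeds(n)[:, None] * random_directions(n)
    speed = np.linalg.norm(total, axis=1)
    print(f"m={m}: mean speed {speed.mean():.2f}, spread {speed.std():.2f}")
```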

  9. Comparison of Sequential and Variational Data Assimilation

    NASA Astrophysics Data System (ADS)

    Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht

    2017-04-01

    Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential for using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e. they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function that describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to the lack of comparison between both techniques. We contribute to filling this gap and present results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise to precipitation and temperature to produce better initial estimates for an HBV model. The results are computed for a hindcast period and assessed using lead-time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages and disadvantages in hydrological applications.
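
    For concreteness, the snippet below carries out one analysis step of a stochastic Ensemble Kalman Filter on a toy three-component state observed in its first component, illustrating the black-box character of the sequential approach (the model enters only through its forecast ensemble). The ensemble size, observation operator, and error covariance are arbitrary illustrative choices, not those of the HBV setup.

```python
import numpy as np

rng = np.random.default_rng(8)
n_ens, n_state = 50, 3
H = np.array([[1.0, 0.0, 0.0]])          # observation operator
R = np.array([[0.1]])                    # observation error covariance

ensemble = rng.normal(1.0, 0.5, (n_state, n_ens))   # forecast ensemble
y_obs = np.array([1.3])                              # new measurement

A = ensemble - ensemble.mean(axis=1, keepdims=True)  # ensemble anomalies
P = A @ A.T / (n_ens - 1)                            # sample covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain

# perturbed-observation update applied member by member
y_pert = y_obs[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), (1, n_ens))
analysis = ensemble + K @ (y_pert - H @ ensemble)
print("analysis mean:", analysis.mean(axis=1))
```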

  10. Bayesian Treed Multivariate Gaussian Process with Adaptive Design: Application to a Carbon Capture Unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik

    2014-05-16

    Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. High-resolution simulations are often computationally expensive and become impractical for parametric studies at different input values. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and prior distributions facilitates the different Markov chain Monte Carlo (MCMC) moves. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and the BTMGP to model the multiphase flow in a full-scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.
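
    The treed multivariate GP and its MCMC moves are well beyond a snippet, but the sequential sampling idea itself (run the simulator next where the emulator is most uncertain) can be sketched with an ordinary GP. The sketch below uses scikit-learn's GaussianProcessRegressor; the one-dimensional simulator f, the bounds and the candidate grid are illustrative assumptions, not the paper's setup.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def sequential_design(f, bounds=(0.0, 1.0), n_init=5, n_add=10, seed=0):
            # Greedy sequential sampling: evaluate the simulator f next at the
            # candidate point where the GP predictive uncertainty is largest.
            rng = np.random.default_rng(seed)
            X = rng.uniform(bounds[0], bounds[1], size=(n_init, 1))
            y = np.array([f(x) for x in X.ravel()])
            for _ in range(n_add):
                gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
                cand = np.linspace(bounds[0], bounds[1], 200).reshape(-1, 1)
                _, sd = gp.predict(cand, return_std=True)
                x_new = cand[np.argmax(sd)]            # most informative candidate
                X = np.vstack([X, [x_new]])
                y = np.append(y, f(x_new[0]))
            return X, y

        # e.g. X, y = sequential_design(lambda x: np.sin(3 * x), bounds=(0.0, 2.0))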

  11. Differentiability of simulated MEG hippocampal, medial temporal and neocortical temporal epileptic spike activity.

    PubMed

    Stephen, Julia M; Ranken, Doug M; Aine, Cheryl J; Weisend, Michael P; Shih, Jerry J

    2005-12-01

    Previous studies have shown that magnetoencephalography (MEG) can measure hippocampal activity, despite the cylindrical shape and deep location in the brain. The current study extended this work by examining the ability to differentiate the hippocampal subfields, parahippocampal cortex, and neocortical temporal sources using simulated interictal epileptic activity. A model of the hippocampus was generated on the MRIs of five subjects. CA1, CA3, and dentate gyrus of the hippocampus were activated as well as entorhinal cortex, presubiculum, and neocortical temporal cortex. In addition, pairs of sources were activated sequentially to emulate various hypotheses of mesial temporal lobe seizure generation. The simulated MEG activity was added to real background brain activity from the five subjects and modeled using a multidipole spatiotemporal modeling technique. The waveforms and source locations/orientations for hippocampal and parahippocampal sources were differentiable from neocortical temporal sources. In addition, hippocampal and parahippocampal sources were differentiated to varying degrees depending on source. The sequential activation of hippocampal and parahippocampal sources was adequately modeled by a single source; however, these sources were not resolvable when they overlapped in time. These results suggest that MEG has the sensitivity to distinguish parahippocampal and hippocampal spike generators in mesial temporal lobe epilepsy.

  12. The Application of Neutron Transport Green's Functions to Threat Scenario Simulation

    NASA Astrophysics Data System (ADS)

    Thoreson, Gregory G.; Schneider, Erich A.; Armstrong, Hirotatsu; van der Hoeven, Christopher A.

    2015-02-01

    Radiation detectors provide deterrence and defense against nuclear smuggling attempts by scanning vehicles, ships, and pedestrians for radioactive material. Understanding detector performance is crucial to developing novel technologies, architectures, and alarm algorithms. Detection can be modeled through radiation transport simulations; however, modeling a spanning set of threat scenarios over the full transport phase-space is computationally challenging. Previous research has demonstrated Green's functions can simulate photon detector signals by decomposing the scenario space into independently simulated submodels. This paper presents decomposition methods for neutron and time-dependent transport. As a result, neutron detector signals produced from full forward transport simulations can be efficiently reconstructed by sequential application of submodel response functions.

  13. Saltwater-freshwater mixing fluctuation in shallow beach aquifers

    NASA Astrophysics Data System (ADS)

    Han, Qiang; Chen, Daoyi; Guo, Yakun; Hu, Wulong

    2018-07-01

    Field measurements and numerical simulations demonstrate the existence of an upper saline plume in tidally dominated beaches. The effect of tides on the saltwater-freshwater mixing occurring at both the upper saline plume and the lower salt wedge is well understood. However, it is poorly understood whether the tidal driving force acts equally on the mixing behaviours of the above two regions and what factors control the mixing fluctuation features. In this study, variable-density, saturated-unsaturated, transient groundwater flow and solute transport numerical models are developed and applied to saltwater-freshwater mixing subject to tidal forcing on a sloping beach. A range of tidal amplitudes, fresh groundwater fluxes, hydraulic conductivities, beach slopes and dispersivity anisotropies is simulated. Based on the time-sequential salinity data, the gross mixing features are quantified by computing spatial moments in three different aspects, namely the centre point, the length and width, and the volume (or area in a two-dimensional case). The simulated salinity distribution varies significantly at the saltwater-freshwater interfaces. Mixing characteristics of the upper saline plume differ greatly from those in the salt wedge for both the transient and quasi-steady states. The mixing of the upper saline plume largely inherits the fluctuation characteristics of the sea tide in both the transverse and longitudinal directions when the quasi-steady state is reached. On the other hand, the mixing in the salt wedge is relatively steady and shows little fluctuation. The normalized mixing width and length, mixing volume and the fluctuation amplitude of the mass centre in the upper saline plume are, in general, one order of magnitude larger than those in the salt wedge region. In the longitudinal direction, tidal amplitude, fresh groundwater flux, hydraulic conductivity and beach slope are significant control factors of the fluctuation amplitude. In the transverse direction, tidal amplitude and beach slope are the main control parameters. Very small dispersivity anisotropy (e.g., αL/αT < 5) can greatly suppress mixing fluctuation in the longitudinal direction. This work underlines the close connection between the sea tides and the upper saline plume in terms of mixing, thereby enhancing understanding of the interplay between tidal oscillations and mixing mechanisms in tidally dominated sloping beach systems.
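
    The spatial-moment quantification described above can be illustrated with a short numpy sketch. Defining the mixing zone by salinity thresholds (here 5-95% of seawater salinity) is an assumption for illustration, not necessarily the paper's exact formulation.

        import numpy as np

        def mixing_moments(salinity, x, z, lo=0.05, hi=0.95):
            # Spatial moments of the mixing zone of a normalized 2-D salinity
            # field (shape (len(z), len(x))): centre point, longitudinal and
            # transverse spread, and area (the 2-D analogue of volume).
            X, Z = np.meshgrid(x, z)
            w = ((salinity > lo) & (salinity < hi)).astype(float)  # mixing zone
            m0 = w.sum()
            xc, zc = (w * X).sum() / m0, (w * Z).sum() / m0        # centre point
            sx = np.sqrt((w * (X - xc) ** 2).sum() / m0)           # mixing length
            sz = np.sqrt((w * (Z - zc) ** 2).sum() / m0)           # mixing width
            area = m0 * abs((x[1] - x[0]) * (z[1] - z[0]))
            return (xc, zc), (sx, sz), area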

  14. Sequential and direct ionic excitation in the strong-field ionization of 1-butene molecules.

    PubMed

    Schell, Felix; Boguslavskiy, Andrey E; Schulz, Claus Peter; Patchkovskii, Serguei; Vrakking, Marc J J; Stolow, Albert; Mikosch, Jochen

    2018-05-18

    We study the Strong-Field Ionization (SFI) of the hydrocarbon 1-butene as a function of wavelength using photoion-photoelectron covariance and coincidence spectroscopy. We observe a striking transition in the fragment-associated photoelectron spectra: from a single Above Threshold Ionization (ATI) progression for photon energies less than the cation D0-D1 gap to two ATI progressions for a photon energy greater than this gap. For the first case, electronically excited cations are created by SFI populating the ground cationic state D0, followed by sequential post-ionization excitation. For the second case, direct sub-cycle SFI to the D1 excited cation state contributes significantly. Our experiments access ionization dynamics in a regime where strong-field and resonance-enhanced processes can interplay.

  15. 11-kW direct diode laser system with homogenized 55 × 20 mm2 Top-Hat intensity distribution

    NASA Astrophysics Data System (ADS)

    Köhler, Bernd; Noeske, Axel; Kindervater, Tobias; Wessollek, Armin; Brand, Thomas; Biesenbach, Jens

    2007-02-01

    In comparison with other laser systems, diode lasers are characterized by a unique overall efficiency, a small footprint and high reliability. However, one major drawback of direct diode laser systems is the inhomogeneous intensity distribution in the far field. Furthermore, the output power of current commercially available systems is limited to about 6 kW. We report on a diode laser system with 11 kW output power at a single wavelength of 940 nm, aimed at customer-specific large-area treatment. To the best of our knowledge this is the highest output power reported so far for a direct diode laser system. In addition to the high output power, the intensity distribution of the laser beam is homogenized in both axes, leading to a 55 × 20 mm2 Top-Hat intensity profile at a working distance of 400 mm. Homogeneity of the intensity distribution is better than 90%. The intensity in the focal plane is 1 kW/cm2. We present a detailed characterization of the laser system, including measurements of power, power stability and intensity distribution of the homogenized laser beam. In addition, we compare the experimental data with the results of non-sequential raytracing simulations.

  16. Holographic lens spectrum splitting photovoltaic system for increased diffuse collection and annual energy yield

    NASA Astrophysics Data System (ADS)

    Vorndran, Shelby D.; Wu, Yuechen; Ayala, Silvana; Kostuk, Raymond K.

    2015-09-01

    Concentrating and spectrum-splitting photovoltaic (PV) modules have a limited acceptance angle and thus suffer from optical loss under off-axis illumination. This loss manifests itself as a substantial reduction in energy yield in locations where a significant portion of insolation is diffuse. In this work, a spectrum-splitting PV system is designed to efficiently collect and convert light in a range of illumination conditions. The system uses a holographic lens to concentrate short-wavelength light onto a smaller, more expensive indium gallium phosphide (InGaP) PV cell. The high-efficiency PV cell near the axis is surrounded by silicon (Si), a less expensive material that collects a broader portion of the solar spectrum. Under direct illumination, the device achieves increased conversion efficiency from spectrum splitting. Under diffuse illumination, the device collects light with efficiency comparable to a flat-panel Si module. Design of the holographic lens is discussed. Optical efficiency and power output of the module under a range of illumination conditions from direct to diffuse are simulated with non-sequential raytracing software. Using direct and diffuse Typical Meteorological Year (TMY3) irradiance measurements, the annual energy yield of the module is calculated for several installation sites. The energy yield of the spectrum-splitting module is compared to that of a full flat-panel Si reference module.

  17. Sequentially Simulated Outcomes: Kind Experience versus Nontransparent Description

    ERIC Educational Resources Information Center

    Hogarth, Robin M.; Soyer, Emre

    2011-01-01

    Recently, researchers have investigated differences in decision making based on description and experience. We address the issue of when experience-based judgments of probability are more accurate than are those based on description. If description is well understood ("transparent") and experience is misleading ("wicked"), it…

  18. A discrete event modelling framework for simulation of long-term outcomes of sequential treatment strategies for ankylosing spondylitis.

    PubMed

    Tran-Duy, An; Boonen, Annelies; van de Laar, Mart A F J; Franke, Angelinus C; Severens, Johan L

    2011-12-01

    To develop a modelling framework which can simulate long-term quality of life, societal costs and cost-effectiveness as affected by sequential drug treatment strategies for ankylosing spondylitis (AS). The discrete event simulation paradigm was selected for model development. Drug efficacy was modelled as changes in disease activity (Bath Ankylosing Spondylitis Disease Activity Index (BASDAI)) and functional status (Bath Ankylosing Spondylitis Functional Index (BASFI)), which were linked to costs and health utility using statistical models fitted to an observational AS cohort. Published clinical data were used to estimate drug efficacy and time to events. Two strategies were compared: (1) five available non-steroidal anti-inflammatory drugs (strategy 1) and (2) the same as strategy 1 plus two tumour necrosis factor α inhibitors (strategy 2). 13,000 patients were followed up individually until death. For probabilistic sensitivity analysis, Monte Carlo simulations were performed with 1000 sets of parameters sampled from the appropriate probability distributions. The models successfully generated valid data on treatments, BASDAI, BASFI, utility, quality-adjusted life years (QALYs) and costs at time points with intervals of 1-3 months over the simulation length of 70 years. The incremental cost per QALY gained in strategy 2 compared with strategy 1 was €35,186. At a willingness-to-pay threshold of €80,000, it was 99.9% certain that strategy 2 was cost-effective. The modelling framework provides great flexibility to implement complex algorithms representing treatment selection, disease progression and changes in costs and utilities over time for patients with AS. Results obtained from the simulation are plausible.

  19. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, for assessing the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis. PMID:20011037
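
    As a rough illustration of such a simulation, the Python sketch below estimates the probability that a c-by-m cluster LQAS design classifies a lot as high prevalence, inducing intracluster correlation through a beta-binomial model. The decision rule, the ICC parameterization and the thresholds are illustrative assumptions, not the paper's exact protocol.

        import numpy as np

        def lqas_classify_high(p_true, d, c=67, m=3, icc=0.1, n_sims=20_000, seed=0):
            # Probability that a c-by-m cluster LQAS design classifies a lot as
            # high prevalence (total cases >= d). Intracluster correlation is
            # induced with a beta-binomial model: cluster prevalences follow a
            # Beta(a, b) with mean p_true and ICC = 1 / (a + b + 1).
            rng = np.random.default_rng(seed)
            s = (1.0 - icc) / icc                         # a + b
            a, b = p_true * s, (1.0 - p_true) * s
            p_cluster = rng.beta(a, b, size=(n_sims, c))
            cases = rng.binomial(m, p_cluster).sum(axis=1)
            return (cases >= d).mean()

        # Comparing the two schemes at one (hypothetical) decision boundary:
        # lqas_classify_high(0.10, d=26, c=67, m=3)
        # lqas_classify_high(0.10, d=26, c=33, m=6)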

  20. Footprints of electron correlation in strong-field double ionization of Kr close to the sequential-ionization regime

    NASA Astrophysics Data System (ADS)

    Li, Xiaokai; Wang, Chuncheng; Yuan, Zongqiang; Ye, Difa; Ma, Pan; Hu, Wenhui; Luo, Sizuo; Fu, Libin; Ding, Dajun

    2017-09-01

    By combining kinematically complete measurements and a semiclassical Monte Carlo simulation we study the correlated-electron dynamics in the strong-field double ionization of Kr. Interestingly, we find that, as we step into the sequential-ionization regime, there are still signatures of correlation in the two-electron joint momentum spectrum and, more intriguingly, the scaling law of the high-energy tail is completely different from earlier predictions for the low-Z atom (He). These experimental observations are well reproduced by our generalized semiclassical model adapting a Green-Sellin-Zachor potential. It is revealed that the competition between the screening effect of inner-shell electrons and the Coulomb focusing of the nucleus leads to a non-inverse-square central force, which twists the returning electron trajectory in the vicinity of the parent core and thus significantly increases the probability of hard recollisions between the two electrons. Our results might have promising applications ranging from accurately retrieving atomic structures to simulating celestial phenomena in the laboratory.

  1. Towards autonomous neuroprosthetic control using Hebbian reinforcement learning.

    PubMed

    Mahmoudi, Babak; Pohlmeyer, Eric A; Prins, Noeline W; Geng, Shijia; Sanchez, Justin C

    2013-12-01

    Our goal was to design an adaptive neuroprosthetic controller that could learn the mapping from neural states to prosthetic actions and automatically adjust adaptation using only a binary evaluative feedback as a measure of desirability/undesirability of performance. Hebbian reinforcement learning (HRL) in a connectionist network was used for the design of the adaptive controller. The method combines the efficiency of supervised learning with the generality of reinforcement learning. The convergence properties of this approach were studied using both closed-loop control simulations and open-loop simulations that used primate neural data from robot-assisted reaching tasks. The HRL controller was able to perform classification and regression tasks using its episodic and sequential learning modes, respectively. In our experiments, the HRL controller quickly achieved convergence to an effective control policy, followed by robust performance. The controller also automatically stopped adapting the parameters after converging to a satisfactory control policy. Additionally, when the input neural vector was reorganized, the controller resumed adaptation to maintain performance. By estimating an evaluative feedback directly from the user, the HRL control algorithm may provide an efficient method for autonomous adaptation of neuroprosthetic systems. This method may enable the user to teach the controller the desired behavior using only a simple feedback signal.

  2. Sequential chemical-biological processes for the treatment of industrial wastewaters: review of recent progresses and critical assessment.

    PubMed

    Guieysse, Benoit; Norvill, Zane N

    2014-02-28

    When direct wastewater biological treatment is unfeasible, a cost- and resource-efficient alternative to direct chemical treatment consists of combining biological treatment with a chemical pre-treatment aiming to convert the hazardous pollutants into more biodegradable compounds. Whereas the principles and advantages of sequential treatment have been demonstrated for a broad range of pollutants and process configurations, recent progress (2011-present) in the field provides the basis for refining assessments of feasibility, costs, and environmental impacts. This paper thus reviews recent real-wastewater demonstrations at pilot and full scale as well as new process configurations. It also discusses new insights on the potential impacts of microbial community dynamics on process feasibility, design and operation. Finally, it sheds light on a critical issue that has not yet been properly addressed in the field: integration requires complex and tailored optimization and, of paramount importance to full-scale application, is sensitive to uncertainty and variability in the inputs used for process design and operation. Future research is therefore critically needed to improve process control and better assess the real potential of sequential chemical-biological processes for industrial wastewater treatment. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Solving the infeasible trust-region problem using approximations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renaud, John E.; Perez, Victor M.; Eldred, Michael Scott

    2004-07-01

    The use of optimization in engineering design has fueled the development of algorithms for specific engineering needs. When the simulations are expensive to evaluate or the outputs present some noise, the direct use of nonlinear optimizers is not advisable, since the optimization process will be expensive and may result in premature convergence. The use of approximations for both cases is an alternative investigated by many researchers, including the authors. When approximations are present, model management is required for proper convergence of the algorithm. In nonlinear programming, the use of trust regions for globalization of a local algorithm has been proven effective. The same approach has been used to manage the local move limits in sequential approximate optimization frameworks, as in Alexandrov et al., Giunta and Eldred, Perez et al., Rodriguez et al., etc. The experience in the mathematical community has shown that more effective algorithms can be obtained by the explicit inclusion of the constraints (SQP-type algorithms) rather than by using a penalty function as in the augmented Lagrangian formulation. The local problem bounded by the trust region, however, may have no feasible solution when explicit constraints are present. In order to remedy this problem, the mathematical community has developed different versions of a composite-steps approach. This approach consists of a normal step to reduce the amount of constraint violation and a tangential step to minimize the objective function while maintaining the level of constraint violation attained at the normal step. Two of the authors have developed a different approach for a sequential approximate optimization framework using homotopy ideas to relax the constraints. This algorithm, called interior-point trust-region sequential approximate optimization (IPTRSAO), presents some similarities to the normal-tangential composite-steps algorithms. In this paper, a description of the similarities is presented and an extension of the composite-steps algorithm is presented for the case of approximations.

  4. Geostatistical mapping of effluent-affected sediment distribution on the Palos Verdes shelf

    USGS Publications Warehouse

    Murray, C.J.; Lee, H.J.; Hampton, M.A.

    2002-01-01

    Geostatistical techniques were used to study the spatial continuity of the thickness of effluent-affected sediment in the offshore Palos Verdes Margin area. The thickness data were measured directly from cores and indirectly from high-frequency subbottom profiles collected over the Palos Verdes Margin. Strong spatial continuity of the sediment thickness data was identified, with a maximum range of correlation in excess of 1.4 km. The spatial correlation showed a marked anisotropy, and was more than twice as continuous in the alongshore direction as in the cross-shelf direction. Sequential indicator simulation employing models fit to the thickness data variograms was used to map the distribution of the sediment, and to quantify the uncertainty in those estimates. A strong correlation between sediment thickness data and measurements of the mass of the contaminant p,p′-DDE per unit area was identified. A calibration based on the bivariate distribution of the thickness and p,p′-DDE data was applied using Markov-Bayes indicator simulation to extend the geostatistical study and map the contamination levels in the sediment. Integrating the map grids produced by the geostatistical study of the two variables indicated that 7.8 million m3 of effluent-affected sediment exist in the map area, containing approximately 61-72 Mg (metric tons) of p,p′-DDE. Most of the contaminated sediment (about 85% of the sediment and 89% of the p,p′-DDE) occurs in water depths < 100 m. The geostatistical study also indicated that the samples available for mapping are well distributed and the uncertainty of the estimates of the thickness and contamination level of the sediments is lowest in areas where the contaminated sediment is most prevalent. © 2002 Elsevier Science Ltd. All rights reserved.

  5. The COSMO-CLM 4.8 regional climate model coupled to regional ocean, land surface and global earth system models using OASIS3-MCT: description and performance

    NASA Astrophysics Data System (ADS)

    Will, Andreas; Akhtar, Naveed; Brauch, Jennifer; Breil, Marcus; Davin, Edouard; Ho-Hagemann, Ha T. M.; Maisonnave, Eric; Thürkow, Markus; Weiher, Stefan

    2017-04-01

    We developed a coupled regional climate system model based on the CCLM regional climate model. Within this model system, using OASIS3-MCT as a coupler, CCLM can be coupled to two land surface models (the Community Land Model (CLM) and VEG3D), the NEMO-MED12 regional ocean model for the Mediterranean Sea, two ocean models for the North and Baltic seas (NEMO-NORDIC and TRIMNP+CICE) and the MPI-ESM Earth system model. We first present the different model components and the unified OASIS3-MCT interface, which handles all couplings in a consistent way, minimising the model source code modifications and defining the physical and numerical aspects of the couplings. We also address specific coupling issues like the handling of different domains, multiple usage of the MCT library and the exchange of 3-D fields. We analyse and compare the computational performance of the different couplings based on real-case simulations over Europe. The usage of the LUCIA tool implemented in OASIS3-MCT enables the quantification of the contributions of the coupled components to the overall coupling cost. These individual contributions are (1) the cost of the model(s) coupled, (2) the direct cost of coupling, including horizontal interpolation and communication between the components, (3) load imbalance, (4) the cost of different usage of processors by CCLM in coupled and stand-alone mode and (5) the residual cost, including among other things additional CCLM computations. Finally, a procedure for finding an optimum processor configuration for each of the couplings was developed considering the time to solution, computing cost and parallel efficiency of the simulation. The optimum configurations are presented for sequential, concurrent and mixed (sequential+concurrent) coupling layouts. The procedure applied can be regarded as independent of the specific coupling layout and coupling details. We found that the direct cost of coupling, i.e., communications and horizontal interpolation, in OASIS3-MCT remains below 7% of the CCLM stand-alone cost for all couplings investigated. This is in particular true for the exchange of 450 2-D fields between CCLM and MPI-ESM. We identified remaining limitations in the coupling strategies and discuss possible future improvements of the computational efficiency.

  6. Operative air temperature data for different measures applied on a building envelope in warm climate.

    PubMed

    Baglivo, Cristina; Congedo, Paolo Maria

    2018-04-01

    Several technical combinations have been evaluated in order to design high-energy-performance buildings for a warm climate. The analysis has been developed in several steps, avoiding the use of HVAC systems. The methodological approach of this study is based on a sequential search technique and is presented in the paper entitled "Envelope Design Optimization by Thermal Modeling of a Building in a Warm Climate" [1]. The Operative Air Temperature (TOP) trends for each combination have been plotted through a dynamic simulation performed using the software TRNSYS 17 (a transient system simulation program, University of Wisconsin, Solar Energy Laboratory, USA, 2010). Starting from the simplest building configuration, consisting of 9 rooms (equal-sized modules of 5 × 5 m2), the different building components are sequentially evaluated until the envelope design is optimized. The aim of this study is to perform a step-by-step simulation, simplifying the model as much as possible without introducing additional variables that could modify its performance. Walls, the slab-on-ground floor, the roof, shading and windows are among the simulated building components. The results are shown for each combination and evaluated for Brindisi, a city in southern Italy with 1083 degree days, belonging to national climatic zone C. The data show the trends of the TOP for each measure applied in the case study, for a total of 17 combinations divided into eight steps.

  7. Structural characteristics of hydrated protons in the conductive channels: effects of confinement and fluorination studied by molecular dynamics simulation.

    PubMed

    Zhang, Ning; Song, Yuechun; Ruan, Xuehua; Yan, Xiaoming; Liu, Zhao; Shen, Zhuanglin; Wu, Xuemei; He, Gaohong

    2016-09-21

    The relationship between the proton conductive channel and the hydrated proton structure is of significant importance for understanding the deformed hydrogen bonding network of the confined protons which matches the nanochannel. In general, the structure of hydrated protons in the nanochannel of a proton exchange membrane is affected by several factors. To investigate the independent effect of each factor, it is necessary to eliminate the interference of the other factors. In this paper, a one-dimensional carbon nanotube decorated with fluorine was built to investigate the independent effects of nanoscale confinement and fluorination on the structural properties of hydrated protons in the nanochannel using classical molecular dynamics simulation. In order to characterize the structure of hydrated protons confined in the channel, the hydrogen bonding interaction between water and the hydrated protons has been studied according to suitable hydrogen bond criteria. The hydrogen bond criteria were proposed based on the radial distribution function, angle distribution and pair-potential energy distribution. It was found that fluorination leads to an ordered hydrogen bonding structure of the hydrated protons near the channel surface, and confinement weakens the formation of bifurcated hydrogen bonds in the radial direction. In addition, fluorination lowers the free energy barrier of hydronium along the nanochannel, but slightly increases the barrier for water. This leads to disintegration of the sequential hydrogen bond network in fluorinated CNTs of small size. In fluorinated CNTs of large diameter, the lower degree of confinement produces a spiral-like sequential hydrogen bond network with few bifurcated hydrogen bonds in the central region. This structure might promote unidirectional proton transfer along the channel without random movement. This study reveals the cooperative effect of confinement dimension and fluorination on the structure and hydrogen bonding of the slightly acidic water in the nanoscale channel.

  8. A real-time comparison between direct control, sequential pattern recognition control and simultaneous pattern recognition control using a Fitts’ law style assessment procedure

    PubMed Central

    2014-01-01

    Background Pattern recognition (PR) based strategies for the control of myoelectric upper limb prostheses are generally evaluated through offline classification accuracy, which is an admittedly useful metric, but insufficient to discuss functional performance in real time. Existing functional tests are extensive to set up and most fail to provide a challenging, objective framework to assess the strategy performance in real time. Methods Nine able-bodied and two amputee subjects gave informed consent and participated in the local Institutional Review Board approved study. We designed a two-dimensional target acquisition task, based on the principles of Fitts’ law for human motor control. Subjects were prompted to steer a cursor from the screen center into a series of subsequently appearing targets of different difficulties. Three cursor control systems were tested, corresponding to three electromyography-based prosthetic control strategies: 1) amplitude-based direct control (the clinical standard of care), 2) sequential PR control, and 3) simultaneous PR control, allowing for concurrent activation of two degrees of freedom (DOF). We computed throughput (bits/second), path efficiency (%), reaction time (seconds), and overshoot (%), and used general linear models to assess significant differences between the strategies for each metric. Results We validated the proposed methodology by achieving very high coefficients of determination for Fitts’ law. Both PR strategies significantly outperformed direct control in two-DOF targets and were more intuitive to operate. In one-DOF targets, the simultaneous approach was the least precise. Direct control was efficient in one-DOF targets but cumbersome to operate in two-DOF targets through a switch-dependent sequential cursor control. Conclusions We designed a test capable of comprehensively describing prosthetic control strategies in real time. When implemented on control subjects, the test was able to capture statistically significant differences (p < 0.05) in control strategies when considering throughputs, path efficiencies and reaction times. Of particular note, we found statistically significant (p < 0.01) improvements in throughputs and path efficiencies with simultaneous PR when compared to direct control or sequential PR. Amputees could readily achieve the task; however, a limited number of subjects was tested and a statistical analysis was not performed with that population. PMID:24886664

  9. A real-time comparison between direct control, sequential pattern recognition control and simultaneous pattern recognition control using a Fitts' law style assessment procedure.

    PubMed

    Wurth, Sophie M; Hargrove, Levi J

    2014-05-30

    Pattern recognition (PR) based strategies for the control of myoelectric upper limb prostheses are generally evaluated through offline classification accuracy, which is an admittedly useful metric, but insufficient to discuss functional performance in real time. Existing functional tests are extensive to set up and most fail to provide a challenging, objective framework to assess the strategy performance in real time. Nine able-bodied and two amputee subjects gave informed consent and participated in the local Institutional Review Board approved study. We designed a two-dimensional target acquisition task, based on the principles of Fitts' law for human motor control. Subjects were prompted to steer a cursor from the screen center into a series of subsequently appearing targets of different difficulties. Three cursor control systems were tested, corresponding to three electromyography-based prosthetic control strategies: 1) amplitude-based direct control (the clinical standard of care), 2) sequential PR control, and 3) simultaneous PR control, allowing for concurrent activation of two degrees of freedom (DOF). We computed throughput (bits/second), path efficiency (%), reaction time (seconds), and overshoot (%), and used general linear models to assess significant differences between the strategies for each metric. We validated the proposed methodology by achieving very high coefficients of determination for Fitts' law. Both PR strategies significantly outperformed direct control in two-DOF targets and were more intuitive to operate. In one-DOF targets, the simultaneous approach was the least precise. Direct control was efficient in one-DOF targets but cumbersome to operate in two-DOF targets through a switch-dependent sequential cursor control. We designed a test capable of comprehensively describing prosthetic control strategies in real time. When implemented on control subjects, the test was able to capture statistically significant differences (p < 0.05) in control strategies when considering throughputs, path efficiencies and reaction times. Of particular note, we found statistically significant (p < 0.01) improvements in throughputs and path efficiencies with simultaneous PR when compared to direct control or sequential PR. Amputees could readily achieve the task; however, a limited number of subjects was tested and a statistical analysis was not performed with that population.
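
    The throughput metric reported in this record (and in its PubMed Central counterpart above) is conventionally computed from Fitts' index of difficulty; a minimal sketch follows, assuming the Shannon formulation, since the abstracts do not state which variant was used.

        import math

        def fitts_metrics(distance, width, movement_time):
            # Shannon-form index of difficulty (bits) and throughput (bits/s)
            # for one target acquisition.
            index_of_difficulty = math.log2(distance / width + 1.0)
            return index_of_difficulty, index_of_difficulty / movement_time

        # A target 10 cm away and 2 cm wide, acquired in 1.3 s:
        # fitts_metrics(10, 2, 1.3) -> (2.58 bits, 1.99 bits/s)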

  10. A sequential analysis of classroom discourse in Italian primary schools: the many faces of the IRF pattern.

    PubMed

    Molinari, Luisa; Mameli, Consuelo; Gnisci, Augusto

    2013-09-01

    A sequential analysis of classroom discourse is needed to investigate the conditions under which the triadic initiation-response-feedback (IRF) pattern may host different teaching orientations. The purpose of the study is twofold: first, to describe the characteristics of classroom discourse and, second, to identify and explore the different interactive sequences that can be captured with a sequential statistical analysis. Twelve whole-class activities were video recorded in three Italian primary schools. We observed classroom interaction as it occurs naturally on an everyday basis. In total, we collected 587 min of video recordings. Subsequently, 828 triadic IRF patterns were extracted from this material and analysed with the programme Generalized Sequential Query (GSEQ). The results indicate that classroom discourse may unfold in different ways. In particular, we identified and described four types of sequences. Dialogic sequences were triggered by authentic questions, and continued through further relaunches. Monologic sequences were directed to fulfil the teachers' pre-determined didactic purposes. Co-constructive sequences fostered deduction, reasoning, and thinking. Scaffolding sequences helped and sustained children with difficulties. The application of sequential analyses allowed us to show that interactive sequences may account for a variety of meanings, thus making a significant contribution to the literature and research practice in classroom discourse. © 2012 The British Psychological Society.

  11. Sequential strand displacement beacon for detection of DNA coverage on functionalized gold nanoparticles.

    PubMed

    Paliwoda, Rebecca E; Li, Feng; Reid, Michael S; Lin, Yanwen; Le, X Chris

    2014-06-17

    Functionalizing nanomaterials for diverse analytical, biomedical, and therapeutic applications requires determination of surface coverage (or density) of DNA on nanomaterials. We describe a sequential strand displacement beacon assay that is able to quantify specific DNA sequences conjugated or coconjugated onto gold nanoparticles (AuNPs). Unlike the conventional fluorescence assay that requires the target DNA to be fluorescently labeled, the sequential strand displacement beacon method is able to quantify multiple unlabeled DNA oligonucleotides using a single (universal) strand displacement beacon. This unique feature is achieved by introducing two short unlabeled DNA probes for each specific DNA sequence and by performing sequential DNA strand displacement reactions. Varying the relative amounts of the specific DNA sequences and spacing DNA sequences during their coconjugation onto AuNPs results in different densities of the specific DNA on AuNP, ranging from 90 to 230 DNA molecules per AuNP. Results obtained from our sequential strand displacement beacon assay are consistent with those obtained from the conventional fluorescence assays. However, labeling of DNA with some fluorescent dyes, e.g., tetramethylrhodamine, alters DNA density on AuNP. The strand displacement strategy overcomes this problem by obviating direct labeling of the target DNA. This method has broad potential to facilitate more efficient design and characterization of novel multifunctional materials for diverse applications.

  12. Sequential growth for lifetime extension in biomimetic polypyrrole actuator systems

    NASA Astrophysics Data System (ADS)

    Sarrazin, J. C.; Mascaro, Stephen A.

    2015-04-01

    Electroactive polymers (EAPs) show promise for use in actuation and manipulation devices due to their low electrical activation requirements, biocompatibility, and mechanical performance. One of the main drawbacks of EAP actuators is a decrease in performance over extended periods of operation caused by over-oxidation of the polymer and general polymer degradation. Synthesis of the EAP material polypyrrole with an embedded metal helix allows for sequential growth of the polymer during operation. The helical metal electrode acts as a scaffolding to support the polymer and directs the 3-dimensional change in volume of the polymer along the axis of the helix during oxidative and reductive cycling. The metal helix also provides a working metal electrode through the entire length of the polymer actuator to distribute charge for actuation, as well as for sequential growth steps during the operational lifetime of the polymer. This work demonstrates that the method of sequential growth can be utilized after extended periods of use to partially restore the electrical and mechanical performance of polypyrrole actuators. Since actuation must be temporarily stopped to allow a sequential growth cycle to be performed and reverse some of the polymer degradation, these actuator systems more closely mimic natural muscle in their analogous maintenance and repair.

  13. Proposed hardware architectures of particle filter for object tracking

    NASA Astrophysics Data System (ADS)

    Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED

    2012-12-01

    In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weighting, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture. This second architecture targets a balance between hardware resources and speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
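
    For readers unfamiliar with the SIRF steps being mapped to hardware, the following behavioural Python sketch shows one sample-weight-output-resample cycle, including a piecewise-linear stand-in for the exponential weight. The scalar state model, the noise levels and the exact ramp are illustrative assumptions, not the paper's design.

        import numpy as np

        rng = np.random.default_rng(1)

        def piecewise_linear_weight(err, scale):
            # Hardware-friendly ramp standing in for exp(-err**2 / (2 * scale**2)).
            return np.clip(1.0 - np.abs(err) / (3.0 * scale), 1e-6, None)

        def sirf_step(particles, z, sigma_q=0.1, sigma_r=0.5):
            # One SIRF cycle: sample (propagate), weight, output, resample.
            particles = particles + rng.normal(0.0, sigma_q, particles.shape)
            w = piecewise_linear_weight(z - particles, sigma_r)
            w /= w.sum()
            estimate = np.sum(w * particles)                          # output
            idx = rng.choice(len(particles), size=len(particles), p=w)
            return particles[idx], estimate                           # resampled set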

  14. Use of personalized Dynamic Treatment Regimes (DTRs) and Sequential Multiple Assignment Randomized Trials (SMARTs) in mental health studies

    PubMed Central

    Liu, Ying; ZENG, Donglin; WANG, Yuanjia

    2014-01-01

    Summary Dynamic treatment regimens (DTRs) are sequential decision rules, tailored at each point where a clinical decision is made, based on each patient’s time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual’s response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples from mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
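
    One standard machine-learning estimator for optimal DTRs from SMART data is backward-induction Q-learning; a minimal two-stage sketch with linear Q-functions follows. The abstract does not name its exact methods, so this is a generic illustration, and all variable names and codings are assumptions.

        import numpy as np

        def q_learn_two_stage(X1, A1, X2, A2, Y):
            # Backward-induction Q-learning for a two-stage SMART with linear
            # Q-functions. X1, X2: (n, p) covariates; A1, A2: (n,) treatments
            # coded -1/+1; Y: (n,) final outcome (larger is better).
            def design(X, A):
                main = np.hstack([np.ones((len(A), 1)), X])
                return np.hstack([main, A[:, None] * main])  # effects + interactions
            beta2, *_ = np.linalg.lstsq(design(X2, A2), Y, rcond=None)
            q2 = lambda A: design(X2, A) @ beta2
            V2 = np.maximum(q2(np.ones(len(Y))), q2(-np.ones(len(Y))))  # stage-2 value
            beta1, *_ = np.linalg.lstsq(design(X1, A1), V2, rcond=None)
            # The estimated optimal stage-2 rule is the sign of the interaction
            # part: sign(np.hstack([1, x2]) @ beta2[X2.shape[1] + 1:]).
            return beta1, beta2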

  15. Speckle pattern sequential extraction metric for estimating the focus spot size on a remote diffuse target.

    PubMed

    Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing

    2017-11-10

    The speckle pattern (line-by-line) sequential extraction (SPSE) metric is proposed based on the one-dimensional speckle intensity level-crossing theory. Through the sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulations, we discuss the SPSE metric's range of application under theoretical conditions and how the aperture size affects the metric performance of the observation system. The results of the analyses are verified by experiment. The method is applied to the detection of relatively static targets (speckle jitter frequency less than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance; moreover, the metric can estimate the spot size under some conditions. Therefore, monitoring and feedback of the far-field spot can be implemented in laser focusing system applications and help the system optimize its focusing performance.

  16. An Overview of the State of the Art in Atomistic and Multiscale Simulation of Fracture

    NASA Technical Reports Server (NTRS)

    Saether, Erik; Yamakov, Vesselin; Phillips, Dawn R.; Glaessgen, Edward H.

    2009-01-01

    The emerging field of nanomechanics is providing a new focus in the study of the mechanics of materials, particularly in simulating fundamental atomic mechanisms involved in the initiation and evolution of damage. Simulating fundamental material processes using first principles in physics strongly motivates the formulation of computational multiscale methods to link macroscopic failure to the underlying atomic processes from which all material behavior originates. This report gives an overview of the state of the art in applying concurrent and sequential multiscale methods to analyze damage and failure mechanisms across length scales.

  17. Adrenal vein sampling in primary aldosteronism: concordance of simultaneous vs sequential sampling.

    PubMed

    Almarzooqi, Mohamed-Karji; Chagnon, Miguel; Soulez, Gilles; Giroux, Marie-France; Gilbert, Patrick; Oliva, Vincent L; Perreault, Pierre; Bouchard, Louis; Bourdeau, Isabelle; Lacroix, André; Therasse, Eric

    2017-02-01

    Many investigators believe that basal adrenal venous sampling (AVS) should be done simultaneously, whereas others opt for sequential AVS for simplicity and reduced cost. This study aimed to evaluate the concordance of sequential and simultaneous AVS methods. Between 1989 and 2015, bilateral simultaneous sets of basal AVS were obtained twice within 5 min, in 188 consecutive patients (59 women and 129 men; mean age: 53.4 years). Selectivity was defined by an adrenal-to-peripheral cortisol ratio ≥2, and lateralization was defined as an adrenal aldosterone-to-cortisol ratio ≥2 times that of the contralateral side. Sequential AVS was simulated using the right sampling at -5 min (t = -5) and the left sampling at 0 min (t = 0). There was no significant difference in mean selectivity ratio (P = 0.12 and P = 0.42 for the right and left sides, respectively) or in mean lateralization ratio (P = 0.93) between t = -5 and t = 0. Kappa for selectivity between the two simultaneous AVS sets was 0.71 (95% CI: 0.60-0.82), whereas it was 0.84 (95% CI: 0.76-0.92) and 0.85 (95% CI: 0.77-0.93) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Kappa for lateralization between the two simultaneous AVS sets was 0.84 (95% CI: 0.75-0.93), whereas it was 0.86 (95% CI: 0.78-0.94) and 0.80 (95% CI: 0.71-0.90) between sequential AVS and simultaneous AVS at -5 min and at 0 min, respectively. Concordance between simultaneous and sequential AVS was not different from that between two repeated simultaneous AVS sets in the same patient. Therefore, better diagnostic performance is not a good argument for selecting the AVS method. © 2017 European Society of Endocrinology.
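
    The study's classification rule and concordance statistic can be sketched in a few lines of Python; the lateralization rule below follows the definition quoted above, while the kappa implementation is the standard Cohen formula (the inputs are plain lists of categorical calls, one entry per patient).

        def lateralization_call(a_left, c_left, a_right, c_right):
            # Aldosterone-to-cortisol ratio on one side >= 2x the contralateral side.
            r_l, r_r = a_left / c_left, a_right / c_right
            if r_l >= 2 * r_r:
                return "left"
            if r_r >= 2 * r_l:
                return "right"
            return "bilateral"

        def cohen_kappa(calls_a, calls_b):
            # Chance-corrected agreement between two lists of categorical calls,
            # e.g. sequential vs simultaneous AVS over the same patients.
            cats = sorted(set(calls_a) | set(calls_b))
            n = len(calls_a)
            po = sum(a == b for a, b in zip(calls_a, calls_b)) / n
            pe = sum((calls_a.count(c) / n) * (calls_b.count(c) / n) for c in cats)
            return (po - pe) / (1 - pe)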

  18. Sequential sentinel SNP Regional Association Plots (SSS-RAP): an approach for testing independence of SNP association signals using meta-analysis data.

    PubMed

    Zheng, Jie; Gaunt, Tom R; Day, Ian N M

    2013-01-01

    Genome-Wide Association Studies (GWAS) frequently incorporate meta-analysis within their framework. However, conditional analysis of individual-level data, which is an established approach for fine mapping of causal sites, is often precluded where only group-level summary data are available for analysis. Here, we present a numerical and graphical approach, "sequential sentinel SNP regional association plot" (SSS-RAP), which estimates regression coefficients (beta) with their standard errors directly from the meta-analysis summary results. Under an additive model, typical for genes with small effect, the effect for a sentinel SNP can be transformed to the predicted effect for a possibly dependent SNP through a 2×2 two-SNP haplotype table. The approach assumes Hardy-Weinberg equilibrium for test SNPs. SSS-RAP is available as a Web tool (http://apps.biocompute.org.uk/sssrap/sssrap.cgi). To develop and illustrate SSS-RAP, we analyzed lipid and ECG trait data from the British Women's Heart and Health Study (BWHHS), evaluated a meta-analysis for an ECG trait and present several simulations. We compared the results with existing approaches such as model selection methods and conditional analysis. Findings were generally consistent. SSS-RAP represents a tool for testing the independence of SNP association signals using meta-analysis data, and is also a convenient approach, based on biological principles, for fine mapping with group-level summary data. © 2012 Blackwell Publishing Ltd/University College London.

  19. Formation of fivefold deformation twins in nanocrystalline face-centered-cubic copper based on molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, A. J.; Wei, Y. G.

    2006-07-24

    Fivefold deformation twins were recently reported to be observed in experiments on nanocrystalline face-centered-cubic metals and alloys. However, they were not predicted by previous molecular dynamics (MD) simulations, and the reason was thought to be the uniaxial tension considered in the simulations. In the present investigation, by introducing pretwins in the grain regions and using MD simulations, the authors predict fivefold deformation twins in the grain regions of a nanocrystalline grain cell undergoing uniaxial tension. The simulation results show that series of Shockley partial dislocations emitted from grain boundaries provide a sequential twinning mechanism, which results in fivefold deformation twins.

  20. On extending parallelism to serial simulators

    NASA Technical Reports Server (NTRS)

    Nicol, David; Heidelberger, Philip

    1994-01-01

    This paper describes an approach to discrete event simulation modeling that appears to be effective for developing portable and efficient parallel execution of models of large distributed systems and communication networks. In this approach, the modeler develops submodels using an existing sequential simulation modeling tool, using the full expressive power of the tool. A set of modeling language extensions permits automatically synchronized communication between submodels; however, the automation requires that any such communication must take a nonzero amount of simulation time. Within this modeling paradigm, a variety of conservative synchronization protocols can transparently support conservative execution of submodels on potentially different processors. A specific implementation of this approach, U.P.S. (Utilitarian Parallel Simulator), is described, along with performance results on the Intel Paragon.

  1. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

    A brief introduction to plasma simulation using computers and the difficulties encountered on currently available computers is given. Through the use of an analyzing and measuring methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements, the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have an implicitly parallel nature.

  2. Application of the sequential quadratic programming algorithm for reconstructing the distribution of optical parameters based on the time-domain radiative transfer equation.

    PubMed

    Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming

    2016-10-17

    Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
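
    As a rough illustration of casting such a reconstruction as an SQP problem, the sketch below uses SciPy's SLSQP implementation with a simple first-difference smoothness penalty standing in for the paper's GGMRF regularization; the TD-RTE forward model and the adjoint-based gradient are not reproduced, and all names are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        def reconstruct(forward, y_meas, mu0, lam=1e-2, bounds=None):
            # Recover optical parameters mu from time-resolved signals with an
            # SQP-type solver. forward(mu) is the TD-RTE forward model (not
            # reproduced here); a smoothness penalty stands in for the GGMRF term.
            def objective(mu):
                r = forward(mu) - y_meas
                return 0.5 * np.dot(r, r) + lam * np.sum(np.diff(mu) ** 2)
            return minimize(objective, mu0, method="SLSQP", bounds=bounds).x

        # mu_hat = reconstruct(td_rte_model, signals, mu_init,
        #                      bounds=[(1e-3, 10.0)] * len(mu_init))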

  3. Discrete filtering techniques applied to sequential GPS range measurements

    NASA Technical Reports Server (NTRS)

    Vangraas, Frank

    1987-01-01

    The basic navigation solution is described for position and velocity based on range and delta-range (Doppler) measurements from NAVSTAR Global Positioning System satellites. The application of discrete filtering techniques to reduce the white-noise distortions on the sequential range measurements is examined. A second-order (position and velocity states) Kalman filter is implemented to obtain smoothed estimates of range by filtering the dynamics of the signal from each satellite separately. Test results using a simulated GPS receiver show a steady-state noise reduction (input noise variance divided by output noise variance) of a factor of four. Recommendations for further noise reduction based on higher-order Kalman filters or additional delta-range measurements are included.
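
    A second-order (position and velocity state) Kalman filter of the kind described can be sketched in a few lines of numpy, applied per satellite to the sequential range measurements; the process and measurement noise values below are illustrative assumptions, not the paper's tuning.

        import numpy as np

        def smooth_ranges(z, dt=1.0, q=1e-3, r=4.0):
            # Second-order (range, range-rate) Kalman filter applied to the
            # sequential range measurements from a single satellite.
            F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity model
            Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                              [dt**2 / 2, dt]])          # process noise
            H = np.array([[1.0, 0.0]])                   # only range is measured
            x, P = np.array([z[0], 0.0]), 100.0 * np.eye(2)
            smoothed = []
            for zk in z:
                x, P = F @ x, F @ P @ F.T + Q            # predict
                K = P @ H.T / (H @ P @ H.T + r)          # gain (scalar innovation)
                x = x + (K * (zk - H @ x)).ravel()       # update state
                P = (np.eye(2) - K @ H) @ P              # update covariance
                smoothed.append(x[0])
            return np.array(smoothed)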

  4. Parallel solution of closely coupled systems

    NASA Technical Reports Server (NTRS)

    Utku, S.; Salama, M.

    1986-01-01

    The odd-even permutation and associated unitary transformations for reordering the coefficient matrix A are employed as a means of breaking the strong seriality which is characteristic of closely coupled systems. The nested dissection technique is also reviewed, and the equivalence between reordering A and dissecting its network is established. The effect of transforming A with the odd-even permutation on its topology and on the topology of its Cholesky factors is discussed. This leads to the construction of directed graphs showing the computational steps required for factoring A, their precedence relationships and their sequential and concurrent assignment to the available processors. Expressions for the speed-up and efficiency of using N processors in parallel relative to the sequential use of a single processor are derived from the directed graph. Similar expressions are also derived for the case when the number of available processors is fewer than required.
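
    The odd-even (red-black) reordering itself is easy to illustrate: applied to a tridiagonal, closely coupled system it produces a 2x2 block structure whose diagonal blocks are themselves diagonal, exposing the concurrency. A short numpy sketch follows; the example matrix is an assumption chosen for clarity.

        import numpy as np

        def odd_even_permutation(n):
            # Unknowns 1, 3, 5, ... first, then 2, 4, 6, ... (1-based numbering).
            idx = np.r_[np.arange(0, n, 2), np.arange(1, n, 2)]
            return np.eye(n)[idx]

        n = 8
        A = (np.diag(np.full(n, 2.0))
             + np.diag(-np.ones(n - 1), 1)
             + np.diag(-np.ones(n - 1), -1))   # tridiagonal: a tightly coupled chain
        P = odd_even_permutation(n)
        B = P @ A @ P.T                        # 2x2 block form; the diagonal blocks
                                               # are diagonal, so each half can be
                                               # eliminated concurrently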

  5. Structural insights into photocatalytic performance of carbon nitrides for degradation of organic pollutants

    NASA Astrophysics Data System (ADS)

    Oh, Junghoon; Shim, Yeonjun; Lee, Soomin; Park, Sunghee; Jang, Dawoon; Shin, Yunseok; Ohn, Saerom; Kim, Jeongho; Park, Sungjin

    2018-02-01

    Degradation of organic pollutants has a large environmental impact, with graphitic carbon nitride (g-C3N4) being a promising metal-free, low cost, and environment-friendly photocatalyst well suited for this purpose. Herein, we investigate the photocatalytic performance of g-C3N4-based materials and correlate it with their structural properties, using three different precursors (dicyandiamide, melamine, and urea) and two heating processes (direct heating at 550 °C and sequential heating at 300 and 550 °C) to produce the above photocatalysts. We further demonstrate that sequential heating produces photocatalysts with grain sizes and activities larger than those of the catalysts produced by direct heating and that the use of urea as a precursor affords photocatalysts with larger surface areas, allowing efficient rhodamine B degradation under visible light.

  6. Imaging sequential dehydrogenation of methanol on Cu(110) with a scanning tunneling microscope.

    PubMed

    Kitaguchi, Y; Shiotari, A; Okuyama, H; Hatta, S; Aruga, T

    2011-05-07

    Adsorption of methanol and its dehydrogenation on Cu(110) were studied by using a scanning tunneling microscope (STM). Upon adsorption at 12 K, methanol preferentially forms clusters on the surface. The STM could induce dehydrogenation of methanol sequentially to methoxy and formaldehyde, which enabled us to study the binding structures of these products at the single-molecule limit. Methoxy was imaged as a paired protrusion and depression along the [001] direction. This feature is fully consistent with the previous result that it adsorbs on the short-bridge site with the C-O axis tilted along the [001] direction. The axis was induced to flip back and forth by vibrational excitations with the STM. Two configurations were observed for formaldehyde, whose structures were proposed based on their characteristic images and motions.

  7. Electrical conductivity of a monolayer produced by random sequential adsorption of linear k-mers onto a square lattice

    NASA Astrophysics Data System (ADS)

    Tarasevich, Yuri Yu.; Goltseva, Valeria A.; Laptev, Valeri V.; Lebovka, Nikolai I.

    2016-10-01

    The electrical conductivity of a monolayer produced by the random sequential adsorption (RSA) of linear k-mers (particles occupying k adjacent adsorption sites) onto a square lattice was studied by means of computer simulation. Overlapping with predeposited k-mers and detachment from the surface were forbidden. The RSA process continued until the saturation jamming limit, pj. The isotropic (equiprobable orientations of k-mers along the x and y axes) and anisotropic (all k-mers aligned along the y axis) depositions were examined for two different models: an insulating substrate with conducting k-mers (C model) and a conducting substrate with insulating k-mers (I model). The Frank-Lobb algorithm was applied to calculate the electrical conductivity in both the x and y directions for different lengths (k = 1-128) and concentrations (p = 0-pj) of the k-mers. The "intrinsic electrical conductivity" and the concentration dependence of the relative electrical conductivity Σ(p) (Σ = σ/σm for the C model and Σ = σm/σ for the I model, where σm is the electrical conductivity of the substrate) in different directions were analyzed. At large values of k the Σ(p) curves became very similar, and they almost coincided at k = 128. Moreover, for both models, the greater the length of the k-mers, the smoother the functions Σxy(p), Σx(p) and Σy(p). For the more practically important C model, the other interesting findings are (i) for large values of k (k = 64, 128), the values of Σxy and Σy increase rapidly with the initial increase of p from 0 to 0.1; (ii) for k ≥ 16, all the Σxy(p) and Σx(p) curves intersect each other at the same isoconductivity points; (iii) for anisotropic deposition, the percolation concentrations are the same in the x and y directions, whereas at the percolation point the greater the length of the k-mers, the larger the anisotropy of the electrical conductivity, i.e., the ratio σy/σx (> 1).
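
    A minimal sketch of the deposition process itself is given below: isotropic RSA of k-mers on a square lattice with no overlap and no detachment. The lattice size, k, and the fixed attempt budget (used as a stand-in for a strict jamming test) are illustrative choices, and open boundaries are assumed for simplicity.

    ```python
    # Sketch: random sequential adsorption of linear k-mers on an L x L lattice.
    import numpy as np

    def rsa_coverage(L=64, k=4, attempts_per_site=100, seed=0):
        rng = np.random.default_rng(seed)
        lattice = np.zeros((L, L), dtype=bool)
        placed = 0
        for _ in range(attempts_per_site * L * L):
            x, y = rng.integers(0, L, size=2)
            if rng.random() < 0.5:                     # horizontal k-mer
                if x + k <= L and not lattice[y, x:x + k].any():
                    lattice[y, x:x + k] = True
                    placed += 1
            else:                                      # vertical k-mer
                if y + k <= L and not lattice[y:y + k, x].any():
                    lattice[y:y + k, x] = True
                    placed += 1
        return placed * k / L**2                       # coverage p

    # A large fixed attempt budget only approximates the jamming limit pj;
    # a strict test would verify that no admissible placement remains.
    print("coverage near jamming:", rsa_coverage())
    ```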

  8. Development of Cranial Bone Surrogate Structures Using Stereolithographic Additive Manufacturing

    DTIC Science & Technology

    2017-09-29

    shown in Fig. 5. With each cycle, a blade is passed across the platform to create a uniform layer of resin. The resin layer is exposed to a UV laser...due to the direction in which the layers are deposited. In both cases, the sequential layers run parallel to the loading direction of the tensile

  9. Comparing Parent-Child Interactions in the Clinic and at Home: An Exploration of the Validity of Clinical Behavior Observations Using Sequential Analysis

    ERIC Educational Resources Information Center

    Shriver, Mark D.; Frerichs, Lynae J.; Williams, Melissa; Lancaster, Blake M.

    2013-01-01

    Direct observation is often considered the "gold standard" for assessing the function, frequency, and intensity of problem behavior. Currently, the literature investigating the construct validity of direct observation conducted in the clinic setting reveals conflicting results. Previous studies on the construct validity of clinic-based…

  10. Observer Training Revisited: A Comparison of in Vivo and Video Instruction

    ERIC Educational Resources Information Center

    Dempsey, Carrie M.; Iwata, Brian A.; Fritz, Jennifer N.; Rolider, Natalie U.

    2012-01-01

    We compared the effects of 2 observer-training procedures. In vivo training involved practice during actual treatment sessions. Video training involved practice while watching progressively more complex simulations. Fifty-nine undergraduate students entered 1 of the 2 training conditions sequentially according to an ABABAB design. Results showed…

  11. Say again? How complexity and format of air traffic control instructions affect pilot recall

    DOT National Transportation Integrated Search

    1999-01-01

    This study compared the recall of ATC information presented in either grouped or sequential format in a part-task simulation. It also tested the effect of complexity of ATC clearances on recall, that is, how many pieces of information a single tr...

  12. AN IN VITRO GASTROINTESTINAL METHOD TO ESTIMATE BIOAVAILABLE ARSENIC IN CONTAMINATED SOILS AND SOLID MEDIA. (R825410)

    EPA Science Inventory

    A method was developed to simulate the human gastrointestinal environment and to estimate bioavailability of arsenic in contaminated soil and solid media. In this in vitro gastrointestinal (IVG) method, arsenic is sequentially extracted from contaminated soil with ...

  13. Effect of Sequential Treatment with Bisphosphonates After Teriparatide in Ovariectomized Rats: A Direct Comparison Between Risedronate and Alendronate.

    PubMed

    Yano, Tetsuo; Yamada, Mei; Inoue, Daisuke

    2017-07-01

    Teriparatide (TPTD), a recombinant human parathyroid hormone N-terminal fragment (1-34), is a widely used bone anabolic drug for osteoporosis. Sequential treatment with antiresorptives such as bisphosphonates after TPTD discontinuation is generally recommended. However, the relative effects of different bisphosphonates have not been determined. In the present study, we directly compared the effects of risedronate (RIS) and alendronate (ALN) on bone mineral density (BMD), bone turnover, structural properties and strength in ovariectomized (OVX) rats when administered after TPTD. Female Sprague Dawley rats were divided into one sham-operated and eight ovariectomized groups. TPTD, RIS, and ALN were given subcutaneously twice per week for 4 or 8 weeks after a 4-week treatment with TPTD. TPTD significantly increased BMD (+9.6%) in OVX rats after 4 weeks of treatment. Eight weeks after TPTD withdrawal, the vehicle-treated group showed a blunted BMD increase of +8.4% from baseline. In contrast, 8 weeks of treatment with RIS and ALN significantly increased BMD by 17.4% and 21.8%, respectively. While ALN caused a consistently larger increase in BMD, sequential treatment with RIS resulted in lower Tb.Sp compared to ALN in the fourth lumbar vertebra, as well as in greater stiffness in the compression test. In conclusion, the present study demonstrated that sequential therapy with either ALN or RIS after TPTD improved bone mass and structure. Our results further suggest that RIS may have a greater effect on improving bone quality and stiffness than ALN despite a less prominent effect on BMD. Further studies are necessary to determine the clinical relevance of these findings to fracture rate.

  14. Persistence of opinion in the Sznajd consensus model: computer simulation

    NASA Astrophysics Data System (ADS)

    Stauffer, D.; de Oliveira, P. M. C.

    2002-12-01

    The density of never-changed opinions during the Sznajd consensus-finding process decays with time t as 1/t^θ. We find θ ≈ 3/8 for a chain, compatible with the exact Ising result of Derrida et al. In higher dimensions, however, the exponent differs from the Ising θ. With simultaneous updating of sublattices instead of the usual random sequential updating, the number of persistent opinions decays roughly exponentially. Some of the simulations used multi-spin coding.
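
    A minimal sketch of the 1D measurement is given below: the standard Sznajd rule (an agreeing neighbor pair convinces its two outer neighbors) with random sequential updating, tracking the fraction of agents that have never changed opinion. The chain length and observation times are illustrative, and no multi-spin coding is attempted.

    ```python
    # Sketch: persistence decay in the 1D Sznajd model, random sequential updates.
    import numpy as np

    rng = np.random.default_rng(2)
    N = 2_000
    s = rng.choice([-1, 1], size=N)
    never_flipped = np.ones(N, dtype=bool)

    def set_opinion(j, val):
        j %= N                          # periodic boundary
        if s[j] != val:
            s[j] = val
            never_flipped[j] = False

    prev = 0
    for t in (1, 10, 100):              # measurement times in sweeps
        for _ in range((t - prev) * N): # one sweep = N update attempts
            i = int(rng.integers(N))
            if s[i] == s[(i + 1) % N]:  # an agreeing pair convinces
                set_opinion(i - 1, s[i])    # its two outer neighbors
                set_opinion(i + 2, s[i])
        prev = t
        print(f"t={t} sweeps: persistence = {never_flipped.mean():.4f}")
    ```

    Plotting the printed persistence against t on log-log axes should show an approximate power-law decay, whose slope can be compared with the quoted θ ≈ 3/8.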

  15. Class of cooperative stochastic models: Exact and approximate solutions, simulations, and experiments using ionic self-assembly of nanoparticles.

    PubMed

    Mazilu, I; Mazilu, D A; Melkerson, R E; Hall-Mejia, E; Beck, G J; Nshimyumukiza, S; da Fonseca, Carlos M

    2016-03-01

    We present exact and approximate results for a class of cooperative sequential adsorption models using matrix theory, mean-field theory, and computer simulations. We validate our models with two customized experiments using ionically self-assembled nanoparticles on glass slides. We also address the limitations of our models and their range of applicability. The exact results obtained using matrix theory can be applied to a variety of two-state systems with cooperative effects.

  16. SU-E-T-512: Electromagnetic Simulations of the Dielectric Wall Accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uselmann, A; Mackie, T

    Purpose: To characterize and parametrically study the key components of a dielectric wall accelerator through electromagnetic modeling and particle tracking. Methods: Electromagnetic and particle tracking simulations were performed using a commercial code (CST Microwave Studio, CST Inc.) utilizing the finite integration technique. A dielectric wall accelerator consists of a series of stacked transmission lines sequentially fired in synchrony with an ion pulse. Numerous properties of the stacked transmission lines, including geometric, material, and electronic properties, were analyzed and varied in order to assess their impact on the transverse and axial electric fields. Additionally, stacks of transmission lines were simulated in order to quantify the parasitic effect observed in closely packed lines. Particle tracking simulations using the particle-in-cell method were performed on the various stacks to determine the impact of the above properties on the resultant phase space of the ions. Results: Examination of the simulation results shows that novel geometries can shape the accelerating pulse in order to reduce the energy spread and increase the average energy of accelerated ions. Parasitic effects were quantified for various geometries and found to vary with distance from the end of the transmission line and along the beam axis. An optimal arrival time of an ion pulse relative to the triggering of the transmission lines for a given geometry was determined through parametric study. Benchmark simulations of single transmission lines agree well with published experimental results. Conclusion: This work characterized the behavior of the transmission lines used in a dielectric wall accelerator and used this information to improve them in novel ways. Utilizing novel geometries, we were able to improve the accelerating gradient and phase space of the accelerated particle bunch. Through simulation, we were able to discover and optimize design issues with the device at low cost. Funding: Morgridge Institute for Research, Madison WI; Conflict of Interest: Dr. Mackie is an investor and board member at CPAC, a company developing compact accelerator designs similar to those discussed in this work, but designs discussed are not directed by CPAC.

  17. Monaural Sound Localization Based on Reflective Structure and Homomorphic Deconvolution

    PubMed Central

    Park, Yeonseok; Choi, Anthony

    2017-01-01

    The asymmetric structure around the receiver provides a particular time delay for the specific incoming propagation. This paper designs a monaural sound localization system based on the reflective structure around the microphone. The reflective plates are placed to present the direction-wise time delay, which is naturally processed by convolutional operation with a sound source. The received signal is separated for estimating the dominant time delay by using homomorphic deconvolution, which utilizes the real cepstrum and inverse cepstrum sequentially to derive the propagation response’s autocorrelation. Once the localization system accurately estimates this information, the time delay model computes the corresponding reflection for localization. Because of structural limitations, the localization process performs the estimation in two stages: range and angle. The software toolchain from propagation physics and algorithm simulation realizes the optimal 3D-printed structure. The acoustic experiments in the anechoic chamber show that 79.0% of the study range data from the isotropic signal is properly detected by the response value, and 87.5% of the specific direction data from the study range signal is properly estimated by the response time. The product of both rates gives an overall hit rate of 69.1%. PMID:28946625
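
    The core signal-processing step, estimating a reflection's delay from the real cepstrum, can be sketched as below. The synthetic broadband source, the single-echo channel, and all numerical settings are illustrative assumptions, not the authors' measured propagation responses.

    ```python
    # Sketch: echo-delay estimation with the real cepstrum (homomorphic step).
    import numpy as np

    rng = np.random.default_rng(3)
    fs = 48_000                          # sample rate (Hz)
    n = 4096
    src = rng.normal(size=n)             # broadband source signal
    delay, a = 120, 0.6                  # echo delay (samples) and strength

    # Received signal = direct path + one structural reflection
    x = src.copy()
    x[delay:] += a * src[:-delay]

    # Real cepstrum: IFFT of the log magnitude spectrum. An echo with
    # delay d contributes peaks at quefrencies d, 2d, ...
    spec = np.fft.rfft(x)
    ceps = np.fft.irfft(np.log(np.abs(spec) + 1e-12))

    search = ceps[20:n // 2]             # skip low quefrencies (source envelope)
    est = 20 + int(np.argmax(search))
    print(f"true delay {delay} samples, estimated {est} samples "
          f"({est / fs * 1e3:.2f} ms)")
    ```

    In the localization system, each reflective plate imposes a direction-dependent delay of this kind, so the estimated quefrency peak can be mapped back to an incoming direction.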

  18. Causal mediation analysis with a latent mediator.

    PubMed

    Albert, Jeffrey M; Geng, Cuiyu; Nelson, Suchitra

    2016-05-01

    Health researchers are often interested in assessing the direct effect of a treatment or exposure on an outcome variable, as well as its indirect (or mediation) effect through an intermediate variable (or mediator). For an outcome following a nonlinear model, the mediation formula may be used to estimate causally interpretable mediation effects. This method, like others, assumes that the mediator is observed. However, as is common in structural equations modeling, we may wish to consider a latent (unobserved) mediator. We follow a potential outcomes framework and assume a generalized structural equations model (GSEM). We provide maximum-likelihood estimation of GSEM parameters using an approximate Monte Carlo EM algorithm, coupled with a mediation formula approach to estimate natural direct and indirect effects. The method relies on an untestable sequential ignorability assumption; we assess robustness to this assumption by adapting a recently proposed method for sensitivity analysis. Simulation studies show good properties of the proposed estimators in plausible scenarios. Our method is applied to a study of the effect of mother education on occurrence of adolescent dental caries, in which we examine possible mediation through latent oral health behavior. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Dosimetric effects of patient rotational setup errors on prostate IMRT treatments

    NASA Astrophysics Data System (ADS)

    Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.

    2006-10-01

    The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. In order to implement this, different rotational setup errors around three Cartesian axes were simulated for five prostate patients and dosimetric indices, such as dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of treatment beams and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, the rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased about 1%. For seminal vesicle, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that the rotational setup error degrades the dosimetric coverage of target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.

  20. Challenges in predicting climate change impacts on pome fruit phenology

    NASA Astrophysics Data System (ADS)

    Darbyshire, Rebecca; Webb, Leanne; Goodwin, Ian; Barlow, E. W. R.

    2014-08-01

    Climate projection data were applied to two commonly used pome fruit flowering models to investigate potential differences in predicted full bloom timing. The two methods, fixed thermal time and sequential chill-growth, produced different results for seven apple and pear varieties at two Australian locations. The fixed thermal time model predicted incremental advancement of full bloom, while results were mixed from the sequential chill-growth model. To further investigate how the sequential chill-growth model reacts under climate-perturbed conditions, four simulations were created to represent a wider range of species physiological requirements. These were applied to five Australian locations covering varied climates. Lengthening of the chill period and contraction of the growth period were common to most results. The relative dominance of the chill or growth component tended to determine whether full bloom advanced, remained similar or was delayed with climate warming. The simplistic structure of the fixed thermal time model and its exclusion of winter chill conditions indicate it is unlikely to be suitable for projection analyses. The sequential chill-growth model includes greater complexity; however, reservations about using this model for impact analyses remain. The results demonstrate that appropriate representation of physiological processes is essential to adequately predict changes to full bloom under climate-perturbed conditions, and that further model development is needed.

  1. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    PubMed

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
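
    The mechanics of such a model can be illustrated with a minimal random-walk accumulator whose two key parameters mirror those manipulated above: a starting-point Criterion (response bias) and an absorbing Threshold (speed-accuracy trade-off). This is a generic sequential sampling sketch with invented settings, not the authors' fitted model.

    ```python
    # Sketch: random-walk sequential sampling with bias and threshold parameters.
    import numpy as np

    def simulate(drift, criterion, threshold, n_trials=10_000, seed=4):
        rng = np.random.default_rng(seed)
        rt = np.empty(n_trials)
        resp = np.empty(n_trials, dtype=int)
        for i in range(n_trials):
            evidence, t = criterion, 0          # biased starting point
            while abs(evidence) < threshold:    # accumulate noisy evidence
                evidence += drift + rng.normal(scale=1.0)
                t += 1
            rt[i], resp[i] = t, int(evidence > 0)   # 1 = "conflict" response
        return rt.mean(), resp.mean()

    for thr in (5.0, 15.0):                     # speed vs. accuracy emphasis
        for crit in (-2.0, 0.0, 2.0):           # conservative ... liberal bias
            mean_rt, p_conflict = simulate(drift=0.2, criterion=crit,
                                           threshold=thr)
            print(f"threshold={thr:4.1f} criterion={crit:+.1f} -> "
                  f"mean RT={mean_rt:6.1f} steps, P(conflict)={p_conflict:.2f}")
    ```

    Raising the threshold slows responses but makes them more accurate, while shifting the starting criterion changes response proportions with little cost in time, which is the dissociation the study exploits.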

  2. Direct Synthesis of Medium-Bridged Twisted Amides via a Transannular Cyclization Strategy

    PubMed Central

    Szostak, Michal; Aubé, Jeffrey

    2009-01-01

    Sequential RCM to construct a challenging medium-sized ring, followed by transannular cyclization across that ring, delivers previously unattainable twisted amides from simple acyclic precursors. PMID:19708701

  3. Applying Reduced Generator Models in the Coarse Solver of Parareal in Time Parallel Power System Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Nan; Dimitrovski, Aleksandar D; Simunovic, Srdjan

    2016-01-01

    The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time implementation of power system simulations. One approach uses the Parareal in time framework. The Parareal algorithm has shown promising theoretical simulation speedups by temporally decomposing a simulation run into a coarse simulation on the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potential of the Parareal algorithm. This paper studies a Parareal implementation using reduced generator models for the coarse solver and reports the testing results on the IEEE 39-bus system and a 327-generator 2383-bus Polish system model.
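
    The Parareal iteration itself is compact enough to sketch on a scalar test problem. Below, a cheap coarse propagator and a finer one stand in for the reduced and detailed generator models; the decaying ODE, forward-Euler propagators, and slice counts are all illustrative assumptions.

    ```python
    # Sketch: Parareal iteration U[n+1] = G(U[n]^new) + F(U[n]^old) - G(U[n]^old)
    # on the scalar ODE u' = -u with exact solution exp(-t).
    import numpy as np

    def propagate(u0, t0, t1, steps, f):
        """Forward-Euler propagator with the given number of substeps."""
        u, dt = u0, (t1 - t0) / steps
        for _ in range(steps):
            u = u + dt * f(u)
        return u

    f = lambda u: -u                    # stand-in system dynamics
    T, n_slices, u0 = 2.0, 8, 1.0
    ts = np.linspace(0.0, T, n_slices + 1)
    G = lambda u, a, b: propagate(u, a, b, 1, f)      # coarse solver
    F = lambda u, a, b: propagate(u, a, b, 100, f)    # fine solver

    # Initial guess from the coarse sweep alone
    U = np.empty(n_slices + 1); U[0] = u0
    for n in range(n_slices):
        U[n + 1] = G(U[n], ts[n], ts[n + 1])

    for k in range(4):                  # Parareal corrections
        # Fine solves use the previous iterate and are independent per slice,
        # so in a real implementation they run in parallel.
        Fu = [F(U[n], ts[n], ts[n + 1]) for n in range(n_slices)]
        Gu_old = [G(U[n], ts[n], ts[n + 1]) for n in range(n_slices)]
        for n in range(n_slices):       # sequential coarse correction sweep
            U[n + 1] = G(U[n], ts[n], ts[n + 1]) + Fu[n] - Gu_old[n]
        print(f"iteration {k + 1}: |error at T| = {abs(U[-1] - np.exp(-T)):.2e}")
    ```

    The speedup argument is visible in the structure: the expensive fine solves parallelize across slices, while only the cheap coarse sweep remains sequential, which is why reducing the coarse solver's cost matters so much.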

  4. Sequential Gaussian co-simulation of rate decline parameters of longwall gob gas ventholes.

    PubMed

    Karacan, C Özgen; Olea, Ricardo A

    2013-04-01

    Gob gas ventholes (GGVs) are used to control methane inflows into a longwall mining operation by capturing the gas within the overlying fractured strata before it enters the work environment. Using geostatistical co-simulation techniques, this paper maps the parameters of their rate decline behaviors across the study area, a longwall mine in the Northern Appalachian basin. Geostatistical gas-in-place (GIP) simulations were performed, using data from 64 exploration boreholes, and GIP data were mapped within the fractured zone of the study area. In addition, methane flowrates monitored from 10 GGVs were analyzed using decline curve analyses (DCA) techniques to determine parameters of decline rates. Surface elevation showed the most influence on methane production from GGVs and thus was used to investigate its relation with DCA parameters using correlation techniques on normal-scored data. Geostatistical analysis was pursued using sequential Gaussian co-simulation with surface elevation as the secondary variable and with DCA parameters as the primary variables. The primary DCA variables were effective percentage decline rate, rate at production start, rate at the beginning of forecast period, and production end duration. Co-simulation results were presented to visualize decline parameters at an area-wide scale. Wells located at lower elevations, i.e., at the bottom of valleys, tend to perform better in terms of their rate declines compared to those at higher elevations. These results were used to calculate drainage radii of GGVs using GIP realizations. The calculated drainage radii are close to ones predicted by pressure transient tests.
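
    The decline-curve step can be illustrated with a small fit. The paper does not specify the decline family used here, so the sketch below assumes an Arps exponential decline with synthetic monthly flowrates; the function names and parameter values are illustrative.

    ```python
    # Sketch: fitting decline-curve parameters (initial rate, decline rate)
    # to a venthole's flowrate history, assuming Arps exponential decline.
    import numpy as np
    from scipy.optimize import curve_fit

    def exponential_decline(t, q_i, D):
        """Arps exponential decline: initial rate q_i, nominal decline D."""
        return q_i * np.exp(-D * t)

    rng = np.random.default_rng(5)
    t = np.arange(0.0, 24.0)                      # months since production start
    q_true = exponential_decline(t, 500.0, 0.08)  # illustrative truth
    q_obs = q_true * (1 + 0.05 * rng.normal(size=t.size))

    (q_i, D), _ = curve_fit(exponential_decline, t, q_obs, p0=(400.0, 0.05))
    effective_decline = 1.0 - np.exp(-D)          # effective decline per month
    print(f"q_i = {q_i:.1f}, nominal D = {D:.3f}/month, "
          f"effective decline = {100 * effective_decline:.1f}%/month")
    ```

    Fitted per-well values of this kind (rate at production start, effective decline rate, and so on) are what serve as the primary variables in the subsequent sequential Gaussian co-simulation, with surface elevation as the secondary variable.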

  5. Dietary fibers from mushroom Sclerotia: 2. In vitro mineral binding capacity under sequential simulated physiological conditions of the human gastrointestinal tract.

    PubMed

    Wong, Ka-Hing; Cheung, Peter C K

    2005-11-30

    The in vitro mineral binding capacities of three novel dietary fibers (DFs) prepared from mushroom sclerotia, namely, Pleurotus tuber-regium, Polyporous rhinocerus, and Wolfiporia cocos, for Ca, Mg, Cu, Fe, and Zn under sequential simulated physiological conditions of the human stomach, small intestine, and colon were investigated and compared. Apart from releasing most of their endogenous Ca (96.9 to 97.9% removal) and Mg (95.9 to 96.7% removal), simulated physiological conditions of the stomach also attenuated the possible adverse binding effect of the three sclerotial DFs on exogenous minerals by lowering their cation-exchange capacity (by 20.8 to 32.3%) and removing a substantial amount of their potential mineral chelators, including protein (16.2 to 37.8%) and phytate (58.5 to 64.2%). The in vitro mineral binding capacity of the three sclerotial DFs under simulated physiological conditions of the small intestine was found to be low, especially for Ca (4.79 to 5.91% binding) and Mg (3.16 to 4.18% binding), and was highly correlated (r > 0.97) with their residual protein contents. Under simulated physiological conditions of the colon with slightly acidic pH (5.80), only bound Ca was readily released (34.2 to 72.3% release) from the three sclerotial DFs, and their potential enhancing effect on passive Ca absorption in the human large intestine is also discussed.

  6. Design of the biosonar simulator for dolphin's clicks waveform reproduction

    NASA Astrophysics Data System (ADS)

    Ishii, Ken; Akamatsu, Tomonari; Hatakeyama, Yoshimi

    1992-03-01

    The emitted clicks of Dall's porpoises consist of a pulse train of burst signals with an ultrasonic carrier frequency. The authors have designed a biosonar simulator to reproduce the waveforms associated with a dolphin's clicks underwater. The total reproduction system consists of a click signal acquisition block, a waveform analysis block, a memory unit, a click simulator, and an underwater ultrasonic transmitter. In operation, data stored in an EPROM (Erasable Programmable Read Only Memory) are read out sequentially by a fast clock and converted to analog output signals. An ultrasonic power amplifier then reproduces these signals through the transmitter. The click signal replaying block, which is what simulates the clicks, is referred to as the BSS (Biosonar Simulator) and is described in detail in this report. A unit waveform is defined and divided into a burst period and a waiting period; clicks are a sequence based on this unit waveform, with digital waveform data read out sequentially from the EPROM. The basic parameters of the BSS are as follows: (1) reading clock, 100 ns to 25.4 microseconds; (2) number of reading clocks, 34 to 1024; (3) counter clock in a waiting period, 100 ns to 25.4 microseconds; (4) number of counter clocks, zero to 16,777,215; (5) number of burst/waiting repetition cycles, one to 128; and (6) transmission level adjustment by a programmable attenuator, zero to 86.5 dB. These basic functions enable the BSS to replay the clicks of Dall's porpoises precisely.
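
    The burst/waiting structure of the unit waveform can be sketched in software. The carrier frequency, sample counts, and windowing below are illustrative stand-ins within the parameter ranges quoted above, not the BSS hardware's actual settings.

    ```python
    # Sketch: generating a burst/waiting click train in the spirit of the
    # BSS unit waveform (short ultrasonic burst, silent wait, repeated).
    import numpy as np

    fs = 1.0e6                      # playback clock: one sample per microsecond
    f_carrier = 130e3               # ultrasonic carrier (Hz), illustrative
    burst_samples = 100             # "reading clock" count for the burst
    wait_samples = 2000             # counter count for the waiting period
    n_repeats = 8                   # burst/waiting repetition cycles

    t = np.arange(burst_samples) / fs
    burst = np.sin(2 * np.pi * f_carrier * t) * np.hanning(burst_samples)
    unit = np.concatenate([burst, np.zeros(wait_samples)])   # unit waveform
    clicks = np.tile(unit, n_repeats)                          # click train

    print(f"click train: {clicks.size} samples, "
          f"{clicks.size / fs * 1e3:.1f} ms at fs = {fs / 1e6:.0f} MHz")
    ```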

  7. Sequential Gaussian co-simulation of rate decline parameters of longwall gob gas ventholes

    USGS Publications Warehouse

    Karacan, C. Özgen; Olea, Ricardo A.

    2013-01-01

    Gob gas ventholes (GGVs) are used to control methane inflows into a longwall mining operation by capturing the gas within the overlying fractured strata before it enters the work environment. Using geostatistical co-simulation techniques, this paper maps the parameters of their rate decline behaviors across the study area, a longwall mine in the Northern Appalachian basin. Geostatistical gas-in-place (GIP) simulations were performed, using data from 64 exploration boreholes, and GIP data were mapped within the fractured zone of the study area. In addition, methane flowrates monitored from 10 GGVs were analyzed using decline curve analyses (DCA) techniques to determine parameters of decline rates. Surface elevation showed the most influence on methane production from GGVs and thus was used to investigate its relation with DCA parameters using correlation techniques on normal-scored data. Geostatistical analysis was pursued using sequential Gaussian co-simulation with surface elevation as the secondary variable and with DCA parameters as the primary variables. The primary DCA variables were effective percentage decline rate, rate at production start, rate at the beginning of forecast period, and production end duration. Co-simulation results were presented to visualize decline parameters at an area-wide scale. Wells located at lower elevations, i.e., at the bottom of valleys, tend to perform better in terms of their rate declines compared to those at higher elevations. These results were used to calculate drainage radii of GGVs using GIP realizations. The calculated drainage radii are close to ones predicted by pressure transient tests.

  8. Sequential Gaussian co-simulation of rate decline parameters of longwall gob gas ventholes

    PubMed Central

    Karacan, C.Özgen; Olea, Ricardo A.

    2015-01-01

    Gob gas ventholes (GGVs) are used to control methane inflows into a longwall mining operation by capturing the gas within the overlying fractured strata before it enters the work environment. Using geostatistical co-simulation techniques, this paper maps the parameters of their rate decline behaviors across the study area, a longwall mine in the Northern Appalachian basin. Geostatistical gas-in-place (GIP) simulations were performed, using data from 64 exploration boreholes, and GIP data were mapped within the fractured zone of the study area. In addition, methane flowrates monitored from 10 GGVs were analyzed using decline curve analyses (DCA) techniques to determine parameters of decline rates. Surface elevation showed the most influence on methane production from GGVs and thus was used to investigate its relation with DCA parameters using correlation techniques on normal-scored data. Geostatistical analysis was pursued using sequential Gaussian co-simulation with surface elevation as the secondary variable and with DCA parameters as the primary variables. The primary DCA variables were effective percentage decline rate, rate at production start, rate at the beginning of forecast period, and production end duration. Co-simulation results were presented to visualize decline parameters at an area-wide scale. Wells located at lower elevations, i.e., at the bottom of valleys, tend to perform better in terms of their rate declines compared to those at higher elevations. These results were used to calculate drainage radii of GGVs using GIP realizations. The calculated drainage radii are close to ones predicted by pressure transient tests. PMID:26190930

  9. Regional-scale integration of hydrological and geophysical data using Bayesian sequential simulation: application to field data

    NASA Astrophysics Data System (ADS)

    Ruggeri, Paolo; Irving, James; Gloaguen, Erwan; Holliger, Klaus

    2013-04-01

    Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches to the regional scale still represents a major challenge, yet is critically important for the development of groundwater flow and contaminant transport models. To address this issue, we have developed a regional-scale hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure. The objective is to simulate the regional-scale distribution of a hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, our approach first involves linking the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. We present the application of this methodology to a pertinent field scenario, where we consider collocated high-resolution measurements of the electrical conductivity, measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, estimated from EM flowmeter and slug test measurements, in combination with low-resolution exhaustive electrical conductivity estimates obtained from dipole-dipole ERT measurements.

  10. Goal-Directed Decision Making with Spiking Neurons.

    PubMed

    Friedrich, Johannes; Lengyel, Máté

    2016-02-03

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. Copyright © 2016 the authors.

  11. Goal-Directed Decision Making with Spiking Neurons

    PubMed Central

    Lengyel, Máté

    2016-01-01

    Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. SIGNIFICANCE STATEMENT Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules such that its dynamics devise a near-optimal plan of action. By systematically comparing our model results to experimental data, we show that it reproduces behavioral decision times and choice probabilities as well as neural responses in a rich set of tasks. Our results thus offer the first biologically realistic account for complex goal-directed decision making at a computational, algorithmic, and implementational level. PMID:26843636

  12. Development of a Platform for Simulating and Optimizing Thermoelectric Energy Systems

    NASA Astrophysics Data System (ADS)

    Kreuder, John J.

    Thermoelectrics are solid state devices that can convert thermal energy directly into electrical energy. They have historically been used only in niche applications because of their relatively low efficiencies. With the advent of nanotechnology and improved manufacturing processes, thermoelectric materials have become less costly and more efficient. As next-generation thermoelectric materials become available, there is a need for industries to quickly and cost effectively seek out feasible applications for thermoelectric heat recovery platforms. Determining the technical and economic feasibility of such systems requires a model that predicts performance at the system level. Current models focus on specific system applications or neglect the rest of the system altogether, focusing on only module design and not an entire energy system. To assist in screening and optimizing entire energy systems using thermoelectrics, a novel software tool, Thermoelectric Power System Simulator (TEPSS), is developed for system level simulation and optimization of heat recovery systems. The platform is designed for use with a generic energy system so that most types of thermoelectric heat recovery applications can be modeled. TEPSS is based on object-oriented programming in MATLAB. A modular, shell-based architecture is developed to carry out concept generation, system simulation and optimization. Systems are defined according to the components and interconnectivity specified by the user. An iterative solution process based on Newton's Method is employed to determine the system's steady state so that an objective function representing the cost of the system can be evaluated at the operating point. An optimization algorithm from MATLAB's Optimization Toolbox uses sequential quadratic programming to minimize this objective function with respect to a set of user-specified design variables and constraints. During this iterative process many independent system simulations are executed and the optimal operating condition of the system is determined. A comprehensive guide to using the software platform is included. TEPSS is intended to be expandable so that users can add new types of components and implement component models with an adequate degree of complexity for a required application. Special steps are taken to ensure that the system of nonlinear algebraic equations presented in the system engineering model is square and that all equations are independent. In addition, the third-party program FluidProp is leveraged to allow for simulations of systems with a range of fluids. Sequential unconstrained minimization techniques are used to prevent physical variables like pressure and temperature from trending to infinity during optimization. Two case studies are performed to verify and demonstrate the simulation and optimization routines employed by TEPSS. The first is of a simple combined cycle in which the size of the heat exchanger and fuel rate are optimized. The second case study is the optimization of geometric parameters of a thermoelectric heat recovery platform in a regenerative Brayton Cycle. A basic package of components and interconnections is verified and provided as well.
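
    The inner Newton iteration on a square residual system is the numerical core of such a simulator. The toy two-equation "component network" below is an invented stand-in for a TEPSS system model (the record's own code is MATLAB; Python is used here for consistency with the other sketches in this listing).

    ```python
    # Sketch: Newton's method driving a square system of component residual
    # equations r(x) = 0 to a steady state.
    import numpy as np

    def residuals(x):
        # Toy balance residuals in two unknowns (temperature, mass flow)
        T, mdot = x
        return np.array([mdot * (T - 300.0) - 50.0,    # energy balance
                         mdot**2 + 0.01 * T - 7.0])    # flow balance

    def jacobian(x, h=1e-6):
        # Finite-difference Jacobian, column by column
        n, r0 = len(x), residuals(x)
        J = np.empty((n, n))
        for j in range(n):
            xp = x.copy()
            xp[j] += h
            J[:, j] = (residuals(xp) - r0) / h
        return J

    x = np.array([350.0, 1.0])                         # initial guess
    for it in range(20):
        r = residuals(x)
        if np.linalg.norm(r) < 1e-10:
            break
        x = x - np.linalg.solve(jacobian(x), r)        # Newton step
    print(f"converged in {it} iterations: T = {x[0]:.2f} K, "
          f"mdot = {x[1]:.3f} kg/s")
    ```

    Keeping the residual system square and its equations independent, as the abstract emphasizes, is exactly what guarantees the Jacobian solve in each Newton step is well posed.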

  13. A simulation to study the feasibility of improving the temporal resolution of LAGEOS geodynamic solutions by using a sequential process noise filter

    NASA Technical Reports Server (NTRS)

    Hartman, Brian Davis

    1995-01-01

    A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal resolution of solutions obtained from standard sequential filtering methods and process noise sequential filtering methods shows that the accuracy is significantly improved using process noise. The results show that the positional accuracy of the orbit is improved as well. The temporal resolution of the resulting solutions are detailed, and conclusions drawn about the results. Benefits and drawbacks of using process noise filtering in this type of scenario are also identified.

  14. High resolution simulations of energy absorption in dynamically loaded cellular structures

    NASA Astrophysics Data System (ADS)

    Winter, R. E.; Cotton, M.; Harris, E. J.; Eakins, D. E.; McShane, G.

    2017-03-01

    Cellular materials have potential application as absorbers of energy generated by high velocity impact. CTH, a Sandia National Laboratories code that allows very severe strains to be simulated, has been used to perform very high resolution simulations showing the dynamic crushing of a series of two-dimensional stainless steel structures with varying architectures. The structures are positioned to provide a cushion between a solid stainless steel flyer plate, with velocities ranging from 300 to 900 m/s, and an initially stationary stainless steel target. Each of the alternative architectures under consideration was formed by an array of identical cells, each of which had a constant volume and a constant density. The resolution of the simulations was maximised by choosing a configuration in which one-dimensional conditions persisted for the full period over which the specimen densified, a condition which is most readily met by impacting high density specimens at high velocity. It was found that the total plastic flow and, therefore, the irreversible energy dissipated in the fully densified energy absorbing cell, increase (a) as the structure becomes more rodlike and less platelike and (b) as the impact velocity increases. Sequential CTH images of the deformation processes show that the flow of the cell material may be broadly divided into macroscopic flow perpendicular to the compression direction and jetting-type processes (microkinetic flow) which tend to predominate in rod and rodlike configurations and also tend to play an increasing role at increased strain rates. A very simple analysis of a configuration in which a solid flyer impacts a solid target provides a baseline against which to compare and explain features seen in the simulations. The work provides a basis for the development of energy absorbing structures for application in the 200-1000 m/s impact regime.

  15. Generalized trajectory surface-hopping method for internal conversion and intersystem crossing

    NASA Astrophysics Data System (ADS)

    Cui, Ganglong; Thiel, Walter

    2014-09-01

    Trajectory-based fewest-switches surface-hopping (FSSH) dynamics simulations have become a popular and reliable theoretical tool to simulate nonadiabatic photophysical and photochemical processes. Most available FSSH methods model internal conversion. We present a generalized trajectory surface-hopping (GTSH) method for simulating both internal conversion and intersystem crossing processes on an equal footing. We consider hops between adiabatic eigenstates of the non-relativistic electronic Hamiltonian (pure spin states), which is appropriate for sufficiently small spin-orbit coupling. This choice allows us to make maximum use of existing electronic structure programs and to minimize the changes to available implementations of the traditional FSSH method. The GTSH method is formulated within the quantum mechanics (QM)/molecular mechanics framework, but can of course also be applied at the pure QM level. The algorithm implemented in the GTSH code is specified step by step. As an initial GTSH application, we report simulations of the nonadiabatic processes in the lowest four electronic states (S0, S1, T1, and T2) of acrolein both in vacuo and in acetonitrile solution, in which the acrolein molecule is treated at the ab initio complete-active-space self-consistent-field level. These dynamics simulations provide detailed mechanistic insight by identifying and characterizing two nonadiabatic routes to the lowest triplet state, namely, direct S1 → T1 hopping as major pathway and sequential S1 → T2 → T1 hopping as minor pathway, with the T2 state acting as a relay state. They illustrate the potential of the GTSH approach to explore photoinduced processes in complex systems, in which intersystem crossing plays an important role.

  16. Modeling fine-scale geological heterogeneity--examples of sand lenses in tills.

    PubMed

    Kessler, Timo Christian; Comunian, Alessandro; Oriani, Fabio; Renard, Philippe; Nilsson, Bertel; Klint, Knud Erik; Bjerg, Poul Løgstrup

    2013-01-01

    Sand lenses at various spatial scales are recognized to add heterogeneity to glacial sediments. They have high hydraulic conductivities relative to the surrounding till matrix and may affect the advective transport of water and contaminants in clayey till settings. Sand lenses were investigated on till outcrops, producing binary images of geological cross-sections capturing the size, shape and distribution of individual features. Sand lenses occur as elongated, anisotropic geobodies that vary in size and extent. Moreover, sand lenses show strong non-stationary patterns on section images that hamper subsequent simulation. Transition probability (TP) and multiple-point statistics (MPS) were employed to simulate sand lens heterogeneity. We used one cross-section to parameterize the spatial correlation and a second, parallel section as a reference: this allowed testing the quality of the simulations as a function of the amount of conditioning data under realistic conditions. The performance of the simulations was evaluated on the faithful reproduction of the specific geological structure caused by sand lenses. Multiple-point statistics offer a better reproduction of sand lens geometry. However, two-dimensional training images acquired by outcrop mapping are of limited use for generating three-dimensional realizations with MPS. One can instead use a technique that consists of splitting the 3D domain into a set of slices in various directions that are sequentially simulated and reassembled into a 3D block. The identification of flow paths through a network of elongated sand lenses and their impact on the equivalent permeability of tills are essential for performing solute transport modeling in these low-permeability sediments. © 2012, The Author(s). Groundwater © 2012, National Ground Water Association.

  17. Temporal texture of associative encoding modulates recall processes.

    PubMed

    Tibon, Roni; Levy, Daniel A

    2014-02-01

    Binding aspects of an experience that are distributed over time is an important element of episodic memory. In the current study, we examined how the temporal complexity of an experience may govern the processes required for its retrieval. We recorded event-related potentials during episodic cued recall following pair associate learning of concurrently and sequentially presented object-picture pairs. Cued recall success effects over anterior and posterior areas were apparent in several time windows. In anterior locations, these recall success effects were similar for concurrently and sequentially encoded pairs. However, in posterior sites clustered over parietal scalp the effect was larger for the retrieval of sequentially encoded pairs. We suggest that anterior aspects of the mid-latency recall success effects may reflect working-with-memory operations or direct access recall processes, while more posterior aspects reflect recollective processes which are required for retrieval of episodes of greater temporal complexity. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Using Priced Options to Solve the Exposure Problem in Sequential Auctions

    NASA Astrophysics Data System (ADS)

    Mous, Lonneke; Robu, Valentin; La Poutré, Han

    This paper studies the benefits of using priced options for solving the exposure problem that bidders with valuation synergies face when participating in multiple, sequential auctions. We consider a model in which complementary-valued items are auctioned sequentially by different sellers, who have the choice of either selling their good directly or through a priced option, after fixing its exercise price. We analyze this model from a decision-theoretic perspective and we show, for a setting where the competition is formed by local bidders, that using options can increase the expected profit for both buyers and sellers. Furthermore, we derive the equations that provide minimum and maximum bounds between which a synergy buyer's bids should fall in order for both sides to have an incentive to use the options mechanism. Next, we perform an experimental analysis of a market in which multiple synergy bidders are active simultaneously.

  19. A Developmental Perspective on Peer Rejection, Deviant Peer Affiliation, and Conduct Problems Among Youth.

    PubMed

    Chen, Diane; Drabick, Deborah A G; Burgers, Darcy E

    2015-12-01

    Peer rejection and deviant peer affiliation are linked consistently to the development and maintenance of conduct problems. Two proposed models may account for longitudinal relations among these peer processes and conduct problems: the (a) sequential mediation model, in which peer rejection in childhood and deviant peer affiliation in adolescence mediate the link between early externalizing behaviors and more serious adolescent conduct problems; and (b) parallel process model, in which peer rejection and deviant peer affiliation are considered independent processes that operate simultaneously to increment risk for conduct problems. In this review, we evaluate theoretical models and evidence for associations among conduct problems and (a) peer rejection and (b) deviant peer affiliation. We then consider support for the sequential mediation and parallel process models. Next, we propose an integrated model incorporating both the sequential mediation and parallel process models. Future research directions and implications for prevention and intervention efforts are discussed.

  20. A Developmental Perspective on Peer Rejection, Deviant Peer Affiliation, and Conduct Problems among Youth

    PubMed Central

    Chen, Diane; Drabick, Deborah A. G.; Burgers, Darcy E.

    2015-01-01

    Peer rejection and deviant peer affiliation are linked consistently to the development and maintenance of conduct problems. Two proposed models may account for longitudinal relations among these peer processes and conduct problems: the (a) sequential mediation model, in which peer rejection in childhood and deviant peer affiliation in adolescence mediate the link between early externalizing behaviors and more serious adolescent conduct problems; and (b) parallel process model, in which peer rejection and deviant peer affiliation are considered independent processes that operate simultaneously to increment risk for conduct problems. In this review, we evaluate theoretical models and evidence for associations among conduct problems and (a) peer rejection and (b) deviant peer affiliation. We then consider support for the sequential mediation and parallel process models. Next, we propose an integrated model incorporating both the sequential mediation and parallel process models. Future research directions and implications for prevention and intervention efforts are discussed. PMID:25410430

  1. User's Guide of TOUGH2-EGS. A Coupled Geomechanical and Reactive Geochemical Simulator for Fluid and Heat Flow in Enhanced Geothermal Systems Version 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fakcharoenphol, Perapon; Xiong, Yi; Hu, Litang

    TOUGH2-EGS is a numerical simulation program coupling geomechanics and chemical reactions for fluid and heat flows in porous media and fractured reservoirs of enhanced geothermal systems. The simulator includes the fully-coupled geomechanical (THM) module, the fully-coupled geochemical (THC) module, and the sequentially coupled reactive geochemistry (THMC) module. The fully-coupled flow-geomechanics model is developed from the linear elastic theory for the thermo-poro-elastic system and is formulated with the mean normal stress as well as pore pressure and temperature. The chemical reaction is sequentially coupled after solution of flow equations, which provides the flow velocity and phase saturation for the solute transport calculation at each time step. In addition, reservoir rock properties, such as porosity and permeability, are subjected to change due to rock deformation and chemical reactions. The relationships between rock properties and geomechanical and chemical effects from poro-elasticity theories and empirical correlations are incorporated into the simulator. This report provides the user with detailed information on both mathematical models and instructions for using TOUGH2-EGS for THM, THC or THMC simulations. The mathematical models include the fluid and heat flow equations, geomechanical equation, reactive geochemistry equations, and discretization methods. Although TOUGH2-EGS has the capability for simulating fluid and heat flows coupled with both geomechanical and chemical effects, it is up to the users to select the specific coupling process, such as THM, THC, or THMC in a simulation. There are several example problems illustrating the applications of this program. These example problems are described in details and their input data are presented. The results demonstrate that this program can be used for field-scale geothermal reservoir simulation with fluid and heat flow, geomechanical effect, and chemical reaction in porous and fractured media.

  2. Spatiotemporal stochastic models for earth science and engineering applications

    NASA Astrophysics Data System (ADS)

    Luo, Xiaochun

    1998-12-01

    Spatiotemporal processes occur in many areas of earth sciences and engineering. However, most of the available theoretical tools and techniques of space-time data processing have been designed to operate exclusively in time or in space, and the importance of spatiotemporal variability was not fully appreciated until recently. To address this problem, a systematic framework of spatiotemporal random field (S/TRF) models for geoscience/engineering applications is presented and developed in this thesis. The space-time continuity characterization is one of the most important aspects of S/TRF modelling, where the space-time continuity is displayed with experimental spatiotemporal variograms, summarized in terms of space-time continuity hypotheses, and modelled using spatiotemporal variogram functions. Permissible spatiotemporal covariance/variogram models are addressed through permissibility criteria appropriate to spatiotemporal processes. The estimation of spatiotemporal processes is developed in terms of spatiotemporal kriging techniques. Particular emphasis is given to the singularity analysis of spatiotemporal kriging systems. The impacts of covariance functions, trend forms, and data configurations on the singularity of spatiotemporal kriging systems are discussed. In addition, the tensorial invariance of universal spatiotemporal kriging systems is investigated in terms of the space-time trend. The conditional simulation of spatiotemporal processes is developed through sequential group Gaussian simulation (SGGS), a series of sequential simulation algorithms associated with different group sizes. The simulation error is analyzed for different covariance models and simulation grids. A simulated annealing technique honoring experimental variograms is also proposed, providing a way of performing conditional simulation without the covariance model fitting that is a prerequisite for most simulation algorithms. The proposed techniques were first applied to modelling of the pressure system in a carbonate reservoir, and then to modelling of spring-water contents in the Dyle watershed. The results of these case studies, as well as the theory, suggest that these techniques are realistic and feasible.
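
    As an illustration of the experimental spatiotemporal variogram mentioned above (a sketch of my own; the binning and notation are not taken from the thesis), squared increments of all data pairs are averaged into joint spatial-lag/temporal-lag bins:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 400
    x = rng.uniform(0, 10, n)                        # 1D spatial coordinate
    t = rng.uniform(0, 5, n)                         # time coordinate
    z = np.sin(x) + 0.5 * t + rng.normal(0, 0.2, n)  # synthetic observations

    h_edges = np.linspace(0, 5, 6)                   # spatial-lag bin edges
    u_edges = np.linspace(0, 2.5, 6)                 # temporal-lag bin edges
    gamma = np.zeros((5, 5)); counts = np.zeros((5, 5))

    i, j = np.triu_indices(n, k=1)                   # all distinct data pairs
    h = np.abs(x[i] - x[j]); u = np.abs(t[i] - t[j])
    sq = 0.5 * (z[i] - z[j]) ** 2                    # semivariogram increments
    hb = np.digitize(h, h_edges) - 1; ub = np.digitize(u, u_edges) - 1
    ok = (hb >= 0) & (hb < 5) & (ub >= 0) & (ub < 5)
    np.add.at(gamma, (hb[ok], ub[ok]), sq[ok])
    np.add.at(counts, (hb[ok], ub[ok]), 1)
    gamma = np.where(counts > 0, gamma / counts, np.nan)
    print(np.round(gamma, 3))                # rows: space lag, columns: time lag
    ```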

  3. Implementation of Temperature Sequential Controller on Variable Speed Drive

    NASA Astrophysics Data System (ADS)

    Cheong, Z. X.; Barsoum, N. N.

    2008-10-01

    There are many pump and motor installations with quite extensive speed variation, such as Sago conveyors, heating, ventilation and air conditioning (HVAC), and water pumping systems. A common solution for these applications is to run several fixed-speed motors in parallel, with flow control accomplished by turning the motors on and off. This type of control causes high in-rush currents and adds a risk of damage caused by pressure transients. This paper explains the design and implementation of a temperature-based speed control system for use in the industrial and commercial sectors. Advanced temperature-based speed control can be achieved by using the ABB ACS800 variable speed drive's direct torque sequential control macro, a programmable logic controller, and a temperature transmitter. The principle of the direct torque sequential control (DTC-SC) macro is based on the control of torque and flux utilizing the stator flux field orientation over seven preset constant speeds. As a result of continuous comparison of the ambient temperature to the reference temperatures, the electromagnetic torque response to the motor state is particularly fast, and the drive is able to maintain constant speeds. Experimental tests have been carried out using an ABB ACS800-U1-0003-2 to validate the effectiveness and dynamic response of the ABB ACS800 against temperature variation, loads, and mechanical shocks.
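
    The core temperature-to-speed mapping can be sketched in a few lines (an illustrative stand-in, not the ACS800 macro itself; the reference temperatures and preset speeds below are invented):

    ```python
    import bisect

    REF_TEMPS = [20, 24, 28, 32, 36, 40]                # reference temperatures [C]
    PRESET_RPM = [0, 300, 600, 900, 1200, 1500, 1800]   # seven preset constant speeds

    def preset_speed(ambient_c: float) -> int:
        """Map the measured ambient temperature onto one of the preset speeds."""
        return PRESET_RPM[bisect.bisect_right(REF_TEMPS, ambient_c)]

    for temp in (18.0, 25.5, 41.0):
        print(f"{temp:5.1f} C -> {preset_speed(temp)} rpm")
    ```

    In the real drive the comparison runs continuously, so some hysteresis around each reference temperature would be needed to avoid oscillating between adjacent presets.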

  4. Bursts and heavy tails in temporal and sequential dynamics of foraging decisions.

    PubMed

    Jung, Kanghoon; Jang, Hyeran; Kralik, Jerald D; Jeong, Jaeseung

    2014-08-01

    A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices.
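
    The preferential-attachment rule for what is chosen can be illustrated with a Polya-urn-style sketch (assumptions mine, not the paper's fitted model): the more often an option has been chosen, the more likely it is to be chosen again, which produces the highly biased choice distribution described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    counts = np.ones(4)                  # four foods, uniform "pseudo-count" prior
    for _ in range(5000):
        p = counts / counts.sum()        # choice probability ~ past choice frequency
        counts[rng.choice(4, p=p)] += 1

    print("choice shares:", np.round(counts / counts.sum(), 3))   # strongly skewed
    ```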

  5. Increased efficacy of photodynamic therapy via sequential targeting

    NASA Astrophysics Data System (ADS)

    Kessel, David; Aggarwal, Neha; Sloane, Bonnie F.

    2014-03-01

    Photokilling depends on the generation of death signals after photosensitized cells are irradiated. A variety of intracellular organelles can be targeted for photodamage, often with a high degree of specificity. We have discovered that a low level of photodamage directed against lysosomes can sensitize both a murine hepatoma cell line (in 2D culture) and an inflammatory breast cancer line of human origin (in a 3D model) to subsequent photodamage directed at mitochondria. Additional studies were carried out with hepatoma cells to explore possible mechanisms. The phototoxic effect of the 'sequential targeting' approach was associated with an increased apoptotic response. The low level of lysosomal photodamage did not lead to any detectable migration of Fe++ from lysosomes to mitochondria or increased reactive oxygen species (ROS) formation after subsequent mitochondrial photodamage. Instead, there appears to be a signal generated that can amplify the pro-apoptotic effect of subsequent mitochondrial photodamage.

  6. Comprehension of Navigation Directions

    NASA Technical Reports Server (NTRS)

    Healy, Alice F.; Schneider, Vivian I.

    2002-01-01

    Subjects were shown navigation instructions varying in length directing them to move in a space represented by grids on a computer screen. They followed the instructions by clicking on the grids in the locations specified. Some subjects repeated back the instructions before following them, some did not, and others repeated back the instructions in reduced form, including only the critical words. The commands in each message were presented simultaneously for half of the subjects and sequentially for the others. For the longest messages, performance was better on the initial commands and worse on the final commands with simultaneous than with sequential presentation. Instruction repetition depressed performance, but reduced repetition removed this disadvantage. Effects of presentation format were attributed to visual scanning strategies. The advantage for reduced repetition was attributable either to enhanced visual scanning or to reduced output interference. A follow-up study with auditory presentation supported the visual scanning explanation.

  7. Sensor-Augmented Virtual Labs: Using Physical Interactions with Science Simulations to Promote Understanding of Gas Behavior

    ERIC Educational Resources Information Center

    Chao, Jie; Chiu, Jennifer L.; DeJaegher, Crystal J.; Pan, Edward A.

    2016-01-01

    Deep learning of science involves integration of existing knowledge and normative science concepts. Past research demonstrates that combining physical and virtual labs sequentially or side by side can take advantage of the unique affordances each provides for helping students learn science concepts. However, providing simultaneously connected…

  8. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performances of the item response theory-based adaptive classification procedure on multiple categories were studied via simulations. The following…

  9. Landscape analysis software tools

    Treesearch

    Don Vandendriesche

    2008-01-01

    Recently, several new computer programs have been developed to assist in landscape analysis. The “Sequential Processing Routine for Arraying Yields” (SPRAY) program was designed to run a group of stands with particular treatment activities to produce vegetation yield profiles for forest planning. SPRAY uses existing Forest Vegetation Simulator (FVS) software coupled...

  10. Kriging for Simulation Metamodeling: Experimental Design, Reduced Rank Kriging, and Omni-Rank Kriging

    NASA Astrophysics Data System (ADS)

    Hosking, Michael Robert

    This dissertation improves an analyst's use of simulation by offering improvements in the utilization of kriging metamodels. There are three main contributions. First, an analysis is performed of what comprises good experimental designs for practical (non-toy) problems when using a kriging metamodel. Second is an explanation and demonstration of how reduced rank decompositions can improve the performance of kriging, referred to here as reduced rank kriging. Third is the development of an extension of reduced rank kriging which solves an open question regarding the usage of reduced rank kriging in practice. This extension is called omni-rank kriging. Finally, these results are demonstrated on two case studies. The first contribution focuses on experimental design. Sequential designs are generally known to be more efficient than "one shot" designs. However, sequential designs require some sort of pilot design on which the sequential stage can be based. We seek to find good initial designs for these pilot studies, as well as designs which will be effective if there is no following sequential stage. We test a wide variety of designs over a small set of test-bed problems. Our findings indicate that analysts should take advantage of any prior information they have about their problem's shape and/or their goals in metamodeling. In the event of a total lack of information we find that Latin hypercube designs are robust default choices. Our work is most distinguished by its attention to higher levels of dimensionality. The second contribution introduces and explains an alternative method for kriging when there is noise in the data, which we call reduced rank kriging. Reduced rank kriging is based on using a reduced rank decomposition which artificially smoothes the kriging weights, similar to a nugget effect. Our primary focus is showing empirically how the reduced rank decomposition propagates through kriging. In addition, we show further evidence for our explanation through tests of reduced rank kriging's performance in different situations. Overall, reduced rank kriging is a useful tool for simulation metamodeling. For the third contribution we answer the question of how to find the best rank for reduced rank kriging. We do this by creating an alternative method which does not need to search for a particular rank. Instead it uses all potential ranks; we call this approach omni-rank kriging. This modification realizes the potential gains from reduced rank kriging and provides a workable methodology for simulation metamodeling. Finally, we demonstrate the use and value of these developments on two case studies: a clinic operation problem and a location problem. These cases validate the value of this research. Simulation metamodeling always attempts to extract maximum information from limited data. Each of these contributions allows analysts to make better use of their constrained computational budgets.
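
    The Latin hypercube design recommended above as a robust default is easy to generate (a minimal sketch; the dissertation's test-bed problems and design comparisons are not reproduced here): each dimension is split into n equal strata, each stratum is sampled exactly once, and the stratum order is shuffled independently per dimension.

    ```python
    import numpy as np

    def latin_hypercube(n: int, d: int, seed: int = 0) -> np.ndarray:
        """Return n points in [0, 1)^d with one point per stratum per dimension."""
        rng = np.random.default_rng(seed)
        jitter = rng.uniform(size=(n, d))    # random position within each stratum
        strata = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
        return (strata + jitter) / n

    print(np.round(latin_hypercube(n=8, d=3), 3))
    ```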

  11. Numerical simulation of transport and sequential biodegradation of chlorinated aliphatic hydrocarbons using CHAIN_2D

    NASA Astrophysics Data System (ADS)

    Schaerlaekens, J.; Mallants, D.; Šimůnek, J.; van Genuchten, M. Th.; Feyen, J.

    1999-12-01

    Microbiological degradation of perchloroethylene (PCE) under anaerobic conditions follows a series of chain reactions in which, sequentially, trichloroethylene (TCE), cis-dichloroethylene (c-DCE), vinyl chloride (VC) and ethene are generated. First-order degradation rate constants, partitioning coefficients and mass exchange rates for PCE, TCE, c-DCE and VC were compiled from the literature. The parameters were used in a case study of pump-and-treat remediation of a PCE-contaminated site near Tilburg, The Netherlands. Transport, non-equilibrium sorption and biodegradation chain processes at the site were simulated using the CHAIN_2D code without further calibration. The modelled PCE concentrations compared reasonably well with those observed in the pumped water. We also performed a scenario analysis by applying several increased reductive dechlorination rates, reflecting different degradation conditions (e.g. addition of yeast extract and citrate). The scenario analysis predicted considerably higher concentrations of the degradation products as a result of enhanced reductive dechlorination of PCE. The predicted levels of the highly toxic compound VC were then an order of magnitude above the maximum permissible concentration levels.
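
    The sequential chain PCE -> TCE -> c-DCE -> VC -> ethene reduces to a small linear ODE system; the sketch below integrates it with illustrative first-order rate constants (placeholders, not the calibrated values used for the Tilburg site):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    k = np.array([0.010, 0.008, 0.006, 0.004])   # first-order rates [1/day], invented

    def chain(t, c):
        pce, tce, dce, vc, eth = c
        return [-k[0] * pce,
                k[0] * pce - k[1] * tce,         # each product is fed by its parent
                k[1] * tce - k[2] * dce,
                k[2] * dce - k[3] * vc,
                k[3] * vc]                       # ethene accumulates

    sol = solve_ivp(chain, (0, 365), [1.0, 0, 0, 0, 0], t_eval=[365])
    for name, conc in zip(["PCE", "TCE", "c-DCE", "VC", "ethene"], sol.y[:, -1]):
        print(f"{name:7s} {conc:.4f}")           # relative concentrations after a year
    ```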

  12. Multisensor surveillance data augmentation and prediction with optical multipath signal processing

    NASA Astrophysics Data System (ADS)

    Bush, G. T., III

    1980-12-01

    The spatial characteristics of an oil spill on the high seas are examined in the interest of determining whether linear-shift-invariant data processing implemented on an optical computer would be a useful tool in analyzing spill behavior. Simulations were performed on a digital computer using data obtained from a 25,000 gallon spill of soybean oil in the open ocean. Marked changes occurred in the observed spatial frequencies when the oil spill was encountered. An optical detector may readily be developed to sound an alarm automatically when this happens. The average extent of oil spread between sequential observations was quantified by a simulation of non-holographic optical computation. Because a zero crossover was available in this computation, it may be possible to construct a system to automatically measure the amount of spread. Oil images were subjected to deconvolutional filtering to reveal the force field which acted upon the oil to cause spreading. Some features of spill-size prediction were observed. Calculations based on two sequential photos produced an image which exhibited characteristics of the third photo in that sequence.

  13. High-Fidelity Simulation for Advanced Cardiac Life Support Training

    PubMed Central

    Davis, Lindsay E.; Storjohann, Tara D.; Spiegel, Jacqueline J.; Beiber, Kellie M.

    2013-01-01

    Objective. To determine whether a high-fidelity simulation technique compared with lecture would produce greater improvement in advanced cardiac life support (ACLS) knowledge, confidence, and overall satisfaction with the training method. Design. This sequential, parallel-group, crossover trial randomized students into 2 groups distinguished by the sequence of teaching technique delivered for ACLS instruction (ie, classroom lecture vs high-fidelity simulation exercise). Assessment. Test scores on a written examination administered at baseline and after each teaching technique improved significantly from baseline in all groups but were highest when lecture was followed by simulation. Simulation was associated with a greater degree of overall student satisfaction compared with lecture. Participation in a simulation exercise did not improve pharmacy students’ knowledge of ACLS more than attending a lecture, but it was associated with improved student confidence in skills and satisfaction with learning and application. Conclusions. College curricula should incorporate simulation to complement but not replace lecture for ACLS education. PMID:23610477

  14. High-fidelity simulation for advanced cardiac life support training.

    PubMed

    Davis, Lindsay E; Storjohann, Tara D; Spiegel, Jacqueline J; Beiber, Kellie M; Barletta, Jeffrey F

    2013-04-12

    OBJECTIVE. To determine whether a high-fidelity simulation technique compared with lecture would produce greater improvement in advanced cardiac life support (ACLS) knowledge, confidence, and overall satisfaction with the training method. DESIGN. This sequential, parallel-group, crossover trial randomized students into 2 groups distinguished by the sequence of teaching technique delivered for ACLS instruction (ie, classroom lecture vs high-fidelity simulation exercise). ASSESSMENT. Test scores on a written examination administered at baseline and after each teaching technique improved significantly from baseline in all groups but were highest when lecture was followed by simulation. Simulation was associated with a greater degree of overall student satisfaction compared with lecture. Participation in a simulation exercise did not improve pharmacy students' knowledge of ACLS more than attending a lecture, but it was associated with improved student confidence in skills and satisfaction with learning and application. CONCLUSIONS. College curricula should incorporate simulation to complement but not replace lecture for ACLS education.

  15. An extended sequential goodness-of-fit multiple testing method for discrete data.

    PubMed

    Castro-Conde, Irene; Döhler, Sebastian; de Uña-Álvarez, Jacobo

    2017-10-01

    The sequential goodness-of-fit (SGoF) multiple testing method has recently been proposed as an alternative to the familywise error rate- and the false discovery rate-controlling procedures in high-dimensional problems. For discrete data, the SGoF method may be very conservative. In this paper, we introduce an alternative SGoF-type procedure that takes into account the discreteness of the test statistics. Like the original SGoF, our new method provides weak control of the false discovery rate/familywise error rate but attains false discovery rate levels closer to the desired nominal level, and thus it is more powerful. We study the performance of this method in a simulation study and illustrate its application to a real pharmacovigilance data set.
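
    For orientation, the original binomial SGoF idea for continuous p-values can be sketched as follows; the paper's contribution is a discrete-data extension (replacing the binomial null with the exact discrete null distributions) that is not reproduced here, and the names and defaults below are mine:

    ```python
    import numpy as np
    from scipy.stats import binom

    def sgof(pvals, alpha=0.05, gamma=0.05):
        """Reject the k smallest p-values, where k is the excess of p-values
        below gamma over what a level-alpha binomial null test would allow."""
        pvals = np.sort(np.asarray(pvals))
        n = len(pvals)
        observed = np.sum(pvals <= gamma)
        null_bound = binom.ppf(1 - alpha, n, gamma)  # largest count compatible with H0
        k = int(max(observed - null_bound, 0))
        return pvals[:k]

    rng = np.random.default_rng(2)
    p = np.concatenate([rng.uniform(0, 0.01, 20),    # 20 true effects
                        rng.uniform(0, 1, 180)])     # 180 nulls
    print(len(sgof(p)), "hypotheses rejected")
    ```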

  16. Sequential deconvolution from wave-front sensing using bivariate simplex splines

    NASA Astrophysics Data System (ADS)

    Guo, Shiping; Zhang, Rongzhi; Li, Jisheng; Zou, Jianhua; Xu, Rong; Liu, Changhai

    2015-05-01

    Deconvolution from wave-front sensing (DWFS) is an imaging compensation technique for turbulence-degraded images based on simultaneous recording of short-exposure images and wave-front sensor data. This paper employs the multivariate splines method for sequential DWFS: a bivariate simplex splines based average-slope measurement model is first built for the Shack-Hartmann wave-front sensor; next, a well-conditioned least squares estimator for the spline coefficients is constructed using multiple Shack-Hartmann measurements; then, the distorted wave-front is uniquely determined by the estimated spline coefficients; the object image is finally obtained by non-blind deconvolution. Simulated experiments at different turbulence strengths show that our method produces superior image restoration and noise rejection, especially when extracting the multidirectional phase derivatives.

  17. Patient motion effects on the quantification of regional myocardial blood flow with dynamic PET imaging.

    PubMed

    Hunter, Chad R R N; Klein, Ran; Beanlands, Rob S; deKemp, Robert A

    2016-04-01

    Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET-CT misalignment. A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. Simulations also indicated that the magnitude of MBF errors resulting from motion in the superior/inferior and anterior/posterior directions was similar (up to 250%). Body motion effects were more detrimental for higher resolution PET imaging (2 vs 10 mm full-width at half-maximum), and for motion occurring during the mid-to-late time-frames. Motion correction of the reconstructed dynamic image series resulted in significant reduction in MBF errors, but did not account for the residual PET-CTAC misalignment artifacts. MBF bias was reduced further using global partial-volume correction, and using dynamic alignment of the PET projection data to the CT scan for accurate attenuation correction during image reconstruction. Patient body motion can produce MBF estimation errors up to 500%. To reduce these errors, new motion correction algorithms must be effective in identifying motion in the left/right direction, and in the mid-to-late time-frames, since these conditions produce the largest errors in MBF, particularly for high resolution PET imaging. Ideally, motion correction should be done before or during image reconstruction to eliminate PET-CTAC misalignment artifacts.

  18. astroABC : An Approximate Bayesian Computation Sequential Monte Carlo sampler for cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Jennings, E.; Madigan, M.

    2017-04-01

    Given the complexity of modern cosmological parameter inference where we are faced with non-Gaussian data and noise, correlated systematics and multi-probe correlated datasets, the Approximate Bayesian Computation (ABC) method is a promising alternative to traditional Markov Chain Monte Carlo approaches in the case where the Likelihood is intractable or unknown. The ABC method is called "Likelihood free" as it avoids explicit evaluation of the Likelihood by using a forward model simulation of the data which can include systematics. We introduce astroABC, an open source ABC Sequential Monte Carlo (SMC) sampler for parameter estimation. A key challenge in astrophysics is the efficient use of large multi-probe datasets to constrain high dimensional, possibly correlated parameter spaces. With this in mind astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. A key new feature of astroABC is the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available. Other key features of this new sampler include: a Sequential Monte Carlo sampler; a method for iteratively adapting tolerance levels; local covariance estimate using scikit-learn's KDTree; modules for specifying optimal covariance matrix for a component-wise or multivariate normal perturbation kernel and a weighted covariance metric; restart files output frequently so an interrupted sampling run can be resumed at any iteration; output and restart files are backed up at every iteration; user defined distance metric and simulation methods; a module for specifying heterogeneous parameter priors including non-standard prior PDFs; a module for specifying a constant, linear, log or exponential tolerance level; well-documented examples and sample scripts. This code is hosted online at https://github.com/EliseJ/astroABC.
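
    A bare-bones likelihood-free sampler with sequentially shrinking tolerances conveys the core ABC-SMC idea (a generic sketch that deliberately does not use astroABC's actual API; importance weights and adaptive kernels are omitted for brevity):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    data = rng.normal(2.0, 1.0, 200)          # "observed" data, true mean = 2
    obs_stat = data.mean()                    # summary statistic

    def simulate(theta):
        """Forward-model the summary statistic; no likelihood is ever evaluated."""
        return rng.normal(theta, 1.0, 200).mean()

    particles = rng.uniform(-5, 5, 1000)      # draws from a flat prior
    for tol in (1.0, 0.3, 0.1):               # sequentially shrinking tolerances
        accepted = []
        while len(accepted) < 500:
            theta = rng.choice(particles) + rng.normal(0, 0.5)  # perturbation kernel
            if abs(simulate(theta) - obs_stat) < tol:
                accepted.append(theta)
        particles = np.array(accepted)

    print(f"approximate posterior mean: {particles.mean():.2f}")
    ```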

  19. A Sequential Ensemble Prediction System at Convection Permitting Scales

    NASA Astrophysics Data System (ADS)

    Milan, M.; Simmer, C.

    2012-04-01

    A Sequential Assimilation Method (SAM) following some aspects of particle filtering with resampling, also called SIR (Sequential Importance Resampling), is introduced and applied in the framework of an Ensemble Prediction System (EPS) for weather forecasting on convection-permitting scales, with a focus on precipitation forecasts. At this scale and beyond, the atmosphere increasingly exhibits chaotic behaviour and nonlinear state-space evolution due to convectively driven processes. One way to take full account of nonlinear state developments is to use particle filter methods; their basic idea is the representation of the model probability density function by a number of ensemble members weighted by their likelihood given the observations. In particular, particle filtering with resampling abandons ensemble members (particles) with low weights and restores the original number of particles by adding multiple copies of the members with high weights. In our SIR-like implementation we replace the likelihood-based definition of weights with a metric that quantifies the "distance" between the observed atmospheric state and the states simulated by the ensemble members. We also introduce a methodology to counteract filter degeneracy, i.e. the collapse of the simulated state space. To this end we propose a combination of nudging and resampling that takes account of clustering in the simulated state space. By keeping cluster representatives during resampling and filtering, the method maintains the potential for nonlinear system state development. We assume that a particle cluster with initially low likelihood may evolve into a state space with higher likelihood at a subsequent filter time, thus mimicking nonlinear system state developments (e.g. sudden convection initiation) and remedying timing errors for convection due to model errors and/or imperfect initial conditions. We apply a simplified version of the resampling: the particles with the highest weights in each cluster are duplicated; during the model evolution, one particle of each pair evolves freely using the forward model, while the second is nudged towards the radar and satellite observations during its evolution.
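
    The resampling step at the heart of SIR takes a few lines (systematic resampling is shown here as one common variant; in the scheme above, the distance-based metric would supply the weights that a likelihood normally provides):

    ```python
    import numpy as np

    def systematic_resample(weights, rng=None):
        """Return ensemble-member indices: low-weight particles are abandoned and
        high-weight particles duplicated, restoring the original ensemble size."""
        rng = rng or np.random.default_rng(4)
        n = len(weights)
        positions = (rng.uniform() + np.arange(n)) / n   # stratified positions
        cumulative = np.cumsum(weights / np.sum(weights))
        return np.searchsorted(cumulative, positions)

    w = np.array([0.01, 0.02, 0.02, 0.45, 0.50])         # weights of 5 members
    print(systematic_resample(w))              # mostly copies of members 3 and 4
    ```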

  20. Development of an efficient genetic manipulation strategy for sequential gene disruption and expression of different heterologous GFP genes in Candida tropicalis.

    PubMed

    Zhang, Lihua; Chen, Xianzhong; Chen, Zhen; Wang, Zezheng; Jiang, Shan; Li, Li; Pötter, Markus; Shen, Wei; Fan, You

    2016-11-01

    The diploid yeast Candida tropicalis, which can utilize n-alkane as a carbon and energy source, is an attractive strain for both physiological studies and practical applications. However, it presents some characteristics, such as rare codon usage, difficulty in sequential gene disruption, and inefficiency in foreign gene expression, that hamper strain improvement through genetic engineering. In this work, we present a simple and effective method for sequential gene disruption in C. tropicalis based on the use of an auxotrophic mutant host defective in orotidine monophosphate decarboxylase (URA3). The disruption cassette, which consists of a functional yeast URA3 gene flanked by a 0.3 kb gene disruption auxiliary sequence (gda) direct repeat derived from downstream or upstream of the URA3 gene and of homologous arms of the target gene, was constructed and introduced into the yeast genome by integrative transformation. Stable integrants were isolated by selection for Ura+ and identified by PCR and sequencing. The important feature of this construct, which makes it very attractive, is that recombination between the flanking direct gda repeats occurs at a high frequency (10^-8) during mitosis. After excision of the URA3 marker, only one copy of the gda sequence remains at the recombinant locus. Thus, the resulting ura3 strain can be used again to disrupt a second allelic gene in a similar manner. In addition to this effective sequential gene disruption method, a codon-optimized green fluorescent protein-encoding gene (GFP) was functionally expressed in C. tropicalis. Thus, we propose a simple and reliable method to improve C. tropicalis by genetic manipulation.

  1. Effects of Injected CO2 on Geomechanical Properties Due to Mineralogical Changes

    NASA Astrophysics Data System (ADS)

    Nguyen, B. N.; Hou, Z.; Bacon, D. H.; Murray, C. J.; White, J. A.

    2013-12-01

    Long-term injection and storage of CO2 in deep underground reservoirs may significantly modify the geomechanical behavior of rocks since CO2 can react with the constituent phases of reservoir rocks and modify their composition. This can lead to modifications of their geomechanical properties (i.e., elastic moduli, Biot's coefficients, and permeability). Modifications of rock geomechanical properties have important consequences as these directly control stress and strain distributions, affect conditions for fracture initiation and development and/or fault healing. This paper attempts to elucidate the geochemical effects of CO2 on geomechanical properties of typical reservoir rocks by means of numerical analyses using the STOMP-ABAQUS sequentially coupled simulator that includes the capability to handle geomechanics and the reactive transport of CO2 together with a module (EMTA) to compute the homogenized rock poroelastic properties as a function of composition changes. EMTA, a software module developed at PNNL, implements the standard and advanced Eshelby-Mori-Tanaka approaches to compute the thermoelastic properties of composite materials. In this work, EMTA will be implemented in the coupled STOMP-ABAQUS simulator as a user subroutine of ABAQUS and used to compute local elastic stiffness based on rock composition. Under the STOMP-ABAQUS approach, STOMP models are built to simulate aqueous and CO2 multiphase fluid flows, and relevant chemical reactions of pore fluids with minerals in the reservoirs. The ABAQUS models then read STOMP output data for cell center coordinates, gas pressures, aqueous pressures, temperatures, saturations, constituent volume fractions, as well as permeability and porosity that are affected by chemical reactions. These data are imported into ABAQUS meshes using a mapping procedure developed for the exchange of data between STOMP and ABAQUS. Constitutive models implemented in ABAQUS via user subroutines then compute stiffness, stresses, strains, pore pressure, permeability, porosity, and capillary pressure, and return updated permeability, porosity, and capillary pressure to STOMP at selected times. In preliminary work, the enhanced STOMP-ABAQUS sequentially coupled approach is validated and illustrated in an example analysis of a cylindrical rock specimen subjected to axial loading, confining pressure, and CO2 fluid injection. The geomechanical analysis accounting for CO2 reactions with rock constituents is compared to that without chemical reactions to elucidate the geochemical effects of injected CO2 on the response of the reservoir rock to stress.

  2. Multigrid Methods for Fully Implicit Oil Reservoir Simulation

    NASA Technical Reports Server (NTRS)

    Molenaar, J.

    1996-01-01

    In this paper we consider the simultaneous flow of oil and water in reservoir rock. This displacement process is modeled by two basic equations: the material balance or continuity equations and the equation of motion (Darcy's law). For the numerical solution of this system of nonlinear partial differential equations there are two approaches: the fully implicit or simultaneous solution method and the sequential solution method. In the sequential solution method the system of partial differential equations is manipulated to give an elliptic pressure equation and a hyperbolic (or parabolic) saturation equation. In the IMPES approach the pressure equation is first solved, using values for the saturation from the previous time level. Next the saturations are updated by some explicit time stepping method; this implies that the method is only conditionally stable. For the numerical solution of the linear, elliptic pressure equation multigrid methods have become an accepted technique. On the other hand, the fully implicit method is unconditionally stable, but it has the disadvantage that in every time step a large system of nonlinear algebraic equations has to be solved. The most time-consuming part of any fully implicit reservoir simulator is the solution of this large system of equations. Usually this is done by Newton's method. The resulting systems of linear equations are then either solved by a direct method or by some conjugate gradient type method. In this paper we consider the possibility of applying multigrid methods for the iterative solution of the systems of nonlinear equations. There are two ways of using multigrid for this job: either we use a nonlinear multigrid method or we use a linear multigrid method to deal with the linear systems that arise in Newton's method. So far only a few authors have reported on the use of multigrid methods for fully implicit simulations. In the literature, a two-level FAS algorithm has been presented for the black-oil equations, and linear multigrid has been studied for two-phase flow problems with strong heterogeneities and anisotropies. Here we consider both possibilities. Moreover we present a novel way for constructing the coarse grid correction operator in linear multigrid algorithms. This approach has the advantage that it preserves the sparsity pattern of the fine grid matrix and it can be extended to systems of equations in a straightforward manner. We compare the linear and nonlinear multigrid algorithms by means of a numerical experiment.
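
    A toy one-dimensional version of the IMPES split described above makes the structure concrete (illustrative quadratic relative permeabilities and unit coefficients, not the black-oil model): pressure is solved implicitly with mobilities frozen at the previous time level, then saturation is advanced explicitly, which is why the time step is limited by a CFL-type condition.

    ```python
    import numpy as np

    nx, dx, dt = 30, 1.0, 0.2
    s = np.zeros(nx); s[0] = 1.0          # water saturation, injector at the left

    def total_mobility(sw):
        return sw**2 + (1.0 - sw)**2      # toy relative-permeability model

    for _ in range(300):
        lam = total_mobility(s)
        # Implicit pressure solve: d/dx(lam dp/dx) = 0, p(0) = 1, p(L) = 0,
        # with mobilities taken from the previous time level (the "IMP" part).
        A = np.zeros((nx, nx)); b = np.zeros(nx)
        A[0, 0] = A[-1, -1] = 1.0; b[0] = 1.0
        for i in range(1, nx - 1):
            lm = 0.5 * (lam[i - 1] + lam[i]); lp = 0.5 * (lam[i] + lam[i + 1])
            A[i, i - 1], A[i, i], A[i, i + 1] = lm, -(lm + lp), lp
        p = np.linalg.solve(A, b)
        # Explicit saturation update with upwind fluxes (the "ES" part).
        v = -0.5 * (lam[:-1] + lam[1:]) * np.diff(p) / dx  # interface velocities
        fw = s**2 / total_mobility(s)                      # water fractional flow
        flux = v * fw[:-1]                                 # upwind, flow to the right
        s[1:-1] += dt / dx * (flux[:-1] - flux[1:])

    print(np.round(s, 2))                 # water front advancing from the injector
    ```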

  3. Shale Frac Sequential Flowback Analyses and Reuse Implications, March 30, 2011

    EPA Pesticide Factsheets

    Water re-use challenges and solutions have direct and indirect influences on the design of hydraulic fracturing fluid systems and products used in High Volume, High Rate (HVHR) hydraulic fracturing of shale wells (1,2).

  4. DET/MPS - THE GSFC ENERGY BALANCE PROGRAM, DIRECT ENERGY TRANSFER/MULTIMISSION SPACECRAFT MODULAR POWER SYSTEM (DEC VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1994-01-01

    The DET/MPS programs model and simulate the Direct Energy Transfer and Multimission Spacecraft Modular Power System in order to aid both in design and in analysis of orbital energy balance. Typically, the DET power system has the solar array connected directly to the spacecraft bus, and the central building block of MPS is the Standard Power Regulator Unit. DET/MPS allows a minute-by-minute simulation of the power system's performance as it responds to various orbital parameters, focusing its output on solar array output and battery characteristics. While this package is limited in terms of orbital mechanics, it is sufficient to calculate eclipse and solar array data for circular or non-circular orbits. DET/MPS can be adjusted to run one or sequential orbits up to about one week, simulated time. These programs have been used on a variety of Goddard Space Flight Center spacecraft projects. DET/MPS is written in FORTRAN 77 with some VAX-type extensions. Any FORTRAN 77 compiler that includes VAX extensions should be able to compile and run the program with little or no modification. The compiler must at least support free-form (or tab-delineated) source format and 'do', 'do while', and 'end do' control structures. DET/MPS is available for three platforms: GSC-13374, for DEC VAX series computers running VMS, is available in DEC VAX Backup format on a 9-track 1600 BPI tape (standard distribution) or TK50 tape cartridge; GSC-13443, for UNIX-based computers, is available on a .25 inch streaming magnetic tape cartridge in UNIX tar format; and GSC-13444, for Macintosh computers running A/UX with either the NKR FORTRAN or AbSoft MacFORTRAN II compilers, is available on a 3.5 inch 800K Macintosh format diskette. Source code and test data are supplied. The UNIX version of DET requires 90K of main memory for execution. DET/MPS was developed in 1990. A/UX and Macintosh are registered trademarks of Apple Computer, Inc. VMS, DEC VAX and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories.

  5. DET/MPS - THE GSFC ENERGY BALANCE PROGRAM, DIRECT ENERGY TRANSFER/MULTIMISSION SPACECRAFT MODULAR POWER SYSTEM (UNIX VERSION)

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1994-01-01

    The DET/MPS programs model and simulate the Direct Energy Transfer and Multimission Spacecraft Modular Power System in order to aid both in design and in analysis of orbital energy balance. Typically, the DET power system has the solar array connected directly to the spacecraft bus, and the central building block of MPS is the Standard Power Regulator Unit. DET/MPS allows a minute-by-minute simulation of the power system's performance as it responds to various orbital parameters, focusing its output on solar array output and battery characteristics. While this package is limited in terms of orbital mechanics, it is sufficient to calculate eclipse and solar array data for circular or non-circular orbits. DET/MPS can be adjusted to run one or sequential orbits up to about one week, simulated time. These programs have been used on a variety of Goddard Space Flight Center spacecraft projects. DET/MPS is written in FORTRAN 77 with some VAX-type extensions. Any FORTRAN 77 compiler that includes VAX extensions should be able to compile and run the program with little or no modification. The compiler must at least support free-form (or tab-delineated) source format and 'do', 'do while', and 'end do' control structures. DET/MPS is available for three platforms: GSC-13374, for DEC VAX series computers running VMS, is available in DEC VAX Backup format on a 9-track 1600 BPI tape (standard distribution) or TK50 tape cartridge; GSC-13443, for UNIX-based computers, is available on a .25 inch streaming magnetic tape cartridge in UNIX tar format; and GSC-13444, for Macintosh computers running A/UX with either the NKR FORTRAN or AbSoft MacFORTRAN II compilers, is available on a 3.5 inch 800K Macintosh format diskette. Source code and test data are supplied. The UNIX version of DET requires 90K of main memory for execution. DET/MPS was developed in 1990. A/UX and Macintosh are registered trademarks of Apple Computer, Inc. VMS, DEC VAX and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories.

  6. DET/MPS - THE GSFC ENERGY BALANCE PROGRAM, DIRECT ENERGY TRANSFER/MULTIMISSION SPACECRAFT MODULAR POWER SYSTEM (MACINTOSH A/UX VERSION)

    NASA Technical Reports Server (NTRS)

    Jagielski, J. M.

    1994-01-01

    The DET/MPS programs model and simulate the Direct Energy Transfer and Multimission Spacecraft Modular Power System in order to aid both in design and in analysis of orbital energy balance. Typically, the DET power system has the solar array connected directly to the spacecraft bus, and the central building block of MPS is the Standard Power Regulator Unit. DET/MPS allows a minute-by-minute simulation of the power system's performance as it responds to various orbital parameters, focusing its output on solar array output and battery characteristics. While this package is limited in terms of orbital mechanics, it is sufficient to calculate eclipse and solar array data for circular or non-circular orbits. DET/MPS can be adjusted to run one or sequential orbits up to about one week, simulated time. These programs have been used on a variety of Goddard Space Flight Center spacecraft projects. DET/MPS is written in FORTRAN 77 with some VAX-type extensions. Any FORTRAN 77 compiler that includes VAX extensions should be able to compile and run the program with little or no modification. The compiler must at least support free-form (or tab-delineated) source format and 'do', 'do while', and 'end do' control structures. DET/MPS is available for three platforms: GSC-13374, for DEC VAX series computers running VMS, is available in DEC VAX Backup format on a 9-track 1600 BPI tape (standard distribution) or TK50 tape cartridge; GSC-13443, for UNIX-based computers, is available on a .25 inch streaming magnetic tape cartridge in UNIX tar format; and GSC-13444, for Macintosh computers running A/UX with either the NKR FORTRAN or AbSoft MacFORTRAN II compilers, is available on a 3.5 inch 800K Macintosh format diskette. Source code and test data are supplied. The UNIX version of DET requires 90K of main memory for execution. DET/MPS was developed in 1990. A/UX and Macintosh are registered trademarks of Apple Computer, Inc. VMS, DEC VAX and TK50 are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories.

  7. Cost-effectiveness of the sequential application of tyrosine kinase inhibitors for the treatment of chronic myeloid leukemia.

    PubMed

    Rochau, Ursula; Sroczynski, Gaby; Wolf, Dominik; Schmidt, Stefan; Jahn, Beate; Kluibenschaedl, Martina; Conrads-Frank, Annette; Stenehjem, David; Brixner, Diana; Radich, Jerald; Gastl, Günther; Siebert, Uwe

    2015-01-01

    Several tyrosine kinase inhibitors (TKIs) are approved for chronic myeloid leukemia (CML) therapy. We evaluated the long-term cost-effectiveness of seven sequential therapy regimens for CML in Austria. A cost-effectiveness analysis was performed using a state-transition Markov model. As model parameters, we used published trial data, clinical, epidemiological and economic data from the Austrian CML registry and national databases. We performed a cohort simulation over a life-long time-horizon from a societal perspective. Nilotinib without second-line TKI yielded an incremental cost-utility ratio of 121,400 €/quality-adjusted life year (QALY) compared to imatinib without second-line TKI after imatinib failure. Imatinib followed by nilotinib after failure resulted in 131,100 €/QALY compared to nilotinib without second-line TKI. Nilotinib followed by dasatinib yielded 152,400 €/QALY compared to imatinib followed by nilotinib after failure. Remaining strategies were dominated. The sequential application of TKIs is standard-of-care, and thus, our analysis points toward imatinib followed by nilotinib as the most cost-effective strategy.
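
    How such incremental cost-utility ratios and dominance are read off a cohort simulation can be shown with a toy calculation (all numbers below are invented for illustration; they are not the Austrian registry results):

    ```python
    # Lifetime discounted (cost in euro, QALYs) per strategy, invented values.
    strategies = {
        "imatinib, no 2nd-line TKI":  (180_000, 8.0),
        "nilotinib, no 2nd-line TKI": (241_000, 8.5),
        "imatinib -> nilotinib":      (307_000, 9.0),
        "hypothetical strategy D":    (320_000, 8.9),
    }

    ordered = sorted(strategies.items(), key=lambda kv: kv[1][0])  # by cost
    prev_cost, prev_qaly = ordered[0][1]
    print(f"{ordered[0][0]}: reference")
    for name, (cost, qaly) in ordered[1:]:
        if qaly <= prev_qaly:              # costs more, gains nothing: dominated
            print(f"{name}: dominated")
            continue
        icer = (cost - prev_cost) / (qaly - prev_qaly)
        print(f"{name}: ICER = {icer:,.0f} euro/QALY")
        prev_cost, prev_qaly = cost, qaly
    ```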

  8. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and a fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models, and verifying consistency between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  9. Flexible sequential designs for multi-arm clinical trials.

    PubMed

    Magirr, D; Stallard, N; Jaki, T

    2014-08-30

    Adaptive designs that are based on group-sequential approaches have the benefit of being efficient, as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so-called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs', on the other hand, can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive designs' can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well-chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.

  10. Geophysical monitoring of solute transport in dual-domain environments through laboratory experiments, field-scale solute tracer tests, and numerical simulation

    NASA Astrophysics Data System (ADS)

    Swanson, Ryan David

    The advection-dispersion equation (ADE) fails to describe non-Fickian solute transport breakthrough curves (BTCs) in saturated porous media in both laboratory and field experiments, necessitating the use of other models. The dual-domain mass transfer (DDMT) model partitions the total porosity into mobile and less-mobile domains with an exchange of mass between the two domains, and this model can reproduce better fits to BTCs in many systems than ADE-based models. However, direct experimental estimation of DDMT model parameters remains elusive and model parameters are often calculated a posteriori by an optimization procedure. Here, we investigate the use of geophysical tools (direct-current resistivity, nuclear magnetic resonance, and complex conductivity) to estimate these model parameters directly. We use two different samples of the zeolite clinoptilolite, a material shown to demonstrate solute mass transfer due to a significant internal porosity, and provide the first evidence that direct-current electrical methods can track solute movement into and out of a less-mobile pore space in controlled laboratory experiments. We quantify the effects of assuming single-rate DDMT for multirate mass transfer systems. We analyze pore structures using material characterization methods (mercury porosimetry, scanning electron microscopy, and X-ray computed tomography), and compare these observations to geophysical measurements. Nuclear magnetic resonance in conjunction with direct-current resistivity measurements can constrain mobile and less-mobile porosities, but complex conductivity may have little value in relation to mass transfer despite the hypothesis that mass transfer and complex conductivity length scales are related. Finally, we conduct a geoelectrically monitored tracer test at the Macrodispersion Experiment (MADE) site in Columbus, MS. We relate hydraulic and electrical conductivity measurements to generate a 3D hydraulic conductivity field, and compare it to hydraulic conductivity fields estimated through ordinary kriging and sequential Gaussian simulation. Time-lapse electrical measurements are used to verify or dismiss aspects of breakthrough curves for different hydraulic conductivity fields. Our results quantify the potential for geophysical measurements to constrain single-rate DDMT parameters, show site-specific relations between hydraulic and electrical conductivity, and track solute exchange into and out of less-mobile domains.
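
    For reference, the single-rate DDMT model referred to above is commonly written as a mobile-domain advection-dispersion equation coupled to first-order exchange with the less-mobile domain (a standard textbook form; the dissertation's exact notation may differ):

    ```latex
    \theta_m \frac{\partial C_m}{\partial t} + \theta_{im} \frac{\partial C_{im}}{\partial t}
      = \theta_m D \frac{\partial^2 C_m}{\partial x^2} - \theta_m v \frac{\partial C_m}{\partial x},
    \qquad
    \theta_{im} \frac{\partial C_{im}}{\partial t} = \alpha \, (C_m - C_{im})
    ```

    Here \theta_m and \theta_{im} are the mobile and less-mobile porosities, C_m and C_{im} the corresponding concentrations, and \alpha the first-order mass-transfer rate coefficient; these are the parameters the geophysical measurements aim to constrain directly.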

  11. Rotation-Induced Macromolecular Spooling of DNA

    NASA Astrophysics Data System (ADS)

    Shendruk, Tyler N.; Sean, David; Berard, Daniel J.; Wolf, Julian; Dragoman, Justin; Battat, Sophie; Slater, Gary W.; Leslie, Sabrina R.

    2017-07-01

    Genetic information is stored in a linear sequence of base pairs; however, thermal fluctuations and complex DNA conformations such as folds and loops make it challenging to order genomic material for in vitro analysis. In this work, we discover that rotation-induced macromolecular spooling of DNA around a rotating microwire can monotonically order genomic bases, overcoming this challenge. We use single-molecule fluorescence microscopy to directly visualize long DNA strands deforming and elongating in shear flow near a rotating microwire, in agreement with numerical simulations. While untethered DNA is observed to elongate substantially, in agreement with our theory and numerical simulations, strong extension of DNA becomes possible by introducing tethering. For the case of tethered polymers, we show that increasing the rotation rate can deterministically spool a substantial portion of the chain into a fully stretched, single-file conformation. When applied to DNA, the fraction of genetic information sequentially ordered on the microwire surface will increase with the contour length, despite the increased entropy. This ability to handle long strands of DNA is in contrast to modern DNA sample preparation technologies for sequencing and mapping, which are typically restricted to comparatively short strands, resulting in challenges in reconstructing the genome. Thus, in addition to discovering new rotation-induced macromolecular dynamics, this work inspires new approaches to handling genomic-length DNA strands.

  12. Predicting the performance uncertainty of a 1-MW pilot-scale carbon capture system after hierarchical laboratory-scale calibration and validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Zhijie; Lai, Canhai; Marcy, Peter William

    2017-05-01

    A challenging problem in designing pilot-scale carbon capture systems is to predict, with uncertainty, the adsorber performance and capture efficiency under various operating conditions where no direct experimental data exist. Motivated by this challenge, we previously proposed a hierarchical framework in which relevant parameters of physical models were sequentially calibrated from different laboratory-scale carbon capture unit (C2U) experiments. Specifically, three models of increasing complexity were identified based on the fundamental physical and chemical processes of the sorbent-based carbon capture technology. Results from the corresponding laboratory experiments were used to statistically calibrate the physical model parameters while quantifying some of their inherent uncertainty. The parameter distributions obtained from laboratory-scale C2U calibration runs are used in this study to facilitate prediction at a larger scale where no corresponding experimental results are available. In this paper, we first describe the multiphase reactive flow model for a sorbent-based 1-MW carbon capture system and then analyze results from an ensemble of simulations with the upscaled model. The simulation results are used to quantify uncertainty regarding the design's predicted efficiency in carbon capture. In particular, we determine the minimum gas flow rate necessary to achieve 90% capture efficiency with 95% confidence.

  13. Polymeric assay film for direct colorimetric detection

    DOEpatents

    Charych, Deborah; Nagy, Jon; Spevak, Wayne

    2002-01-01

    A lipid bilayer with affinity to an analyte, which directly signals binding by a change in its light absorption spectrum. This novel assay means and method has special applications in the drug development and medical testing fields. Using a spectrometer, the system is easily automated, and a multiple-well embodiment allows inexpensive screening and sequential testing. This invention also has applications in industry for feedstock and effluent monitoring.

  14. Polymeric assay film for direct colorimetric detection

    DOEpatents

    Charych, Deborah; Nagy, Jon; Spevak, Wayne

    1999-01-01

    A lipid bilayer with affinity to an analyte, which directly signals binding by a change in its light absorption spectrum. This novel assay means and method has special applications in the drug development and medical testing fields. Using a spectrometer, the system is easily automated, and a multiple-well embodiment allows inexpensive screening and sequential testing. This invention also has applications in industry for feedstock and effluent monitoring.

  15. Evading the strength–ductility trade-off dilemma in steel through gradient hierarchical nanotwins

    PubMed Central

    Wei, Yujie; Li, Yongqiang; Zhu, Lianchun; Liu, Yao; Lei, Xianqi; Wang, Gang; Wu, Yanxin; Mi, Zhenli; Liu, Jiabin; Wang, Hongtao; Gao, Huajian

    2014-01-01

    The strength–ductility trade-off has been a long-standing dilemma in materials science. This has limited the potential of many structural materials, steels in particular. Here we report a way of enhancing the strength of twinning-induced plasticity steel at no ductility trade-off. After applying torsion to cylindrical twinning-induced plasticity steel samples to generate a gradient nanotwinned structure along the radial direction, we find that the yielding strength of the material can be doubled at no reduction in ductility. It is shown that this evasion of strength–ductility trade-off is due to the formation of a gradient hierarchical nanotwinned structure during pre-torsion and subsequent tensile deformation. A series of finite element simulations based on crystal plasticity are performed to understand why the gradient twin structure can cause strengthening and ductility retention, and how sequential torsion and tension lead to the observed hierarchical nanotwinned structure through activation of different twinning systems. PMID:24686581

  16. Design optimization of the S-frame to improve crashworthiness

    NASA Astrophysics Data System (ADS)

    Liu, Shu-Tian; Tong, Ze-Qi; Tang, Zhi-Liang; Zhang, Zong-Hua

    2014-08-01

    In this paper, the S-frames, the front side rail structures of automobiles, were investigated for crashworthiness. Various cross-sections, including regular polygon, non-convex polygon, and multi-cell sections with inner stiffeners, were investigated in terms of the energy absorption of S-frames. It was determined through extensive numerical simulation that a multi-cell S-frame with double vertical internal stiffeners can absorb more energy than the other configurations. Shape optimization was also carried out to improve the energy absorption of an S-frame with a rectangular section. A central composite design of experiments and the sequential response surface method (SRSM) were adopted to construct the approximate design sub-problem, which was then solved by the feasible direction method. An innovative double S-frame was obtained from the optimal result. The optimum configuration of the S-frame was crushed numerically, and more plastic hinges as well as shear zones were observed during the crush process. The energy absorption efficiency of the optimal configuration was improved compared to the initial configuration.
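
    One iteration of the sequential response surface idea can be sketched as follows (an illustrative quadratic objective stands in for the expensive crash simulation; the basis and design choices are mine, not the paper's): sample a small design, fit a quadratic surface by least squares, and move to its stationary point.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def objective(x):                      # stand-in for the crash simulation
        return (x[..., 0] - 1.2) ** 2 + 2 * (x[..., 1] + 0.4) ** 2

    X = rng.uniform(-2, 2, size=(9, 2))    # small design of experiments
    y = objective(X)
    # Quadratic basis: [1, x1, x2, x1^2, x2^2, x1*x2]
    B = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                         X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
    c0, c1, c2, c11, c22, c12 = np.linalg.lstsq(B, y, rcond=None)[0]
    # Stationary point of the fitted surface: solve grad = 0.
    H = np.array([[2 * c11, c12], [c12, 2 * c22]])
    x_star = np.linalg.solve(H, -np.array([c1, c2]))
    print("surrogate optimum:", np.round(x_star, 3))   # near (1.2, -0.4)
    ```

    In the actual SRSM the design region would then be re-centred and shrunk around the surrogate optimum and the fit repeated until convergence.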

  17. Program Predicts Time Courses of Human/Computer Interactions

    NASA Technical Reports Server (NTRS)

    Vera, Alonso; Howes, Andrew

    2005-01-01

    CPM X is a computer program that predicts the sequences of, and amounts of time taken by, the routine actions of a skilled person performing a task. Unlike programs that simulate the interaction of the person with the task environment, CPM X predicts the time course of events as consequences of encoded constraints on human behavior. The constraints determine which cognitive and environmental processes can occur simultaneously and which have sequential dependencies. The input to CPM X comprises (1) a description of a task and strategy in a hierarchical description language and (2) a description of architectural constraints in the form of rules governing interactions of fundamental cognitive, perceptual, and motor operations. The output of CPM X is a Program Evaluation Review Technique (PERT) chart that presents a schedule of predicted cognitive, motor, and perceptual operators interacting with a task environment. The CPM X program allows direct, a priori prediction of skilled user performance on complex human-machine systems, providing a way to assess critical interfaces before they are deployed in mission contexts.
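
    The scheduling idea behind such a PERT-chart prediction can be illustrated in a few lines of code. The sketch below computes earliest start and finish times with a longest-path pass over a toy operator graph; the operator names, durations, and dependencies are invented for illustration and are not taken from CPM X.

```python
# Critical-path scheduling over a hypothetical operator graph, in the spirit
# of a PERT-chart prediction. All names and durations are illustrative.
from graphlib import TopologicalSorter

# operator -> (duration in ms, set of operators that must finish first)
operators = {
    "perceive-stimulus": (100, set()),
    "cognit-decide":     (50,  {"perceive-stimulus"}),
    "init-move-hand":    (50,  {"cognit-decide"}),
    "move-hand":         (300, {"init-move-hand"}),
    "verify-target":     (100, {"perceive-stimulus"}),
}

finish = {}
for op in TopologicalSorter({k: v[1] for k, v in operators.items()}).static_order():
    duration, deps = operators[op]
    start = max((finish[d] for d in deps), default=0)  # wait for all predecessors
    finish[op] = start + duration
    print(f"{op:18s} start={start:4d} ms  finish={finish[op]:4d} ms")

print("predicted task time:", max(finish.values()), "ms")
```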

  18. Ultrafast Imaging of Surface Plasmons Propagating on a Gold Surface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Yu; Joly, Alan G.; Hu, Dehong

    2015-05-13

    We record time-resolved nonlinear photoemission electron microscopy (tr-PEEM) images of propagating surface plasmons (PSPs) launched from a lithographically patterned rectangular trench on a flat gold surface. Our tr-PEEM scheme involves a pair of identical, spatially separated, and interferometrically locked femtosecond laser pulses. Power-dependent PEEM images provide experimental evidence for a sequential coherent nonlinear photoemission process, in which one laser source creates a PSP polarization state through a linear interaction, and the second subsequently probes the prepared state via two-photon photoemission. The recorded time-resolved movies of a PSP allow us to directly measure various properties of the surface-bound wave packet, including its carrier wavelength (785 nm) and group velocity (0.95c). In addition, tr-PEEM in concert with finite-difference time-domain simulations allows us to set a lower limit of 75 μm for the decay length of the PSP on a 100 nm thick gold film.

  19. A smart sensor architecture based on emergent computation in an array of outer-totalistic cells

    NASA Astrophysics Data System (ADS)

    Dogaru, Radu; Dogaru, Ioana; Glesner, Manfred

    2005-06-01

    A novel smart-sensor architecture is proposed, capable of segmenting and recognizing characters in a monochrome image. It can provide a list of ASCII codes representing the characters recognized in the monochrome visual field, and can operate as an aid for the blind or in industrial applications. A bio-inspired cellular model with simple linear neurons was found to perform best at the nontrivial task of cropping isolated compact objects such as handwritten digits or characters. By attaching a simple outer-totalistic cell to each pixel sensor, emergent computation in the resulting cellular automata lattice provides a straightforward and compact solution to the otherwise computationally intensive problem of character segmentation. A simple and robust recognition algorithm is built into a compact sequential controller accessing the array of cells, so that the integrated device can directly provide a list of codes of the recognized characters. Preliminary simulation tests indicate good performance and robustness to various distortions of the visual field.
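
    As an illustration of the kind of computation involved, the sketch below runs one outer-totalistic update rule on a binary lattice: each cell's next state depends only on its own state and the sum of its four neighbours. The specific rule table is an invented, dilation-like example, not the rule used in the paper.

```python
import numpy as np

def outer_totalistic_step(grid, table):
    """One synchronous update of a binary outer-totalistic CA.
    table[s][n] is the next state for a cell in state s with n active
    4-neighbours (this rule table is an illustrative assumption)."""
    padded = np.pad(grid, 1)
    nbr = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:])
    nxt = np.zeros_like(grid)
    for s in (0, 1):
        for n in range(5):
            nxt[(grid == s) & (nbr == n)] = table[s][n]
    return nxt

# Example rule: a cell turns on when 2+ neighbours are on (dilation-like),
# the flavour of local growth used to crop compact objects.
table = {0: [0, 0, 1, 1, 1], 1: [1, 1, 1, 1, 1]}
img = np.zeros((7, 7), dtype=int); img[3, 3] = 1
for _ in range(3):
    img = outer_totalistic_step(img, table)
print(img)
```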

  20. Multi-Target State Extraction for the SMC-PHD Filter

    PubMed Central

    Si, Weijian; Wang, Liwei; Qu, Zhiyu

    2016-01-01

    The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large number of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are computed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via a clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274
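
    A minimal sketch of the measurement-directed extraction idea: a measurement is treated as effective when the particle mass it explains exceeds a threshold, and each detected target's state is the weighted mean of the particles assigned to it. The Gaussian likelihood, the threshold, and all numbers are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
rng = np.random.default_rng(10)

def extract_states(particles, weights, measurements, likelihood, w_min=0.5):
    """Measurement-directed state extraction (illustrative sketch):
    a measurement is 'effective' if the PHD mass of the particles it best
    explains exceeds w_min; states are weighted means of those particles."""
    L = np.array([likelihood(particles, z) for z in measurements])  # (M, N)
    assign = np.argmax(L, axis=0)            # particle -> best-explaining measurement
    states = []
    for j in range(len(measurements)):
        sel = assign == j
        if weights[sel].sum() > w_min:       # enough PHD mass -> detected target
            states.append(np.average(particles[sel], weights=weights[sel], axis=0))
    return np.array(states)

gauss = lambda p, z: np.exp(-0.5 * np.sum((p - z) ** 2, axis=1) / 0.5 ** 2)
parts = np.vstack([rng.normal([0, 0], 0.3, (500, 2)),
                   rng.normal([5, 5], 0.3, (500, 2))])
w = np.full(1000, 2.0 / 1000)                # total PHD mass of about 2 targets
meas = np.array([[0.1, -0.1], [5.2, 4.9], [9.0, 9.0]])   # last one is clutter
print(extract_states(parts, w, meas, gauss))
```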

  1. EMC3-EIRENE modelling of toroidally-localized divertor gas injection experiments on Alcator C-Mod

    DOE PAGES

    Lore, Jeremy D.; Reinke, M. L.; LaBombard, Brian; ...

    2014-09-30

    Experiments on Alcator C-Mod with toroidally and poloidally localized divertor nitrogen injection have been modeled using the three-dimensional edge transport code EMC3-EIRENE to elucidate the mechanisms driving measured toroidal asymmetries. In these experiments, five toroidally distributed gas injectors in the private flux region were sequentially activated in separate discharges, resulting in clear evidence of toroidal asymmetries in radiated power and nitrogen line emission as well as a ~50% toroidal modulation in electron pressure at the divertor target. The pressure modulation is qualitatively reproduced by the modelling, with the simulation yielding a toroidal asymmetry in the heat flow to the outer strike point. Finally, the toroidal variation in impurity line emission is qualitatively matched in the scrape-off layer above the strike point; however, kinetic corrections and cross-field drifts are likely required to quantitatively reproduce impurity behavior in the private flux region and the electron temperatures and densities directly in front of the target.

  2. Speed of sound estimation for dual-stage virtual source ultrasound beamforming using point scatterers

    NASA Astrophysics Data System (ADS)

    Ma, Manyou; Rohling, Robert; Lampe, Lutz

    2017-03-01

    Synthetic transmit aperture beamforming is an increasingly used method to improve resolution in biomedical ultrasound imaging. Synthetic aperture sequential beamforming (SASB) is an implementation of this concept which features relatively low computational complexity. Moreover, it can be implemented in a dual-stage architecture, where the first stage only applies simple single receive-focused delay-and-sum (srDAS) operations, while the second, more complex stage is performed either locally or remotely using more powerful processing. However, like traditional DAS-based beamforming methods, SASB is susceptible to inaccurate speed-of-sound (SOS) information. In this paper, we show how SOS estimation can be implemented using the srDAS-beamformed image and integrated into the dual-stage implementation of SASB, in an effort to obtain high-resolution images with relatively low-cost hardware. Our approach builds on an existing per-channel radio-frequency data-based direct estimation method, and applies an iterative refinement of the estimate. We use this estimate for SOS compensation, without the need to repeat the first-stage beamforming. The proposed and previous methods are tested in both simulation and experimental studies. The accuracy of our SOS estimation method is on average 0.38% in simulation studies and 0.55% in phantom experiments, when the underlying SOS in the media is within the range 1450-1620 m/s. Using the estimated SOS, the lateral beamforming resolution of SASB is improved on average by 52.6% in simulation studies and 50.0% in phantom experiments.

  3. Error in telemetry studies: Effects of animal movement on triangulation

    USGS Publications Warehouse

    Schmutz, Joel A.; White, Gary C.

    1990-01-01

    We used Monte Carlo simulations to investigate the effects of animal movement on error of estimated animal locations derived from radio-telemetry triangulation of sequentially obtained bearings. Simulated movements of 0-534 m resulted in up to 10-fold increases in average location error but <10% decreases in location precision when observer-to-animal distances were <1,000 m. Location error and precision were minimally affected by censorship of poor locations with Chi-square goodness-of-fit tests. Location error caused by animal movement can only be eliminated by taking simultaneous bearings.
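
    The effect is easy to reproduce with a small Monte Carlo experiment of the kind described: two fixed observers take sequential bearings while the animal moves between readings, and the two noisy bearing lines are intersected. The observer geometry, bearing-error SD, and movement distances below are illustrative assumptions.

```python
import numpy as np
rng = np.random.default_rng(0)

def triangulate(p1, a1, p2, a2):
    """Intersect two bearing rays (observer position, azimuth in radians,
    measured clockwise from north)."""
    u1 = np.array([np.sin(a1), np.cos(a1)])
    u2 = np.array([np.sin(a2), np.cos(a2)])
    A = np.column_stack([u1, -u2])           # p1 + t1*u1 = p2 + t2*u2
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * u1

obs1, obs2 = np.array([0.0, 0.0]), np.array([800.0, 0.0])
true0 = np.array([400.0, 600.0])             # animal position at first bearing
sd = np.radians(3.0)                         # assumed bearing error SD
for move in (0.0, 100.0, 500.0):             # metres moved between bearings
    errs = []
    for _ in range(5000):
        step = rng.uniform(0, 2 * np.pi)
        true1 = true0 + move * np.array([np.cos(step), np.sin(step)])
        a1 = np.arctan2(*(true0 - obs1)) + rng.normal(0, sd)
        a2 = np.arctan2(*(true1 - obs2)) + rng.normal(0, sd)
        errs.append(np.linalg.norm(triangulate(obs1, a1, obs2, a2) - true0))
    print(f"movement {move:5.0f} m -> mean location error {np.mean(errs):6.1f} m")
```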

  4. Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity

    NASA Astrophysics Data System (ADS)

    Miah, Md Mamun

    This research describes the importance of hydro-geomechanical coupling in the geologic subsurface environment for fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during well construction in the oil and gas industries. A sequential computational code is developed to capture the multiphysics interaction behavior by linking the flow simulation code TOUGH2 and the geomechanics modeling code PyLith. The numerical formulation of each code is discussed to demonstrate its modeling capabilities. The computational framework involves sequential coupling and the solution of two sub-problems: fluid flow through fractured and porous media, and reservoir geomechanics. For each time step of the flow calculation, the pressure field is passed to the geomechanics code to compute the effective stress field and fault slip. A simplified permeability model is implemented in the code that accounts for the permeability of porous and saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem. The coupled poroelastic modeling capability is validated by benchmarking against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault from the combined effect of far-field tectonic loading and fluid injection, using an appropriate fault constitutive friction model. Results from the quasi-static induced earthquake simulations show a delayed response in earthquake nucleation, attributed to the increased total stress in the domain and to not accounting for pressure on the fault. However, this issue is resolved in the final chapter, which simulates the dynamic rupture of a single-event earthquake. Simulation results show that fluid pressure has a positive effect on slip nucleation and subsequent crack propagation. This is confirmed by a sensitivity analysis showing that an increase in injection-well distance results in delayed slip nucleation and rupture propagation on the fault.
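
    The sequential coupling pattern itself is compact: each time step solves the flow sub-problem, then hands the pressure field to the mechanics sub-problem for an effective-stress update. The sketch below uses placeholder 1-D solvers in place of TOUGH2 and PyLith; all function names and numbers are assumptions.

```python
import numpy as np

def solve_flow(p, dt, src, diff=0.1):
    """Placeholder flow sub-problem: explicit 1-D pressure diffusion plus a
    source term (stands in for a TOUGH2 call; not the real simulator)."""
    lap = np.convolve(p, [1.0, -2.0, 1.0], mode="same")
    return p + dt * (diff * lap + src)

def solve_mechanics(p, total_stress, biot=0.8):
    """Placeholder geomechanics sub-problem: effective stress from pore
    pressure via a Biot coefficient (stands in for a PyLith call)."""
    return total_stress - biot * p

n, dt = 50, 0.1
pressure = np.zeros(n)
total_stress = np.full(n, 10.0)              # MPa, far-field loading (invented)
src = np.zeros(n); src[n // 2] = 1.0         # injection at the domain centre

for step in range(200):                      # sequential coupling loop
    pressure = solve_flow(pressure, dt, src)              # 1) flow
    eff_stress = solve_mechanics(pressure, total_stress)  # 2) mechanics

print(f"peak pressure {pressure.max():.2f} MPa, "
      f"min effective stress {eff_stress.min():.2f} MPa")
```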

  5. Isobaric yield ratio difference between the 140A MeV 58Ni + 9Be and 64Ni + 9Be reactions studied by the antisymmetric molecular dynamics model

    NASA Astrophysics Data System (ADS)

    Qiao, C. Y.; Wei, H. L.; Ma, C. W.; Zhang, Y. L.; Wang, S. S.

    2015-07-01

    Background: The isobaric yield ratio difference (IBD) method is found to be sensitive to the density difference in neutron-rich nucleus induced reactions around the Fermi energy. Purpose: An investigation is performed to study the IBD results in a transport model. Methods: The antisymmetric molecular dynamics (AMD) model plus the sequential decay model GEMINI are adopted to simulate the 140A MeV 58,64Ni + 9Be reactions. A relatively small coalescence radius Rc = 2.5 fm is used for the phase space at t = 500 fm/c to form the hot fragments. Two limitations on the impact parameter (b1 = 0-2 fm and b2 = 0-9 fm) are used to study the effect of central collisions on the IBD. Results: The isobaric yield ratios (IYRs) for the large-A fragments are found to be suppressed in the symmetric reaction. The IBD results for fragments with neutron excess I = 0 and 1 are obtained. A small difference is found between the IBDs with the b1 and b2 limitations in the AMD-simulated reactions. The IBDs with b1 and b2 are quite similar in the AMD + GEMINI simulated reactions. Conclusions: The IBDs for the I = 0 and 1 chains are mainly determined by the central collisions, which reflect the nuclear density in the core region of the reaction system. The increasing part of the IBD distribution is found to be due to the difference between the densities in the peripheral collisions of the reactions. The sequential decay process influences the IBD results. The AMD + GEMINI simulation reproduces the experimental IBDs better than the AMD simulation alone.

  6. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    NASA Astrophysics Data System (ADS)

    Khaki, M.; Hoteit, I.; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A. I. J. M.; Schumacher, M.; Pattiaratchi, C.

    2017-09-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation, and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing the observations before assimilation (as is done in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA over the whole of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which reduce the model's groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF with Systematic Resampling decreases the model estimation error by 23%.
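
    For readers unfamiliar with the stochastic variant the study compares against, a single EnKF analysis step for one aggregate TWS observation looks roughly like the sketch below. The three-store state, the observation operator, and all numbers are invented and do not correspond to W3RA.

```python
import numpy as np
rng = np.random.default_rng(1)

def enkf_update(ensemble, obs, obs_err, H):
    """Stochastic EnKF analysis step: each member assimilates a perturbed
    observation. ensemble is (n_members, n_state); H maps state to TWS."""
    n = ensemble.shape[0]
    Hx = ensemble @ H.T                          # predicted observations (n, 1)
    X = ensemble - ensemble.mean(0)
    HX = Hx - Hx.mean(0)
    P_hh = (HX.T @ HX) / (n - 1) + obs_err**2    # innovation covariance
    P_xh = (X.T @ HX) / (n - 1)                  # state-observation covariance
    K = P_xh / P_hh                              # Kalman gain (scalar observation)
    perturbed = obs + rng.normal(0, obs_err, n)  # one perturbed obs per member
    return ensemble + np.outer(perturbed - Hx.ravel(), K.ravel())

# Illustrative 3-store state (soil, ground, surface water); GRACE-like TWS
# observes their sum. All numbers are assumptions.
H = np.array([[1.0, 1.0, 1.0]])
ens = rng.normal([100.0, 50.0, 20.0], [10.0, 8.0, 5.0], size=(64, 3))
print("prior TWS mean:", round(float((ens @ H.T).mean()), 1))
ens = enkf_update(ens, obs=185.0, obs_err=5.0, H=H)
print("posterior TWS mean:", round(float((ens @ H.T).mean()), 1))
```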

  7. Constant speed control of four-stroke micro internal combustion swing engine

    NASA Astrophysics Data System (ADS)

    Gao, Dedong; Lei, Yong; Zhu, Honghai; Ni, Jun

    2015-09-01

    Increasing demands regarding safety, emissions and fuel consumption require more accurate control models of the micro internal combustion swing engine (MICSE). The objective of this paper is to investigate constant-speed control models of the four-stroke MICSE. The operating principle of the four-stroke MICSE is presented based on a description of the MICSE prototype. A two-level Petri-net-based hybrid model is proposed to model the four-stroke MICSE engine cycle. The Petri net subsystem at the upper level controls and synchronizes the four Petri net subsystems at the lower level. The continuous sub-models, including the breathing dynamics of the intake manifold, the thermodynamics of the chamber and the dynamics of torque generation, are investigated and integrated with the discrete model in MATLAB Simulink. Through comparison of experimental data and the simulated DC voltage output, it is demonstrated that the hybrid model is valid for the four-stroke MICSE system. A nonlinear model is obtained from the cycle-averaged data via regression, and it is linearized around a given nominal equilibrium point for the controller design. The feedback controller for spark timing and valve duration timing is designed with a sequential loop-closing approach. The simulation of the sequential loop-closure control design applied to the hybrid model is implemented in MATLAB. The simulation results show that the system is able to reach its desired operating point within 0.2 s, and the designed controller maintains good MICSE engine performance at constant speed. This paper presents constant-speed control models of the four-stroke MICSE and carries out simulation tests; the models and results can be used for further study on the precise control of the four-stroke MICSE.

  8. Accelerated drug release and clearance of PEGylated epirubicin liposomes following repeated injections: a new challenge for sequential low-dose chemotherapy

    PubMed Central

    Yang, Qiang; Ma, Yanling; Zhao, Yongxue; She, Zhennan; Wang, Long; Li, Jie; Wang, Chunling; Deng, Yihui

    2013-01-01

    Background: Sequential low-dose chemotherapy has received great attention for its unique advantages in attenuating the multidrug resistance of tumor cells. Nevertheless, it runs the risk of producing new problems associated with the accelerated blood clearance phenomenon, especially with multiple injections of PEGylated liposomes. Methods: Liposomes were labeled with the fluorescent phospholipid 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-(7-nitro-2-1,3-benzoxadiazol-4-yl) and loaded with epirubicin (EPI). The pharmacokinetic profile and biodistribution of the drug and the liposome carrier following multiple injections were determined. Meanwhile, the antitumor effect of sequential low-dose chemotherapy was tested. To clarify this unexpected phenomenon, experiments on the production of polyethylene glycol (PEG)-specific immunoglobulin M (IgM), drug release, and residual complement activity were conducted in serum. Results: The first or sequential injections of PEGylated liposomes within a certain dose range induced the rapid clearance of subsequently injected PEGylated liposomal EPI. Of note, the clearance of EPI was two- to three-fold faster than that of the liposome itself, and a large amount of EPI was released from the liposomes in the first 30 minutes in a manner directly dependent on complement activation. The therapeutic efficacy of liposomal EPI (0.75 mg EPI/kg body weight) following 10 days of sequential injections in S180 tumor-bearing mice was almost completely abolished between the sixth and tenth days of the sequential injections, even though the subsequently injected doses were doubled. The level of PEG-specific IgM in the blood increased rapidly, with a larger amount of complement being activated, while the concentration of EPI in blood and tumor tissue was significantly reduced. Conclusion: Our investigation implies that the accelerated blood clearance phenomenon, and the accompanying rapid leakage and clearance of drug following sequential low-dose injections, may reverse the unique pharmacokinetic-toxicity profile of liposomes and deserves attention. A more reasonable treatment regimen should therefore be selected to lessen or even eliminate this phenomenon. PMID:23576868

  9. Simulated impact of climate change on hydrology of multiple watersheds using traditional and recommended snowmelt runoff model methodology

    USDA-ARS?s Scientific Manuscript database

    For more than three decades, researchers have utilized the Snowmelt Runoff Model (SRM) to test the impacts of climate change on streamflow of snow-fed systems. In this study, the hydrological effects of climate change are modeled over three sequential years using SRM with both typical and recommende...

  10. Biodegradation and detoxification of textile azo dyes by bacterial consortium under sequential microaerophilic/aerobic processes

    PubMed Central

    Lade, Harshad; Kadam, Avinash; Paul, Diby; Govindwar, Sanjay

    2015-01-01

    Release of textile azo dyes to the environment is a health concern, and the use of microorganisms has proved to be the best option for remediation. Thus, in the present study, a bacterial consortium consisting of Providencia rettgeri strain HSL1 and Pseudomonas sp. SUK1 has been investigated for the degradation and detoxification of structurally different azo dyes. The consortium showed 98-99 % decolorization of all the selected azo dyes, viz. Reactive Black 5 (RB 5), Reactive Orange 16 (RO 16), Disperse Red 78 (DR 78) and Direct Red 81 (DR 81), within 12 to 30 h at 100 mg L-1 concentration and 30 ± 0.2 °C under microaerophilic, sequential aerobic/microaerophilic and sequential microaerophilic/aerobic processes. However, decolorization under microaerophilic conditions (RB 5, 0.26 mM; RO 16, 0.18 mM; DR 78, 0.20 mM; DR 81, 0.23 mM) and sequential aerobic/microaerophilic processes (RB 5, 0.08 mM; RO 16, 0.06 mM; DR 78, 0.07 mM; DR 81, 0.09 mM) resulted in the formation of aromatic amines. In contrast, the sequential microaerophilic/aerobic process did not lead to amine formation. Additionally, a 62-72 % reduction in total organic carbon content was observed in all the dye-decolorized broths under the sequential microaerophilic/aerobic process, suggesting the efficacy of the method in mineralizing the dyes. Notable induction in the levels of azoreductase and NADH-DCIP reductase (97 and 229 % for RB 5, 55 and 160 % for RO 16, 63 and 196 % for DR 78, 108 and 258 % for DR 81) under the sequential microaerophilic/aerobic process suggested their critical involvement in the initial breakdown of azo bonds, whereas a slight increase in the levels of laccase and veratryl alcohol oxidase confirmed subsequent oxidation of the formed amines. The acute toxicity assay with Daphnia magna also revealed the nontoxic nature of the dye-degradation metabolites formed under the sequential microaerophilic/aerobic process. As biodegradation under the sequential microaerophilic/aerobic process completely detoxified all the selected textile azo dyes, further efforts should be made to implement such methods in large-scale dye wastewater treatment technologies. PMID:26417357

  11. Pathways of proton transfer in the light-driven pump bacteriorhodopsin

    NASA Technical Reports Server (NTRS)

    Lanyi, J. K.

    1993-01-01

    The mechanism of proton transport in the light-driven pump bacteriorhodopsin is beginning to be understood. Light causes the all-trans to 13-cis isomerization of the retinal chromophore. This sets off a sequential and directed series of transient decreases in the pKa's of a) the retinal Schiff base, b) an extracellular proton release complex which includes asp-85, and c) a cytoplasmic proton uptake complex which includes asp-96. The timing of these pKa changes during the photoreaction cycle causes sequential proton transfers which result in the net movement of a proton across the protein, from the cytoplasmic to the extracellular surface.

  12. Rapid code acquisition algorithms employing PN matched filters

    NASA Technical Reports Server (NTRS)

    Su, Yu T.

    1988-01-01

    The performance of four algorithms using pseudonoise matched filters (PNMFs) for direct-sequence spread-spectrum systems is analyzed: parallel search with a fixed-dwell detector (PL-FDD), parallel search with a sequential detector (PL-SD), parallel-serial search with a fixed-dwell detector (PS-FDD), and parallel-serial search with a sequential detector (PS-SD). The operating characteristic for each detector and the mean acquisition time for each algorithm are derived. All the algorithms are studied in conjunction with the noncoherent integration technique, which enables the system to operate in the presence of data modulation. Several previous proposals using PNMFs are seen to be special cases of the present algorithms.
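
    The core operation shared by all four algorithms, correlating the received chips against the local PN replica and testing each code phase, can be sketched as below; the sequence length, noise level, and simple peak pick are illustrative simplifications of the detectors analyzed in the paper.

```python
import numpy as np
rng = np.random.default_rng(9)

# Stand-in PN sequence and a noisy, code-shifted received signal.
pn = rng.choice([-1.0, 1.0], size=127)
true_offset = 43
rx = np.roll(pn, true_offset) + rng.normal(0, 1.0, pn.size)

# Matched-filter output at every candidate lag via circular correlation;
# a real receiver would add noncoherent integration over several symbols
# before the threshold test.
mf = np.array([np.dot(rx, np.roll(pn, lag)) for lag in range(pn.size)])
detected = int(np.argmax(np.abs(mf)))
print("detected code phase:", detected, "(true:", true_offset, ")")
```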

  13. Comparative Implementation of High Performance Computing for Power System Dynamic Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng

    Dynamic simulation for transient stability assessment is one of the most important, but also most intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming with a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising for accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computational accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.

  14. Durable graft-versus-leukaemia effects without donor lymphocyte infusions - results of a phase II study of sequential T-replete allogeneic transplantation for high-risk acute myeloid leukaemia and myelodysplasia.

    PubMed

    Davies, Jeff K; Hassan, Sandra; Sarker, Shah-Jalal; Besley, Caroline; Oakervee, Heather; Smith, Matthew; Taussig, David; Gribben, John G; Cavenagh, Jamie D

    2018-02-01

    Allogeneic haematopoietic stem-cell transplantation remains the only curative treatment for relapsed/refractory acute myeloid leukaemia (AML) and high-risk myelodysplasia but has previously been limited to patients who achieve remission before transplant. New sequential approaches employing T-cell depleted transplantation directly after chemotherapy show promise but are burdened by viral infection and require donor lymphocyte infusions (DLI) to augment donor chimerism and graft-versus-leukaemia effects. T-replete transplantation in sequential approaches could reduce both viral infection and DLI usage. We therefore performed a single-arm prospective Phase II clinical trial of sequential chemotherapy and T-replete transplantation using reduced-intensity conditioning without planned DLI. The primary endpoint was overall survival. Forty-seven patients with relapsed/refractory AML or high-risk myelodysplasia were enrolled; 43 proceeded to transplantation. High levels of donor chimerism were achieved spontaneously with no DLI. Overall survival of transplanted patients was 45% and 33% at 1 and 3 years. Only one patient developed cytomegalovirus disease. Cumulative incidences of treatment-related mortality and relapse were 35% and 20% at 1 year. Patients with relapsed AML and myelodysplasia had the most favourable outcomes. Late-onset graft-versus-host disease protected against relapse. In conclusion, a T-replete sequential transplantation using reduced-intensity conditioning is feasible for relapsed/refractory AML and myelodysplasia and can deliver graft-versus-leukaemia effects without DLI. © 2017 John Wiley & Sons Ltd.

  15. Sensitivity Analysis in Sequential Decision Models.

    PubMed

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems, which are commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of model results against uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems, because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy, considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base-case policy. For cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
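
    The multivariate idea can be sketched in a few lines: draw joint parameter samples, re-solve the MDP for each draw, and report the fraction of draws in which the base-case policy remains optimal (one point of a policy acceptability curve). The tiny two-state treat/wait MDP below is entirely invented and only illustrates the mechanics.

```python
import numpy as np
rng = np.random.default_rng(2)

def optimal_first_action(p_healthy_treat, reward_treat, horizon=20, gamma=0.97):
    """Toy 2-state MDP (0 = healthy, 1 = sick); actions: treat / wait.
    Returns the optimal action in the healthy state via value iteration.
    Structure and all numbers are purely illustrative."""
    # p_next_healthy[action][state]: probability of being healthy next step
    p_next_healthy = {"treat": [p_healthy_treat, 0.5], "wait": [0.85, 0.2]}
    reward = {"treat": reward_treat, "wait": 1.0}
    V = np.zeros(2)
    for _ in range(horizon):
        Q = {a: np.array([reward[a] + gamma * (p * V[0] + (1 - p) * V[1])
                          for p in p_next_healthy[a]]) for a in p_next_healthy}
        V = np.maximum(Q["treat"], Q["wait"])
    return "treat" if Q["treat"][0] >= Q["wait"][0] else "wait"

# Probabilistic multivariate SA: joint parameter draws, count how often the
# base-case policy ("treat") stays optimal.
hits = sum(optimal_first_action(rng.beta(90, 10), rng.normal(0.9, 0.1)) == "treat"
           for _ in range(1000))
print(f"confidence in base-case policy: {hits / 1000:.2f}")
```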

  16. Bursts and Heavy Tails in Temporal and Sequential Dynamics of Foraging Decisions

    PubMed Central

    Jung, Kanghoon; Jang, Hyeran; Kralik, Jerald D.; Jeong, Jaeseung

    2014-01-01

    A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices. PMID:25122498
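
    The preferential-attachment component is easy to simulate: the probability of choosing an option grows with how often it has been chosen before, which quickly produces the highly biased choice distribution the authors describe. All numbers below are illustrative assumptions.

```python
import numpy as np
rng = np.random.default_rng(3)

n_foods, n_choices, novelty = 4, 2000, 1.0
counts = np.ones(n_foods)                    # start from a uniform choice history
for _ in range(n_choices):
    p = (counts + novelty) / (counts + novelty).sum()
    counts[rng.choice(n_foods, p=p)] += 1    # chosen items become more likely

freq = np.sort(counts / counts.sum())[::-1]
print("emergent choice distribution (most to least preferred):", np.round(freq, 3))
```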

  17. Sequential transformation of the structural and thermodynamic parameters of the complex particles, combining covalent conjugate (sodium caseinate + maltodextrin) with polyunsaturated lipids stabilized by a plant antioxidant, in the simulated gastro-intestinal conditions in vitro.

    PubMed

    Antipova, Anna S; Zelikina, Darya V; Shumilina, Elena A; Semenova, Maria G

    2016-10-01

    The present work focuses on the structural transformation of complexes formed between a covalent conjugate (sodium caseinate + maltodextrin) and an equimass mixture of polyunsaturated lipids (PULs) (soy phosphatidylcholine + triglycerides of flaxseed oil) stabilized by a plant antioxidant (an essential oil of clove buds), under simulated gastrointestinal conditions. The conjugate was used here as a food-grade delivery vehicle for the PULs. The release of the PULs at each stage of the simulated digestion was estimated. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Temporal Characteristics of Radiologists' and Novices' Lesion Detection in Viewing Medical Images Presented Rapidly and Sequentially.

    PubMed

    Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko

    2016-01-01

    Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about it. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Target detection is therefore aided when observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This applies to stack viewing tasks, because lesions often show up as transient signals in medical images that are presented sequentially, simulating a dynamic, smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when it appears at the beginning of a sequential presentation, where the global apparent-motion onset signal (i.e., the signal initiating the apparent motion of the sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performance of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal location of a lesion in a CT image sequence, i.e., when the lesion appears in the sequence, does not affect the performance of radiologists, whereas it does affect the performance of novices. Novices have greater difficulty detecting a lesion appearing early rather than late in the image sequence. We suggest that radiologists have mechanisms, which novices lack, for detecting lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as in stack viewing tasks.

  19. Temporal Characteristics of Radiologists' and Novices' Lesion Detection in Viewing Medical Images Presented Rapidly and Sequentially

    PubMed Central

    Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko

    2016-01-01

    Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about it. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Target detection is therefore aided when observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This applies to stack viewing tasks, because lesions often show up as transient signals in medical images that are presented sequentially, simulating a dynamic, smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when it appears at the beginning of a sequential presentation, where the global apparent-motion onset signal (i.e., the signal initiating the apparent motion of the sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performance of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal location of a lesion in a CT image sequence, i.e., when the lesion appears in the sequence, does not affect the performance of radiologists, whereas it does affect the performance of novices. Novices have greater difficulty detecting a lesion appearing early rather than late in the image sequence. We suggest that radiologists have mechanisms, which novices lack, for detecting lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as in stack viewing tasks. PMID:27774080

  20. A strategy for comprehensive identification of sequential constituents using ultra-high-performance liquid chromatography coupled with linear ion trap-Orbitrap mass spectrometer, application study on chlorogenic acids in Flos Lonicerae Japonicae.

    PubMed

    Zhang, Jia-yu; Wang, Zi-jian; Li, Yun; Liu, Ying; Cai, Wei; Li, Chen; Lu, Jian-qiu; Qiao, Yan-jiang

    2016-01-15

    Analytical methodologies for the evaluation of multi-component systems in traditional Chinese medicines (TCMs) have been inadequate or unacceptable, and as a result the unclear composition of these multi-component systems hinders the interpretation of their bioactivities. In this paper, an ultra-high-performance liquid chromatography coupled with linear ion trap-Orbitrap (UPLC-LTQ-Orbitrap)-based strategy focused on the comprehensive identification of TCM sequential constituents was developed. The strategy was characterized by molecular design, multiple ion monitoring (MIM), targeted database hits, mass spectral trees similarity filter (MTSF) searching, and isomerism discrimination. It was successfully applied to the HRMS data acquisition and processing of chlorogenic acids (CGAs) in Flos Lonicerae Japonicae (FLJ), and a total of 115 chromatographic peaks attributed to 18 categories were characterized, allowing a comprehensive characterization of CGAs in FLJ for the first time. This demonstrated that MIM based on molecular design can improve the efficiency of triggering MS/MS fragmentation reactions. Targeted database hits and MTSF searching greatly facilitated the processing of extremely large data sets. Besides, the introduction of diagnostic product ion (DPI) discrimination, ClogP analysis, and molecular simulation raised the efficiency and accuracy of characterizing sequential constituents, especially positional and geometric isomers. In conclusion, the results expand our understanding of CGAs in FLJ, and the strategy can serve as an exemplar for future research on the comprehensive identification of sequential constituents in TCMs. It may also propose a novel approach for analyzing sequential constituents, and is promising for the quality control and evaluation of TCMs. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Sequentially reweighted TV minimization for CT metal artifact reduction.

    PubMed

    Zhang, Xiaomeng; Xing, Lei

    2013-07-01

    Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems in which the weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. Sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
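
    The reweighting loop at the heart of such a method can be sketched on a 1-D signal: solve a weighted-TV problem, then recompute the weights from the current solution so that small gradients are penalized more heavily on the next pass. The smoothed gradient-descent inner solver below is an illustrative stand-in for the authors' projection-onto-convex-sets/steepest-descent solver, and all tuning values are assumptions.

```python
import numpy as np

def weighted_tv_denoise(y, w, lam=0.5, step=0.02, iters=800, eps=1e-2):
    """Inner solve: gradient descent on
    0.5*||x - y||^2 + lam * sum_j w_j * sqrt((Dx)_j^2 + eps),
    a smoothed weighted-TV surrogate."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = lam * w * d / np.sqrt(d * d + eps)
        grad = x - y
        grad[1:] += g                    # d/dx_{j+1} of the j-th gradient term
        grad[:-1] -= g                   # d/dx_j of the j-th gradient term
        x = x - step * grad
    return x

rng = np.random.default_rng(4)
truth = np.repeat([0.0, 1.0, 0.3], 30)   # piecewise-constant test signal
y = truth + rng.normal(0, 0.1, truth.size)
w = np.ones(truth.size - 1)              # first pass: plain (unweighted) TV
for _ in range(3):                       # sequential reweighting passes
    x = weighted_tv_denoise(y, w)
    w = 1.0 / (np.abs(np.diff(x)) + 0.1) # small gradients get larger weights
print("RMS error vs truth:", round(float(np.sqrt(np.mean((x - truth) ** 2))), 4))
```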

  2. Particle connectedness and cluster formation in sequential depositions of particles: integral-equation theory.

    PubMed

    Danwanichakul, Panu; Glandt, Eduardo D

    2004-11-15

    We applied the integral-equation theory to the connectedness problem. The method originally applied to the study of continuum percolation in various equilibrium systems was modified for our sequential quenching model, a particular limit of an irreversible adsorption. The development of the theory based on the (quenched-annealed) binary-mixture approximation includes the Ornstein-Zernike equation, the Percus-Yevick closure, and an additional term involving the three-body connectedness function. This function is simplified by introducing a Kirkwood-like superposition approximation. We studied the three-dimensional (3D) system of randomly placed spheres and 2D systems of square-well particles, both with a narrow and with a wide well. The results from our integral-equation theory are in good accordance with simulation results within a certain range of densities.

  3. Particle connectedness and cluster formation in sequential depositions of particles: Integral-equation theory

    NASA Astrophysics Data System (ADS)

    Danwanichakul, Panu; Glandt, Eduardo D.

    2004-11-01

    We applied the integral-equation theory to the connectedness problem. The method originally applied to the study of continuum percolation in various equilibrium systems was modified for our sequential quenching model, a particular limit of an irreversible adsorption. The development of the theory based on the (quenched-annealed) binary-mixture approximation includes the Ornstein-Zernike equation, the Percus-Yevick closure, and an additional term involving the three-body connectedness function. This function is simplified by introducing a Kirkwood-like superposition approximation. We studied the three-dimensional (3D) system of randomly placed spheres and 2D systems of square-well particles, both with a narrow and with a wide well. The results from our integral-equation theory are in good accordance with simulation results within a certain range of densities.

  4. SUBCOOLING DETECTOR

    DOEpatents

    McCann, J.A.

    1963-12-17

    A system for detecting and measuring directly the subcooling margin in a liquid bulk coolant is described. A thermocouple sensor is electrically heated, and a small amount of nearly stagnant bulk coolant is heated to the boiling point by this heated thermocouple. The sequential measurement of the original ambient temperature, zeroing out this ambient temperature, and then measuring the boiling temperature of the coolant permits direct determination of the subcooling margin of the ambient liquid. (AEC)

  5. Reporting and Reacting: Concurrent Responses to Reported Speech.

    ERIC Educational Resources Information Center

    Holt, Elizabeth

    2000-01-01

    Uses conversation analysis to investigate reported speech in talk-in-interaction. Beginning with an examination of direct and indirect reported speech, the article highlights some of the design features of the former, and the sequential environments in which it occurs. (Author/VWL)

  6. Synthesizing a novel genetic sequential logic circuit: a push-on push-off switch

    PubMed Central

    Lou, Chunbo; Liu, Xili; Ni, Ming; Huang, Yiqi; Huang, Qiushi; Huang, Longwen; Jiang, Lingli; Lu, Dan; Wang, Mingcong; Liu, Chang; Chen, Daizhuo; Chen, Chongyi; Chen, Xiaoyue; Yang, Le; Ma, Haisu; Chen, Jianguo; Ouyang, Qi

    2010-01-01

    Design and synthesis of basic functional circuits are the fundamental tasks of synthetic biologists. Before it is possible to engineer higher-order genetic networks that can perform complex functions, a toolkit of basic devices must be developed. Among those devices, sequential logic circuits are expected to be the foundation of the genetic information-processing systems. In this study, we report the design and construction of a genetic sequential logic circuit in Escherichia coli. It can generate different outputs in response to the same input signal on the basis of its internal state, and 'memorize' the output. The circuit is composed of two parts: (1) a bistable switch memory module and (2) a double-repressed promoter NOR gate module. The two modules were individually rationally designed, and they were coupled together by fine-tuning the interconnecting parts through directed evolution. After fine-tuning, the circuit could be repeatedly, alternatively triggered by the same input signal; it functions as a push-on push-off switch. PMID:20212522

  7. Synthesizing a novel genetic sequential logic circuit: a push-on push-off switch.

    PubMed

    Lou, Chunbo; Liu, Xili; Ni, Ming; Huang, Yiqi; Huang, Qiushi; Huang, Longwen; Jiang, Lingli; Lu, Dan; Wang, Mingcong; Liu, Chang; Chen, Daizhuo; Chen, Chongyi; Chen, Xiaoyue; Yang, Le; Ma, Haisu; Chen, Jianguo; Ouyang, Qi

    2010-01-01

    Design and synthesis of basic functional circuits are the fundamental tasks of synthetic biologists. Before it is possible to engineer higher-order genetic networks that can perform complex functions, a toolkit of basic devices must be developed. Among those devices, sequential logic circuits are expected to be the foundation of the genetic information-processing systems. In this study, we report the design and construction of a genetic sequential logic circuit in Escherichia coli. It can generate different outputs in response to the same input signal on the basis of its internal state, and 'memorize' the output. The circuit is composed of two parts: (1) a bistable switch memory module and (2) a double-repressed promoter NOR gate module. The two modules were individually rationally designed, and they were coupled together by fine-tuning the interconnecting parts through directed evolution. After fine-tuning, the circuit could be repeatedly, alternatively triggered by the same input signal; it functions as a push-on push-off switch.

  8. Misconceived causal explanations for emergent processes.

    PubMed

    Chi, Michelene T H; Roscoe, Rod D; Slotta, James D; Roy, Marguerite; Chase, Catherine C

    2012-01-01

    Studies exploring how students learn and understand science processes such as diffusion and natural selection typically find that students provide misconceived explanations of how the patterns of such processes arise (such as why giraffes' necks get longer over generations, or how ink dropped into water appears to "flow"). Instead of explaining the patterns of these processes as emerging from the collective interactions of all the agents (e.g., both the water and the ink molecules), students often explain the pattern as being caused by controlling agents with intentional goals, and express a variety of other misconceived notions. In this article, we provide a hypothesis for what constitutes a misconceived explanation; why misconceived explanations are so prevalent, robust, and resistant to instruction; and offer one approach by which they may be overcome. In particular, we hypothesize that students misunderstand many science processes because they rely on a generalized version of narrative schemas and scripts (referred to here as a Direct-causal Schema) to interpret them. For science processes that are sequential and stage-like, such as the cycles of the moon, the circulation of blood, the stages of mitosis, and photosynthesis, a Direct-causal Schema is adequate for correct understanding. However, for science processes that are non-sequential (or emergent), such as diffusion, natural selection, osmosis, and heat flow, using a Direct Schema leads to robust misconceptions. Instead, a different type of general schema may be required to interpret non-sequential processes, which we refer to as an Emergent-causal Schema. We propose that students lack this Emergent Schema and that teaching it to them may help them learn and understand emergent kinds of science processes such as diffusion. Our study found that directly teaching students this Emergent Schema led to increased learning of the process of diffusion. This article presents a fine-grained characterization of each type of schema, our instructional intervention, the successes we have achieved, and the lessons we have learned. Copyright © 2011 Cognitive Science Society, Inc.

  9. 3D/2D image registration using weighted histogram of gradient directions

    NASA Astrophysics Data System (ADS)

    Ghafurian, Soheil; Hacihaliloglu, Ilker; Metaxas, Dimitris N.; Tan, Virak; Li, Kang

    2015-03-01

    Three-dimensional (3D) to two-dimensional (2D) image registration is crucial in many medical applications such as image-guided evaluation of musculoskeletal disorders. One of the key problems is to estimate the 3D CT-reconstructed bone model positions (translation and rotation) that maximize the similarity between the digitally reconstructed radiographs (DRRs) and the 2D fluoroscopic images. This problem is computationally intensive due to the large search space and the complicated DRR generation process. Also, finding a similarity measure that converges to the global optimum instead of local optima adds to the challenge. To circumvent these issues, most existing registration methods need a manual initialization, which requires user interaction and is prone to human error. In this paper, we introduce a novel feature-based registration method using the weighted histogram of gradient directions of the images. This method simplifies the computation by searching the parameter space (rotation and translation) sequentially rather than simultaneously. In our numerical simulation experiments, the proposed registration algorithm was able to achieve sub-millimeter and sub-degree accuracies. Moreover, our method is robust to the initial guess: it can tolerate up to ±90° rotation offset from the global optimal solution, which minimizes the need for human interaction to initialize the algorithm.
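
    A sketch of the feature itself: each pixel votes for its gradient direction, weighted by its gradient magnitude, and an in-plane rotation shifts this histogram, so the rotation parameter can be searched on its own before translation. The striped test image, bin count, and distance metric below are assumptions, not the paper's exact choices.

```python
import numpy as np
from scipy.ndimage import rotate

def weighted_gradient_histogram(img, bins=72):
    """Histogram of gradient directions, each pixel weighted by magnitude."""
    gy, gx = np.gradient(img.astype(float))
    hist, _ = np.histogram(np.arctan2(gy, gx), bins=bins,
                           range=(-np.pi, np.pi), weights=np.hypot(gx, gy))
    return hist / (hist.sum() + 1e-12)

# Striped test image: strong oriented gradients make the histogram informative.
fixed = np.zeros((64, 64)); fixed[::8, :] = 1.0
moving = rotate(fixed, angle=20, reshape=False, mode="nearest")

# 1-D sequential search over the rotation parameter alone.
h_fixed = weighted_gradient_histogram(fixed)
best = min(range(-45, 46),
           key=lambda a: np.linalg.norm(
               weighted_gradient_histogram(
                   rotate(moving, a, reshape=False, mode="nearest")) - h_fixed))
print("estimated rotation correction:", best, "deg (expect about -20)")
```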

  10. Whole head quantitative susceptibility mapping using a least-norm direct dipole inversion method.

    PubMed

    Sun, Hongfu; Ma, Yuhan; MacDonald, M Ethan; Pike, G Bruce

    2018-06-15

    A new dipole field inversion method for whole head quantitative susceptibility mapping (QSM) is proposed. Instead of performing background field removal and local field inversion sequentially, the proposed method performs dipole field inversion directly on the total field map in a single step. To aid this under-determined and ill-posed inversion process and obtain robust QSM images, Tikhonov regularization is implemented to seek the local susceptibility solution with the least-norm (LN) using the L-curve criterion. The proposed LN-QSM does not require brain edge erosion, thereby preserving the cerebral cortex in the final images. This should improve its applicability for QSM-based cortical grey matter measurement, functional imaging and venography of full brain. Furthermore, LN-QSM also enables susceptibility mapping of the entire head without the need for brain extraction, which makes QSM reconstruction more automated and less dependent on intermediate pre-processing methods and their associated parameters. It is shown that the proposed LN-QSM method reduced errors in a numerical phantom simulation, improved accuracy in a gadolinium phantom experiment, and suppressed artefacts in nine subjects, as compared to two-step and other single-step QSM methods. Measurements of deep grey matter and skull susceptibilities from LN-QSM are consistent with established reconstruction methods. Copyright © 2018 Elsevier Inc. All rights reserved.
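
    Single-step Tikhonov dipole inversion has a closed form in k-space, which the sketch below demonstrates on a synthetic susceptibility inclusion (forward-simulated with the same unit dipole kernel). This is a generic least-norm-style inversion for illustration, not the authors' implementation; the regularization weight is chosen ad hoc rather than by the L-curve criterion used in the paper.

```python
import numpy as np

def dipole_kernel(shape):
    """Unit dipole kernel in k-space, main field along z."""
    k = np.meshgrid(*(np.fft.fftfreq(n) for n in shape), indexing="ij")
    k2 = k[0]**2 + k[1]**2 + k[2]**2
    k2[0, 0, 0] = np.inf                      # avoid 0/0 at the k-space origin
    return 1.0 / 3.0 - k[2]**2 / k2

def ln_qsm(total_field, lam=0.01):
    """One-step Tikhonov-regularized dipole inversion:
    chi = argmin ||D*chi - f||^2 + lam*||chi||^2, closed form in k-space."""
    D = dipole_kernel(total_field.shape)
    F = np.fft.fftn(total_field)
    return np.fft.ifftn(D * F / (D**2 + lam)).real

# Forward-simulate the field of a small cubic susceptibility inclusion, invert.
chi_true = np.zeros((32, 32, 32)); chi_true[14:18, 14:18, 14:18] = 1.0
field = np.fft.ifftn(dipole_kernel(chi_true.shape) * np.fft.fftn(chi_true)).real
chi_est = ln_qsm(field)
print("true peak:", chi_true.max(), " estimated peak:", round(float(chi_est.max()), 2))
```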

  11. Sparse generalized linear model with L0 approximation for feature selection and prediction with big omics data.

    PubMed

    Liu, Zhenqiu; Sun, Fengzhu; McGovern, Dermot P

    2017-01-01

    Feature selection and prediction are the most important tasks in big data mining. The common strategies for feature selection in big data mining are L1, SCAD and MC+. However, none of the existing algorithms optimizes L0, which penalizes the number of nonzero features directly. In this paper, we develop a novel sparse generalized linear model (GLM) with L0 approximation for feature selection and prediction with big omics data. The proposed approach approximates the L0 optimization directly. Even though the original L0 problem is non-convex, it is approximated by sequential convex optimizations with the proposed algorithm. The proposed method is easy to implement with only several lines of code. Novel adaptive ridge algorithms (L0ADRIDGE) for L0-penalized GLM with ultra-high-dimensional big data are developed. The proposed approach outperforms other cutting-edge regularization methods, including SCAD and MC+, in simulations. When applied to the integrated analysis of mRNA, microRNA, and methylation data from TCGA ovarian cancer, it identifies multilevel gene signatures associated with suboptimal debulking simultaneously. The biological significance and potential clinical importance of those genes are further explored. The developed software, L0ADRIDGE in MATLAB, is available at https://github.com/liuzqx/L0adridge.
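
    The adaptive-ridge idea of approximating L0 can be sketched as a short sequence of weighted ridge solves, with weights w_j = 1/(beta_j^2 + eps^2) so that the weighted L2 penalty mimics a count of nonzero coefficients. This generic sketch follows the published adaptive-ridge recipe on a Gaussian GLM; it is not the L0ADRIDGE code, and all tuning values are assumptions.

```python
import numpy as np
rng = np.random.default_rng(6)

def l0_adaptive_ridge(X, y, lam=2.0, eps=1e-3, iters=30):
    """L0 approximation by adaptive ridge: sequentially solve ridge problems
    whose weights w_j = 1/(beta_j^2 + eps^2) make lam * sum_j w_j*beta_j^2
    behave like lam * ||beta||_0."""
    p = X.shape[1]
    beta, w = np.zeros(p), np.ones(p)
    for _ in range(iters):
        beta = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
        w = 1.0 / (beta**2 + eps**2)
    beta[np.abs(beta) < 10 * eps] = 0.0      # zero out numerically dead features
    return beta

# Illustrative sparse-recovery check: 5 true features out of 50.
n, p = 100, 50
X = rng.normal(size=(n, p))
beta_true = np.zeros(p); beta_true[:5] = [3, -2, 1.5, 2.5, -1]
y = X @ beta_true + rng.normal(0, 0.5, n)
print("selected features:", np.flatnonzero(l0_adaptive_ridge(X, y)))
```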

  12. Phase extraction based on iterative algorithm using five-frame crossed fringes in phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Jin, Chengying; Li, Dahai; Kewei, E.; Li, Mengyang; Chen, Pengyu; Wang, Ruiyang; Xiong, Zhao

    2018-06-01

    In phase measuring deflectometry, two orthogonal sinusoidal fringe patterns are separately projected on the test surface and the distorted fringes reflected by the surface are recorded, each with a sequential phase shift. Then the two components of the local surface gradients are obtained by triangulation. It usually involves some complicated and time-consuming procedures (fringe projection in the orthogonal directions). In addition, the digital light devices (e.g. LCD screen and CCD camera) are not error free. There are quantization errors for each pixel of both LCD and CCD. Therefore, to avoid the complex process and improve the reliability of the phase distribution, a phase extraction algorithm with five-frame crossed fringes is presented in this paper. It is based on a least-squares iterative process. Using the proposed algorithm, phase distributions and phase shift amounts in two orthogonal directions can be simultaneously and successfully determined through an iterative procedure. Both a numerical simulation and a preliminary experiment are conducted to verify the validity and performance of this algorithm. Experimental results obtained by our method are shown, and comparisons between our experimental results and those obtained by the traditional 16-step phase-shifting algorithm and between our experimental results and those measured by the Fizeau interferometer are made.
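
    In one fringe direction, the alternating least-squares structure of such an algorithm can be sketched compactly: with the shifts fixed, the phase is a per-pixel linear least-squares problem; with the phase fixed, the shifts are per-frame least-squares problems. The sketch below follows this generic AIA-style scheme on synthetic 1-D fringes; it is not the authors' five-frame crossed-fringe implementation.

```python
import numpy as np
rng = np.random.default_rng(7)

def iterative_phase(frames, iters=20):
    """Least-squares iterative phase extraction with unknown phase shifts.
    frames: (K, N) array of K phase-shifted fringe signals.
    Uses I = A + B*cos(phi)*cos(delta) - B*sin(phi)*sin(delta)."""
    K, N = frames.shape
    deltas = np.linspace(0, np.pi, K)        # rough initial guess for the shifts
    for _ in range(iters):
        # step 1: per-pixel LS for the phase, shifts fixed
        M = np.column_stack([np.ones(K), np.cos(deltas), np.sin(deltas)])
        c = np.linalg.lstsq(M, frames, rcond=None)[0]      # (3, N)
        phi = np.arctan2(-c[2], c[1])
        # step 2: per-frame LS for the shifts, phase fixed
        P = np.column_stack([np.ones(N), np.cos(phi), np.sin(phi)])
        b = np.linalg.lstsq(P, frames.T, rcond=None)[0]    # (3, K)
        deltas = np.arctan2(-b[2], b[1])
        deltas -= deltas[0]                  # remove the global phase offset
    return phi, deltas

x = np.linspace(0, 1, 256)
phi_true = 6 * np.pi * x
shifts_true = np.array([0.0, 0.4, 1.1, 1.9, 2.6])   # non-uniform, unknown
frames = (1.0 + 0.7 * np.cos(phi_true + shifts_true[:, None])
          + rng.normal(0, 0.01, (5, 256)))
phi, deltas = iterative_phase(frames)
print("recovered shifts:", np.round(deltas % (2 * np.pi), 2))
```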

  13. Bayesian approach for assessing non-inferiority in a three-arm trial with pre-specified margin.

    PubMed

    Ghosh, Samiran; Ghosh, Santu; Tiwari, Ram C

    2016-02-28

    Non-inferiority trials are becoming increasingly popular for comparative effectiveness research. However, inclusion of a placebo arm, whenever possible, gives rise to a three-arm trial, which rests on less burdensome assumptions than a standard two-arm non-inferiority trial. Most past developments in three-arm trials define a pre-specified fraction of the unknown effect size of the reference drug, that is, without directly specifying a fixed non-inferiority margin. In some recent developments, however, a more direct approach with a pre-specified fixed margin is considered, albeit in the frequentist setup. The Bayesian paradigm provides a natural path to integrate historical and current trials' information via sequential learning. In this paper, we propose a Bayesian approach for simultaneous testing of non-inferiority and assay sensitivity in a three-arm trial with normal responses. For the experimental arm, in the absence of historical information, non-informative priors are assumed under two situations: (i) variance known and (ii) variance unknown. A Bayesian decision criterion is derived and compared with the frequentist method using simulation studies. Finally, several published clinical trial examples are reanalyzed to demonstrate the benefit of the proposed procedure. Copyright © 2015 John Wiley & Sons, Ltd.
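
    Under non-informative priors with (approximately) known variances, each arm mean has a normal posterior, so the joint posterior probability of non-inferiority and assay sensitivity can be estimated by direct Monte Carlo, as sketched below. All trial numbers and the margin are invented for illustration.

```python
import numpy as np
rng = np.random.default_rng(8)

def posterior_draws(mean, sd, n, size=100_000):
    """Normal posterior of an arm mean under a flat prior and known-ish SD."""
    return rng.normal(mean, sd / np.sqrt(n), size)

margin = 1.5                                   # pre-specified fixed NI margin
E = posterior_draws(9.8, 4.0, 120)             # experimental arm (invented data)
R = posterior_draws(10.4, 4.0, 120)            # reference arm
P = posterior_draws(6.0, 4.0, 60)              # placebo arm

p_ni = np.mean(E - R > -margin)                # non-inferiority of E vs R
p_as = np.mean(R - P > 0)                      # assay sensitivity (R beats placebo)
p_both = np.mean((E - R > -margin) & (R - P > 0))
print(f"P(NI)={p_ni:.3f}  P(assay sensitivity)={p_as:.3f}  joint={p_both:.3f}")
```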

  14. A hierarchical analysis of terrestrial ecosystem model Biome-BGC: Equilibrium analysis and model calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thornton, Peter E; Wang, Weile; Law, Beverly E.

    2009-01-01

    The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers by a set of equilibrium equations that are derived from Biome-BGC algorithms and are based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations are able to estimate carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. Therefore, the scheme developed in this study may serve as a practical guide to calibrate/analyze Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
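
    The key computational point, solving for the steady state directly instead of spinning the model up, reduces to a mass-balance equation for the pool vector. A minimal sketch, with hypothetical first-order pool dynamics standing in for the Biome-BGC litter/soil pools:

```python
import numpy as np

# Linear first-order pool dynamics dC/dt = u - K @ C serve as a stand-in
# for the litter/soil pools; rates and inputs below are hypothetical.
K = np.array([[0.5, 0.0],
              [-0.3, 0.1]])   # turnover/transfer rates (1/yr)
u = np.array([2.0, 0.0])      # external inputs (kgC/m2/yr)

# Equilibrium: set dC/dt = 0 and solve K @ C = u directly,
# instead of spinning the model up over thousands of climate years.
C_star = np.linalg.solve(K, u)
print(C_star)                 # steady-state pool sizes
```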

  15. An abstraction layer for efficient memory management of tabulated chemistry and flamelet solutions

    NASA Astrophysics Data System (ADS)

    Weise, Steffen; Messig, Danny; Meyer, Bernd; Hasse, Christian

    2013-06-01

    A large number of methods for simulating reactive flows exist; some, for example, directly use detailed chemical kinetics, while others use precomputed and tabulated flame solutions. Both approaches tightly couple the research fields of computational fluid dynamics and chemistry, using either an online or an offline approach to solve the chemistry domain. The offline approach usually involves a method of generating databases or so-called Lookup-Tables (LUTs). As these LUTs are extended to contain not only material properties but also interactions between chemistry and turbulent flow, the number of parameters and thus dimensions increases. Given a reasonable discretisation, file sizes can increase drastically. The main goal of this work is to provide methods that handle large database files efficiently. A Memory Abstraction Layer (MAL) has been developed that handles requested LUT entries efficiently by splitting the database file into several smaller blocks. It keeps the total memory usage at a minimum using thin allocation methods and compression to minimise filesystem operations. The MAL has been evaluated using three different test cases. The first, rather generic one is a sequential reading operation on an LUT to evaluate the runtime behaviour as well as the memory consumption of the MAL. The second test case is a simulation of a non-premixed turbulent flame, the so-called HM1 flame, which is a well-known test case in the turbulent combustion community. The third test case is a simulation of a non-premixed laminar flame as described by McEnally in 1996 and Bennett in 2000. Using the previously developed solver 'flameletFoam' in conjunction with the MAL, memory consumption and the performance penalty introduced were studied. The total memory used while running a parallel simulation was reduced significantly while the CPU time overhead associated with the MAL remained low.
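
    A minimal sketch of the block-splitting idea (not the authors' MAL, whose allocation and compression details are not given in the abstract): the table is stored as independently compressed blocks, and a block is decompressed only when one of its entries is requested.

```python
import zlib
import numpy as np

class BlockedLUT:
    """Toy memory abstraction layer: a large 2-D lookup table stored as
    independently compressed blocks, decompressed on demand and cached,
    so resident memory stays small (illustrative only)."""

    def __init__(self, table, block_rows=1024):
        self.block_rows = block_rows
        self.cols, self.dtype = table.shape[1], table.dtype
        self.blocks = [zlib.compress(table[i:i + block_rows].tobytes())
                       for i in range(0, table.shape[0], block_rows)]
        self.cache = {}                        # block index -> ndarray

    def row(self, i):
        b, r = divmod(i, self.block_rows)
        if b not in self.cache:                # decompress only when needed
            raw = zlib.decompress(self.blocks[b])
            self.cache[b] = np.frombuffer(raw, self.dtype).reshape(-1, self.cols)
        return self.cache[b][r]

lut = BlockedLUT(np.random.rand(10_000, 8))
print(lut.row(4321))
```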

  16. Galaxy simulations: Kinematics and mock observations

    NASA Astrophysics Data System (ADS)

    Moody, Christopher E.

    2013-08-01

    My thesis addresses four topics: (1) slow-rotator production in varied simulation schemes, and kinematically decoupled cores and twists in those simulations; (2) the change in the number of clumps between radiation-pressure and no-radiation-pressure simulations; (3) Sunrise experiments and failures, including UVJ color-color dust experiments and UV-beta slopes; and (4) the Sunrise image pipeline and algorithms. Cosmological simulations have typically produced too many stars at early times. We find that the additional radiation pressure (RP) feedback suppresses star formation globally by a factor of ~3. Despite this reduction, the simulation still overproduces stars by a factor of ~2 with respect to the predictions provided by abundance matching methods. In simulations with RP the number of clumps falls dramatically. However, only clumps with masses Mclump/Mdisk ≤ 8% are affected by the inclusion of RP, and clump counts above this range are comparable. Above this mass, the difference between RP and no-RP contrast ratios diminishes. If we restrict our selection to galaxies hosting at least a single clump above this mass range, then clump numbers, contrast ratios, survival fractions and total clump masses show little discrepancy between RP and no-RP simulations. By creating mock Hubble Space Telescope observations we find that the number of clumps is slightly reduced in simulations with RP. We demonstrate that clumps found in any single gas, stellar, or mock-observation image are not necessarily clumps found in another map, and that there are few clumps common to multiple maps. New kinematic observations from ATLAS3D have highlighted the need to understand the evolutionary mechanisms leading to the spectrum of fast and slow rotators in early-type galaxies. We address the formation of slow and fast rotators through a series of controlled, comprehensive hydrodynamic simulations sampling idealized galaxy-merger formation scenarios constructed from model spiral galaxies. We recreate minor and major binary mergers, binary merger trees with multiple progenitors, and multiple sequential mergers. Within each of these categories of formation history, we correlate progenitor gas fraction, mass ratio, orbital pericenter, orbital ellipticity, spin, and kinematically decoupled cores with remnant kinematic properties. We find that binary mergers nearly always form fast rotators, but slow rotators can be formed from zero initial angular momentum configurations and gas-poor mergers. Remnants of binary merger trees are triaxial slow rotators. Sequential mergers form round slow rotators that most resemble the ATLAS3D rotators. We investigate the failure of ART and Sunrise simulations to reproduce the observed distribution of galaxies in the UVJ color-color diagram. No simulated galaxies achieve a color with V-J > 1.0 while still being in the blue sequence. I systematically study the underlying subgrid models present in Sunrise to diagnose the source of the discrepancy. The experiments were largely unsuccessful in directly isolating the root of the J-band excess attenuation; however, they are instructive and can guide intuition about the interplay of stellar emission and dust. These experiments were aimed at understanding the role of the underlying subgrid dust and radiation models, varying the dust geometry, and performing numerical studies of the radiation transfer calculation. Finally, I detail the data pipeline responsible for the creation of galaxy mock observations. The pipeline comprises the ART simulation raw data, the dark matter merger tree backbone, format translation using yt, radiation transfer simulation in Sunrise, and the resulting post-processed image treatments. At every step, I detail the execution of the algorithms, the format of the data, and useful scripts for straightforward analysis.

  17. Automated Discovery and Modeling of Sequential Patterns Preceding Events of Interest

    NASA Technical Reports Server (NTRS)

    Rohloff, Kurt

    2010-01-01

    The integration of emerging data manipulation technologies has enabled a paradigm shift in practitioners' abilities to understand and anticipate events of interest in complex systems. Example events of interest include outbreaks of socio-political violence in nation-states. Rather than relying on human-centric modeling efforts that are limited by the availability of subject-matter experts (SMEs), automated data processing technologies have enabled the development of innovative automated complex-system modeling and predictive analysis technologies. We introduce one such emerging modeling technology: the sequential pattern methodology. We have applied the sequential pattern methodology to automatically identify patterns of observed behavior that precede outbreaks of socio-political violence such as riots, rebellions and coups in nation-states. The sequential pattern methodology is a groundbreaking approach to automated complex-system model discovery because it generates easily interpretable patterns based on direct observations of sampled factor data, yielding a deeper understanding of societal behaviors that is tolerant of observation noise and missing data. The discovered patterns are simple to interpret and mimic human identification of observed trends in temporal data. Discovered patterns also provide an automated forecasting ability: we discuss an example of using discovered patterns coupled with a rich data environment to forecast various types of socio-political violence in nation-states.
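
    As a toy illustration of the general idea (not the paper's algorithm), one can count the ordered symbol patterns that occur in a window before each event of interest and treat the frequent ones as candidate precursors; all names and data below are hypothetical.

```python
from collections import Counter
from itertools import combinations

def precursor_patterns(series, events, window=5, max_len=2):
    """Count order-preserving symbol subsequences seen in the window
    before each event of interest; frequent ones are candidate
    precursor patterns (illustrative, not the paper's method)."""
    counts = Counter()
    for t in events:
        w = series[max(0, t - window):t]
        for n in range(1, max_len + 1):
            for sub in combinations(w, n):   # preserves temporal order
                counts[sub] += 1
    return counts

obs = list("abcabdabeabf")   # hypothetical stream of factor observations
events = [5, 11]             # indices where events of interest occurred
print(precursor_patterns(obs, events).most_common(5))
```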

  18. Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning

    NASA Astrophysics Data System (ADS)

    Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.

    2005-12-01

    A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km2) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.
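
    The structure of the data exchange can be sketched with a toy reach graph: runoff accumulates downstream in topological order, and in the MPI version every edge that crosses a processor boundary becomes a streamflow message. The graph, values, and function names below are hypothetical.

```python
from collections import defaultdict

# Hypothetical stream-reach graph: sub-basin -> downstream neighbor.
downstream = {"A": "C", "B": "C", "C": "E", "D": "E", "E": None}
runoff = {"A": 1.0, "B": 0.5, "C": 0.2, "D": 0.8, "E": 0.1}  # local runoff

def topo_order(downstream):
    """Order sub-basins so upstream contributors come first."""
    depth = {}
    def d(s):
        if s not in depth:
            depth[s] = 0 if downstream[s] is None else d(downstream[s]) + 1
        return depth[s]
    return sorted(downstream, key=d, reverse=True)

def route(downstream, runoff):
    """Accumulate routed streamflow; in the parallel version each edge
    crossing a processor boundary is an inter-rank streamflow message."""
    inflow = defaultdict(float)
    outflow = {}
    for s in topo_order(downstream):
        outflow[s] = runoff[s] + inflow[s]
        if downstream[s] is not None:
            inflow[downstream[s]] += outflow[s]
    return outflow

print(route(downstream, runoff))   # outlet "E" collects the whole basin
```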

  19. Sequential Geoacoustic Filtering and Geoacoustic Inversion

    DTIC Science & Technology

    2015-09-30

    and online algorithms. We show here that CS obtains higher resolution than MVDR, even in scenarios which favor classical high-resolution methods...windows actually performs better than conventional beamforming and MVDR/MUSIC (see Figs. 1-2). Compressive geoacoustic inversion Geoacoustic...histograms based on 100 Monte Carlo simulations, and c) CS, exhaustive-search, CBF, MVDR, and MUSIC performance versus SNR. The true source positions

  20. Practical Sequential Design Procedures for Submarine ASW Search Operational Testing: A Simulation Study

    DTIC Science & Technology

    1998-10-01

    The efficient design of a free-play, 24-hour-per-day operational test (OT) of an ASW search system remains a challenge to the OT community. It will...efficient, realistic, free-play, 24-hour-per-day OT. The basic test control premise described here is to stop the test event if the time without a

  1. A random walk rule for phase I clinical trials.

    PubMed

    Durham, S D; Flournoy, N; Rosenberger, W F

    1997-06-01

    We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
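
    One concrete member of this family is the biased-coin walk: step down after a toxicity, and otherwise step up with probability Gamma/(1 - Gamma), which centers assignments around the dose whose toxicity probability is the target quantile Gamma (for Gamma <= 1/2). The sketch below is illustrative; dose levels and probabilities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def biased_coin_walk(p_tox, gamma=1/3, n_patients=30, start=0):
    """Biased-coin random walk for dose allocation, targeting the dose
    whose toxicity probability is gamma (assumed gamma <= 1/2).
    p_tox: toxicity probability at each dose level (hypothetical)."""
    bias = gamma / (1.0 - gamma)    # probability of moving up after no toxicity
    level, path = start, []
    for _ in range(n_patients):
        path.append(level)
        if rng.random() < p_tox[level]:
            level = max(level - 1, 0)                  # step down after toxicity
        elif rng.random() < bias:
            level = min(level + 1, len(p_tox) - 1)     # biased step up
    return path

path = biased_coin_walk(p_tox=[0.05, 0.15, 0.33, 0.55, 0.75])
print(np.bincount(path))   # assignments cluster around the target quantile
```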

  2. Spiking neural network model for memorizing sequences with forward and backward recall.

    PubMed

    Borisyuk, Roman; Chik, David; Kazanovich, Yakov; da Silva Gomes, João

    2013-06-01

    We present an oscillatory network of conductance-based spiking neurons of the Hodgkin-Huxley type as a model of memory storage and retrieval of sequences of events (or objects). The model is inspired by psychological and neurobiological evidence on sequential memories. The building block of the model is an oscillatory module which contains excitatory and inhibitory neurons with all-to-all connections. The connection architecture comprises two layers. A lower layer represents consecutive events during their storage and recall. This layer is composed of oscillatory modules. Plastic excitatory connections between the modules are implemented using an STDP-type learning rule for sequential storage. Excitatory neurons in the upper layer project star-like modifiable connections toward the excitatory lower-layer neurons. These neurons in the upper layer are used to tag sequences of events represented in the lower layer. Computer simulations demonstrate good performance of the model, including difficult cases when different sequences contain overlapping events. We show that the model with STDP-type or anti-STDP-type learning rules can be applied to the simulation of forward and backward replay of neural spikes, respectively. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  3. Outdoor Education Guide-Handbook, Waukesha Public Schools.

    ERIC Educational Resources Information Center

    Vitale, Joseph A.

    Designed by the Waukesha Public Schools (Wisconsin) specifically for an elementary level three-day camping trip at Camp Phantom Lake, this outdoor education guide presents some activities which suggest adaptation. Activity directions, plans, worksheets, evaluation sheets, and illustrations are presented in sequential order for the following…

  4. Certification of Physical Education Teachers.

    ERIC Educational Resources Information Center

    Bentz, Susan K.

    The author discusses various trends in the preparation of physical education teachers, including emphasis on Title IX requirements and handicapped child needs. Future directions in teacher certification are surveyed, and it is urged that certification be based upon sequential training programs rather than course accumulation-credit hour…

  5. Comparison of DNA testing strategies in monitoring human papillomavirus infection prevalence through simulation.

    PubMed

    Lin, Carol Y; Li, Ling

    2016-11-07

    HPV DNA diagnostic tests for epidemiology monitoring (research purposes) or cervical cancer screening (clinical purposes) have often been considered separately. Women with positive Linear Array (LA) polymerase chain reaction (PCR) research test results typically are neither informed nor referred for colposcopy. Recently, sequential testing using the Hybrid Capture 2 (HC2) HPV clinical test as a triage before genotyping by LA has been adopted for monitoring HPV infections. HC2 has also been reported as a more feasible screening approach for cervical cancer in low-resource countries. Thus, knowing the performance of testing strategies incorporating an HPV clinical test (i.e., HC2-only or HC2 as a triage before genotyping by LA) compared with LA-only testing in measuring HPV prevalence will be informative for public health practice. We conducted a Monte Carlo simulation study. Data were generated using mathematical algorithms. We designated the reported HPV infection prevalence in the U.S. and Latin America as the "true" underlying type-specific HPV prevalence. The analytical sensitivity of HC2 for detecting 14 high-risk (oncogenic) types was considered to be less than that of LA. Estimated-to-true prevalence ratios and percentage reductions were calculated. When the "true" HPV prevalence was designated as the reported prevalence in the U.S., with LA genotyping sensitivity and specificity of (0.95, 0.95), estimated-to-true prevalence ratios of the 14 high-risk types were 2.132, 1.056, and 0.958 for LA-only, HC2-only, and sequential testing, respectively. Estimated-to-true prevalence ratios of the two vaccine-associated high-risk types were 2.359 and 1.063 for LA-only and sequential testing, respectively. When the designated type-specific prevalences of HPV16 and 18 were reduced by 50%, using either LA-only or sequential testing, prevalence estimates were reduced by 18%. Estimated-to-true HPV infection prevalence ratios using the LA-only testing strategy are generally higher than those using HC2-only or HC2 as a triage before genotyping by LA. HPV clinical testing can be incorporated to monitor HPV prevalence or vaccine effectiveness. Caution is needed when comparing apparent prevalence from different testing strategies.
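
    The mechanism behind the estimated-to-true ratios is easy to reproduce for a single HPV type: an imperfect test mixes true positives with false positives, so the apparent prevalence drifts away from the true one. A toy single-type simulation follows (the paper's ratios aggregate 14 type-specific estimates, which this sketch does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(42)

def apparent_prevalence(true_prev, sens, spec, n=100_000):
    """Simulate one testing strategy: the estimated prevalence mixes true
    positives with false positives, so estimated/true ratios drift from 1."""
    infected = rng.random(n) < true_prev
    positive = np.where(infected,
                        rng.random(n) < sens,        # true positives
                        rng.random(n) < 1 - spec)    # false positives
    return positive.mean()

p = 0.10                                   # hypothetical "true" prevalence
est = apparent_prevalence(p, sens=0.95, spec=0.95)
print(est / p)                             # estimated-to-true prevalence ratio
```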

  6. In vitro pharmacodynamics of human simulated exposures of ceftaroline and daptomycin against MRSA, hVISA, and VISA with and without prior vancomycin exposure.

    PubMed

    Bhalodi, Amira A; Hagihara, Mao; Nicolau, David P; Kuti, Joseph L

    2014-01-01

    The effects of prior vancomycin exposure on ceftaroline and daptomycin therapy against methicillin-resistant Staphylococcus aureus (MRSA) have not been widely studied. Humanized free-drug exposures of vancomycin at 1 g every 12 h (q12h), ceftaroline at 600 mg q12h, and daptomycin at 10 mg/kg of body weight q24h were simulated in a 96-h in vitro pharmacodynamic model against three MRSA isolates, including one heteroresistant vancomycin-intermediate S. aureus (hVISA) isolate and one VISA isolate. A total of five regimens were tested: vancomycin, ceftaroline, and daptomycin alone for the entire 96 h, and then sequential therapy with vancomycin for 48 h followed by ceftaroline or daptomycin for 48 h. Microbiological responses were measured by the changes in log10 CFU during 96 h from baseline. Control isolates grew to 9.16 ± 0.32, 9.13 ± 0.14, and 8.69 ± 0.28 log10 CFU for MRSA, hVISA, and VISA, respectively. Vancomycin initially achieved ≥3 log10 CFU reductions against the MRSA and hVISA isolates, followed by regrowth beginning at 48 h; minimal activity was observed against VISA. The change in 96-h log10 CFU was largest for sequential therapy with vancomycin followed by ceftaroline (-5.22 ± 1.2, P = 0.010 versus ceftaroline) and for ceftaroline alone (-3.60 ± 0.6, P = 0.037 versus daptomycin), compared with daptomycin (-2.24 ± 1.0), vancomycin (-1.40 ± 1.8), and sequential therapy with vancomycin followed by daptomycin (-1.32 ± 1.0, P > 0.5 for the last three regimens). Prior exposure to vancomycin at 1 g q12h reduced the initial microbiological response of daptomycin, particularly for the hVISA and VISA isolates, but did not affect the response of ceftaroline. In the scenario of a poor vancomycin response for a high-inoculum MRSA infection, a ceftaroline-containing regimen may be preferred.

  7. Nutrient Distribution and Absorption in the Colonial Hydroid Podocoryna carnea Is Sequentially Diffusive and Directional.

    PubMed

    Buss, Leo W; Anderson, Christopher P; Perry, Elena K; Buss, Evan D; Bolton, Edward W

    2015-01-01

    The distribution and absorption of ingested protein was characterized within a colony of Podocoryna carnea when a single polyp was fed. Observations were conducted at multiple spatial and temporal scales at three different stages of colony ontogeny with an artificial food item containing Texas Red conjugated albumin. Food pellets were digested and all tracer absorbed by digestive cells within the first 2-3 hours post-feeding. The preponderance of the label was located in the fed polyp and in a transport-induced diffusion pattern surrounding the fed polyp. After 6 hours post-feeding particulates re-appeared in the gastrovascular system and their absorption increased the area over which the nutrients were distributed, albeit still in a pattern that was centered on the fed polyp. At later intervals, tracer became concentrated in some stolon tips, but not in others, despite the proximity of these stolons either to the fed polyp or to adjacent stolons receiving nutrients. Distribution and absorption of nutrients is sequentially diffusive and directional.

  8. A specific PFT and sub-canopy structure for simulating oil palm in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Fan, Y.; Knohl, A.; Roupsard, O.; Bernoux, M.; LE Maire, G.; Panferov, O.; Kotowska, M.; Meijide, A.

    2015-12-01

    Towards an effort to quantify the effects of rainforest-to-oil-palm conversion on land-atmosphere carbon, water, and energy fluxes, a specific plant functional type (PFT) and sub-canopy structure are developed for simulating oil palm within the Community Land Model (CLM4.5). Current global land surface models simulate only annual crops besides natural vegetation. In this study, a multilayer oil palm subroutine is developed in CLM4.5 for simulating oil palm's phenology and carbon and nitrogen allocation. The oil palm has a monopodial morphology and the sequential phenology of around 40 stacked phytomers, each carrying a large leaf and a fruit bunch, forming a natural multilayer canopy. A sub-canopy phenological and physiological parameterization is thus introduced, so that multiple phytomer components develop simultaneously but according to their different phenological stages (growth, yield and senescence) at different canopy layers. This specific multilayer structure proved useful for simulating canopy development in terms of leaf area index (LAI) and fruit yield in terms of carbon and nitrogen outputs in Jambi, Sumatra (Fan et al. 2015). The study supports the view that species-specific traits, such as the palm's monopodial morphology and sequential phenology, are necessary representations in terrestrial biosphere models in order to accurately simulate vegetation dynamics and feedbacks to climate. Further, the oil palm's multilayer structure allows all canopy-level calculations of radiation, photosynthesis, stomatal conductance and respiration, besides phenology, to be carried out at the sub-canopy level as well, eliminating the scale-mismatch problem among different processes. A series of adaptations are made to the CLM model. Initial results show that the adapted multilayer radiative transfer scheme and the explicit representation of oil palm's canopy structure improve the simulated photosynthesis-light response curve. The explicit photosynthesis and dynamic leaf nitrogen calculations per canopy layer also improve the simulated CO2 flux when compared to eddy covariance flux data. More investigations of energy and water fluxes and the nitrogen balance are being conducted. These new schemes will hopefully promote understanding of the climatic effects of this tropical land use transformation system.
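
    The sub-canopy bookkeeping can be pictured as a queue of phytomers, each advancing through its own stages while new phytomers are initiated at the top. A schematic sketch with hypothetical stage lengths, not the CLM4.5 implementation:

```python
from dataclasses import dataclass

@dataclass
class Phytomer:
    """One stacked leaf + fruit-bunch unit of the oil-palm canopy."""
    age_days: int = 0
    stage: str = "growth"          # growth -> yield -> senescence

def advance_one_day(phytomers, growth_len=300, yield_len=150, max_rank=40):
    """Advance each phytomer through its own phenological stage, initiate
    a new phytomer at the canopy top, and prune the oldest at the bottom
    (stage lengths and the 40-phytomer cap are illustrative)."""
    for ph in phytomers:
        ph.age_days += 1
        if ph.age_days > growth_len + yield_len:
            ph.stage = "senescence"
        elif ph.age_days > growth_len:
            ph.stage = "yield"
    phytomers.insert(0, Phytomer())    # sequential initiation at the top
    return phytomers[:max_rank]        # lowest (oldest) ranks drop off

canopy = []
for _ in range(3):
    canopy = advance_one_day(canopy)
print(canopy)
```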

  9. Simulated spaceflight effects on mating and pregnancy of rats

    NASA Technical Reports Server (NTRS)

    Sabelman, E. E.; Chetirkin, P. V.; Howard, R. M.

    1981-01-01

    The mating of rats was studied to determine the effects of: simulated reentry stresses at known stages of pregnancy, and full flight simulation, consisting of sequential launch stresses, group housing, mating opportunity, diet, simulated reentry, and postreentry isolation of male and female rats. Uterine contents, adrenal mass and abdominal fat as a proportion of body mass, duration of pregnancy, and number and sex of offspring were studied. It is found that: (1) parturition following full flight simulation was delayed relative to that of controls; (2) litter size was reduced and resorptions increased compared with previous matings in the same group of animals; and (3) abdominal fat was highly elevated in animals that were fed the Soviet paste diet. It is suggested that the combined effects of diet, stress, spacecraft environment, and weightlessness decreased the probability of mating or of viable pregnancies in the Cosmos 1129 flight and control animals.

  10. Data parallel sorting for particle simulation

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1992-01-01

    Sorting on a parallel architecture is a communications-intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
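
    The O(N) sequential bound comes from counting sort on the (bounded) integer cell indices. A minimal sketch of that baseline, against which the data parallel merge-based algorithm is compared:

```python
import numpy as np

def counting_sort_particles(cells, n_cells):
    """O(N) integer sort of particle indices by cell id, the kind of
    sequential sort that easily meets the O(N) bound mentioned above."""
    counts = np.bincount(cells, minlength=n_cells)
    starts = np.concatenate(([0], np.cumsum(counts)[:-1]))
    order = np.empty(len(cells), dtype=np.int64)
    cursor = starts.copy()
    for i, c in enumerate(cells):         # stable placement pass
        order[cursor[c]] = i
        cursor[c] += 1
    return order                          # particle indices grouped by cell

cells = np.array([3, 1, 0, 3, 2, 1])
print(counting_sort_particles(cells, 4))  # -> [2 1 5 4 0 3]
```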

  11. Sequential protein unfolding through a carbon nanotube pore

    NASA Astrophysics Data System (ADS)

    Xu, Zhonghe; Zhang, Shuang; Weber, Jeffrey K.; Luan, Binquan; Zhou, Ruhong; Li, Jingyuan

    2016-06-01

    An assortment of biological processes, like protein degradation and the transport of proteins across membranes, depend on protein unfolding events mediated by nanopore interfaces. In this work, we exploit fully atomistic simulations of an artificial, CNT-based nanopore to investigate the nature of ubiquitin unfolding. With one end of the protein subjected to an external force, we observe non-canonical unfolding behaviour as ubiquitin is pulled through the pore opening. Secondary structural elements are sequentially detached from the protein and threaded into the nanotube; interestingly, the remaining part maintains native-like characteristics. The constraints of the nanopore interface thus facilitate the formation of stable ``unfoldon'' motifs above the nanotube aperture that can exist in the absence of specific native contacts with the other secondary structure. Destruction of these unfoldons gives rise to distinct force peaks in our simulations, providing us with a sensitive probe for studying the kinetics of serial unfolding events. Our detailed analysis of nanopore-mediated protein unfolding events not only provides insight into how related processes might proceed in the cell, but also serves to deepen our understanding of the structural arrangements which form the basis for protein conformational stability. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr00410e

  12. Radio Frequency Ablation Registration, Segmentation, and Fusion Tool

    PubMed Central

    McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.

    2008-01-01

    The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716

  13. A group sequential adaptive treatment assignment design for proof of concept and dose selection in headache trials.

    PubMed

    Hall, David B; Meier, Ulrich; Diener, Hans-Cristoph

    2005-06-01

    The trial objective was to test whether a new mechanism of action would effectively treat migraine headaches and to select a dose range for further investigation. The motivation for a group sequential, adaptive, placebo-controlled trial design was (1) limited information about where across the range of seven doses to focus attention, (2) a need to limit the sample size for a complicated inpatient treatment, and (3) a desire to reduce exposure of patients to ineffective treatment. A design based on group sequential and up-and-down designs was developed, and its operational characteristics were explored by trial simulation. The primary outcome was headache response at 2 h after treatment. Groups of four treated and two placebo patients were assigned to one dose. Adaptive dose selection was based on the response rate of 60% seen with other migraine treatments. If more than 60% of treated patients responded, then the next group received the next lower dose; otherwise, the dose was increased. A stopping rule of at least five groups at the target dose, at least four of them with more than 60% response, was developed to ensure that a selected dose would be statistically significantly (p=0.05) superior to placebo. Simulations indicated good characteristics in terms of control of type 1 error, sufficient power, modest expected sample size, and modest bias in estimation. The trial design is attractive for phase 2 clinical trials when the response is acute and simple (ideally binary), a placebo comparator is required, and patient accrual is relatively slow, allowing for the collection and processing of results as a basis for the adaptive assignment of patients to dose groups. The acute migraine trial based on this design was successful in both proof of concept and dose-range selection.
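
    The assignment and stopping logic described above is straightforward to simulate, which is presumably how the operational characteristics were explored. A schematic re-implementation of the rule as stated in the abstract (placebo patients and all analysis details omitted; dose-response values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

def group_updown(p_resp, threshold=0.6, group_size=4,
                 min_groups=5, min_hits=4, max_groups=40):
    """Group up-and-down dose assignment: step down when more than 60% of
    the group responds, otherwise step up; stop once a dose has >= 5
    groups, >= 4 of them exceeding 60% response (schematic sketch)."""
    dose = 0
    groups = np.zeros(len(p_resp), dtype=int)
    hits = np.zeros(len(p_resp), dtype=int)
    for _ in range(max_groups):
        responders = rng.binomial(group_size, p_resp[dose])
        responded = responders / group_size > threshold
        groups[dose] += 1
        hits[dose] += responded
        if groups[dose] >= min_groups and hits[dose] >= min_hits:
            return dose                              # selected target dose
        if responded:
            dose = max(dose - 1, 0)                  # next lower dose
        else:
            dose = min(dose + 1, len(p_resp) - 1)    # escalate
    return None                                      # stopping rule never met

print(group_updown(p_resp=[0.2, 0.35, 0.5, 0.65, 0.8, 0.85, 0.9]))
```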

  14. Self-Consistent Chaotic Transport in a High-Dimensional Mean-Field Hamiltonian Map Model

    DOE PAGES

    Martínez-del-Río, D.; del-Castillo-Negrete, D.; Olvera, A.; ...

    2015-10-30

    We studied the self-consistent chaotic transport in a Hamiltonian mean-field model. This model provides a simplified description of transport in marginally stable systems including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean-field that couples all the degrees-of-freedom. The model is formulated as a large set of N coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant like in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Furthermore, numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system that appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of the onset of global transport. A turnstile-type transport mechanism that allows transport across instantaneous KAM invariant circles in non-autonomous systems is discussed. As a first step to understand transport, we study a special type of orbits referred to as sequential periodic orbits. Using symmetry properties we show that, through replication, high-dimensional sequential periodic orbits can be generated starting from low-dimensional periodic orbits. We show that sequential periodic orbits in the self-consistent map can be continued from trivial (uncoupled) periodic orbits of standard-like maps using numerical and asymptotic methods. Normal forms are used to describe these orbits and to find the values of the map parameters that guarantee their existence. Numerical simulations are used to verify the prediction from the asymptotic methods.

  15. Self-Consistent Chaotic Transport in a High-Dimensional Mean-Field Hamiltonian Map Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martínez-del-Río, D.; del-Castillo-Negrete, D.; Olvera, A.

    We studied the self-consistent chaotic transport in a Hamiltonian mean-field model. This model provides a simplified description of transport in marginally stable systems including vorticity mixing in strong shear flows and electron dynamics in plasmas. Self-consistency is incorporated through a mean-field that couples all the degrees-of-freedom. The model is formulated as a large set of N coupled standard-like area-preserving twist maps in which the amplitude and phase of the perturbation, rather than being constant like in the standard map, are dynamical variables. Of particular interest is the study of the impact of periodic orbits on the chaotic transport and coherent structures. Furthermore, numerical simulations show that self-consistency leads to the formation of a coherent macro-particle trapped around the elliptic fixed point of the system that appears together with an asymptotic periodic behavior of the mean field. To model this asymptotic state, we introduced a non-autonomous map that allows a detailed study of the onset of global transport. A turnstile-type transport mechanism that allows transport across instantaneous KAM invariant circles in non-autonomous systems is discussed. As a first step to understand transport, we study a special type of orbits referred to as sequential periodic orbits. Using symmetry properties we show that, through replication, high-dimensional sequential periodic orbits can be generated starting from low-dimensional periodic orbits. We show that sequential periodic orbits in the self-consistent map can be continued from trivial (uncoupled) periodic orbits of standard-like maps using numerical and asymptotic methods. Normal forms are used to describe these orbits and to find the values of the map parameters that guarantee their existence. Numerical simulations are used to verify the prediction from the asymptotic methods.

  16. Robust patella motion tracking using intensity-based 2D-3D registration on dynamic bi-plane fluoroscopy: towards quantitative assessment in MPFL reconstruction surgery

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Esnault, Matthieu; Grupp, Robert; Kosugi, Shinichi; Sato, Yoshinobu

    2016-03-01

    The determination of the in vivo motion of multiple bones using dynamic fluoroscopic images and computed tomography (CT) is useful for post-operative assessment of orthopaedic surgeries such as medial patellofemoral ligament reconstruction. We propose a robust method to measure the 3D motion of multiple rigid objects with high accuracy using a series of bi-plane fluoroscopic images and a multi-resolution, intensity-based, 2D-3D registration. A Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimizer was used with a gradient correlation similarity metric. Four approaches to registering three rigid objects (femur, tibia-fibula and patella) were implemented: 1) an individual-bone approach registering one bone at a time, each with optimization of a six-degrees-of-freedom (6DOF) parameter; 2) a sequential approach registering one bone at a time but using the previous bone's result as the background in DRR generation; 3) a simultaneous approach registering all the bones together (18DOF); and 4) a combination of the sequential and simultaneous approaches. These approaches were compared in experiments using simulated images generated from the CT of a healthy volunteer and measured fluoroscopic images. Over the 120 simulated frames of motion, the simultaneous approach showed improved registration accuracy compared to the individual approach, with less than 0.68 mm root-mean-square error (RMSE) for translation and less than 1.12° RMSE for rotation. A robustness evaluation conducted with 45 trials of randomly perturbed initializations showed that the sequential approach improved robustness significantly (74% success rate) compared to the individual-bone approach (34% success) for patella registration (femur and tibia-fibula registration had a 100% success rate with each approach).

  17. A time-efficient implementation of Extended Kalman Filter for sequential orbit determination and a case study for onboard application

    NASA Astrophysics Data System (ADS)

    Tang, Jingshi; Wang, Haihong; Chen, Qiuli; Chen, Zhonggui; Zheng, Jinjun; Cheng, Haowen; Liu, Lin

    2018-07-01

    Onboard orbit determination (OD) is often used in space missions, with which mission support can be partially accomplished autonomously, with less dependency on ground stations. In major Global Navigation Satellite Systems (GNSS), inter-satellite links are also an essential upgrade for future generations. To serve autonomous operation, sequential OD methods are crucial for providing real-time or near real-time solutions. The Extended Kalman Filter (EKF) is an effective and convenient sequential estimator that is widely used in onboard applications. The filter requires the solutions of the state transition matrix (STM) and the process-noise transition matrix, which are typically obtained by numerical integration. However, numerically integrating the differential equations is a CPU-intensive process and consumes a large portion of the time in EKF procedures. In this paper, we present an implementation that uses the analytical solutions of these transition matrices to replace the numerical calculations. This analytical implementation is demonstrated and verified using a fictitious constellation based on selected medium Earth orbit (MEO) and inclined geosynchronous orbit (IGSO) satellites. We show that this implementation performs effectively and converges quickly, steadily and accurately in the presence of considerable errors in the initial values, measurements and force models. The filter is able to converge within 2-4 h of flight time in our simulation. The observation residual is consistent with the simulated measurement error, which is about a few centimeters in our scenarios. Compared to results implemented with a numerically integrated STM, the analytical implementation shows consistent accuracy, while taking only about half the CPU time to filter a 10-day measurement series. Possible future extensions to fit various missions are also discussed.
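
    For context, one EKF predict/update cycle is shown below; the point of the paper is that the state transition matrix Phi (and the process-noise transition) can be supplied analytically instead of by numerically integrating the variational equations. The dynamics, Jacobians, and noise matrices are caller-supplied placeholders, not the paper's force models.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H_jac, Q, R):
    """One Extended Kalman Filter predict/update cycle (generic sketch).
    f: state propagation, F: state transition matrix (here is where an
    analytical STM replaces numerical integration), h: measurement model,
    H_jac: measurement Jacobian, Q/R: process/measurement noise."""
    # Predict: propagate state and covariance with the STM.
    x_pred = f(x)
    Phi = F(x)                         # analytical or integrated STM
    P_pred = Phi @ P @ Phi.T + Q
    # Update: fold in the new measurement z.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```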

  18. Skeletal response to maxillary protraction with and without maxillary expansion: a finite element study.

    PubMed

    Gautam, Pawan; Valiathan, Ashima; Adhikari, Raviraj

    2009-06-01

    The purpose of this finite element study was to biomechanically evaluate two treatment modalities, maxillary protraction alone and in combination with maxillary expansion, by comparing the displacement of various craniofacial structures. Two 3-dimensional analytical models were developed from sequential computed tomography scan images taken at 2.5-mm intervals of a dry young skull. AutoCAD software (2004 version, Autodesk, San Rafael, Calif) and ANSYS software (version 10, Belcan Engineering Group, Cincinnati, Ohio) were used. The model consisted of 108,799 10-node solid elements (SOLID92), 193,633 nodes, and 580,899 degrees of freedom. In the first model, maxillary protraction forces were simulated by applying 1 kg of anterior force directed 30 degrees downward to the palatal plane. In the second model, a 4-mm midpalatal suture opening and maxillary protraction were simulated. Forward displacement of the nasomaxillary complex with upward and forward rotation was observed with maxillary protraction alone. No rotational tendency was noted when protraction was carried out with 4 mm of transverse expansion. A tendency for anterior maxillary constriction after maxillary protraction was evident. The amounts of displacement in the frontal, vertical, and lateral directions with midpalatal suture opening were greater than with no opening of the midpalatal suture. The forward and downward displacements of the nasomaxillary complex with maxillary protraction and maxillary expansion more closely approximated the natural growth direction of the maxilla. Displacements of craniofacial structures were more favorable for the treatment of skeletal Class III maxillary retrognathia when maxillary protraction was used with maxillary expansion. Hence, biomechanically, maxillary protraction combined with maxillary expansion appears to be a superior treatment modality for maxillary retrognathia compared with maxillary protraction alone.

  19. Impact of He and H relative depth distributions on the result of sequential He+ and H+ ion implantation and annealing in silicon

    NASA Astrophysics Data System (ADS)

    Cherkashin, N.; Daghbouj, N.; Seine, G.; Claverie, A.

    2018-04-01

    Sequential He+ + H+ ion implantation, being more effective than the sole implantation of H+ or He+, is widely used to transfer thin layers of silicon onto different substrates. However, because the basic mechanisms involved in the process are poorly understood, the implantation parameters to be used for the efficient delamination of a superficial layer are still subject to debate. In this work, using various experimental techniques, we have studied the influence of the He and H relative depth distributions imposed by the ion energies on the result of the sequential implantation and annealing of the same fluence of He and H ions. Analyzing the characteristics of the blister populations observed after annealing and deducing the composition of the gas they contain from FEM simulations, we show that the trapping efficiency of He atoms in platelets and blisters during annealing depends on the behavior of the vacancies generated by the two implants within the H-rich region before and after annealing. Maximum efficiency of the sequential ion implantation is obtained when the H-rich region is able to trap all implanted He ions, while the vacancies it generated are not available to favor the formation of V-rich complexes after implantation and then He-filled nano-bubbles after annealing. A technological option is to implant the He+ ions first, at an energy such that the damage they generate is located on the deeper side of the H profile.

  20. Sequential programmable self-assembly: Role of cooperative interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonathan D. Halverson; Tkachenko, Alexei V.

    Here, we propose a general strategy of “sequential programmable self-assembly” that enables a bottom-up design of arbitrary multi-particle architectures on nano- and microscales. We show that a naive realization of this scheme, based on the pairwise additive interactions between particles, has fundamental limitations that lead to a relatively high error rate. This can be overcome by using cooperative interparticle binding. The cooperativity is a well known feature of many biochemical processes, responsible, e.g., for signaling and regulations in living systems. Here we propose to utilize a similar strategy for high precision self-assembly, and show that DNA-mediated interactions provide a convenient platform for its implementation. In particular, we outline a specific design of a DNA-based complex which we call “DNA spider,” that acts as a smart interparticle linker and provides a built-in cooperativity of binding. We demonstrate versatility of the sequential self-assembly based on spider-functionalized particles by designing several mesostructures of increasing complexity and simulating their assembly process. This includes a number of finite and repeating structures, in particular, the so-called tetrahelix and its several derivatives. Due to its generality, this approach allows one to design and successfully self-assemble virtually any structure made of a “GEOMAG” magnetic construction toy, out of nanoparticles. According to our results, once the binding cooperativity is strong enough, the sequential self-assembly becomes essentially error-free.

  1. Sequential programmable self-assembly: Role of cooperative interactions

    DOE PAGES

    Jonathan D. Halverson; Tkachenko, Alexei V.

    2016-03-04

    Here, we propose a general strategy of “sequential programmable self-assembly” that enables a bottom-up design of arbitrary multi-particle architectures on nano- and microscales. We show that a naive realization of this scheme, based on the pairwise additive interactions between particles, has fundamental limitations that lead to a relatively high error rate. This can be overcome by using cooperative interparticle binding. The cooperativity is a well known feature of many biochemical processes, responsible, e.g., for signaling and regulations in living systems. Here we propose to utilize a similar strategy for high precision self-assembly, and show that DNA-mediated interactions provide a convenient platform for its implementation. In particular, we outline a specific design of a DNA-based complex which we call “DNA spider,” that acts as a smart interparticle linker and provides a built-in cooperativity of binding. We demonstrate versatility of the sequential self-assembly based on spider-functionalized particles by designing several mesostructures of increasing complexity and simulating their assembly process. This includes a number of finite and repeating structures, in particular, the so-called tetrahelix and its several derivatives. Due to its generality, this approach allows one to design and successfully self-assemble virtually any structure made of a “GEOMAG” magnetic construction toy, out of nanoparticles. According to our results, once the binding cooperativity is strong enough, the sequential self-assembly becomes essentially error-free.

  2. Recursive Directional Ligation Approach for Cloning Recombinant Spider Silks.

    PubMed

    Dinjaski, Nina; Huang, Wenwen; Kaplan, David L

    2018-01-01

    Recent advances in genetic engineering have provided a route to produce various types of recombinant spider silks. Different cloning strategies have been applied to achieve this goal (e.g., concatemerization, step-by-step ligation, recursive directional ligation). Here we describe recursive directional ligation as an approach that allows for facile modularity and control over the size of the genetic cassettes. This approach is based on sequential ligation of genetic cassettes (monomers) where the junctions between them are formed without interrupting key gene sequences with additional base pairs.

  3. Kiln for hot-pressing compacts in a continuous manner

    DOEpatents

    Reynolds, C.D Jr.

    1983-08-08

    The invention is directed to a hot pressing furnace or kiln which is capable of preheating, hot pressing, and cooling a plurality of articles in a sequential and continuous manner. The hot pressing furnace of the present invention comprises an elongated, horizontally disposed furnace capable of holding a plurality of displaceable pusher plates each supporting a die body loaded with refractory or ceramic material to be hot pressed. Each of these plates and the die body supported thereby is sequentially pushed through the preheating zone, a temperature stabilizing and a hot pressing zone, and a cooling zone so as to provide a continuous hot-pressing operation of a plurality of articles.

  4. Kiln for hot-pressing compacts in a continuous manner

    DOEpatents

    Reynolds, Jr., Carl D.

    1985-01-01

    The present invention is directed to a hot pressing furnace or kiln which is capable of preheating, hot pressing, and cooling a plurality of articles in a sequential and continuous manner. The hot pressing furnace of the present invention comprises an elongated, horizontally disposed furnace capable of holding a plurality of displaceable pusher plates each supporting a die body loaded with refractory or ceramic material to be hot pressed. Each of these plates and the die body supported thereby is sequentially pushed through the preheating zone, a temperature stabilizing and a hot pressing zone, and a cooling zone so as to provide a continuous hot-pressing operation of a plurality of articles.

  5. Optical fiber switch

    DOEpatents

    Early, James W.; Lester, Charles S.

    2002-01-01

    Optical fiber switches operated by electrical activation of at least one laser light modulator through which laser light is directed into at least one polarizer are used for the sequential transport of laser light from a single laser into a plurality of optical fibers. In one embodiment of the invention, laser light from a single excitation laser is sequentially transported to a plurality of optical fibers which in turn transport the laser light to separate individual remotely located laser fuel ignitors. The invention can be operated electro-optically with no need for any mechanical or moving parts, or, alternatively, can be operated electro-mechanically. The invention can be used to switch either pulsed or continuous wave laser light.

  6. Patient motion effects on the quantification of regional myocardial blood flow with dynamic PET imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, Chad R. R. N.; Kemp, Robert A. de, E-mail: RAdeKemp@ottawaheart.ca; Klein, Ran

    Purpose: Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET–CT misalignment. Methods: A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. Results: In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. Simulations also indicated that the magnitude of MBF errors resulting from motion in the superior/inferior and anterior/posterior directions was similar (up to 250%). Body motion effects were more detrimental for higher resolution PET imaging (2 vs 10 mm full-width at half-maximum), and for motion occurring during the mid-to-late time-frames. Motion correction of the reconstructed dynamic image series resulted in significant reduction in MBF errors, but did not account for the residual PET–CTAC misalignment artifacts. MBF bias was reduced further using global partial-volume correction, and using dynamic alignment of the PET projection data to the CT scan for accurate attenuation correction during image reconstruction. Conclusions: Patient body motion can produce MBF estimation errors up to 500%. To reduce these errors, new motion correction algorithms must be effective in identifying motion in the left/right direction, and in the mid-to-late time-frames, since these conditions produce the largest errors in MBF, particularly for high resolution PET imaging. Ideally, motion correction should be done before or during image reconstruction to eliminate PET-CTAC misalignment artifacts.

  7. Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission

    NASA Astrophysics Data System (ADS)

    Huang, Yuechen; Li, Haiyang

    2018-06-01

    This paper presents a reliability-based sequential optimization (RBSO) method to solve the trajectory optimization problem with parametric uncertainties in the entry dynamics of a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and the reliability-based optimization model is then formulated. In addition, a modified sequential optimization method, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed, is proposed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method supports the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and efficiently approximates the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve computational efficiency. The cycle comprising SO, reliability assessment, and constraint updates is repeated in the RBSO until the reliability requirements for constraint satisfaction are met. Finally, the RBSO is compared with traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
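
    The nonintrusive ingredient can be illustrated in one dimension: sample the model at random collocation points, regress onto Hermite polynomials, and read the output statistics off the coefficients. A sketch with a toy model output standing in for the entry dynamics (the paper's multi-parameter setup and MPP search are not reproduced):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(3)

def pce_surrogate(model, order=4, n_samples=200):
    """Nonintrusive PCE in one standard-normal parameter xi: sample the
    model, regress onto probabilists' Hermite polynomials He_k, and read
    statistics off the coefficients (E[He_k^2] = k! under N(0,1))."""
    xi = rng.standard_normal(n_samples)
    V = hermevander(xi, order)                      # columns He_0 .. He_order
    coef = np.linalg.lstsq(V, model(xi), rcond=None)[0]
    mean = coef[0]
    var = sum(coef[k] ** 2 * math.factorial(k) for k in range(1, order + 1))
    return coef, mean, var

# Toy stand-in for an uncertain entry-dynamics output.
coef, m, v = pce_surrogate(lambda xi: np.exp(0.1 * xi))
print(m, v)   # compare with the exact lognormal mean exp(0.005)
```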

  8. Sequential parallel comparison design with binary and time-to-event outcomes.

    PubMed

    Silverman, Rachel Kloss; Ivanova, Anastasia; Fine, Jason

    2018-04-30

    Sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials, especially trials with a potentially high placebo effect. Sequential parallel comparison design is conducted in two stages. Participants are randomized between active therapy and placebo in stage 1. Then, stage 1 placebo nonresponders are rerandomized between active therapy and placebo. Data from the two stages are pooled to yield a single P value. We consider SPCD with binary and with time-to-event outcomes. For time-to-event outcomes, response is defined as a favorable event prior to the end of follow-up for a given stage of SPCD. We show that for these cases, the usual test statistics from stages 1 and 2 are asymptotically normal and uncorrelated under the null hypothesis, leading to a straightforward combined testing procedure. In addition, we show that the estimators of the treatment effects from the two stages are asymptotically normal and uncorrelated under the null and alternative hypotheses, yielding confidence interval procedures with correct coverage. Simulations and real data analysis demonstrate the utility of the binary and time-to-event SPCD. Copyright © 2018 John Wiley & Sons, Ltd.
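
    Because the two stage-wise statistics are asymptotically standard normal and uncorrelated under the null, any fixed weighted combination, once renormalized, is again standard normal. A minimal sketch, with an assumed weight w that is a design choice rather than a value from the paper:

      # Pooling two stage-wise z-statistics that are asymptotically N(0,1)
      # and uncorrelated under the null, as in SPCD. The weight w is an
      # assumed design parameter.
      import math
      from scipy.stats import norm

      def spcd_combined_pvalue(z1, z2, w=0.6):
          # Var(w*Z1 + (1-w)*Z2) = w**2 + (1-w)**2 when Z1, Z2 are uncorrelated
          z = (w * z1 + (1.0 - w) * z2) / math.sqrt(w**2 + (1.0 - w)**2)
          return norm.sf(z)   # one-sided P value for the combined statistic

      # Example: stage-1 and stage-2 z-statistics of 1.8 and 1.1
      print(spcd_combined_pvalue(1.8, 1.1))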

  9. When good is stickier than bad: Understanding gain/loss asymmetries in sequential framing effects.

    PubMed

    Sparks, Jehan; Ledgerwood, Alison

    2017-08-01

    Considerable research has demonstrated the power of the current positive or negative frame to shape people's current judgments. But humans must often learn about positive and negative information as they encounter that information sequentially over time. It is therefore crucial to consider the potential importance of sequencing when developing an understanding of how humans think about valenced information. Indeed, recent work looking at sequentially encountered frames suggests that some frames can linger outside the context in which they are first encountered, sticking in the mind so that subsequent frames have a muted effect. The present research builds a comprehensive account of sequential framing effects in both the loss and the gain domains. After seeing information about a potential gain or loss framed in positive terms or negative terms, participants saw the same issue reframed in the opposing way. Across 5 studies and 1566 participants, we find accumulating evidence for the notion that in the gain domain, positive frames are stickier than negative frames for novel but not familiar scenarios, whereas in the loss domain, negative frames are always stickier than positive frames. Integrating regulatory focus theory with the literatures on negativity dominance and positivity offset, we develop a new and comprehensive account of sequential framing effects that emphasizes the adaptive value of positivity and negativity biases in specific contexts. Our findings highlight the fact that research conducted solely in the loss domain risks painting an incomplete and oversimplified picture of human bias and suggest new directions for future research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Preschool Children's Control of Action Outcomes

    ERIC Educational Resources Information Center

    Freier, Livia; Cooper, Richard P.; Mareschal, Denis

    2017-01-01

    Naturalistic goal-directed behaviours require the engagement and maintenance of appropriate levels of cognitive control over relatively extended intervals of time. In two experiments, we examined preschool children's abilities to maintain top-down control throughout the course of a sequential task. Both 3- and 5-year-olds demonstrated good…

  11. Gestalt Imagery: A Critical Factor in Language Comprehension.

    ERIC Educational Resources Information Center

    Bell, Nanci

    1991-01-01

    Lack of gestalt imagery (the ability to create imaged wholes) can contribute to language comprehension disorder characterized by weak reading comprehension, weak oral language comprehension, weak oral language expression, weak written language expression, difficulty following directions, and a weak sense of humor. Sequential stimulation using an…

  12. Flip-Flops in Students' Conceptions of State

    ERIC Educational Resources Information Center

    Herman, G. L.; Zilles, C.; Loui, M. C.

    2012-01-01

    The authors conducted a qualitative interview-based study to reveal students' misconceptions about state in sequential circuits. This paper documents 16 misconceptions of state, how students' conceptions of state shift and change, and students' methodological weaknesses. These misconceptions can be used to inform and direct instruction. This study…

  13. A GPU-based large-scale Monte Carlo simulation method for systems with long-range interactions

    NASA Astrophysics Data System (ADS)

    Liang, Yihao; Xing, Xiangjun; Li, Yaohang

    2017-06-01

    In this work we present an efficient implementation of canonical Monte Carlo simulation for Coulomb many-body systems on graphics processing units (GPU). Our method takes advantage of the GPU Single Instruction, Multiple Data (SIMD) architecture and adopts the sequential updating scheme of the Metropolis algorithm. It makes no approximation in the computation of energy and reaches a remarkable 440-fold speedup compared with the serial implementation on CPU. We further use this method to simulate primitive-model electrolytes and measure very precisely all ion-ion pair correlation functions at high concentrations. From these data, we extract the renormalized Debye length, renormalized valences of constituent ions, and renormalized dielectric constants. These results demonstrate unequivocally physics beyond the classical Poisson-Boltzmann theory.
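
    For orientation, the sketch below shows the serial, site-by-site Metropolis updating that the paper's GPU scheme parallelizes, applied to a toy box of charges. The bare Coulomb pair energy, box size, and step size are illustrative assumptions (no periodic boundaries or Ewald summation).

      # Serial Metropolis sweep for a toy 3D Coulomb system (Python/NumPy).
      import numpy as np
      rng = np.random.default_rng(0)

      N, L, beta, step = 64, 10.0, 1.0, 0.3
      q = rng.choice([-1.0, 1.0], size=N)        # ion charges
      r = rng.uniform(0, L, size=(N, 3))         # positions in a box

      def energy_of(i, pos):
          # Bare Coulomb interaction of particle i (at pos) with all others
          d = np.linalg.norm(pos - np.delete(r, i, axis=0), axis=1)
          return np.sum(q[i] * np.delete(q, i) / d)

      for sweep in range(100):
          for i in range(N):                     # sequential (site-by-site) updating
              trial = r[i] + rng.uniform(-step, step, 3)
              if np.all((trial >= 0) & (trial <= L)):
                  dE = energy_of(i, trial) - energy_of(i, r[i])
                  if dE <= 0 or rng.random() < np.exp(-beta * dE):
                      r[i] = trial               # Metropolis acceptance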

  14. A novel approach for small sample size family-based association studies: sequential tests.

    PubMed

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Whereas TDT classifies single-nucleotide polymorphisms (SNPs) into only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in lower rates of false positives and false negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
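
    A generic Wald SPRT with the three-way decision described above (accept, reject, or keep sampling) can be sketched as follows; the Bernoulli hypotheses and error rates are illustrative assumptions, not the paper's genetic model.

      # Wald SPRT sketch with the three-way decision mirroring the paper's
      # "keep sampling" SNP group. Bernoulli likelihoods are assumed.
      import math

      def sprt(observations, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
          upper = math.log((1 - beta) / alpha)   # cross upward: accept H1
          lower = math.log(beta / (1 - alpha))   # cross downward: accept H0
          llr = 0.0
          for x in observations:                 # x in {0, 1}
              llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
              if llr >= upper:
                  return "associated"            # evidence for H1
              if llr <= lower:
                  return "not associated"        # evidence for H0
          return "keep sampling"                 # not enough evidence yet

      print(sprt([1, 1, 0, 1, 1, 1, 1, 0, 1, 1]))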

  15. Transaction costs and sequential bargaining in transferable discharge permit markets.

    PubMed

    Netusil, N R; Braden, J B

    2001-03-01

    Market-type mechanisms have been introduced and are being explored for various environmental programs. Several existing programs, however, have not attained the cost savings that were initially projected. Modeling that acknowledges the role of transaction costs and the discrete, bilateral, and sequential manner in which trades are executed should provide a more realistic basis for calculating potential cost savings. This paper presents empirical evidence on potential cost savings by examining a market for the abatement of sediment from farmland. Empirical results based on a market simulation model find no statistically significant change in mean abatement costs under several transaction cost levels when contracts are randomly executed. An alternative method of contract execution, gain-ranked, yields similar results. At the highest transaction cost level studied, trading reduces the total cost of compliance relative to a uniform standard that reflects current regulations.

  16. Satisfiability Test with Synchronous Simulated Annealing on the Fujitsu AP1000 Massively-Parallel Multiprocessor

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Biswas, Rupak

    1996-01-01

    Solving the hard Satisfiability Problem is time consuming even for modest-sized problem instances. Solving the Random L-SAT Problem is especially difficult due to the ratio of clauses to variables. This report presents a parallel synchronous simulated annealing method for solving the Random L-SAT Problem on a large-scale distributed-memory multiprocessor. In particular, we use a parallel synchronous simulated annealing procedure, called Generalized Speculative Computation, which guarantees the same decision sequence as sequential simulated annealing. To demonstrate the performance of the parallel method, we have selected problem instances varying in size from 100 variables/425 clauses to 5000 variables/21,250 clauses. Experimental results on the AP1000 multiprocessor indicate that our approach can satisfy 99.9 percent of the clauses while giving almost a 70-fold speedup on 500 processors.
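
    The serial baseline that Generalized Speculative Computation reproduces is ordinary sequential simulated annealing over variable flips. A bare-bones sketch on a random 3-SAT instance, with assumed annealing parameters:

      # Sequential simulated annealing for a random 3-SAT instance (Python).
      import math, random
      random.seed(1)

      n_vars, n_clauses = 100, 425
      clauses = [[random.choice([-1, 1]) * random.randint(1, n_vars)
                  for _ in range(3)] for _ in range(n_clauses)]

      def unsatisfied(assign):
          return sum(not any((lit > 0) == assign[abs(lit)] for lit in c)
                     for c in clauses)

      assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
      T, cost = 2.0, unsatisfied(assign)
      while T > 0.01 and cost > 0:
          v = random.randint(1, n_vars)          # propose a single-variable flip
          assign[v] = not assign[v]
          new = unsatisfied(assign)
          if new <= cost or random.random() < math.exp(-(new - cost) / T):
              cost = new                         # accept
          else:
              assign[v] = not assign[v]          # reject: flip back
          T *= 0.999                             # geometric cooling
      print(n_clauses - cost, "of", n_clauses, "clauses satisfied")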

  17. All you need is shape: Predicting shear banding in sand with LS-DEM

    NASA Astrophysics Data System (ADS)

    Kawamoto, Reid; Andò, Edward; Viggiani, Gioacchino; Andrade, José E.

    2018-02-01

    This paper presents discrete element method (DEM) simulations with experimental comparisons at multiple length scales, underscoring the crucial role of particle shape. The simulations build on technological advances in the DEM furnished by level sets (LS-DEM), which enable the mathematical representation of the surface of arbitrarily shaped particles such as grains of sand. We show that this ability to model shape enables unprecedented capture of the mechanics of granular materials across scales ranging from macroscopic behavior to local behavior to particle behavior. Specifically, the model is able to predict the onset and evolution of shear banding in sands, replicating the most advanced high-fidelity experiments in triaxial compression equipped with sequential X-ray tomography imaging. We present comparisons of the model and experiment at an unprecedented level of quantitative agreement, building a one-to-one model where every particle in the more than 53,000-particle array has its own avatar or numerical twin. Furthermore, the boundary conditions of the experiment are faithfully captured by modeling the membrane effect as well as the platen displacement and tilting. The results show a computational tool that can give insight into the physics and mechanics of granular materials undergoing shear deformation and failure, with computational times comparable to those of the experiment. One quantitative measure extracted from the LS-DEM simulations that is currently not available experimentally is the evolution of three-dimensional force chains inside and outside of the shear band. We show that the rotations of the force chains are correlated with the rotations in the stress principal directions.

  18. Asymmetric predictability and cognitive competition in football penalty shootouts.

    PubMed

    Misirlisoy, Erman; Haggard, Patrick

    2014-08-18

    Sports provide powerful demonstrations of cognitive strategies underlying competitive behavior. Penalty shootouts in football (soccer) involve direct competition between elite players and absorb the attention of millions. The penalty shootout between Germany and England in the 1990 World Cup semifinal was viewed by an estimated 46.49% of the UK population. In a penalty shootout, a goalkeeper must defend their goal without teammate assistance while an opposing series of kickers aim to kick the ball past them into the net. As in many sports, the ball during a penalty kick often approaches too quickly for the goalkeeper to react to its direction of motion; instead, the goalkeeper must guess the likely direction of the kick, and dive in anticipation, if they are to have a chance of saving the shot. We examined all 361 kicks from the 37 penalty shootouts that occurred in World Cup and Euro Cup matches over a 36-year period from 1976 to 2012 and show that goalkeepers displayed a clear sequential bias. Following repeated kicks in the same direction, goalkeepers became increasingly likely to dive in the opposite direction on the next kick. Surprisingly, kickers failed to exploit these goalkeeper biases. Our findings highlight the importance of monitoring and predicting sequential behavior in real-world competition. Penalty shootouts pit one goalkeeper against several kickers in rapid succession. Asymmetries in the cognitive capacities of an individual versus a group could produce significant advantages over opponents. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Three is much more than two in coarsening dynamics of cyclic competitions

    NASA Astrophysics Data System (ADS)

    Mitarai, Namiko; Gunnarson, Ivar; Pedersen, Buster Niels; Rosiek, Christian Anker; Sneppen, Kim

    2016-04-01

    The classical game of rock-paper-scissors has inspired experiments and spatial model systems that address the robustness of biological diversity. In particular, the game nicely illustrates that cyclic interactions allow multiple strategies to coexist for long time intervals. When formulated as a one-dimensional cellular automaton, the spatial distribution of strategies exhibits coarsening with algebraically growing domain size over time, while the two-dimensional version allows domains to break and thereby opens the possibility for long-time coexistence. We consider a quasi-one-dimensional implementation of the cyclic competition, and study the long-term dynamics as a function of rare invasions between parallel linear ecosystems. We find that increasing the complexity from two to three parallel subsystems allows a transition from complete coarsening to an active steady state where the domain size stays finite. We further find that this transition happens irrespective of whether the update is done in parallel for all sites simultaneously or randomly in sequential order. In both cases, the active state is characterized by localized bursts of dislocations, followed by longer periods of coarsening. In the case of the parallel dynamics, we find that there is another phase transition between the active steady state and the coarsening state within the three-line system when the invasion rate between the subsystems is varied. We identify the critical parameter for this transition and show that the density of active boundaries has critical exponents that are consistent with the directed percolation universality class. On the other hand, numerical simulations with the random sequential dynamics suggest that the system may exhibit an active steady state as long as the invasion rate is finite.
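
    A toy version of the one-dimensional cyclic automaton with random sequential updating looks like the following; the invasion rule and lattice size are assumptions for illustration, and the domain-wall count gives a crude view of coarsening.

      # 1D rock-paper-scissors automaton with random sequential updates.
      import random
      random.seed(0)

      N, STATES = 200, 3                 # cyclic: 1 beats 0, 2 beats 1, 0 beats 2
      lattice = [random.randrange(STATES) for _ in range(N)]

      def beats(a, b):
          return (a - b) % STATES == 1   # a invades b

      for step in range(50 * N):         # random sequential updating
          i = random.randrange(N)
          j = (i + random.choice([-1, 1])) % N   # random neighbor (periodic)
          if beats(lattice[i], lattice[j]):
              lattice[j] = lattice[i]            # invasion event

      # Count domain walls as a crude measure of coarsening
      walls = sum(lattice[i] != lattice[(i + 1) % N] for i in range(N))
      print("domain walls remaining:", walls)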

  20. Sequential Infection in Ferrets with Antigenically Distinct Seasonal H1N1 Influenza Viruses Boosts Hemagglutinin Stalk-Specific Antibodies

    PubMed Central

    Kirchenbaum, Greg A.; Carter, Donald M.

    2015-01-01

    ABSTRACT Broadly reactive antibodies targeting the conserved hemagglutinin (HA) stalk region are elicited following sequential infection or vaccination with influenza viruses belonging to divergent subtypes and/or expressing antigenically distinct HA globular head domains. Here, we demonstrate, through the use of novel chimeric HA proteins and competitive binding assays, that sequential infection of ferrets with antigenically distinct seasonal H1N1 (sH1N1) influenza virus isolates induced an HA stalk-specific antibody response. Additionally, stalk-specific antibody titers were boosted following sequential infection with antigenically distinct sH1N1 isolates in spite of preexisting, cross-reactive, HA-specific antibody titers. Despite a decline in stalk-specific serum antibody titers, sequential sH1N1 influenza virus-infected ferrets were protected from challenge with a novel H1N1 influenza virus (A/California/07/2009), and these ferrets poorly transmitted the virus to naive contacts. Collectively, these findings indicate that HA stalk-specific antibodies are commonly elicited in ferrets following sequential infection with antigenically distinct sH1N1 influenza virus isolates lacking HA receptor-binding site cross-reactivity and can protect ferrets against a pathogenic novel H1N1 virus. IMPORTANCE The influenza virus hemagglutinin (HA) is a major target of the humoral immune response following infection and/or seasonal vaccination. While antibodies targeting the receptor-binding pocket of HA possess strong neutralization capacities, these antibodies are largely strain specific and do not confer protection against antigenic drift variant or novel HA subtype-expressing viruses. In contrast, antibodies targeting the conserved stalk region of HA exhibit broader reactivity among viruses within and among influenza virus subtypes. Here, we show that sequential infection of ferrets with antigenically distinct seasonal H1N1 influenza viruses boosts the antibody responses directed at the HA stalk region. Moreover, ferrets possessing HA stalk-specific antibody were protected against novel H1N1 virus infection and did not transmit the virus to naive contacts. PMID:26559834

  1. Calorimetric and Diffractometric Evidence for the Sequential Crystallization of Buffer Components and the Consequential pH Swing in Frozen Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sundaramurthi, Prakash; Shalaev, Evgenyi; Suryanarayanan, Raj

    2010-06-22

    Sequential crystallization of succinate buffer components in the frozen solution has been studied by differential scanning calorimetry and X-ray diffractometry (both laboratory and synchrotron sources). The consequential pH shifts were monitored using a low-temperature electrode. When a solution buffered to pH < pKa2 was cooled from room temperature (RT), the freeze-concentrate pH first increased and then decreased. This was attributed to the sequential crystallization of succinic acid, monosodium succinate, and finally disodium succinate. When buffered to pH > pKa2, the freeze-concentrate pH first decreased and then increased due to the sequential crystallization of the basic (disodium succinate) followed by the acidic (monosodium succinate and succinic acid) buffer components. XRD provided direct evidence of the crystallization events in the frozen buffer solutions, including the formation of disodium succinate hexahydrate [Na2(CH2COO)2·6H2O]. When the frozen solution was warmed in a differential scanning calorimeter, multiple endotherms attributable to the melting of buffer components and ice were observed. When the frozen solutions were dried under reduced pressure, ice sublimation was followed by dehydration of the crystalline hexahydrate to a poorly crystalline anhydrate. However, crystalline succinic acid and monosodium succinate were retained in the final lyophiles. The pH and the buffer salt concentration of the prelyo solution influenced the crystalline salt content in the final lyophile. The direction and magnitude of the pH shift in the frozen solution depended on both the initial pH and the buffer concentration. In light of the pH-sensitive nature of a significant fraction of pharmaceuticals (especially proteins), extreme care is needed in both the buffer selection and its concentration.

  2. Multi-species attributes as the condition for adaptive sampling of rare species using two-stage sequential sampling with an auxiliary variable

    USGS Publications Warehouse

    Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.

    2011-01-01

    Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV compared to SRS, and efficiency was as high as 2 (i.e., the variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend simulations tailored to the application of interest as highly useful for evaluating designs in preparation for sampling rare and clustered populations.

  3. Comparison of different strategies in prenatal screening for Down's syndrome: cost effectiveness analysis of computer simulation.

    PubMed

    Gekas, Jean; Gagné, Geneviève; Bujold, Emmanuel; Douillard, Daniel; Forest, Jean-Claude; Reinharz, Daniel; Rousseau, François

    2009-02-13

    To assess and compare the cost effectiveness of three different strategies for prenatal screening for Down's syndrome (integrated test, sequential screening, and contingent screenings) and to determine the most useful cut-off values for risk, computer simulations were used to study integrated, sequential, and contingent screening strategies with various cut-offs leading to 19 potential screening algorithms. The computer simulation was populated with data from the Serum Urine and Ultrasound Screening Study (SURUSS), real unit costs for healthcare interventions, and a population of 110,948 pregnancies from the province of Québec for the year 2001. Outcomes measured were cost effectiveness ratios, incremental cost effectiveness ratios, and screening options' outcomes. The contingent screening strategy dominated all other screening options: it had the best cost effectiveness ratio ($C26,833 per case of Down's syndrome) with fewer procedure-related euploid miscarriages and unnecessary terminations (respectively, 6 and 16 per 100,000 pregnancies). It also outperformed serum screening in the second trimester. In terms of the incremental cost effectiveness ratio, contingent screening was still dominant: compared with screening based on maternal age alone, the savings were $C30,963 per additional birth with Down's syndrome averted. Contingent screening was the only screening strategy that offered early reassurance to the majority of women (77.81%) in the first trimester and minimised costs by limiting retesting during the second trimester (21.05%). For the contingent and sequential screening strategies, the choice of cut-off value for risk in the first trimester test significantly affected the cost effectiveness ratios (respectively, from $C26,833 to $C37,260 and from $C35,215 to $C45,314 per case of Down's syndrome), the number of procedure-related euploid miscarriages (from 6 to 46 and from 6 to 45 per 100,000 pregnancies), and the number of unnecessary terminations (from 16 to 26 and from 16 to 25 per 100,000 pregnancies). Contingent screening, with a first trimester cut-off value for high risk of 1 in 9, is the preferred option for prenatal screening of women for pregnancies affected by Down's syndrome.
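
    The two ratios reported above follow simple arithmetic: a cost effectiveness ratio divides cost by cases detected, and an incremental ratio divides the cost difference between strategies by their effect difference. A minimal sketch with hypothetical numbers (not the study's data):

      # Cost-effectiveness arithmetic with hypothetical placeholder inputs.
      def cer(cost, cases_detected):
          return cost / cases_detected                       # cost per case detected

      def icer(cost_a, effect_a, cost_b, effect_b):
          return (cost_a - cost_b) / (effect_a - effect_b)   # incremental ratio

      # Hypothetical: strategy A costs $3.0M and detects 110 cases;
      # strategy B costs $2.4M and detects 90 cases.
      print(cer(3_000_000, 110))                 # ~27,273 per case
      print(icer(3_000_000, 110, 2_400_000, 90)) # 30,000 per extra case detected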

  4. High performance hybrid functional Petri net simulations of biological pathway models on CUDA.

    PubMed

    Chalkidis, Georgios; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Hybrid functional Petri nets are a widespread tool for representing and simulating biological models. Due to their potential to provide virtual drug testing environments, biological simulations have a growing impact on pharmaceutical research. Continuous research advancements in biology and medicine lead to exponentially increasing simulation times, thus raising the demand for performance acceleration through efficient and inexpensive parallel computation solutions. Recent developments in the field of general-purpose computation on graphics processing units (GPGPU) have enabled the scientific community to port a variety of compute-intensive algorithms onto the graphics processing unit (GPU). This work presents the first scheme for mapping biological hybrid functional Petri net models, which can handle both discrete and continuous entities, onto compute unified device architecture (CUDA) enabled GPUs. GPU-accelerated simulations are observed to run up to 18 times faster than sequential implementations. Simulating the cell boundary formation by Delta-Notch signaling on a CUDA-enabled GPU results in a speedup of approximately 7x for a model containing 1,600 cells.
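
    A hybrid functional Petri net mixes continuously integrated places with discretely firing transitions. The toy step function below (a continuous A-to-B flow plus a threshold-fired discrete transition) is an assumed example of the model class, not one of the paper's pathway models.

      # Toy hybrid functional Petri net step: Euler integration of a
      # continuous transition plus a threshold-fired discrete transition.
      marking = {"A": 10.0, "B": 0.0, "tokens": 0}
      k, dt = 0.5, 0.01

      def step(m):
          flow = k * m["A"] * dt           # continuous transition A -> B
          m["A"] -= flow
          m["B"] += flow
          if m["B"] >= 2.0:                # discrete transition fires on threshold
              m["B"] -= 2.0
              m["tokens"] += 1
          return m

      for _ in range(2000):                # simulate 20 time units
          marking = step(marking)
      print(marking)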

  5. Night vision goggle stimulation using LCoS and DLP projection technology, which is better?

    NASA Astrophysics Data System (ADS)

    Ali, Masoud H.; Lyon, Paul; De Meerleer, Peter

    2014-06-01

    High fidelity night-vision training has become important for many of the simulation systems being procured today. The end-users of these simulation-training systems prefer using their actual night-vision goggle (NVG) headsets. This requires that the visual display system stimulate the NVGs in a realistic way. Historically NVG stimulation was done with cathode-ray tube (CRT) projectors. However, this technology became obsolete and in recent years training simulators do NVG stimulation with laser, LCoS and DLP projectors. The LCoS and DLP projection technologies have emerged as the preferred approach for the stimulation of NVGs. Both LCoS and DLP technologies have advantages and disadvantages for stimulating NVGs. LCoS projectors can have more than 5-10 times the contrast capability of DLP projectors. The larger the difference between the projected black level and the brightest object in a scene, the better the NVG stimulation effects can be. This is an advantage of LCoS technology, especially when the proper NVG wavelengths are used. Single-chip DLP projectors, even though they have much reduced contrast compared to LCoS projectors, can use LED illuminators in a sequential red-green-blue fashion to create a projected image. It is straightforward to add an extra infrared (NVG wavelength) LED into this sequential chain of LED illumination. The content of this NVG channel can be independent of the visible scene, which allows effects to be added that can compensate for the lack of contrast inherent in a DLP device. This paper will expand on the differences between LCoS and DLP projectors for stimulating NVGs and summarize the benefits of both in night-vision simulation training systems.

  6. Scorpion Hybrid Optical-based Inertial Tracker (HObIT) test results

    NASA Astrophysics Data System (ADS)

    Atac, Robert; Spink, Scott; Calloway, Tom; Foxlin, Eric

    2014-06-01


  7. Directed functional connectivity matures with motor learning in a cortical pattern generator.

    PubMed

    Day, Nancy F; Terleski, Kyle L; Nykamp, Duane Q; Nick, Teresa A

    2013-02-01

    Sequential motor skills may be encoded by feedforward networks that consist of groups of neurons that fire in sequence (Abeles 1991; Long et al. 2010). However, there has been no evidence of an anatomic map of activation sequence in motor control circuits, which would be potentially detectable as directed functional connectivity of coactive neuron groups. The proposed pattern generator for birdsong, the HVC (Long and Fee 2008; Vu et al. 1994), contains axons that are preferentially oriented in the rostrocaudal axis (Nottebohm et al. 1982; Stauffer et al. 2012). We used four-tetrode recordings to assess the activity of ensembles of single neurons along the rostrocaudal HVC axis in anesthetized zebra finches. We found an axial, polarized neural network in which sequential activity is directionally organized along the rostrocaudal axis in adult males, who produce a stereotyped song. Principal neurons fired in rostrocaudal order and with interneurons that were rostral to them, suggesting that groups of excitatory neurons fire at the leading edge of travelling waves of inhibition. Consistent with the synchronization of neurons by caudally travelling waves of inhibition, the activity of interneurons was more coherent in the orthogonal mediolateral axis than in the rostrocaudal axis. If directed functional connectivity within the HVC is important for stereotyped, learned song, then it may be lacking in juveniles, which sing a highly variable song. Indeed, we found little evidence for network directionality in juveniles. These data indicate that a functionally directed network within the HVC matures during sensorimotor learning and may underlie vocal patterning.

  8. Directed functional connectivity matures with motor learning in a cortical pattern generator

    PubMed Central

    Day, Nancy F.; Terleski, Kyle L.; Nykamp, Duane Q.

    2013-01-01

    Sequential motor skills may be encoded by feedforward networks that consist of groups of neurons that fire in sequence (Abeles 1991; Long et al. 2010). However, there has been no evidence of an anatomic map of activation sequence in motor control circuits, which would be potentially detectable as directed functional connectivity of coactive neuron groups. The proposed pattern generator for birdsong, the HVC (Long and Fee 2008; Vu et al. 1994), contains axons that are preferentially oriented in the rostrocaudal axis (Nottebohm et al. 1982; Stauffer et al. 2012). We used four-tetrode recordings to assess the activity of ensembles of single neurons along the rostrocaudal HVC axis in anesthetized zebra finches. We found an axial, polarized neural network in which sequential activity is directionally organized along the rostrocaudal axis in adult males, who produce a stereotyped song. Principal neurons fired in rostrocaudal order and with interneurons that were rostral to them, suggesting that groups of excitatory neurons fire at the leading edge of travelling waves of inhibition. Consistent with the synchronization of neurons by caudally travelling waves of inhibition, the activity of interneurons was more coherent in the orthogonal mediolateral axis than in the rostrocaudal axis. If directed functional connectivity within the HVC is important for stereotyped, learned song, then it may be lacking in juveniles, which sing a highly variable song. Indeed, we found little evidence for network directionality in juveniles. These data indicate that a functionally directed network within the HVC matures during sensorimotor learning and may underlie vocal patterning. PMID:23175804

  9. A liquid-crystal-on-silicon color sequential display using frame buffer pixel circuits

    NASA Astrophysics Data System (ADS)

    Lee, Sangrok

    Next generation liquid-crystal-on-silicon (LCOS) high definition (HD) televisions and image projection displays will need to be low-cost and high quality to compete with existing systems based on digital micromirror devices (DMDs), plasma displays, and direct view liquid crystal displays. This thesis presents a novel frame buffer pixel architecture, which buffers data for the next image frame while displaying the current frame, as such a competitive solution. The primary goal of the thesis is to demonstrate the LCOS microdisplay architecture for high quality image projection displays at potentially low cost. The thesis covers four main research areas: new frame buffer pixel circuits to improve the LCOS performance, backplane architecture design and testing, liquid crystal modes for the LCOS microdisplay, and system integration and demonstration. The design requirements for the LCOS backplane with a 64 x 32 pixel array are addressed, and the measured electrical characteristics match computer simulation results. Various liquid crystal (LC) modes applicable to LCOS microdisplays and their physical properties are discussed. One- and two-dimensional director simulations are performed for the selected LC modes. Test liquid crystal cells with the selected LC modes are made and their electro-optic effects are characterized. The 64 x 32 LCOS microdisplays fabricated with the best LC mode are optically tested with interface circuitry. The characteristics of the LCOS microdisplays are summarized with the successful demonstration.

  10. Optical Analysis of Transparent Polymeric Material Exposed to Simulated Space Environment

    NASA Technical Reports Server (NTRS)

    Edwards, David L.; Finckenor, Miria M.

    1999-01-01

    Transparent polymeric materials are being designed and utilized as solar concentrating lenses for spacecraft power and propulsion systems. These polymeric lenses concentrate solar energy onto energy conversion devices such as solar cells and thermal energy systems. The conversion efficiency is directly related to the transmissivity of the polymeric lens. The Environmental Effects Group of the Marshall Space Flight Center's Materials, Processes, and Manufacturing Department exposed a variety of materials to a simulated space environment and evaluated them for any change in optical transmission. These materials include Lexan(TM), polyethylene terephthalate (PET), several formulations of Tefzel(TM) and Teflon(TM), and silicone DC 93-500. Samples were exposed to a minimum of 1000 Equivalent Sun Hours (ESH) of near-UV radiation (250 - 400 nm wavelength). Data will be presented on materials exposed to charged particle radiation equivalent to a five-year dose in geosynchronous orbit. These exposures were performed in MSFC's Combined Environmental Effects Test Chamber, a unique facility with the capability to expose materials simultaneously or sequentially to protons, low-energy electrons, high-energy electrons, near-UV radiation and vacuum-UV radiation. Prolonged exposure to the space environment will decrease the polymer film's transmission and thus reduce the conversion efficiency. A method was developed to normalize the transmission loss and thus rank the materials according to their tolerance to space environmental exposure. Spectral results and the material ranking according to transmission loss are presented.

  11. Pixel-level tunable liquid crystal lenses for auto-stereoscopic display

    NASA Astrophysics Data System (ADS)

    Li, Kun; Robertson, Brian; Pivnenko, Mike; Chu, Daping; Zhou, Jiong; Yao, Jun

    2014-02-01

    Mobile video and gaming are now widely used, and delivery of a glass-free 3D experience is of both research and development interest. The key drawbacks of a conventional 3D display based on a static lenticular lenslet array and parallax barriers are low resolution, limited viewing angle and reduced brightness, mainly because of the need for multiple pixels per object point. This study describes the concept and performance of pixel-level cylindrical liquid crystal (LC) lenses, which are designed to steer light to the left and right eyes sequentially to form stereo parallax. The width of the LC lenses can be as small as 20-30 μm, so that the associated auto-stereoscopic display will have the same resolution as the 2D display panel in use. Such a thin sheet of tunable LC lens array can be applied directly on existing mobile displays, and can deliver a 3D viewing experience while maintaining 2D viewing capability. Transparent electrodes were laser patterned to achieve the single-pixel lens resolution, and a high-birefringence LC material was used to realise a large diffraction angle for a wide field of view. Simulation was carried out to model the intensity profile at the viewing plane and optimise the lens array based on the measured LC phase profile. The measured viewing angle and intensity profile were compared with the simulation results.

  12. Engine With Regression and Neural Network Approximators Designed

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2001-01-01

    At the NASA Glenn Research Center, the NASA engine performance program (NEPP, ref. 1) and the design optimization testbed COMETBOARDS (ref. 2) with regression and neural network analysis-approximators have been coupled to obtain a preliminary engine design methodology. The solution to a high-bypass-ratio subsonic waverotor-topped turbofan engine, which is shown in the preceding figure, was obtained by the simulation depicted in the following figure. This engine is made of 16 components mounted on two shafts with 21 flow stations. The engine is designed for a flight envelope with 47 operating points. The design optimization utilized both neural network and regression approximations, along with the cascade strategy (ref. 3). The cascade used three algorithms in sequence: the method of feasible directions, the sequence of unconstrained minimizations technique, and sequential quadratic programming. The normalized optimum thrusts obtained by the three methods are shown in the following figure: the cascade algorithm with regression approximation is represented by a triangle, a circle is shown for the neural network solution, and a solid line indicates original NEPP results. The solutions obtained from both approximate methods lie within one standard deviation of the benchmark solution for each operating point. The simulation improved the maximum thrust by 5 percent. The performance of the linear regression and neural network methods as alternate engine analyzers was found to be satisfactory for the analysis and operation optimization of air-breathing propulsion engines (ref. 4).
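
    The cascade idea, chaining optimizers so each stage warm-starts the next, can be sketched with off-the-shelf SciPy methods; the toy objective and constraint are assumptions, and the penalty stage is a crude stand-in for the SUMT step named above.

      # Simplified two-stage optimization cascade (Python/SciPy): a penalized
      # unconstrained solve seeds a final SLSQP (sequential quadratic
      # programming) run. Objective and constraint are toy assumptions.
      import numpy as np
      from scipy.optimize import minimize

      f = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2   # objective
      g = lambda x: 4.0 - x[0] - x[1]                   # require g(x) >= 0

      # Stage 1: penalty formulation solved with Nelder-Mead for a warm start
      pen = lambda x: f(x) + 100.0 * max(0.0, -g(x)) ** 2
      x1 = minimize(pen, np.zeros(2), method="Nelder-Mead").x

      # Stage 2: SQP refinement from the stage-1 point
      res = minimize(f, x1, method="SLSQP",
                     constraints=[{"type": "ineq", "fun": g}])
      print(res.x, res.fun)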

  13. Mechanical stability analysis of the protein L immunoglobulin-binding domain by full alanine screening using molecular dynamics simulations.

    PubMed

    Glyakina, Anna V; Likhachev, Ilya V; Balabaev, Nikolay K; Galzitskaya, Oxana V

    2015-03-01

    This article is the first to study the mechanical properties of the immunoglobulin-binding domain of protein L (referred to as protein L) and its mutants at the atomic level. In the structure of protein L, each amino acid residue (except for alanines and glycines) was replaced sequentially by alanine. Thus, 49 mutants of protein L were obtained. The proteins were stretched at their termini at constant velocity using molecular dynamics simulations in water, i.e., by forced unfolding. 19 out of 49 mutations resulted in a large decrease of mechanical protein stability. These amino acids affected either the secondary structure (11 mutations) or loop structures (8 mutations) of protein L. Analysis of the mechanical unfolding of a generated protein that has the same topology as protein L but consists of only alanines and glycines suggests that the mechanical stability of proteins, and specifically protein L, is determined by interactions between certain amino acid residues, although the unfolding pathway depends on the protein topology. This insight can now be used to modulate the mechanical properties of proteins and their unfolding pathways in the desired direction for use in various biochips, biosensors and biomaterials for medicine, industry, and household purposes. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Modified subaperture tool influence functions of a flat-pitch polisher with reverse-calculated material removal rate.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2014-04-10

    Numerical simulation of subaperture tool influence functions (TIF) is widely known as a critical procedure in computer-controlled optical surfacing. However, it may lack practicability in engineering because the emulation TIF (e-TIF) deviates from the practical TIF (p-TIF) and the removal rate cannot be predicted by simulation. Prior to the polishing of a formal workpiece, opticians have to conduct TIF spot experiments on another sample to confirm the p-TIF with a quantitative removal rate, which is difficult and time-consuming for sequential polishing runs with different tools. This work is dedicated to applying these e-TIFs in practical engineering by making improvements in two respects: (1) it modifies the pressure distribution model of a flat-pitch polisher by finite element analysis and least-squares fitting methods to bring the removal shape of e-TIFs closer to p-TIFs (less than 5% relative deviation validated by experiments); (2) it predicts the removal rate of e-TIFs by reverse calculating the material removal volume of a pre-polishing run on the formal workpiece (relative deviations of peak and volume removal rate were validated to be less than 5%). This obviates TIF spot experiments for the particular flat-pitch tool employed and promotes the direct use of e-TIFs in the optimization of a dwell time map, which can largely save cost and increase fabrication efficiency.

  15. Teacher Variation in Concept Presentation in BSCS Curriculum Program

    ERIC Educational Resources Information Center

    Gallagher, James J.

    2015-01-01

    The classroom, with its complex social structure and kaleidoscope of cognitive and psycho-sociological variables, has not often been the object of serious research. Content area specialists have concentrated on the sequential organization of materials and have left the direct applications of these materials, either to the intuitive strategies of…

  16. The Christensen Rhetoric Program.

    ERIC Educational Resources Information Center

    Tufte, Virginia

    1969-01-01

    Designed to instruct teachers as well as high school or college students in improving their writing, the Christensen Rhetoric Program is a sequential, cumulative program, published in kit form. The kit includes a script with lectures for the teacher, directions for using 200 transparencies on an overhead projector, and student workbooks which…

  17. A Sequential Insect Dispenser for Behavioral Experiments

    ERIC Educational Resources Information Center

    Gans, Carl; Mix, Harold

    1974-01-01

    Describes the construction and operation of an automatic insect dispenser suitable for feeding small vertebrates that are being maintained for behavioral experiments. The food morsels are squirted from their chambers by an air jet, and may be directed at a particular portion of the cage or distributed to different areas. (JR)

  18. Sport Progressions.

    ERIC Educational Resources Information Center

    Clumpner, Roy A.

    This book, which is primarily for secondary physical education teachers, presents a sequential approach to teaching skills that are essential to eight sports. The activities and lead-up games included in the book put beginning students directly into game-like situations where they can practice skills. Each chapter begins with a background of the…

  19. Enacting Power Asymmetries in Reported Exchanges in the Narratives of Former Slaves

    ERIC Educational Resources Information Center

    Van De Mieroop, Dorien; Clifton, Jonathan

    2013-01-01

    Direct reported speech has been described as serving many functions in stories, such as increasing vividness, creating authenticity, and enhancing audience involvement. Drawing on Bamberg's model of positioning and focusing on reported exchanges, we argue that through its "constructed sequentiality" and its use of discourse strategies, direct…

  20. Sequential Analysis of Autonomic Arousal and Self-Injurious Behavior

    ERIC Educational Resources Information Center

    Hoch, John; Symons, Frank; Sng, Sylvia

    2013-01-01

    There have been limited direct tests of the hypothesis that self-injurious behavior (SIB) regulates arousal. In this study, two autonomic biomarkers for physiological arousal (heart rate [HR] and the high-frequency [HF] component of heart rate variability [HRV]) were investigated in relation to SIB for 3 participants with intellectual…

  1. Buying Hearts and Minds: Modeling Popular Support During an Insurgency Via a Sequential Vote-Buying Game

    DTIC Science & Technology

    2011-06-08

    the government and insurgents can "pay" individuals for their support. These payments can take the form of direct bribes or the provision of benefits, such as building schools and roads. In the model, an individual supports the government by providing...

  2. Direct Estimation of Structure and Motion from Multiple Frames

    DTIC Science & Technology

    1990-03-01

    sequential frames in an image sequence. As a consequence, the information that can be extracted from a single optical flow field is limited to a snapshot of...researchers have developed techniques that extract motion and structure information without computation of the optical flow. Best known are the "direct...operated iteratively on a sequence of images to recover structure. It required feature extraction and matching. Broida and Chellappa [9] suggested the use of

  3. The Effects of Training on Anxiety and Task Performance in Simulated Suborbital Spaceflight.

    PubMed

    Blue, Rebecca S; Bonato, Frederick; Seaton, Kimberly; Bubka, Andrea; Vardiman, Johnené L; Mathers, Charles; Castleberry, Tarah L; Vanderploeg, James M

    2017-07-01

    In commercial spaceflight, anxiety could become mission-impacting, causing negative experiences or endangering the flight itself. We studied layperson response to four varied-length training programs (ranging from 1 h to 2 d of preparation) prior to centrifuge simulation of launch and re-entry acceleration profiles expected during suborbital spaceflight. We examined subject task execution, evaluating performance in high-stress conditions. We sought to identify any trends in demographics, hemodynamics, or similar factors in subjects with the highest anxiety or poorest tolerance of the experience. Volunteers participated in one of four centrifuge training programs of varied complexity and duration, culminating in two simulated suborbital spaceflights. At most, subjects underwent seven centrifuge runs over 2 d, including two +Gz runs (peak +3.5 Gz, Run 2) and two +Gx runs (peak +6.0 Gx, Run 4) followed by three runs approximating suborbital spaceflight profiles (combined +Gx and +Gz, peak +6.0 Gx and +4.0 Gz). Two cohorts also received dedicated anxiety-mitigation training. Subjects were evaluated on their performance on various tasks, including a simulated emergency. A total of 148 subjects (105 men, 43 women; age range 19-72 yr, mean 39.4 ± 13.2 yr; body mass index range 17.3-38.1, mean 25.1 ± 3.7) participated in 2-7 centrifuge exposures. Ten subjects withdrew or limited their G exposure; a history of motion sickness was associated with opting out. Shorter training programs were associated with elevated hemodynamic responses. Single-directional G training did not significantly improve tolerance. High-fidelity training programs with sequential exposures appear best and may improve tolerance of physical/psychological flight stressors. The studied variables did not predict anxiety-related responses to these centrifuge profiles. Blue RS, Bonato F, Seaton K, Bubka A, Vardiman JL, Mathers C, Castleberry TL, Vanderploeg JM. The effects of training on anxiety and task performance in simulated suborbital spaceflight. Aerosp Med Hum Perform. 2017; 88(7):641-650.

  4. Monte Carlo simulation methodology for the reliability of aircraft structures under damage tolerance considerations

    NASA Astrophysics Data System (ADS)

    Rambalakos, Andreas

    Current federal aviation regulations in the United States and around the world mandate that aircraft structures meet damage tolerance requirements throughout the service life. These requirements imply that the damaged aircraft structure must maintain adequate residual strength in order to sustain its integrity, which is accomplished by a continuous inspection program. The multifold objective of this research is to develop a methodology based on a direct Monte Carlo simulation process and to assess the reliability of aircraft structures. Initially, the structure is modeled as a parallel system with active redundancy comprised of elements with uncorrelated (statistically independent) strengths and subjected to an equal load distribution. Closed-form expressions for the system capacity cumulative distribution function (CDF) are developed by expanding the current expression for the capacity CDF of a parallel system comprised of three elements to a parallel system comprised of up to six elements. These newly developed expressions are used to check the accuracy of the implementation of a Monte Carlo simulation algorithm to determine the probability of failure of a parallel system comprised of an arbitrary number of statistically independent elements. The second objective of this work is to compute the probability of failure of a fuselage skin lap joint under static load conditions through a Monte Carlo simulation scheme by utilizing the residual strength of the fasteners subjected to various initial load distributions and then subjected to a new unequal load distribution resulting from subsequent fastener sequential failures. The final and main objective of this thesis is to present a methodology for computing the resulting gradual deterioration of the reliability of an aircraft structural component by employing a direct Monte Carlo simulation approach. The uncertainties associated with the time to crack initiation, the probability of crack detection, the exponent in the crack propagation rate (Paris equation) and the yield strength of the elements are considered in the analytical model. The structural component is assumed to consist of a prescribed number of elements. This Monte Carlo simulation methodology is used to determine the required non-periodic inspections so that the reliability of the structural component will not fall below a prescribed minimum level. A sensitivity analysis is conducted to determine the effect of three key parameters on the specification of the non-periodic inspection intervals: namely, a parameter associated with the time to crack initiation, the applied nominal stress fluctuation, and the minimum acceptable reliability level.
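
    For the first objective, a direct Monte Carlo estimate of the failure probability of an equal-load-sharing parallel system with independent strengths can be sketched as below; the strength distribution, element count, and applied load are illustrative assumptions.

      # Direct Monte Carlo failure-probability estimate for an
      # equal-load-sharing parallel system with i.i.d. element strengths.
      import numpy as np
      rng = np.random.default_rng(42)

      def capacity(strengths):
          # With equal load sharing, the system capacity is max_k k * s_(k),
          # where s_(1) >= s_(2) >= ... are the sorted element strengths.
          s = np.sort(strengths)[::-1]
          k = np.arange(1, s.size + 1)
          return np.max(k * s)

      n_elems, load, trials = 6, 4.0, 100_000
      fails = 0
      for _ in range(trials):
          strengths = rng.normal(loc=1.0, scale=0.2, size=n_elems)
          if capacity(strengths) < load:
              fails += 1
      print("estimated P(failure) =", fails / trials)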

  5. Developing of operational hydro-meteorological simulating and displaying system

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Shih, D.; Chen, C.

    2010-12-01

    Hydrological hazards, which often occur in conjunction with extreme precipitation events, are the most frequent type of natural disaster in Taiwan. Hence, the researchers at the Taiwan Typhoon and Flood Research Institute (TTFRI) are devoted to analyzing and gaining a better understanding of the causes and effects of natural disasters, in particular typhoons and floods. The long-term goal of the TTFRI is to develop a unified weather-hydrological-oceanic model suitable for simulations with local parameterizations in Taiwan. The development of a fully coupled weather-hydrology interaction model has not yet been completed, but some operational hydro-meteorological simulations are presented as a step in the direction of completing a full model. The predicted rainfall data from the Weather Research and Forecasting (WRF) model are used as the meteorological forcing for watershed modeling. The hydrology and hydraulic modeling are conducted with the WASH123D numerical model, and the WRF/WASH123D coupled system is applied to simulate floods during typhoon landfall periods. The daily operational runs start at 04UTC, 10UTC, 16UTC and 22UTC, about 4 hours after data are downloaded from NCEP GFS. This system executes 72-hr weather forecasts. The WASH123D simulation is triggered sequentially after receiving the WRF rainfall data. This study presents the preliminary framework for establishing this system, and our goal is to build this early warning system to alert the public to danger. The simulation results are further displayed by a 3D GIS web service system. This system is established following the Open Geospatial Consortium (OGC) standardization process for GIS web services, such as the Web Map Service (WMS) and Web Feature Service (WFS). Traditional 2D GIS data, such as high-resolution aerial photomaps and satellite images, are integrated into a 3D landscape model. The simulated flooding and inundation area can be dynamically mapped onto the Web 3D world. The final goal of this system is to forecast floods in real time, with the results visually displayed on the virtual catchment. The policymaker can easily gain real-time visual information for decision making at any site through the internet.

  6. Development of a Prototype Simulation Executive with Zooming in the Numerical Propulsion System Simulation

    NASA Technical Reports Server (NTRS)

    Reed, John A.; Afjeh, Abdollah A.

    1995-01-01

    A major difficulty in designing aeropropulsion systems is that of identifying and understanding the interactions between the separate engine components and disciplines (e.g., fluid mechanics, structural mechanics, heat transfer, material properties, etc.). The traditional analysis approach is to decompose the system into separate components, with the interaction between components being evaluated by the application of each of the single disciplines in a sequential manner. Here, one discipline uses information from the calculation of another discipline to determine the effects of component coupling. This approach, however, may not properly identify the consequences of these effects during the design phase, leaving the interactions to be discovered and evaluated during engine testing. This contributes to the time and cost of developing new propulsion systems as, typically, several design-build-test cycles are needed to fully identify multidisciplinary effects and reach the desired system performance. The alternative to sequential isolated component analysis is to use multidisciplinary coupling at a more fundamental level. This approach has been made more plausible by recent advancements in computational simulation along with the application of concurrent engineering concepts. Computer simulation systems designed to provide an environment capable of integrating the various disciplines into a single simulation system have been proposed and are currently being developed. One such system is being developed by the Numerical Propulsion System Simulation (NPSS) project. The NPSS project, being developed at the Interdisciplinary Technology Office at the NASA Lewis Research Center, is a 'numerical test cell' designed to provide for comprehensive computational design and analysis of aerospace propulsion systems. It will provide multi-disciplinary analyses on a variety of computational platforms, and a user interface consisting of expert systems, database management and visualization tools, to allow the designer to investigate the complex interactions inherent in these systems. An interactive programming software system, known as the Application Visualization System (AVS), was utilized for the development of the propulsion system simulation. The modularity of this system provides the ability to couple propulsion system components, as well as disciplines, and provides for the ability to integrate existing, well-established analysis codes into the overall system simulation. This feature allows the user to customize the simulation model by inserting desired analysis codes. The prototypical simulation environment for multidisciplinary analysis, called Turbofan Engine System Simulation (TESS), which incorporates many of the characteristics of the simulation environment proposed herein, is detailed.

  7. Sequential light programs shape kale (Brassica napus) sprout appearance and alter metabolic and nutrient content

    PubMed Central

    Carvalho, Sofia D; Folta, Kevin M

    2014-01-01

    Different light wavelengths have specific effects on plant growth and development. Narrow-bandwidth light-emitting diode (LED) lighting may be used to directionally manipulate size, color and metabolites in high-value fruits and vegetables. In this report, Red Russian kale (Brassica napus) seedlings were grown under specific light conditions and analyzed for photomorphogenic responses, pigment accumulation and nutraceutical content. The results showed that this genotype responds predictably to darkness, blue and red light, with suppression of hypocotyl elongation, development of pigments and changes in specific metabolites. However, these seedlings were relatively hypersensitive to far-red light, leading to uncharacteristically short hypocotyls and high pigment accumulation, even after growth under very low fluence rates (<1 μmol m−2 s−1). General antioxidant levels and aliphatic glucosinolates are elevated by far-red light treatments. Sequential treatments of darkness, blue light, red light and far-red light were applied throughout sprout development to alter final product quality. These results indicate that sequential treatment with narrow-bandwidth light may be used to affect key economically important traits in high-value crops. PMID:26504531

  8. A sequential multi-target Mps1 phosphorylation cascade promotes spindle checkpoint signaling.

    PubMed

    Ji, Zhejian; Gao, Haishan; Jia, Luying; Li, Bing; Yu, Hongtao

    2017-01-10

    The master spindle checkpoint kinase Mps1 senses kinetochore-microtubule attachment and promotes checkpoint signaling to ensure accurate chromosome segregation. The kinetochore scaffold Knl1, when phosphorylated by Mps1, recruits checkpoint complexes Bub1-Bub3 and BubR1-Bub3 to unattached kinetochores. Active checkpoint signaling ultimately enhances the assembly of the mitotic checkpoint complex (MCC) consisting of BubR1-Bub3, Mad2, and Cdc20, which inhibits the anaphase-promoting complex or cyclosome bound to Cdc20 (APC/C-Cdc20) to delay anaphase onset. Using in vitro reconstitution, we show that Mps1 promotes APC/C inhibition by MCC components through phosphorylating Bub1 and Mad1. Phosphorylated Bub1 binds to Mad1-Mad2. Phosphorylated Mad1 directly interacts with Cdc20. Mutations of Mps1 phosphorylation sites in Bub1 or Mad1 abrogate the spindle checkpoint in human cells. Therefore, Mps1 promotes checkpoint activation through sequentially phosphorylating Knl1, Bub1, and Mad1. This sequential multi-target phosphorylation cascade makes the checkpoint highly responsive to Mps1 and to kinetochore-microtubule attachment.

  9. NASA DOE POD NDE Capabilities Data Book

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2015-01-01

    This data book contains the Directed Design of Experiments for Validating Probability of Detection (POD) Capability of NDE Systems (DOEPOD) analyses of the nondestructive inspection data presented in the NTIAC, Nondestructive Evaluation (NDE) Capabilities Data Book, 3rd ed., NTIAC DB-97-02. DOEPOD is designed as a decision support system for validating that an inspection system, its personnel, and its protocol demonstrate 0.90 POD with 95% confidence at critical flaw sizes (a90/95). The test methodology used in DOEPOD is based on the field of statistical sequential analysis founded by Abraham Wald. Sequential analysis is a method of statistical inference whose characteristic feature is that the number of observations required by the procedure is not determined in advance of the experiment. The decision to terminate the experiment depends, at each stage, on the results of the observations previously made. A merit of the sequential method, as applied to testing statistical hypotheses, is that test procedures can be constructed which require, on average, a substantially smaller number of observations than equally reliable test procedures based on a predetermined number of observations.
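    Wald's idea can be sketched with a generic sequential probability ratio test (SPRT) for a detection probability; the hypothesized POD values and error rates below are arbitrary, and DOEPOD's actual decision rules are more elaborate than this illustration.

```python
# Generic Wald SPRT for a binomial detection probability: observe one
# inspection at a time and stop as soon as the evidence is decisive.
import math

def sprt(detections, p0=0.80, p1=0.90, alpha=0.05, beta=0.05):
    """Yield a running decision ('H0', 'H1', or 'continue') per observation.

    H0: POD = p0 (unacceptable); H1: POD = p1 (acceptable).
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 above this bound
    lower = math.log(beta / (1 - alpha))   # accept H0 below this bound
    llr = 0.0                              # running log-likelihood ratio
    for hit in detections:                 # hit is True/False per inspection
        if hit:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            yield "H1"
            return
        if llr <= lower:
            yield "H0"
            return
        yield "continue"

# All hits: the test accepts H1 after ~25 observations rather than a
# fixed, predetermined sample size.
print(list(sprt([True] * 40))[-1])
```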

  10. Method for sequentially processing a multi-level interconnect circuit in a vacuum chamber

    NASA Technical Reports Server (NTRS)

    Routh, D. E.; Sharma, G. C. (Inventor)

    1982-01-01

    The processing of wafer devices to form multilevel interconnects for microelectronic circuits is described. The method is directed to performing the sequential steps of etching the via, removing the photo resist pattern, back sputtering the entire wafer surface and depositing the next layer of interconnect material under common vacuum conditions without exposure to atmospheric conditions. Apparatus for performing the method includes a vacuum system having a vacuum chamber in which wafers are processed on rotating turntables. The vacuum chamber is provided with an RF sputtering system and a DC magnetron sputtering system. A gas inlet is provided in the chamber for the introduction of various gases to the vacuum chamber and the creation of various gas plasma during the sputtering steps.

  11. Bridging the clinician/researcher gap with systemic research: the case for process research, dyadic, and sequential analysis.

    PubMed

    Oka, Megan; Whiting, Jason

    2013-01-01

    In Marriage and Family Therapy (MFT), as in many clinical disciplines, concerns have surfaced about the clinician/researcher gap. This gap includes a lack of accessible, practical research for clinicians. MFT clinical research often borrows from the medical tradition of randomized controlled trials, which typically use linear methods, or follow procedures distanced from "real-world" therapy. We review traditional research methods and their use in MFT and propose increased use of methods that are more systemic in nature and more applicable to MFTs: process research, dyadic data analysis, and sequential analysis. We review current research employing these methods, as well as suggestions and directions for further research. © 2013 American Association for Marriage and Family Therapy.

  12. Rate-dependent inverse-addition beta-selective mannosylation and contiguous sequential glycosylation involving beta-mannosidic bond formation.

    PubMed

    Chang, Shih-Sheng; Shih, Che-Hao; Lai, Kwun-Cheng; Mong, Kwok-Kong Tony

    2010-05-03

    The beta-selectivity of mannosylation has been found to be dependent on the addition rate of the mannosyl trichloroacetimidate donor in an inverse-addition (I-A) procedure. This rate-dependent I-A procedure can improve the selectivity of direct beta-mannosylation and is applicable to orthogonal glycosylations of thioglycoside acceptors. Further elaboration of this novel procedure enables the development of the contiguous sequential glycosylation strategy, which streamlines the preparation of oligosaccharides involving beta-mannosidic bond formation. The synthetic utility of the contiguous glycosylation strategy was demonstrated by the preparation of the trisaccharide core of human N-linked glycoproteins and the trisaccharide repeating unit of the O-specific polysaccharide found in the cellular capsule of Salmonella bacteria.

  13. Multiple ionization of neon by soft x-rays at ultrahigh intensity

    NASA Astrophysics Data System (ADS)

    Guichard, R.; Richter, M.; Rost, J.-M.; Saalmann, U.; Sorokin, A. A.; Tiedtke, K.

    2013-08-01

    At the free-electron laser FLASH, multiple ionization of neon atoms was quantitatively investigated at photon energies of 93.0 and 90.5 eV. For ion charge states up to 6+, we compare the respective absolute photoionization yields with results from a minimal model and an elaborate description including standard sequential and direct photoionization channels. Both approaches are based on rate equations and take into account a Gaussian spatial intensity distribution of the laser beam. From the comparison we conclude that photoionization up to a charge of 5+ can be described by the minimal model which we interpret as sequential photoionization assisted by electron shake-up processes. For higher charges, the experimental ionization yields systematically exceed the elaborate rate-based prediction.
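    A rate-equation system in the spirit of the paper's minimal model can be written down directly; the cross sections, peak flux, and pulse duration below are arbitrary placeholders, not the published values, and the spatial intensity average is omitted for brevity.

```python
# Sequential photoionization as coupled rate equations: each charge state
# q is fed from q-1 and depleted toward q+1 at a rate sigma_q * flux(t).
import numpy as np
from scipy.integrate import solve_ivp

SIGMA = np.array([8.0, 6.0, 4.0, 2.5, 1.5, 0.8]) * 1e-18  # cm^2, states 0..5
PEAK_FLUX = 1e31        # photons / (cm^2 s) at pulse maximum (placeholder)
PULSE_FWHM = 100e-15    # s (placeholder)

def flux(t):
    """Gaussian temporal intensity profile of the FEL pulse."""
    sigma_t = PULSE_FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return PEAK_FLUX * np.exp(-0.5 * (t / sigma_t) ** 2)

def rates(t, n):
    """dN_q/dt: gain from charge q-1, loss to charge q+1."""
    dn = np.zeros_like(n)
    for q in range(len(SIGMA)):
        loss = SIGMA[q] * flux(t) * n[q]
        dn[q] -= loss
        dn[q + 1] += loss
    return dn

n0 = np.zeros(len(SIGMA) + 1)
n0[0] = 1.0                                   # all atoms neutral initially
sol = solve_ivp(rates, (-5 * PULSE_FWHM, 5 * PULSE_FWHM), n0, rtol=1e-8)
print("final charge-state populations:", sol.y[:, -1])
```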

  14. VIV analysis of pipelines under complex span conditions

    NASA Astrophysics Data System (ADS)

    Wang, James; Steven Wang, F.; Duan, Gang; Jukes, Paul

    2009-06-01

    Spans occur when a pipeline is laid on a rough undulating seabed or when upheaval buckling occurs due to constrained thermal expansion. This not only results in static and dynamic loads on the flowline at span sections, but also generates vortex-induced vibration (VIV), which can lead to fatigue issues. The phenomenon, if not predicted and controlled properly, will negatively affect pipeline integrity, leading to expensive remediation and intervention work. Span analysis can be complicated by long span lengths, a large number of spans caused by a rough seabed, and multi-span interactions. In addition, the analysis becomes more onerous and challenging when soil uncertainty, concrete degradation and unknown residual lay tension are considered. This paper describes the latest developments in a 'state-of-the-art' finite element analysis program that has been developed to simulate the span response of a flowline under complex boundary and loading conditions. Both VIV and direct wave loading are captured in the analysis, and the results are sequentially used for the ultimate limit state (ULS) check and fatigue life calculation.
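    The final step the abstract mentions, turning computed stress cycles into a fatigue life, is commonly done with an S-N curve and Miner's rule; the sketch below assumes a single-slope S-N form (N = a * S^-m) with placeholder parameters, not the program's actual implementation.

```python
# Miner's-rule fatigue life from a long-term stress-range histogram
# (e.g., assembled from VIV and wave-loading cycle counts).
def fatigue_life_years(stress_ranges_mpa, cycles_per_year, a=1e12, m=3.0):
    """Return fatigue life in years; a, m are placeholder S-N parameters."""
    annual_damage = sum(
        n / (a * s ** -m)          # Miner's sum: n_i / N_i per stress bin
        for s, n in zip(stress_ranges_mpa, cycles_per_year)
    )
    return 1.0 / annual_damage

# Example: two stress-range bins from a hypothetical VIV screening.
print(fatigue_life_years([50.0, 80.0], [1e6, 2e5]))
```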

  15. Microbial burden prediction model for unmanned planetary spacecraft

    NASA Technical Reports Server (NTRS)

    Hoffman, A. R.; Winterburn, D. A.

    1972-01-01

    The technical development of a computer program for predicting microbial burden on unmanned planetary spacecraft is outlined. The discussion includes the derivation of the basic analytical equations, the selection of a method for handling several random variables, the macrologic of the computer programs, and the validation and verification of the model. The prediction model was developed to (1) supplement the biological assays of a spacecraft by simulating the microbial accretion during periods when assays are not taken; (2) minimize the necessity for a large number of microbiological assays; and (3) predict the microbial loading on a lander immediately prior to sterilization and on other non-lander equipment prior to launch. It is shown not only that these purposes were achieved, but also that the prediction results compare favorably to the estimates derived from the direct assays. The computer program can be applied not only as a prediction instrument but also as a management and control tool. The basic logic of the model is shown to have possible applicability to other sequential flow processes, such as food processing.
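    The kind of sequential accretion bookkeeping such a model performs can be illustrated with a toy Monte Carlo; the deposition rate and cleaning efficiency below are invented for illustration and bear no relation to the model's actual parameters.

```python
# Toy Monte Carlo of microbial burden accreting over assembly days, with
# periodic cleaning events that remove a fixed fraction of the burden.
import random

def simulate_burden(days, deposition_mean=200.0, cleanings=(10, 20),
                    cleaning_survival=0.1, trials=10_000):
    """Mean microbial burden after `days` of assembly with periodic cleaning."""
    totals = []
    for _ in range(trials):
        burden = 0.0
        for day in range(1, days + 1):
            burden += random.expovariate(1.0 / deposition_mean)  # daily fallout
            if day in cleanings:
                burden *= cleaning_survival                      # cleaning event
        totals.append(burden)
    return sum(totals) / trials

print(simulate_burden(days=30))
```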

  16. Congestion detection of pedestrians using the velocity entropy: A case study of Love Parade 2010 disaster

    NASA Astrophysics Data System (ADS)

    Huang, Lida; Chen, Tao; Wang, Yan; Yuan, Hongyong

    2015-12-01

    Gatherings of large human crowds often result in crowd disasters such as the Love Parade disaster in Duisburg, Germany on July 24, 2010. To avoid these tragedies, video surveillance and early warning are becoming more and more significant. In this paper, the velocity entropy is first defined as the criterion for congestion detection, representing the motion magnitude distribution and the motion direction distribution simultaneously. The detection method is then verified against simulation data generated with the AnyLogic software. To test the generalization performance of the method, video recordings of a real-world case, the Love Parade disaster, are also used in the experiments. The velocity histograms of the foreground objects in the videos are extracted by a Gaussian Mixture Model (GMM) and optical flow computation. With a sequential change-point detection algorithm, the velocity entropy can be applied to detect congestion at the Love Parade festival. It turns out that, without recognizing and tracking individual pedestrians, our method can detect abnormal crowd behaviors in real time.
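    One plausible reading of the velocity entropy, a Shannon entropy over the joint histogram of flow magnitude and direction, can be sketched as follows; the bin counts and the exact definition are assumptions, not the authors' code.

```python
# Entropy of the joint (magnitude, direction) distribution of a flow
# field, e.g. the optical flow of foreground pixels in a crowd video.
import numpy as np

def velocity_entropy(vx, vy, mag_bins=8, dir_bins=16):
    """Shannon entropy of the flow's magnitude/direction histogram."""
    mag = np.hypot(vx, vy).ravel()
    ang = np.arctan2(vy, vx).ravel()
    hist, _, _ = np.histogram2d(
        mag, ang, bins=(mag_bins, dir_bins),
        range=((0.0, mag.max() + 1e-9), (-np.pi, np.pi)),
    )
    p = hist.ravel() / hist.sum()
    p = p[p > 0]                      # drop empty bins; 0 log 0 := 0
    return float(-(p * np.log(p)).sum())

# Disordered flow has high entropy; coherent uniform flow has entropy 0.
rng = np.random.default_rng(0)
print(velocity_entropy(rng.normal(size=(64, 64)), rng.normal(size=(64, 64))))
print(velocity_entropy(np.ones((64, 64)), np.zeros((64, 64))))
```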

  17. Nano-optical conveyor belt with waveguide-coupled excitation.

    PubMed

    Wang, Guanghui; Ying, Zhoufeng; Ho, Ho-pui; Huang, Ying; Zou, Ningmu; Zhang, Xuping

    2016-02-01

    We propose a plasmonic nano-optical conveyor belt for peristaltic transport of nano-particles. Instead of illumination from the top, waveguide-coupled excitation is used for trapping particles with a higher degree of precision and flexibility. Graded nano-rods with individual dimensions coded to have resonance at specific wavelengths are incorporated along the waveguide in order to produce spatially addressable hot spots. Consequently, by switching the excitation wavelength sequentially, particles can be transported to adjacent optical traps along the waveguide. The feasibility of this design is analyzed using three-dimensional finite-difference time-domain and Maxwell stress tensor methods. Simulation results show that this system is capable of exciting addressable traps and moving particles in a peristaltic fashion with tens-of-nanometers resolution. To the best of our knowledge, this is the first report of a nano-optical conveyor belt with waveguide-coupled excitation, which is very important for scalability and on-chip integration. The proposed approach offers a new design direction for integrated waveguide-based optical manipulation devices and their application in large-scale lab-on-a-chip integration.

  18. Clinical results of computerized tomography-based simulation with laser patient marking.

    PubMed

    Ragan, D P; Forman, J D; He, T; Mesina, C F

    1996-02-01

    Accuracy of a patient treatment portal marking device and computerized tomography (CT) simulation have been clinically tested. A CT-based simulator has been assembled based on a commercial CT scanner. This includes visualization software and a computer-controlled laser drawing device. This laser drawing device is used to transfer the setup, central axis, and/or radiation portals from the CT simulator to the patient for appropriate patient skin marking. A protocol for clinical testing is reported. Twenty-five prospectively, sequentially accessioned patients have been analyzed. The simulation process can be completed in an average time of 62 min. In many cases, the treatment portals can be designed and the patient marked in one session. Mechanical accuracy of the system was found to be within +/- 1 mm. The portal projection accuracy in clinical cases is observed to be better than +/- 1.2 mm. Operating costs are equivalent to those of the conventional simulation process it replaces. Computed tomography simulation is a clinically accurate substitute for conventional simulation when used with an appropriate patient marking system and digitally reconstructed radiographs. Personnel time spent in CT simulation is equivalent to time in conventional simulation.

  19. Realistic page-turning of electronic books

    NASA Astrophysics Data System (ADS)

    Fan, Chaoran; Li, Haisheng; Bai, Yannan

    2014-01-01

    The booming electronic books (e-books), as an extension of the paper book, are popular with readers. Recently, many efforts have been put into realistic page-turning simulation of e-books to improve the reading experience. This paper presents a new 3D page-turning simulation approach, which employs piecewise time-dependent cylindrical surfaces to describe the turning page and constructs a smooth transition method between the time-dependent cylinders. The page-turning animation is produced by sequentially mapping the turning page onto cylinders with different radii and positions. Compared to previous approaches, our method is able to imitate various effects efficiently and obtains a more natural animation of the turning page.
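    The core cylinder-mapping idea can be illustrated with a simplified, single-cylinder parametrization; the paper's piecewise time-dependent surfaces are more general, so treat this as a sketch of the concept only.

```python
# Wrap the part of a flat page beyond a fold line onto a cylinder of
# radius r; shrinking r and moving the fold over time curls the page.
import math

def map_point(x, y, fold_x, radius):
    """Map flat-page point (x, y) to 3D; the page bends at x = fold_x."""
    if x <= fold_x:
        return (x, y, 0.0)                      # still flat on the surface
    theta = (x - fold_x) / radius               # arc length becomes angle
    return (
        fold_x + radius * math.sin(theta),      # curled x coordinate
        y,                                      # unchanged along the spine
        radius * (1.0 - math.cos(theta)),       # lift off the page plane
    )

# Three animation frames: the page corner (x=1.0) rises as the fold
# advances and the cylinder radius shrinks.
for frame, (fold_x, r) in enumerate([(0.8, 0.5), (0.5, 0.3), (0.2, 0.15)]):
    print(frame, map_point(1.0, 0.5, fold_x, r))
```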

  20. A formal language for the specification and verification of synchronous and asynchronous circuits

    NASA Technical Reports Server (NTRS)

    Russinoff, David M.

    1993-01-01

    A formal hardware description language for the intended application of verifiable asynchronous communication is described. The language is developed within the logical framework of the Nqthm system of Boyer and Moore and is based on the event-driven behavioral model of VHDL, including the basic VHDL signal propagation mechanisms, the notion of simulation deltas, and the VHDL simulation cycle. A core subset of the language corresponds closely with a subset of VHDL and is adequate for the realistic gate-level modeling of both combinational and sequential circuits. Various extensions to this subset provide means for convenient expression of behavioral circuit specifications.
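    The delta-cycle semantics the language inherits from VHDL can be sketched in a few lines: updates scheduled in one delta become visible only at the next, and deltas repeat until the signals stabilize. The toy below illustrates those semantics only; it is not the Nqthm formalization.

```python
# Toy VHDL-style delta cycles: processes read old signal values, all
# scheduled updates commit together, and deltas repeat until quiescent.
def simulate_deltas(signals, processes, max_deltas=100):
    """signals: dict name -> value; processes: fns mapping signals -> updates."""
    for delta in range(max_deltas):
        updates = {}
        for proc in processes:                 # evaluate against old values
            updates.update(proc(signals))
        changed = {k: v for k, v in updates.items() if signals.get(k) != v}
        if not changed:
            return signals, delta              # quiescent: time may advance
        signals.update(changed)                # all updates commit together
    raise RuntimeError("no fixed point within delta limit")

# Two gates: b <= not a; c <= a and b. Starting from a stale b, the glitch
# propagates through c and settles after two deltas.
procs = [
    lambda s: {"b": not s["a"]},
    lambda s: {"c": s["a"] and s["b"]},
]
print(simulate_deltas({"a": True, "b": True, "c": False}, procs))
```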
