Learning about Locomotion Patterns from Visualizations: Effects of Presentation Format and Realism
ERIC Educational Resources Information Center
Imhof, Birgit; Scheiter, Katharina; Gerjets, Peter
2011-01-01
The rapid development of computer graphics technology has made possible an easy integration of dynamic visualizations into computer-based learning environments. This study examines the relative effectiveness of dynamic visualizations, compared either to sequentially or simultaneously presented static visualizations. Moreover, the degree of realism…
High-speed multiple sequence alignment on a reconfigurable platform.
Oliver, Tim; Schmidt, Bertil; Maskell, Douglas; Nathan, Darran; Clemens, Ralf
2006-01-01
Progressive alignment is a widely used approach to computing multiple sequence alignments (MSAs). However, aligning several hundred sequences with popular progressive alignment tools requires hours on sequential computers. Due to the rapid growth of sequence databases, biologists have to compute MSAs in far shorter times. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost. We have constructed a linear systolic array to perform pairwise sequence distance computations using dynamic programming. This results in an implementation with significant runtime savings on a standard FPGA.
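The dynamic-programming kernel that each systolic cell accelerates can be stated in a few lines. The Python sketch below (ours, not the authors' FPGA implementation; the function name and unit costs are illustrative) computes a pairwise edit distance of the kind progressive alignment uses when building its guide tree:

    def edit_distance(a, b):
        # prev[j] holds the distance between the current prefix of a and b[:j]
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i] + [0] * len(b)
            for j, cb in enumerate(b, 1):
                cost = 0 if ca == cb else 1
                curr[j] = min(prev[j] + 1,         # deletion
                              curr[j - 1] + 1,     # insertion
                              prev[j - 1] + cost)  # match/mismatch
            prev = curr
        return prev[-1]

    print(edit_distance("GATTACA", "GCATGCU"))  # -> 4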
Wakamiya, Eiji; Okumura, Tomohito; Nakanishi, Makoto; Takeshita, Takashi; Mizuta, Mekumi; Kurimoto, Naoko; Tamai, Hiroshi
2011-06-01
To clarify whether rapid naming ability itself is a main factor underpinning rapid automatized naming (RAN) tests, and how strongly the discrete decoding process influences reading, we administered discrete naming tasks and discrete hiragana reading tasks, as well as sequential naming tasks and sequential hiragana reading tasks, to 38 Japanese schoolchildren with reading difficulty. There were high correlations between both discrete and sequential hiragana reading and sentence reading, suggesting that some mechanism which automatizes hiragana reading makes sentence reading fluent. In the object and color tasks, there were moderate correlations between sentence reading and sequential naming, and between sequential naming and discrete naming; however, no correlation was found between the reading tasks and the discrete naming tasks. The influence of rapid naming ability for objects and colors on reading thus seemed relatively small, and multi-item processing may be involved. In contrast, in the digit naming task there was a moderate correlation between sentence reading and discrete naming, while no correlation was seen between sequential naming and discrete naming. There was a moderate correlation between the reading tasks and the sequential digit naming task. Rapid digit naming ability has a more direct effect on reading, while its effect on RAN is relatively limited. The degree to which rapid naming ability influences RAN and reading appears to vary with the kind of stimuli used. The components of RAN that influence reading are discussed in the context of both sequential processing and discrete naming speed. Copyright © 2010 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Rapid Decisions From Experience
Zeigenfuse, Matthew D.; Pleskac, Timothy J.; Liu, Taosheng
2014-01-01
In many everyday decisions, people quickly integrate noisy samples of information to form a preference among alternatives that offer uncertain rewards. Here, we investigated this decision process using the Flash Gambling Task (FGT), in which participants made a series of choices between a certain payoff and an uncertain alternative that produced a normal distribution of payoffs. For each choice, participants experienced the distribution of payoffs via rapid samples updated every 50 ms. We show that people can make these rapid decisions from experience and that the decision process is consistent with a sequential sampling process. Results also reveal a dissociation between these preferential decisions and equivalent perceptual decisions where participants had to determine which alternatives contained more dots on average. To account for this dissociation, we developed a sequential sampling rank-dependent utility model, which showed that participants in the FGT attended more to larger potential payoffs than participants in the perceptual task despite being given equivalent information. We discuss the implications of these findings in terms of computational models of preferential choice and a more complete understanding of experience-based decision making. PMID:24549141
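The decision process this model family captures can be illustrated with a toy simulation. The Python sketch below (our illustration with made-up parameters, not the authors' fitted model) accumulates the difference between each 50-ms payoff sample and the certain payoff until a boundary is crossed:

    import random

    def sequential_choice(certain=50.0, mean=55.0, sd=20.0,
                          boundary=100.0, dt_ms=50, rng=random.Random(1)):
        evidence, t = 0.0, 0
        while abs(evidence) < boundary:
            sample = rng.gauss(mean, sd)   # one 50-ms flash from the payoff distribution
            evidence += sample - certain   # relative evidence for the risky option
            t += dt_ms
        return ("risky" if evidence > 0 else "certain"), t

    choice, rt = sequential_choice()
    print(choice, f"after {rt} ms")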
NASA Technical Reports Server (NTRS)
Moore, Judith G.
1992-01-01
The NMSB Movie computer program displays large sets of data (more than a million individual values). The presentation is dynamic, rapidly displaying sequential image "frames" in a main "movie" window. Any sequence of two-dimensional data sets scaled between 0 and 255 (1-byte resolution) can be displayed as a movie, illustrating a time- or slice-wise progression of the data. Originally written to present data from three-dimensional ultrasonic scans of damaged aerospace composite materials, the program also illustrates data acquired by thermal-analysis systems measuring rates of heating and cooling of various materials. Developed on a Macintosh IIx computer with an 8-bit color display adapter and 8 megabytes of memory using Symantec Corporation's Think C, version 4.0.
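The core frame-preparation step is simple to reproduce. A minimal Python/NumPy sketch (ours; the original is Think C) scales an arbitrary sequence of 2-D arrays to the 0-255 range required for 1-byte movie frames:

    import numpy as np

    def to_movie_frames(frames):
        # scale the whole sequence with one global window so frames stay comparable
        lo = min(float(f.min()) for f in frames)
        hi = max(float(f.max()) for f in frames)
        scale = 255.0 / (hi - lo if hi > lo else 1.0)
        return [np.uint8((f - lo) * scale) for f in frames]

    data = [np.random.rand(64, 64) * i for i in range(1, 11)]  # ten synthetic frames
    movie = to_movie_frames(data)
    print(movie[0].dtype, movie[0].shape, len(movie))  # uint8 (64, 64) 10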
Optimal Sequential Rules for Computer-Based Instruction.
ERIC Educational Resources Information Center
Vos, Hans J.
1998-01-01
Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…
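A sequential rule of this kind can be sketched compactly. The Python sketch below is our reading of the general approach; all probabilities and thresholds are hypothetical, not taken from the article. It updates a Bayesian posterior of mastery after each question and stops as soon as a utility-style threshold is cleared:

    def sequential_mastery(answers, p_master=0.8, p_nonmaster=0.4,
                           prior=0.5, threshold=0.9, max_items=20):
        post = prior
        for n, correct in enumerate(answers[:max_items], 1):
            like_m = p_master if correct else 1 - p_master        # P(answer | master)
            like_n = p_nonmaster if correct else 1 - p_nonmaster  # P(answer | non-master)
            post = post * like_m / (post * like_m + (1 - post) * like_n)
            if post >= threshold:
                return "advance", n, post
            if post <= 1 - threshold:
                return "remediate", n, post
        return "continue testing", len(answers), post

    print(sequential_mastery([1, 1, 0, 1, 1, 1]))  # -> ('advance', 6, 0.91...)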
PARVMEC: An Efficient, Scalable Implementation of the Variational Moments Equilibrium Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seal, Sudip K; Hirshman, Steven Paul; Wingen, Andreas
The ability to sustain magnetically confined plasma in a state of stable equilibrium is crucial for optimal and cost-effective operations of fusion devices like tokamaks and stellarators. The Variational Moments Equilibrium Code (VMEC) is the de-facto serial application used by fusion scientists to compute magnetohydrodynamics (MHD) equilibria and study the physics of three dimensional plasmas in confined configurations. Modern fusion energy experiments have larger system scales with more interactive experimental workflows, both demanding faster analysis turnaround times on computational workloads that are stressing the capabilities of sequential VMEC. In this paper, we present PARVMEC, an efficient, parallel version of its sequential counterpart, capable of scaling to thousands of processors on distributed memory machines. PARVMEC is a non-linear code, with multiple numerical physics modules, each with its own computational complexity. A detailed speedup analysis supported by scaling results on 1,024 cores of a Cray XC30 supercomputer is presented. Depending on the mode of PARVMEC execution, speedup improvements of one to two orders of magnitude are reported. PARVMEC equips fusion scientists for the first time with a state-of-the-art capability for rapid, high-fidelity analyses of magnetically confined plasmas at unprecedented scales.
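The reported scaling can be put in context with Amdahl's law. A quick illustration (ours, with assumed serial fractions, not figures from the paper):

    def amdahl_speedup(serial_fraction, cores):
        # ideal speedup of a code whose serial_fraction cannot be parallelized
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    for s in (0.01, 0.001):
        print(f"serial fraction {s}: {amdahl_speedup(s, 1024):.0f}x on 1,024 cores")
    # ~91x and ~506x, i.e., one to two orders of magnitude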
Rapidly Progressive Maxillary Atelectasis.
Elkhatib, Ahmad; McMullen, Kyle; Hachem, Ralph Abi; Carrau, Ricardo L; Mastros, Nicholas
2017-07-01
Report of a patient with rapidly progressive maxillary atelectasis documented by sequential imaging. A 51-year-old man presented with left periorbital and retro-orbital pain associated with left nasal obstruction. An initial computed tomographic (CT) scan of the paranasal sinuses failed to reveal any significant abnormality. A subsequent CT scan, indicated for recurrence of symptoms 11 months later, showed significant maxillary atelectasis. An uncinectomy, maxillary antrostomy, and anterior ethmoidectomy resulted in complete resolution of the symptoms. Chronic maxillary atelectasis is most commonly a consequence of chronic rhinosinusitis. All previous reports have indicated a chronic process but lacked documentation of the course of the disease. This report documents a case of rapidly progressive chronic maxillary atelectasis with CT scans that demonstrate changes in the maxillary sinus (from normal to atelectatic) within 11 months.
Performance review using sequential sampling and a practice computer.
Difford, F
1988-06-01
The use of sequential sample analysis for repeated performance review is described with examples from several areas of practice. The value of a practice computer in providing a random sample from a complete population, evaluating the parameters of a sequential procedure, and producing a structured worksheet is discussed. It is suggested that sequential analysis has advantages over conventional sampling in the area of performance review in general practice.
Analysis of filter tuning techniques for sequential orbit determination
NASA Technical Reports Server (NTRS)
Lee, T.; Yee, C.; Oza, D.
1995-01-01
This paper examines filter tuning techniques for a sequential orbit determination (OD) covariance analysis. Recently, there has been a renewed interest in sequential OD, primarily due to the successful flight qualification of the Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) using Doppler data extracted onboard the Extreme Ultraviolet Explorer (EUVE) spacecraft. TONS computes highly accurate orbit solutions onboard the spacecraft in realtime using a sequential filter. As the result of the successful TONS-EUVE flight qualification experiment, the Earth Observing System (EOS) AM-1 Project has selected TONS as the prime navigation system. In addition, sequential OD methods can be used successfully for ground OD. Whether data are processed onboard or on the ground, a sequential OD procedure is generally favored over a batch technique when a realtime automated OD system is desired. Recently, OD covariance analyses were performed for the TONS-EUVE and TONS-EOS missions using the sequential processing options of the Orbit Determination Error Analysis System (ODEAS). ODEAS is the primary covariance analysis system used by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). The results of these analyses revealed a high sensitivity of the OD solutions to the state process noise filter tuning parameters. The covariance analysis results show that the state estimate error contributions from measurement-related error sources, especially those due to the random noise and satellite-to-satellite ionospheric refraction correction errors, increase rapidly as the state process noise increases. These results prompted an in-depth investigation of the role of the filter tuning parameters in sequential OD covariance analysis. This paper analyzes how the spacecraft state estimate errors due to dynamic and measurement-related error sources are affected by the process noise level used. This information is then used to establish guidelines for determining optimal filter tuning parameters in a given sequential OD scenario for both covariance analysis and actual OD. Comparisons are also made with corresponding definitive OD results available from the TONS-EUVE analysis.
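The filter-tuning trade-off at issue can be seen in the simplest possible setting. This 1-D Kalman filter sketch (ours, not ODEAS; all values are toy numbers) shows how the state process noise q controls how aggressively the filter tracks noisy measurements:

    import random

    def kalman_1d(measurements, q, r=1.0, x0=0.0, p0=1.0):
        x, p = x0, p0
        for z in measurements:
            p = p + q              # predict: process noise inflates state uncertainty
            k = p / (p + r)        # Kalman gain
            x = x + k * (z - x)    # update with the measurement residual
            p = (1 - k) * p
        return x

    rng = random.Random(0)
    zs = [10.0 + rng.gauss(0, 1) for _ in range(50)]  # noisy measurements of a constant
    for q in (1e-4, 1e-1):
        print(f"q = {q}: final estimate {kalman_1d(zs, q):.2f}")

Larger q weights recent measurements more heavily, so measurement-related errors propagate more strongly into the state estimate, which is the sensitivity the covariance analysis quantifies.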
1990-07-01
sleep to favor one set of material in preference to others. This could apply to skill learning as well as declarative memory with considerable potential...not be advantageous for an organism to store a large number of specific memories, specific records of the many experiences of each day of its lifetime...be stored in real time in a sequential representation, as on a serial computer tape. Access to this "episodic" memory would be by serial order, by time
Bio-inspired computational heuristics to study Lane-Emden systems arising in astrophysics model.
Ahmad, Iftikhar; Raja, Muhammad Asif Zahoor; Bilal, Muhammad; Ashraf, Farooq
2016-01-01
This study reports novel hybrid computational methods for the solution of the nonlinear singular Lane-Emden type differential equation arising in astrophysics models, exploiting the strength of unsupervised neural network models and stochastic optimization techniques. In the scheme, the neural network, a sub-part of the larger field called soft computing, is used to model the equation in an unsupervised manner. The proposed approximate solutions of the higher-order ordinary differential equation are calculated with the weights of neural networks trained with a genetic algorithm, and with pattern search hybridized with sequential quadratic programming for rapid local convergence. The results of the proposed solvers for the nonlinear singular systems are in good agreement with the standard solutions. The accuracy and convergence of the design schemes are demonstrated by statistical performance measures based on a sufficiently large number of independent runs.
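The residual-minimization core of such schemes can be sketched without the full hybrid machinery. Below is our simplified Python illustration (assumes SciPy is available): a polynomial trial function stands in for the neural network, and SciPy's SLSQP routine (a sequential quadratic programming method) stands in for the GA/pattern-search hybrid. It targets the n = 5 Lane-Emden equation, for which a closed-form solution exists for checking:

    import numpy as np
    from scipy.optimize import minimize

    n_idx = 5                               # Lane-Emden index; y = (1 + x^2/3)^(-1/2) exactly
    xs = np.linspace(0.1, 2.0, 40)          # collocation points (x > 0 avoids the singularity)

    def trial(w, x):
        # y = 1 + x^2 * p(x) satisfies y(0) = 1 and y'(0) = 0 by construction
        return 1.0 + x**2 * np.polyval(w, x)

    def residual_sq(w):
        h = 1e-4                            # finite-difference step for y' and y''
        y = trial(w, xs)
        d1 = (trial(w, xs + h) - trial(w, xs - h)) / (2 * h)
        d2 = (trial(w, xs + h) - 2 * y + trial(w, xs - h)) / h**2
        # mean squared residual of y'' + (2/x) y' + y^n = 0
        return np.mean((d2 + 2.0 / xs * d1 + np.sign(y) * np.abs(y)**n_idx)**2)

    res = minimize(residual_sq, x0=np.zeros(4), method="SLSQP",
                   options={"maxiter": 500, "ftol": 1e-12})
    exact = (1.0 + xs**2 / 3.0) ** -0.5
    print("max |error| vs exact solution:", float(np.abs(trial(res.x, xs) - exact).max()))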
NASA Astrophysics Data System (ADS)
Gallagher, C. B.; Ferraro, A.
2018-05-01
A possible alternative to the standard model of measurement-based quantum computation (MBQC) is offered by the sequential model of MBQC—a particular class of quantum computation via ancillae. Although these two models are equivalent under ideal conditions, their relative resilience to noise in practical conditions is not yet known. We analyze this relationship for various noise models in the ancilla preparation and in the entangling-gate implementation. The comparison of the two models is performed utilizing both the gate infidelity and the diamond distance as figures of merit. Our results show that in the majority of instances the sequential model outperforms the standard one in regard to a universal set of operations for quantum computation. Further investigation is made into the performance of sequential MBQC in experimental scenarios, thus setting benchmarks for possible cavity-QED implementations.
Brown, Peter; Pullan, Wayne; Yang, Yuedong; Zhou, Yaoqi
2016-02-01
The three-dimensional tertiary structure of a protein at near-atomic resolution provides insight into its function and evolution. Because a protein's structure largely determines its function, similarity in structure usually implies similarity in function. As such, structure alignment techniques are often useful in the classification of protein function. Given the rapidly growing rate of new, experimentally determined structures being made available from repositories such as the Protein Data Bank, fast and accurate computational structure comparison tools are required. This paper presents SPalignNS, a non-sequential protein structure alignment tool using a novel asymmetrical greedy search technique. The performance of SPalignNS was evaluated against existing sequential and non-sequential structure alignment methods by performing trials with commonly used datasets. These benchmark datasets used to gauge alignment accuracy include (i) 9538 pairwise alignments implied by the HOMSTRAD database of homologous proteins; (ii) a subset of 64 difficult alignments from set (i) that have low structure similarity; (iii) 199 pairwise alignments of proteins with similar structure but different topology; and (iv) a subset of 20 pairwise alignments from the RIPC set. SPalignNS is shown to achieve greater alignment accuracy (lower or comparable root-mean-square distance with increased structure overlap coverage) for all datasets, and the highest agreement with reference alignments from the challenging dataset (iv) above, when compared with both sequentially constrained alignments and other non-sequential alignments. SPalignNS was implemented in C++. The source code, binary executable, and a web server version are freely available at: http://sparks-lab.org yaoqi.zhou@griffith.edu.au. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
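The geometric score by which such alignments are judged can be computed directly. A minimal sketch (ours, not part of SPalignNS) of RMSD after optimal superposition via the Kabsch algorithm:

    import numpy as np

    def kabsch_rmsd(P, Q):
        # center both coordinate sets, then find the optimal rotation by SVD
        P = P - P.mean(axis=0)
        Q = Q - Q.mean(axis=0)
        V, S, Wt = np.linalg.svd(P.T @ Q)
        d = np.sign(np.linalg.det(V @ Wt))       # guard against improper rotations
        R = V @ np.diag([1.0, 1.0, d]) @ Wt
        return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))

    P = np.random.rand(30, 3) * 10.0             # toy C-alpha trace
    Q = P + np.random.normal(0, 0.5, P.shape)    # noisy copy of the same fold
    print(f"RMSD after superposition: {kabsch_rmsd(P, Q):.2f}")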
Heuristic and optimal policy computations in the human brain during sequential decision-making.
Korn, Christoph W; Bach, Dominik R
2018-01-23
Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.
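The normatively optimal policy in tasks of this shape is computable by backward induction. The sketch below (ours, with a toy energy budget and gamble, not the authors' task parameters) illustrates why low energy states favor gambling when only the final state is rewarded:

    def optimal_policy(horizon=5, p_win=0.5, lo=-10, hi=10):
        clamp = lambda e: max(min(e, hi), lo)
        V = {e: (1.0 if e > 0 else 0.0) for e in range(lo, hi + 1)}  # terminal reward
        policy = {}
        for t in reversed(range(horizon)):
            V_next, V = V, {}
            for e in range(lo, hi + 1):
                safe = V_next[clamp(e - 1)]                          # certain small loss
                risky = (p_win * V_next[clamp(e + 2)]
                         + (1 - p_win) * V_next[clamp(e - 3)])       # gamble
                policy[(t, e)] = "risky" if risky > safe else "safe"
                V[e] = max(safe, risky)
        return policy

    policy = optimal_policy()
    print(policy[(0, 1)], policy[(0, 6)])  # near starvation -> risky; comfortable -> safe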
Cuevas Rivera, Dario; Bitzer, Sebastian; Kiebel, Stefan J.
2015-01-01
The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an ‘intelligent coincidence detector’, which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena. PMID:26451888
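The generative core of the model is easy to sketch. This Python illustration (ours, omitting the paper's Bayesian online inference) shows how generalized Lotka-Volterra dynamics with asymmetric inhibition produce sequential firing-rate patterns, the "winnerless competition" the model builds on:

    import numpy as np

    rng = np.random.default_rng(0)
    N, steps, dt = 5, 8000, 0.01
    rho = np.full((N, N), 1.5)              # strong mutual inhibition...
    np.fill_diagonal(rho, 1.0)
    for i in range(N):
        rho[(i + 1) % N, i] = 0.5           # ...but each unit inhibits its successor weakly
    x = np.full(N, 0.2)
    winners = []
    for t in range(steps):
        x += dt * x * (1.0 - rho @ x) + dt * 1e-2 * rng.random(N)  # GLV update plus tiny drive
        if t % 400 == 0:
            winners.append(int(x.argmax()))
    print("dominant unit over time:", winners)  # activity visits the units in sequence

Each single-unit state is a saddle whose only unstable direction points to the next unit, so the trajectory hops along a fixed sequence, one simple way dense input can be turned into a sparse, ordered code.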
EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.
Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah
2017-12-01
Developing subject-specific classifiers to recognize mental states fast and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. Subject-specific wavelet parameters based on a grid-search method were first developed to determine the evidence accumulation curve for the sequential classifier. We then proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the cumulative curve and a desired expected stopping time. As a result, it balances the decision time of each class; we term it balanced-threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed the average maximum accuracy of the proposed method to be 83.4% and the average decision time 2.77 s, compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves classification accuracy and decision speed compared with other nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.
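The classical SPRT that BTSPRT modifies is short enough to state in full. A minimal sketch (ours; the Gaussian class models and all parameters are illustrative) with the usual Wald threshold approximations:

    import math, random

    def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.05, beta=0.05):
        upper = math.log((1 - beta) / alpha)   # accept class 1
        lower = math.log(beta / (1 - alpha))   # accept class 0
        llr = 0.0
        for n, x in enumerate(samples, 1):
            # Gaussian log-likelihood ratio of class 1 vs class 0 for one sample
            llr += (x - mu0) ** 2 / (2 * sigma**2) - (x - mu1) ** 2 / (2 * sigma**2)
            if llr >= upper:
                return "class 1", n
            if llr <= lower:
                return "class 0", n
        return "undecided", len(samples)

    rng = random.Random(42)
    print(sprt([rng.gauss(1.0, 1.0) for _ in range(100)]))  # data truly from class 1

BTSPRT's contribution is how the two thresholds are set: rather than the symmetric Wald bounds above, they are chosen from the evidence accumulation curve so that expected decision times are balanced across classes.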
NASA Technical Reports Server (NTRS)
Hague, D. S.; Vanderberg, J. D.; Woodbury, N. W.
1974-01-01
A method for rapidly examining the probable applicability of weight-estimating formulae to a specific aerospace vehicle design is presented. The Multivariate Analysis Retrieval and Storage System (MARS) comprises three computer programs which operate sequentially on the weight and geometry characteristics of past aerospace vehicle designs. Weight and geometric characteristics are stored in a set of fully computerized data bases. Additional data bases are readily added to the MARS system, and/or the existing data bases may easily be expanded to include additional vehicles or vehicle characteristics.
The simultaneous quantitation of ten amino acids in soil extracts by mass fragmentography
NASA Technical Reports Server (NTRS)
Pereira, W. E.; Hoyano, Y.; Reynolds, W. E.; Summons, R. E.; Duffield, A. M.
1972-01-01
A specific and sensitive method for the identification and simultaneous quantitation by mass fragmentography of ten of the amino acids present in soil was developed. The technique uses a computer-driven quadrupole mass spectrometer, and a commercial preparation of deuterated amino acids is used as the internal standard for quantitation. The results obtained are comparable with those from an amino acid analyzer. In the quadrupole mass spectrometer-computer system, up to 25 pre-selected ions may be monitored sequentially. This allows a maximum of 12 different amino acids (one specific ion in each of the undeuterated and deuterated amino acid spectra) to be quantitated. The method is relatively rapid (analysis time of approximately one hour) and is capable of quantitating nanogram quantities of amino acids.
Space-Time Fluid-Structure Interaction Computation of Flapping-Wing Aerodynamics
2013-12-01
SST-VMST." The structural mechanics computations are based on the Kirchhoff -Love shell model. We use a sequential coupling technique, which is...mechanics computations are based on the Kirchhoff -Love shell model. We use a sequential coupling technique, which is ap- plicable to some classes of FSI...we use the ST-VMS method in combination with the ST-SUPS method. The structural mechanics computations are mostly based on the Kirchhoff –Love shell
Strube-Bloss, Martin F.; Herrera-Valdez, Marco A.; Smith, Brian H.
2012-01-01
Neural representations of odors are subject to computations that involve sequentially convergent and divergent anatomical connections across different areas of the brains in both mammals and insects. Furthermore, in both mammals and insects higher order brain areas are connected via feedback connections. In order to understand the transformations and interactions that this connectivity makes possible, an ideal experiment would compare neural responses across different, sequential processing levels. Here we present results of recordings from a first order olfactory neuropil, the antennal lobe (AL), and a higher order multimodal integration and learning center, the mushroom body (MB), in the honey bee brain. We recorded projection neurons (PN) of the AL and extrinsic neurons (EN) of the MB, which provide the outputs from the two neuropils. Recordings at each level were made in different animals in some experiments and simultaneously in the same animal in others. We presented two odors and their mixture to compare odor response dynamics as well as classification speed and accuracy at each neural processing level. Surprisingly, the EN ensemble starts separating odor stimuli rapidly, before the PN ensemble has reached significant separation. Furthermore, the EN ensemble at the MB output reaches a maximum separation of odors between 84-120 ms after odor onset, which is 26 to 133 ms faster than the maximum separation at the AL output ensemble, two synapses earlier in processing. It is likely that a subset of very fast PNs, which respond before the ENs, may initiate the rapid EN ensemble response. We suggest therefore that the timing of the EN ensemble activity would allow retroactive integration of its signal into the ongoing computation of the AL via centrifugal feedback. PMID:23209711
Einstein, Andrew J.; Wolff, Steven D.; Manheimer, Eric D.; Thompson, James; Terry, Sylvia; Uretsky, Seth; Pilip, Adalbert; Peters, M. Robert
2009-01-01
Radiation dose from coronary computed tomography angiography may be reduced using a sequential scanning protocol rather than a conventional helical scanning protocol. Here we compare radiation dose and image quality from coronary computed tomography angiography in a single center between an initial period during which helical scanning with electrocardiographically-controlled tube current modulation was used for all patients (n=138) and after adoption of a strategy incorporating sequential scanning whenever appropriate (n=261). Using the sequential-if-appropriate strategy, sequential scanning was employed in 86.2% of patients. Compared to the helical-only strategy, this strategy was associated with a 65.1% dose reduction (mean dose-length product of 305.2 vs. 875.1 mGy·cm and mean effective dose of 5.2 mSv vs. 14.9 mSv, respectively), with no significant change in overall image quality, step artifacts, motion artifacts, or perceived image noise. For the 225 patients undergoing sequential scanning, the dose-length product was 201.9 ± 90.0 mGy·cm, while for patients undergoing helical scanning under either strategy the dose-length product was 890.9 ± 293.3 mGy·cm (p<0.0001), corresponding to mean effective doses of 3.4 mSv and 15.1 mSv, respectively, a 77.5% reduction. Image quality was significantly greater for the sequential studies, reflecting the poorer image quality in patients undergoing helical scanning in the sequential-if-appropriate strategy. In conclusion, a sequential-if-appropriate diagnostic strategy reduces dose markedly compared to a helical-only strategy, with no significant difference in image quality. PMID:19892048
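The dose arithmetic can be checked from the reported dose-length products. A small worked example (ours), using the common chest conversion coefficient k of about 0.017 mSv/(mGy·cm), which is an assumption, reproduces the stated effective doses:

    K_CHEST = 0.017  # assumed chest conversion coefficient, mSv per mGy·cm

    for name, dlp in (("sequential", 305.2), ("helical", 875.1)):
        print(f"{name}: DLP {dlp} mGy·cm -> ~{dlp * K_CHEST:.1f} mSv")
    # sequential ~5.2 mSv vs helical ~14.9 mSv: a 65.1% reduction, as reported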
Manheimer, Eric D.; Peters, M. Robert; Wolff, Steven D.; Qureshi, Mehreen A.; Atluri, Prashanth; Pearson, Gregory D.N.; Einstein, Andrew J.
2011-01-01
Triple-rule-out computed tomography angiography (TRO CTA), performed to evaluate the coronary arteries, pulmonary arteries, and thoracic aorta, has been associated with high radiation exposure. Utilization of sequential scanning for coronary computed tomography angiography (CCTA) reduces radiation dose. The application of sequential scanning to TRO CTA is much less well defined. We analyzed radiation dose and image quality from TRO CTA performed in a single outpatient center, comparing scans from a period during which helical scanning with electrocardiographically controlled tube current modulation was used for all patients (n=35) and after adoption of a strategy incorporating sequential scanning whenever appropriate (n=35). Sequential scanning was able to be employed in 86% of cases. The sequential-if-appropriate strategy, compared to the helical-only strategy, was associated with a 61.6% dose decrease (mean dose-length product [DLP] of 439 mGy×cm vs 1144 mGy×cm and mean effective dose of 7.5 mSv vs 19.4 mSv, respectively, p<0.0001). Similarly, there was a 71.5% dose reduction among 30 patients scanned with the sequential protocol compared to 40 patients scanned with the helical protocol under either strategy (326 mGy×cm vs 1141 mGy×cm and 5.5 mSv vs 19.4 mSv, respectively, p<0.0001). Although image quality did not differ between strategies, there was a non-statistically significant trend towards better quality in the sequential protocol compared to the helical protocol. In conclusion, approaching TRO CTA with a diagnostic strategy of sequential scanning as appropriate offers a marked reduction in radiation dose while maintaining image quality. PMID:21306693
SIMS: A Hybrid Method for Rapid Conformational Analysis
Gipson, Bryant; Moll, Mark; Kavraki, Lydia E.
2013-01-01
Proteins are at the root of many biological functions, often performing complex tasks as the result of large changes in their structure. Describing the exact details of these conformational changes, however, remains a central challenge for computational biology due to the enormous computational requirements of the problem. This has engendered the development of a rich variety of useful methods designed to answer specific questions at different levels of spatial, temporal, and energetic resolution. These methods fall largely into two classes: physically accurate, but computationally demanding methods and fast, approximate methods. We introduce here a new hybrid modeling tool, the Structured Intuitive Move Selector (SIMS), designed to bridge the divide between these two classes, while allowing the benefits of both to be seamlessly integrated into a single framework. This is achieved by applying a modern motion planning algorithm, borrowed from the field of robotics, in tandem with a well-established protein modeling library. SIMS can combine precise energy calculations with approximate or specialized conformational sampling routines to produce rapid, yet accurate, analysis of the large-scale conformational variability of protein systems. Several key advancements are shown, including the abstract use of generically defined moves (conformational sampling methods) and an expansive probabilistic conformational exploration. We present three example problems that SIMS is applied to and demonstrate a rapid solution for each. These include the automatic determination of "active" residues for the hinge-based system Cyanovirin-N, exploring conformational changes involving long-range coordinated motion between non-sequential residues in Ribose-Binding Protein, and the rapid discovery of a transient conformational state of Maltose-Binding Protein, previously only determined by Molecular Dynamics. For all cases we provide energetic validations using well-established energy fields, demonstrating this framework as a fast and accurate tool for the analysis of a wide range of protein flexibility problems. PMID:23935893
Joint water-fat separation and deblurring for spiral imaging.
Wang, Dinghui; Zwart, Nicholas R; Pipe, James G
2018-06-01
Most previous approaches to spiral Dixon water-fat imaging perform the water-fat separation and deblurring sequentially, based on the assumption that the phase accumulation and blurring resulting from off-resonance are separable. This condition can easily be violated in regions where the B0 inhomogeneity varies rapidly. The goal of this work is to present a novel joint water-fat separation and deblurring method for spiral imaging. The proposed approach is based on a more accurate signal model that takes the phase accumulation and blurring into account simultaneously. A conjugate gradient method is used in the image domain to reconstruct the deblurred water and fat iteratively. Spatially varying convolutions with a local convergence criterion are used to reduce the computational demand. Both simulation and high-resolution brain imaging have demonstrated that the proposed joint method consistently improves the quality of reconstructed water and fat images compared with the sequential approach, especially in regions where the field inhomogeneity changes rapidly in space. The loss of signal-to-noise ratio resulting from deblurring is minor at optimal echo times. High-quality water-fat spiral imaging can be achieved with the proposed joint approach, provided that an accurate field map of the B0 inhomogeneity is available. Magn Reson Med 79:3218-3228, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M
2018-04-01
Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
Yang, Qiang; Ma, Yanling; Zhao, Yongxue; She, Zhennan; Wang, Long; Li, Jie; Wang, Chunling; Deng, Yihui
2013-01-01
Background Sequential low-dose chemotherapy has received great attention for its unique advantages in attenuating multidrug resistance of tumor cells. Nevertheless, it runs the risk of producing new problems associated with the accelerated blood clearance phenomenon, especially with multiple injections of PEGylated liposomes. Methods Liposomes were labeled with the fluorescent phospholipid 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-(7-nitro-2-1,3-benzoxadiazol-4-yl) and loaded with epirubicin (EPI). The pharmacokinetic profile and biodistribution of the drug and the liposome carrier following multiple injections were determined, and the antitumor effect of sequential low-dose chemotherapy was tested. To clarify this unexpected phenomenon, production of polyethylene glycol (PEG)-specific immunoglobulin M (IgM), drug release, and residual complement activity were measured in serum. Results The first or sequential injections of PEGylated liposomes within a certain dose range induced rapid clearance of subsequently injected PEGylated liposomal EPI. Of note, the clearance of EPI was two- to three-fold faster than that of the liposome itself, and a large amount of EPI was released from the liposomes in the first 30 minutes, in a manner directly dependent on complement activation. The therapeutic efficacy of liposomal EPI (0.75 mg EPI/kg body weight) over 10 days of sequential injections in S180 tumor-bearing mice was almost completely abolished between the sixth and tenth days of the sequential injections, even though the subsequently injected doses were doubled. The level of PEG-specific IgM in the blood increased rapidly, with a larger amount of complement being activated, while the concentration of EPI in blood and tumor tissue was significantly reduced. Conclusion Our investigation implies that the accelerated blood clearance phenomenon and its accompanying rapid leakage and clearance of drug following sequential low-dose injections may reverse the unique pharmacokinetic-toxicity profile of liposomes, which deserves attention. A more reasonable treatment regime should therefore be selected to lessen or even eliminate this phenomenon. PMID:23576868
A comparison of sequential and spiral scanning techniques in brain CT.
Pace, Ivana; Zarb, Francis
2015-01-01
To evaluate and compare image quality and radiation dose of sequential computed tomography (CT) examinations of the brain and spiral CT examinations of the brain imaged on a GE HiSpeed NX/I Dual Slice 2CT scanner. A random sample of 40 patients referred for CT examination of the brain was selected and divided into 2 groups. Half of the patients were scanned using the sequential technique; the other half were scanned using the spiral technique. Radiation dose data—both the computed tomography dose index (CTDI) and the dose length product (DLP)—were recorded on a checklist at the end of each examination. Using the European Guidelines on Quality Criteria for Computed Tomography, 4 radiologists conducted a visual grading analysis and rated the level of visibility of 6 anatomical structures considered necessary to produce images of high quality. The mean CTDI(vol) and DLP values were statistically significantly higher (P <.05) with the sequential scans (CTDI(vol): 22.06 mGy; DLP: 304.60 mGy • cm) than with the spiral scans (CTDI(vol): 14.94 mGy; DLP: 229.10 mGy • cm). The mean image quality rating scores for all criteria of the sequential scanning technique were statistically significantly higher (P <.05) in the visual grading analysis than those of the spiral scanning technique. In this local study, the sequential technique was preferred over the spiral technique for both overall image quality and differentiation between gray and white matter in brain CT scans. Other similar studies counter this finding. The radiation dose seen with the sequential CT scanning technique was significantly higher than that seen with the spiral CT scanning technique. However, image quality with the sequential technique was statistically significantly superior (P <.05).
Ichikawa, Shota; Kamishima, Tamotsu; Sutherland, Kenneth; Fukae, Jun; Katayama, Kou; Aoki, Yuko; Okubo, Takanobu; Okino, Taichi; Kaneda, Takahiko; Takagi, Satoshi; Tanimura, Kazuhide
2017-10-01
We have developed a refined computer-based method to detect joint space narrowing (JSN) progression with the joint space narrowing progression index (JSNPI) by superimposing sequential hand radiographs. The purpose of this study is to assess the validity of the computer-based method using images obtained from multiple institutions in rheumatoid arthritis (RA) patients. Sequential hand radiographs of 42 patients (37 females and 5 males) with RA from two institutions were analyzed by the computer-based method, with visual scoring systems as a standard of reference. A JSNPI above the smallest detectable difference (SDD) defined JSN progression at the joint level. The sensitivity and specificity of the computer-based method for JSN progression were calculated using the SDD and a receiver operating characteristic (ROC) curve. Out of 314 metacarpophalangeal joints, 34 joints progressed based on the SDD, while 11 joints widened. Twenty-one joints progressed in the computer-based method, 11 joints in the scoring systems, and 13 joints in both methods. Based on the SDD, sensitivity was lower and specificity higher, at 54.2% and 92.8%, respectively. At the most discriminant cutoff point according to the ROC curve, the sensitivity and specificity were 70.8% and 81.7%, respectively. The proposed computer-based method provides quantitative measurement of JSN progression using sequential hand radiographs and may be a useful tool in the follow-up assessment of joint damage in RA patients.
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in the geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains the parallelization strategy and the main modifications in detail. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
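The sequential simulation loop that was parallelized has a compact serial form. A minimal 1-D sketch (ours, far simpler than the GSLIB-derived code) of sequential Gaussian simulation with simple kriging:

    import numpy as np

    def sgs_1d(n=100, corr_len=10.0, seed=0):
        rng = np.random.default_rng(seed)
        cov = lambda h: np.exp(-np.abs(h) / corr_len)  # exponential covariance, unit sill
        values = np.full(n, np.nan)
        for idx in rng.permutation(n):                 # random visiting path
            known = np.flatnonzero(~np.isnan(values))
            if known.size == 0:
                mean, var = 0.0, 1.0
            else:
                neigh = known[np.argsort(np.abs(known - idx))][:8]  # nearest simulated nodes
                C = cov(neigh[:, None] - neigh[None, :])
                c = cov(neigh - idx)
                w = np.linalg.solve(C, c)              # simple-kriging weights
                mean = float(w @ values[neigh])
                var = float(max(1.0 - w @ c, 1e-9))    # simple-kriging variance
            values[idx] = rng.normal(mean, np.sqrt(var))
        return values

    print(sgs_1d()[:5].round(2))

The data dependence is visible here: each node's conditional distribution depends on previously simulated nodes, which is exactly what makes parallelizing the path non-trivial.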
Efficient computation of hashes
NASA Astrophysics Data System (ADS)
Lopes, Raul H. C.; Franqueira, Virginia N. L.; Hobson, Peter R.
2014-06-01
The sequential computation of hashes, which lies at the core of many distributed storage systems and is found, for example, in grid services, can hinder efficiency in service quality and even pose security challenges that can only be addressed by the use of parallel hash-tree modes. The main contributions of this paper are, first, the identification of several efficiency and security challenges posed by the use of sequential hash computation based on the Merkle-Damgård engine. In addition, alternatives for the parallel computation of hash trees are discussed, and a prototype for a new parallel implementation of the Keccak function, the SHA-3 winner, is introduced.
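The tree mode's structure is straightforward to prototype. The sketch below (ours, using SHA-256 from Python's hashlib as a stand-in for Keccak; the chunk size and domain-separation bytes are arbitrary choices) hashes chunks in parallel and folds the digests pairwise into a root:

    import hashlib
    from concurrent.futures import ProcessPoolExecutor

    def leaf_hash(chunk: bytes) -> bytes:
        return hashlib.sha256(b"\x00" + chunk).digest()      # domain-separate leaves

    def tree_hash(data: bytes, chunk_size: int = 1 << 20) -> str:
        chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)] or [b""]
        with ProcessPoolExecutor() as pool:
            level = list(pool.map(leaf_hash, chunks))        # leaves hashed in parallel
        while len(level) > 1:                                # fold digests up to the root
            pairs = [level[i:i + 2] for i in range(0, len(level), 2)]
            level = [hashlib.sha256(b"\x01" + b"".join(p)).digest() for p in pairs]
        return level[0].hex()

    if __name__ == "__main__":
        print(tree_hash(b"x" * (4 << 20)))

Unlike a Merkle-Damgård chain, where each block's compression depends on the previous chaining value, the leaves here are independent, so the dominant cost parallelizes across cores.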
Sequential microfluidic droplet processing for rapid DNA extraction.
Pan, Xiaoyan; Zeng, Shaojiang; Zhang, Qingquan; Lin, Bingcheng; Qin, Jianhua
2011-11-01
This work describes a novel droplet-based microfluidic device, which enables sequential droplet processing for rapid DNA extraction. The microdevice consists of a droplet generation unit, two reagent addition units and three droplet splitting units. The loading/washing/elution steps required for DNA extraction were carried out by sequential microfluidic droplet processing. The movement of superparamagnetic beads, which were used as extraction supports, was controlled with magnetic field. The microdevice could generate about 100 droplets per min, and it took about 1 min for each droplet to perform the whole extraction process. The extraction efficiency was measured to be 46% for λ-DNA, and the extracted DNA could be used in subsequent genetic analysis such as PCR, demonstrating the potential of the device for fast DNA extraction. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Test Generation for Highly Sequential Circuits
1989-08-01
Ghosh, Abhijit; Devadas, Srinivas; Newton, A. Richard
We address the problem of generating test sequences for stuck-at faults…
Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design
ERIC Educational Resources Information Center
Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff
2016-01-01
Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strengths and weaknesses in recent large-scale implementations, there is no simple answer to the question of which design is better because different…
Web-based segmentation and display of three-dimensional radiologic image data.
Silverstein, J; Rubenstein, J; Millman, A; Panko, W
1998-01-01
In many clinical circumstances, viewing sequential radiological image data as three-dimensional models is proving beneficial. However, designing customized computer-generated radiological models is beyond the scope of most physicians, due to specialized hardware and software requirements. We have created a simple method for Internet users to remotely construct and locally display three-dimensional radiological models using only a standard web browser. Rapid model construction is achieved by distributing the hardware intensive steps to a remote server. Once created, the model is automatically displayed on the requesting browser and is accessible to multiple geographically distributed users. Implementation of our server software on large scale systems could be of great service to the worldwide medical community.
Multiuser signal detection using sequential decoding
NASA Astrophysics Data System (ADS)
Xie, Zhenhua; Rushforth, Craig K.; Short, Robert T.
1990-05-01
The application of sequential decoding to the detection of data transmitted over the additive white Gaussian noise channel by K asynchronous transmitters using direct-sequence spread-spectrum multiple access is considered. A modification of Fano's (1963) sequential-decoding metric, allowing the messages from a given user to be safely decoded if that user's Eb/N0 exceeds -1.6 dB, is presented. Computer simulation is used to evaluate the performance of a sequential decoder that uses this metric in conjunction with the stack algorithm. In many circumstances, the sequential decoder achieves results comparable to those obtained using the much more complicated optimal receiver.
Generating finite cyclic and dihedral groups using sequential insertion systems with interactions
NASA Astrophysics Data System (ADS)
Fong, Wan Heng; Sarmin, Nor Haniza; Turaev, Sherzod; Yosman, Ahmad Firdaus
2017-04-01
The operation of insertion has been studied extensively throughout the years for its impact in many areas of theoretical computer science such as DNA computing. First introduced as a generalization of the concatenation operation, many variants of insertion have been introduced, each with their own computational properties. In this paper, we introduce a new variant that enables the generation of some special types of groups called sequential insertion systems with interactions. We show that these new systems are able to generate all finite cyclic and dihedral groups.
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Youngblood, John N.; Saha, Aindam
1987-01-01
Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost parallel systems to increase system performance. Research conducted on the development of a specialized computer architecture for the algorithmic execution of an avionics guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
Exploring the sequential lineup advantage using WITNESS.
Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A
2010-12-01
Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.
ERIC Educational Resources Information Center
Economou, A.; Tzanavaras, P. D.; Themelis, D. G.
2005-01-01
The sequential-injection analysis (SIA) is an approach to sample handling that enables the automation of manual wet-chemistry procedures in a rapid, precise and efficient manner. Experiments using SIA fit well in the course of Instrumental Chemical Analysis, especially in the section on automatic methods of analysis provided by chemistry…
Computational Cognitive Neuroscience Modeling of Sequential Skill Learning
2016-09-21
Schnyer, David (University of Texas at Austin). Final report AFRL-AFOSR-VA-TR-2016-0320; distribution approved for public release.
Boolean Minimization and Algebraic Factorization Procedures for Fully Testable Sequential Machines
1989-09-01
Devadas, Srinivas; Keutzer, Kurt
In this…
Kingsley, I.S.
1987-01-06
A process and apparatus are disclosed for the separation of complex mixtures of carbonaceous material by sequential elution with successively stronger solvents. In the process, a column containing glass beads is maintained in a fluidized state by a rapidly flowing stream of a weak solvent, and the sample is injected into this flowing stream such that a portion of the sample is dissolved therein and the remainder of the sample is precipitated therein and collected as a uniform deposit on the glass beads. Successively stronger solvents are then passed through the column to sequentially elute less soluble materials. 1 fig.
Vandelanotte, Corneel; De Bourdeaudhuij, Ilse; Sallis, James F; Spittaels, Heleen; Brug, Johannes
2005-04-01
Little evidence exists about the effectiveness of "interactive" computer-tailored interventions and about the combined effectiveness of tailored interventions on physical activity and diet. Furthermore, it is unknown whether they should be executed sequentially or simultaneously. The purpose of this study was to examine (a) the effectiveness of interactive computer-tailored interventions for increasing physical activity and decreasing fat intake and (b) which intervening mode, sequential or simultaneous, is most effective in behavior change. Participants (N = 771) were randomly assigned to receive (a) the physical activity and fat intake interventions simultaneously at baseline, (b) the physical activity intervention at baseline and the fat intake intervention 3 months later, (c) the fat intake intervention at baseline and the physical activity intervention 3 months later, or (d) a place in the control group. Six months postbaseline, the results showed that the tailored interventions produced significantly higher physical activity scores, F(2, 573) = 11.4, p < .001, and lower fat intake scores, F(2, 565) = 31.4, p < .001, in the experimental groups when compared to the control group. For both behaviors, the sequential and simultaneous intervening modes proved effective; however, for the fat intake intervention, and for participants who did not meet the recommendation in the physical activity intervention, the simultaneous mode appeared to work better than the sequential mode.
Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan
2018-02-01
In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization, which iteratively improves the classification accuracy by adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare with state-of-the-art techniques to verify the validity of the proposed techniques.
Hajati, Omid; Zarrabi, Khalil; Karimi, Reza; Hajati, Azadeh
2012-01-01
There is still controversy over the differences in the patency rates of the sequential and individual coronary artery bypass grafting (CABG) techniques. The purpose of this paper was to non-invasively evaluate hemodynamic parameters using complete 3D computational fluid dynamics (CFD) simulations of the sequential and the individual methods based on patient-specific data extracted from computed tomography (CT) angiography. For CFD analysis, the geometric model of the coronary arteries was reconstructed using an ECG-gated 64-detector row CT. Modeling the sequential and individual bypass grafting, this study simulates the flow from the aorta to the occluded posterior descending artery (PDA) and the posterior left ventricle (PLV) vessel with six coronary branches, based on the physiologically measured inlet flow as the boundary condition. The maximum calculated wall shear stress (WSS) in the sequential and the individual models was estimated to be 35.1 N/m² and 36.5 N/m², respectively. Compared to the individual bypass method, the sequential graft showed a higher velocity at the proximal segment and a lower spatial wall shear stress gradient (SWSSG) due to the flow splitting caused by the side-to-side anastomosis. The simulated results, combined with its surgical benefits, including the requirement of shorter vein length and fewer anastomoses, advocate the sequential method as the more favorable CABG method.
Evidence-Based Clinical Recommendations for the Administration of the Sequential Motion Rates Task
ERIC Educational Resources Information Center
Icht, Michal; Ben-David, Boaz M.
2018-01-01
The sequential motion rates (SMR) task, which involves rapid and accurate repetitions of a syllable sequence, /pataka/, is a commonly used evaluation tool for oro-motor abilities. Although the SMR is a well-known tool, some aspects of its administration protocol are unspecified. We address the following factors and their role in the SMR protocol:…
NASA Astrophysics Data System (ADS)
Chen, Xinjia; Lacy, Fred; Carriere, Patrick
2015-05-01
Sequential test algorithms are playing increasingly important roles in quickly detecting network intrusions such as port scanners. In view of the fact that such algorithms are usually analyzed based on intuitive approximation or asymptotic analysis, we develop an exact computational method for the performance analysis of such algorithms. Our method can be used to calculate the probability of false alarm and the average detection time to any pre-specified accuracy.
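The classic sequential test underlying many portscan detectors is Wald's sequential probability ratio test (SPRT), and its false-alarm probability and average decision time are exactly the quantities the abstract's method computes exactly. Below is a minimal Monte Carlo sketch, not the authors' computational method, that estimates both for Bernoulli observations; the hypothesized probabilities p0 and p1 and the error targets are illustrative assumptions.

```python
import math
import random

def sprt(observations, p0=0.2, p1=0.8, alpha=0.01, beta=0.01):
    """Wald's SPRT on Bernoulli observations.
    H0: success probability p0 (benign host); H1: p1 (scanner).
    Returns (decision, number of samples consumed)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

# Monte Carlo estimate of false-alarm rate and mean decision time under H0
random.seed(1)
trials = [sprt(random.random() < 0.2 for _ in range(1000)) for _ in range(10000)]
false_alarm = sum(1 for d, _ in trials if d == "H1") / len(trials)
avg_time = sum(n for _, n in trials) / len(trials)
print(f"false alarm ~ {false_alarm:.4f} (target 0.01), mean time ~ {avg_time:.1f}")
```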
Harold R. Offord
1966-01-01
Sequential sampling based on a negative binomial distribution of Ribes populations required less than half the time taken by regular systematic line-transect sampling in a comparison test. It gave the same control decision as the regular method in 9 of 13 field trials. A computer program that permits sequential plans to be built readily for other white pine regions is...
On the Lulejian-I Combat Model
1976-08-01
possible initial massing of the attacking side's resources, the model tries to represent in a game-theoretic context the adversary nature of the...sequential game, as outlined in [A]. In principle, it is necessary to run the combat simulation once for each possible set of sequentially chosen...sequential game, in which the evaluative portion of the model (i.e., the combat assessment) serves to compute intermediate and terminal payoffs for the
Computer-Based Instruction for TRIDENT FBM Training
1976-06-01
remote voice feedback to an operator. In this case it is possible to display text which represents the voice messages required during sequential ...provides two main services: (a) the preparation of missiles for sequential launching with self-guidance after launch, and (b) the coordination of...monitoring the status of the guidance system in each missile. FCS SWS coordination consists of monitoring systems involved in sequential functions at
Buffer management for sequential decoding. [block erasure probability reduction
NASA Technical Reports Server (NTRS)
Layland, J. W.
1974-01-01
Sequential decoding has been found to be an efficient means of communicating at low undetected error rates from deep space probes, but erasure or computational overflow remains a significant problem. Erasure of a block occurs when the decoder has not finished decoding that block at the time that it must be output. By drawing upon analogies in computer time sharing, this paper develops a buffer-management strategy which reduces the decoder idle time to a negligible level, and therefore improves the erasure probability of a sequential decoder. For a decoder with a speed advantage of ten and a buffer size of ten blocks, operating at an erasure rate of .01, use of this buffer-management strategy reduces the erasure rate to less than .0001.
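A toy queueing model makes the erasure mechanism concrete: blocks arrive at a fixed rate, per-block decoding effort is heavy-tailed (Pareto-distributed, as is typical of sequential decoding), and a block is erased if it is not decoded by its output deadline. The sketch below simulates only a plain FIFO baseline, not Layland's buffer-management strategy; the Pareto exponent and all sizes are assumed values.

```python
import random

def erasure_rate(n_blocks=200_000, buffer_blocks=10, speed_advantage=10.0,
                 pareto_alpha=1.1, seed=0):
    """FIFO decoder fed one block per time unit; a block is erased if it
    is not finished within buffer_blocks time units of its arrival."""
    rng = random.Random(seed)
    # Pareto(alpha) with scale xm has mean alpha*xm/(alpha-1); choose xm so
    # mean decoding time per block is 1/speed_advantage block times.
    xm = (pareto_alpha - 1) / (pareto_alpha * speed_advantage)
    finish, erased = 0.0, 0
    for i in range(n_blocks):
        arrival = float(i)                       # one block per time unit
        service = xm * rng.paretovariate(pareto_alpha)
        finish = max(finish, arrival) + service  # FIFO service discipline
        if finish > arrival + buffer_blocks:     # missed its output deadline
            erased += 1
    return erased / n_blocks

print(f"erasure rate ~ {erasure_rate():.5f}")
```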
Challenges in reducing the computational time of QSTS simulations for distribution system analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deboever, Jeremiah; Zhang, Xiaochen; Reno, Matthew J.
The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: the number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
NASA Astrophysics Data System (ADS)
Martín Furones, Angel; Anquela Julián, Ana Belén; Dimas-Pages, Alejandro; Cos-Gayón, Fernando
2017-08-01
Precise point positioning (PPP) is a well established Global Navigation Satellite System (GNSS) technique that only requires information from the receiver (or rover) to obtain high-precision position coordinates. This is a very interesting and promising technique because it eliminates the need for a reference station near the rover receiver or a network of reference stations, thus reducing the cost of a GNSS survey. From a computational perspective, there are two ways to solve the system of observation equations produced by static PPP: either in a single step (so-called batch adjustment) or with a sequential adjustment/filter. The results of each should be the same if they are both well implemented. However, if a sequential solution is needed (that is, not only the final coordinates but also those of previous GNSS epochs), as for convergence studies, finding a batch solution becomes a very time consuming task owing to the matrix inversions that accumulate with each consecutive epoch. This is not a problem for the filter solution, which uses information computed in the previous epoch to obtain the solution for the current epoch. Filter implementations, however, need extra consideration of user dynamics and parameter state variations between observation epochs, with appropriate stochastic update parameter variances from epoch to epoch. These filtering considerations are not needed in batch adjustment, which makes it attractive. The main objective of this research is to significantly reduce the computation time required to obtain sequential results using batch adjustment. The new method we implemented in the adjustment process led to a mean reduction in computational time of 45%.
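The batch/filter equivalence the abstract relies on can be seen in a few lines for ordinary least squares with static parameters and no process noise: accumulating per-epoch normal equations and solving once gives the same estimate as the batch solve, while the running normals make an epoch-by-epoch solution cheap. This is a schematic sketch under those assumptions, not the authors' PPP implementation.

```python
import numpy as np

rng = np.random.default_rng(42)
n_epochs, n_par = 500, 4
A = rng.normal(size=(n_epochs, n_par))        # one design row per observation epoch
x_true = np.array([1.0, -2.0, 0.5, 3.0])
y = A @ x_true + 0.01 * rng.normal(size=n_epochs)

# Batch adjustment: one normal-equation solve over all epochs at once.
x_batch = np.linalg.solve(A.T @ A, A.T @ y)

# Sequential accumulation: fold epochs in one at a time. An epoch-k
# estimate only needs a solve of the running normals, avoiding the
# ever-growing solves that make repeated batch solutions costly.
N = np.zeros((n_par, n_par))
b = np.zeros(n_par)
for a_k, y_k in zip(A, y):
    N += np.outer(a_k, a_k)
    b += a_k * y_k
x_seq = np.linalg.solve(N, b)

print(np.allclose(x_batch, x_seq))            # True: identical estimates
```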
Parallel algorithm for computation of second-order sequential best rotations
NASA Astrophysics Data System (ADS)
Redif, Soydan; Kasap, Server
2013-12-01
Algorithms for computing an approximate polynomial matrix eigenvalue decomposition of para-Hermitian systems have emerged as a powerful, generic signal processing tool. A technique that has shown much success in this regard is the sequential best rotation (SBR2) algorithm. Proposed is a scheme for parallelising SBR2 with a view to exploiting the modern architectural features and inherent parallelism of field-programmable gate array (FPGA) technology. Experiments show that the proposed scheme can achieve low execution times while requiring minimal FPGA resources.
A high level language for a high performance computer
NASA Technical Reports Server (NTRS)
Perrott, R. H.
1978-01-01
The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers are modifications of programming languages designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.
1984-06-01
SEQUENTIAL TESTING session (Bldg. A, Room C, 1300-1545), including the talk "A Truncated Sequential Probability Ratio Test". Keywords: suicide optical data, operational testing, reliability, random numbers, bootstrap methods, missing data, sequential testing, fire support, complex computer model, carcinogenesis studies. ...contributed papers can be ascertained from the titles of the
NASA Astrophysics Data System (ADS)
Rizki, Permata Nur Miftahur; Lee, Heezin; Lee, Minsu; Oh, Sangyoon
2017-01-01
With the rapid advance of remote sensing technology, the amount of three-dimensional point-cloud data has increased extraordinarily, requiring faster processing in the construction of digital elevation models. There have been several attempts to accelerate the computation using parallel methods; however, little attention has been given to investigating different approaches for selecting the parallel programming model best suited to a given computing environment. We present our findings and insights identified by implementing three popular high-performance parallel approaches (message passing interface, MapReduce, and GPGPU) on the time-demanding but accurate kriging interpolation. The performances of the approaches are compared by varying the size of the grid and input data. In our empirical experiment, we demonstrate the significant acceleration by all three approaches compared to a C-implemented sequential-processing method. In addition, we also discuss the pros and cons of each method in terms of usability, complexity, infrastructure, and platform limitations to give readers a better understanding of utilizing those parallel approaches for gridding purposes.
Modeling of a Sequential Two-Stage Combustor
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Liu, N.-S.; Gallagher, J. R.; Ryder, R. C.; Brankovic, A.; Hendricks, J. A.
2005-01-01
A sequential two-stage, natural gas fueled power generation combustion system is modeled to examine the fundamental aerodynamic and combustion characteristics of the system. The modeling methodology includes CAD-based geometry definition and combustion computational fluid dynamics analysis. Graphical analysis is used to examine the complex vortical patterns in each component, identifying sources of pressure loss. The simulations demonstrate the importance of including the rotating high-pressure turbine blades in the computation, as this results in direct computation of combustion within the first turbine stage, and accurate simulation of the flow in the second combustion stage. The direct computation of hot-streaks through the rotating high-pressure turbine stage leads to improved understanding of the aerodynamic relationships between the primary and secondary combustors and the turbomachinery.
Automated ILA design for synchronous sequential circuits
NASA Technical Reports Server (NTRS)
Liu, M. N.; Liu, K. Z.; Maki, G. K.; Whitaker, S. R.
1991-01-01
An iterative logic array (ILA) architecture for synchronous sequential circuits is presented. This technique utilizes linear algebra to produce the design equations. The ILA realization of synchronous sequential logic can be fully automated with a computer program. A programmable design procedure is proposed to fulfill the design task and layout generation. A software algorithm in the C language has been developed and tested to generate 1 micron CMOS layouts using the Hewlett-Packard FUNGEN module generator shell.
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis
2015-08-01
Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
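As a rough illustration of what an inexpensive screening step of this kind looks like, the sketch below implements Morris-style elementary-effects screening, which likewise costs on the order of 10 model runs per parameter (r trajectories of n_par+1 runs each) and ranks parameters by mean absolute effect. It is a generic stand-in under assumed settings, not the authors' exact sequential screening method.

```python
import numpy as np

def morris_screen(model, n_par, r=10, delta=0.25, seed=0):
    """Morris elementary-effects screening on the unit hypercube.
    Costs r*(n_par+1) model runs and returns mu* (mean absolute
    elementary effect) per parameter; large mu* marks informative ones."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((r, n_par))
    for t in range(r):
        x = rng.uniform(0, 1 - delta, size=n_par)
        y = model(x)
        for j in rng.permutation(n_par):     # vary one factor at a time
            x_new = x.copy()
            x_new[j] += delta
            y_new = model(x_new)
            effects[t, j] = (y_new - y) / delta
            x, y = x_new, y_new
    return np.abs(effects).mean(axis=0)      # mu*

# Example: only 3 of 10 parameters actually matter
model = lambda x: 5 * x[0] + 3 * x[3] * x[4] + 0.01 * x[7]
print(np.round(morris_screen(model, 10), 2))
```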
Parallelization of Finite Element Analysis Codes Using Heterogeneous Distributed Computing
NASA Technical Reports Server (NTRS)
Ozguner, Fusun
1996-01-01
Performance gains in computer design are quickly consumed as users seek to analyze larger problems to a higher degree of accuracy. Innovative computational methods, such as parallel and distributed computing, seek to multiply the power of existing hardware technology to satisfy the computational demands of large applications. In the early stages of this project, experiments were performed using two large, coarse-grained applications, CSTEM and METCAN. These applications were parallelized on an Intel iPSC/860 hypercube. It was found that the overall speedup was very low, due to large, inherently sequential code segments present in the applications. The overall execution time T_par of the application is dependent on these sequential segments. If these segments make up a significant fraction of the overall code, the application will have a poor speedup measure.
Computational time analysis of the numerical solution of 3D electrostatic Poisson's equation
NASA Astrophysics Data System (ADS)
Kamboh, Shakeel Ahmed; Labadin, Jane; Rigit, Andrew Ragai Henri; Ling, Tech Chaw; Amur, Khuda Bux; Chaudhary, Muhammad Tayyab
2015-05-01
3D Poisson's equation is solved numerically to simulate the electric potential in a prototype design of an electrohydrodynamic (EHD) ion-drag micropump. The finite difference method (FDM) is employed to discretize the governing equation. The system of linear equations resulting from FDM is solved iteratively by using the sequential Jacobi (SJ) and sequential Gauss-Seidel (SGS) methods, and the simulation results are compared to examine the differences between them. The main objective was to analyze the computational time required by both methods with respect to different grid sizes, and to parallelize the Jacobi method to reduce the computational time. In general, the SGS method is faster than the SJ method, but the data parallelism of the Jacobi method may produce a good speedup over the SGS method. In this study, the feasibility of using a parallel Jacobi (PJ) method is examined relative to the SGS method. The MATLAB Parallel/Distributed computing environment is used and a parallel code for the SJ method is implemented. It was found that for small grid sizes the SGS method remains dominant over the SJ and PJ methods, while for large grid sizes both sequential methods may take prohibitively long to converge. The PJ method, however, reduces the computational time to some extent for large grid sizes.
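The trade-off described here, Gauss-Seidel converging in fewer sweeps while Jacobi is data-parallel, is visible even in a small 2D Poisson sketch (the study itself is 3D; the grid size and tolerance below are illustrative): the Jacobi sweep is a single whole-array operation, while the Gauss-Seidel sweep must visit points in order.

```python
import numpy as np

def jacobi_step(u, f, h2):
    """One Jacobi sweep of the 5-point stencil: every interior point is
    updated from the previous iterate, so the sweep is one data-parallel
    array operation (hence easy to parallelize)."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                              u[1:-1, :-2] + u[1:-1, 2:] - h2 * f[1:-1, 1:-1])
    return new

def gauss_seidel_step(u, f, h2):
    """One Gauss-Seidel sweep: each update uses freshly computed
    neighbours, converging in fewer sweeps but forcing a sequential
    visiting order."""
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] +
                              u[i, j-1] + u[i, j+1] - h2 * f[i, j])
    return u

n = 30
h2 = (1.0 / (n - 1)) ** 2
f = np.ones((n, n))                    # constant source term
for name, step in [("Jacobi", jacobi_step), ("Gauss-Seidel", gauss_seidel_step)]:
    u, sweeps = np.zeros((n, n)), 0
    while sweeps < 20000:
        u_new = step(u.copy(), f, h2)
        sweeps += 1
        if np.max(np.abs(u_new - u)) < 1e-6:
            break
        u = u_new
    print(f"{name}: {sweeps} sweeps")
```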
Correlated sequential tunneling through a double barrier for interacting one-dimensional electrons
NASA Astrophysics Data System (ADS)
Thorwart, M.; Egger, R.; Grifoni, M.
2005-07-01
The problem of resonant tunneling through a quantum dot weakly coupled to spinless Tomonaga-Luttinger liquids has been studied. We compute the linear conductance due to sequential tunneling processes upon employing a master equation approach. Besides the previously used lowest-order golden rule rates describing uncorrelated sequential tunneling processes, we systematically include higher-order correlated sequential tunneling (CST) diagrams within the standard Weisskopf-Wigner approximation. We provide estimates for the parameter regions where CST effects can be important. Focusing mainly on the temperature dependence of the peak conductance, we discuss the relation of these findings to previous theoretical and experimental results.
Correlated sequential tunneling in Tomonaga-Luttinger liquid quantum dots
NASA Astrophysics Data System (ADS)
Thorwart, M.; Egger, R.; Grifoni, M.
2005-02-01
We investigate tunneling through a quantum dot formed by two strong impurities in a spinless Tomonaga-Luttinger liquid. Upon employing a Markovian master equation approach, we compute the linear conductance due to sequential tunneling processes. Besides the previously used lowest-order Golden Rule rates describing uncorrelated sequential tunneling (UST) processes, we systematically include higher-order correlated sequential tunneling (CST) diagrams within the standard Weisskopf-Wigner approximation. We provide estimates for the parameter regions where CST effects are shown to dominate over UST. Focusing mainly on the temperature dependence of the conductance maximum, we discuss the relation of our results to previous theoretical and experimental results.
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
NASA Technical Reports Server (NTRS)
Layland, J. W.
1974-01-01
An approximate analysis of the effect of a noisy carrier reference on the performance of sequential decoding is presented. The analysis uses previously developed techniques for evaluating noisy reference performance for medium-rate uncoded communications adapted to sequential decoding for data rates of 8 to 2048 bits/s. In estimating the 10^(-4) deletion probability thresholds for Helios, the model agrees with experimental data to within the experimental tolerances. The computational problem involved in sequential decoding, carrier loop effects, the main characteristics of the medium-rate model, modeled decoding performance, and perspectives on future work are discussed.
ERIC Educational Resources Information Center
Bailey, Suzanne Powers; Jeffers, Marcia
Eighteen interrelated, sequential lesson plans and supporting materials for teaching computer literacy at the elementary and secondary levels are presented. The activities, intended to be infused into the regular curriculum, do not require the use of a computer. The introduction presents background information on computer literacy, suggests a…
How hierarchical is language use?
Frank, Stefan L.; Bod, Rens; Christiansen, Morten H.
2012-01-01
It is generally assumed that hierarchical phrase structure plays a central role in human language. However, considerations of simplicity and evolutionary continuity suggest that hierarchical structure should not be invoked too hastily. Indeed, recent neurophysiological, behavioural and computational studies show that sequential sentence structure has considerable explanatory power and that hierarchical processing is often not involved. In this paper, we review evidence from the recent literature supporting the hypothesis that sequential structure may be fundamental to the comprehension, production and acquisition of human language. Moreover, we provide a preliminary sketch outlining a non-hierarchical model of language use and discuss its implications and testable predictions. If linguistic phenomena can be explained by sequential rather than hierarchical structure, this will have considerable impact in a wide range of fields, such as linguistics, ethology, cognitive neuroscience, psychology and computer science. PMID:22977157
NASA Astrophysics Data System (ADS)
Lorentzen, Rolf J.; Stordal, Andreas S.; Hewitt, Neal
2017-05-01
Flowrate allocation in production wells is a complicated task, especially for multiphase flow combined with several reservoir zones and/or branches. The result depends heavily on the available production data and their accuracy. In the application we show here, downhole pressure and temperature data are available, in addition to the total flowrates at the wellhead. The developed methodology inverts these observations to estimate the fluid flowrates (oil, water and gas) that enter two production branches in a real full-scale producer. A major challenge is accurate estimation of flowrates during rapid variations in the well, e.g. due to choke adjustments. The Auxiliary Sequential Importance Resampling (ASIR) filter was developed to handle such challenges by introducing an auxiliary step, where the particle weights are recomputed (second weighting step) based on how well the particles reproduce the observations. However, the ASIR filter suffers from large computational time when the number of unknown parameters increases. The Gaussian Mixture (GM) filter combines a linear update with the particle filter's ability to capture non-Gaussian behavior. This makes it possible to achieve good performance with fewer model evaluations. In this work we present a new filter which combines the ASIR filter and the Gaussian Mixture filter (denoted ASGM), and demonstrate improved estimation (compared to the ASIR and GM filters) in cases with rapid parameter variations, while maintaining reasonable computational cost.
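The "second weighting step" described here follows the textbook auxiliary particle filter: particles are pre-selected by how well their predicted locations explain the incoming observation, then propagated and re-weighted to correct for that pre-selection. Below is a generic 1D sketch of this two-stage update, not the authors' ASGM filter; the linear-Gaussian model and all constants are illustrative assumptions.

```python
import numpy as np

def apf_step(particles, weights, y, f, noise_std, loglik, rng):
    """One auxiliary particle filter step: re-weight by the *predicted*
    fit to the new observation (auxiliary step), resample, propagate,
    then correct the weights for the pre-selection."""
    mu = f(particles)                                  # deterministic predictions
    first = weights * np.exp(loglik(y, mu))            # first-stage (auxiliary) weights
    first /= first.sum()
    idx = rng.choice(len(particles), size=len(particles), p=first)
    new = f(particles[idx]) + noise_std * rng.normal(size=len(particles))
    w = np.exp(loglik(y, new) - loglik(y, mu[idx]))    # second-stage correction
    return new, w / w.sum()

# Illustrative model: x_t = 0.9 x_{t-1} + N(0, 0.3), y_t = x_t + N(0, 0.5)
rng = np.random.default_rng(0)
f = lambda x: 0.9 * x
loglik = lambda y, x: -0.5 * ((y - x) / 0.5) ** 2
x_true, ys = 0.0, []
for _ in range(50):                                    # simulate truth and data
    x_true = 0.9 * x_true + rng.normal(0, 0.3)
    ys.append(x_true + rng.normal(0, 0.5))
particles = rng.normal(0, 1, size=500)
weights = np.full(500, 1 / 500)
for y in ys:
    particles, weights = apf_step(particles, weights, y, f, 0.3, loglik, rng)
print("filter mean:", float(np.sum(weights * particles)), "truth:", x_true)
```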
An Undergraduate Survey Course on Asynchronous Sequential Logic, Ladder Logic, and Fuzzy Logic
ERIC Educational Resources Information Center
Foster, D. L.
2012-01-01
For a basic foundation in computer engineering, universities traditionally teach synchronous sequential circuit design, using discrete gates or field programmable gate arrays, and a microcomputers course that includes basic I/O processing. These courses, though critical, expose students to only a small subset of tools. At co-op schools like…
PC_Eyewitness and the sequential superiority effect: computer-based lineup administration.
MacLin, Otto H; Zimmerman, Laura A; Malpass, Roy S
2005-06-01
Computer technology has become an increasingly important tool for conducting eyewitness identifications. In the area of lineup identifications, computerized administration offers several advantages for researchers and law enforcement. PC_Eyewitness is designed specifically to administer lineups. To assess this new lineup technology, two studies were conducted in order to replicate the results of previous studies comparing simultaneous and sequential lineups. One hundred twenty university students participated in each experiment. Experiment 1 used traditional paper-and-pencil lineup administration methods to compare simultaneous to sequential lineups. Experiment 2 used PC_Eyewitness to administer simultaneous and sequential lineups. The results of these studies were compared to the meta-analytic results reported by N. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001). No differences were found between paper-and-pencil and PC_Eyewitness lineup administration methods. The core findings of the N. Steblay et al. (2001) meta-analysis were replicated by both administration procedures. These results show that computerized lineup administration using PC_Eyewitness is an effective means for gathering eyewitness identification data.
Barriga-Rivera, Alejandro; Morley, John W; Lovell, Nigel H; Suaning, Gregg J
2016-08-01
Researchers continue to develop visual prostheses towards safer and more efficacious systems. However, limitations still exist in the number of stimulating channels that can be integrated. Therefore, there is a need for spatial and time multiplexing techniques to improve the performance of the current technology. In particular, bright and high-contrast visual scenes may require simultaneous activation of several electrodes. In this research, a 24-electrode array was suprachoroidally implanted in three normally-sighted cats. Multi-unit activity was recorded from the primary visual cortex. Four stimulation strategies were contrasted to provide activation of seven electrodes arranged hexagonally: simultaneous monopolar, sequential monopolar, sequential bipolar and hexapolar. Both monopolar configurations showed similar cortical activation maps. Hexapolar and sequential bipolar configurations activated a lower number of cortical channels. Overall, the return configuration played a more relevant role in cortical activation than time multiplexing, and thus rapid sequential stimulation may assist in reducing the number of channels required to activate large retinal areas.
Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid
2016-01-01
In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE) arising in potential and charge density models of an atom, by exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedures are used to transform the TFE into a system of nonlinear equations. A fitness function is constructed based on the residual error of the constituent equations in the mean square sense, and the problem is formulated as a minimization problem. Optimization of the system parameters is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with state-of-the-art numerical and analytical solutions reveals the worth of our scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated by consistently obtaining optimal values of statistical performance indices calculated for a sufficiently large number of independent runs to establish its significance.
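The hybrid global-search-then-SQP pattern can be sketched with SciPy, using differential evolution as a stand-in for the GA stage and SLSQP for the refinement, applied to the finite-difference residual of the Thomas-Fermi equation y'' = y^(3/2)/sqrt(x) with y(0) = 1 and an assumed truncated far boundary y(b) = 0. The grid size, domain length, and optimizer settings are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

b, n = 5.0, 9                        # illustrative domain length and grid size
x = np.linspace(0.0, b, n + 2)
h = x[1] - x[0]

def fitness(y_in):
    """Mean-square residual of the finite-difference form of the TFE."""
    y = np.concatenate(([1.0], y_in, [0.0]))       # boundary conditions
    yi = np.clip(y[1:-1], 0.0, None)               # guard: y^(3/2) needs y >= 0
    res = (y[:-2] - 2.0 * y[1:-1] + y[2:]) / h**2 - yi**1.5 / np.sqrt(x[1:-1])
    return float(np.mean(res**2))

bounds = [(0.0, 1.0)] * n
# Global search stage (differential evolution standing in for the GA)...
coarse = differential_evolution(fitness, bounds, seed=1, maxiter=300,
                                tol=1e-12, polish=False)
# ...followed by rapid local refinement with an SQP-type method.
fine = minimize(fitness, coarse.x, method="SLSQP", bounds=bounds,
                options={"maxiter": 500, "ftol": 1e-14})
print("residual after global stage:", coarse.fun, "-> after SQP:", fine.fun)
```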
The composite sequential clustering technique for analysis of multispectral scanner data
NASA Technical Reports Server (NTRS)
Su, M. Y.
1972-01-01
The clustering technique consists of two parts: (1) a sequential statistical clustering which is essentially a sequential variance analysis, and (2) a generalized K-means clustering. In this composite clustering technique, the output of (1) is a set of initial clusters which are input to (2) for further improvement by an iterative scheme. This unsupervised composite technique was employed for automatic classification of two sets of remote multispectral earth resource observations. The classification accuracy by the unsupervised technique is found to be comparable to that by traditional supervised maximum likelihood classification techniques. The mathematical algorithms for the composite sequential clustering program and a detailed computer program description with job setup are given.
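The two-part structure is easy to mimic: a single sequential pass that opens a new cluster whenever a sample lies too far from all existing ones (a crude stand-in for the sequential variance analysis), followed by K-means refinement of those seeds. The distance threshold and synthetic data below are assumed for illustration.

```python
import numpy as np

def sequential_seed(X, threshold):
    """Sequential pass: assign each sample to the nearest existing cluster
    if it lies within `threshold`, otherwise open a new cluster."""
    centers, counts = [X[0].astype(float).copy()], [1]
    for x in X[1:]:
        d = [np.linalg.norm(x - c) for c in centers]
        k = int(np.argmin(d))
        if d[k] < threshold:
            counts[k] += 1
            centers[k] += (x - centers[k]) / counts[k]   # running mean update
        else:
            centers.append(x.astype(float).copy())
            counts.append(1)
    return np.array(centers)

def kmeans_refine(X, centers, iters=20):
    """Generalized K-means refinement of the sequential seeds."""
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(len(centers))])
    return centers, labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in [(0, 0), (3, 0), (0, 3)]])
seeds = sequential_seed(X, threshold=1.5)
centers, labels = kmeans_refine(X, seeds)
print(len(seeds), "seed clusters ->", np.round(centers, 2))
```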
The science of computing - Parallel computation
NASA Technical Reports Server (NTRS)
Denning, P. J.
1985-01-01
Although parallel computation architectures have been known for computers since the 1920s, it was only in the 1970s that microelectronic components technologies advanced to the point where it became feasible to incorporate multiple processors in one machine. Concomitantly, the development of algorithms for parallel processing also lagged due to hardware limitations. The speed of computing with solid-state chips is limited by gate switching delays. The physical limit implies that a 1 Gflop operational speed is the maximum for sequential processors. A computer recently introduced features a 'hypercube' architecture with 128 processors connected in networks at 5, 6 or 7 points per grid, depending on the design choice. Its computing speed rivals that of supercomputers, but at a fraction of the cost. The added speed with less hardware is due to parallel processing, which utilizes algorithms representing different parts of an equation that can be broken into simpler statements and processed simultaneously. Present, highly developed computer languages like FORTRAN, PASCAL, COBOL, etc., rely on sequential instructions. Thus, increased emphasis will now be directed at parallel processing algorithms to exploit the new architectures.
Ultrasonic Micro-Blades for the Rapid Extraction of Impact Tracks from Aerogel
NASA Technical Reports Server (NTRS)
Ishii, H. A.; Graham, G. A.; Kearsley, A. T.; Grant, P. G.; Snead, C. J.; Bradley, J. P.
2005-01-01
The science return of NASA's Stardust Mission with its valuable cargo of cometary debris hinges on the ability to efficiently extract particles from silica aerogel collectors. The current method for extracting cosmic dust impact tracks is a mature procedure involving sequential perforation of the aerogel with glass needles on computer controlled micromanipulators. This method is highly successful at removing well-defined aerogel fragments of reasonable optical clarity while causing minimal damage to the surrounding aerogel collector tile. Such a system will be adopted by the JSC Astromaterials Curation Facility in anticipation of Stardust's arrival in early 2006. In addition to Stardust, aerogel is a possible collector for future sample return missions and is used for capture of hypervelocity ejecta in high power laser experiments of interest to LLNL. Researchers will be eager to obtain Stardust samples for study as quickly as possible, and rapid extraction tools requiring little construction, training, or investment would be an attractive asset. To this end, we have experimented with micro-blades for the Stardust impact track extraction process. Our ultimate goal is a rapid extraction system in a clean electron beam environment, such as an SEM or dual-beam FIB, for in situ sample preparation, mounting and analysis.
Wells, Gary L; Steblay, Nancy K; Dysart, Jennifer E
2015-02-01
Eyewitnesses (494) to actual crimes in 4 police jurisdictions were randomly assigned to view simultaneous or sequential photo lineups using laptop computers and double-blind administration. The sequential procedure used in the field experiment mimicked how it is conducted in actual practice (e.g., using a continuation rule, witness does not know how many photos are to be viewed, witnesses resolve any multiple identifications), which is not how most lab experiments have tested the sequential lineup. No significant differences emerged in rates of identifying lineup suspects (25% overall) but the sequential procedure produced a significantly lower rate (11%) of identifying known-innocent lineup fillers than did the simultaneous procedure (18%). The simultaneous/sequential pattern did not significantly interact with estimator variables and no lineup-position effects were observed for either the simultaneous or sequential procedures. Rates of nonidentification were not significantly different for simultaneous and sequential but nonidentifiers from the sequential procedure were more likely to use the "not sure" response option than were nonidentifiers from the simultaneous procedure. Among witnesses who made an identification, 36% (41% of simultaneous and 32% of sequential) identified a known-innocent filler rather than a suspect, indicating that eyewitness performance overall was very poor. The results suggest that the sequential procedure that is used in the field reduces the identification of known-innocent fillers, but the differences are relatively small.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.
Filleron, Thomas; Gal, Jocelyn; Kramar, Andrew
2012-10-01
A major and difficult task is the design of clinical trials with a time-to-event endpoint. In fact, it is necessary to compute the number of events and, in a second step, the required number of patients. Several commercial software packages are available for computing sample size in clinical trials with sequential designs and time-to-event endpoints, but few R functions are implemented. The purpose of this paper is to describe the features and use of the R function plansurvct.func, an add-on function to the package gsDesign, which permits in one run of the program the calculation of the number of events and required sample size, but also the boundaries and corresponding p-values for a group sequential design. The use of the function plansurvct.func is illustrated by several examples and validated using East software. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Wu, Sheng-Yi; Hou, Huei-Tse
2015-01-01
Cognitive styles play an important role in influencing the learning process, but to date no relevant study has been conducted using lag sequential analysis to assess knowledge construction learning patterns based on different cognitive styles in computer-supported collaborative learning activities in online collaborative discussions. This study…
Such Stuff as Habits Are Made on: A Reply to Cooper and Shallice (2006)
ERIC Educational Resources Information Center
Botvinick, Matthew M.; Plaut, David C.
2006-01-01
The representations and mechanisms guiding everyday routine sequential action remain incompletely understood. In recent work, the authors proposed a computational model of routine sequential behavior that took the form of a recurrent neural network (M. Botvinick & D. C. Plaut, 2004). Subsequently, R. P. Cooper and T. Shallice (2006) put forth a…
Effect of rapid thawing on the meat quality attributes of USDA Select beef strip loin steaks
USDA-ARS?s Scientific Manuscript database
The objective of this study was to determine the meat quality effects of rapidly thawing steaks in a water bath. Frozen beef strip loins (n = 24) were cut into steaks sequentially from the rib end and identified by anatomical location (anterior, middle, posterior) within the loin. Within location,...
Designing User-Computer Dialogues: Basic Principles and Guidelines.
ERIC Educational Resources Information Center
Harrell, Thomas H.
This discussion of the design of computerized psychological assessment or testing instruments stresses the importance of the well-designed computer-user interface. The principles underlying the three main functional elements of computer-user dialogue--data entry, data display, and sequential control--are discussed, and basic guidelines derived…
Rapid earthquake detection through GPU-Based template matching
NASA Astrophysics Data System (ADS)
Mu, Dawei; Lee, En-Jui; Chen, Po
2017-12-01
The template-matching algorithm (TMA) has been widely adopted for improving the reliability of earthquake detection. The TMA is based on calculating the normalized cross-correlation coefficient (NCC) between a collection of selected template waveforms and the continuous waveform recordings of seismic instruments. In realistic applications, the computational cost of the TMA is much higher than that of traditional techniques. In this study, we provide an analysis of the TMA and show how the GPU architecture provides an almost ideal environment for accelerating the TMA and NCC-based pattern recognition algorithms in general. So far, our best-performing GPU code has achieved a speedup factor of more than 800 with respect to a common sequential CPU code. We demonstrate the performance of our GPU code using seismic waveform recordings from the ML 6.6 Meinong earthquake sequence in Taiwan.
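The kernel being accelerated is the normalized cross-correlation of a template against every window of the continuous recording. The sketch below is a sequential CPU baseline for one template and one trace; real seismic TMA codes work across many templates, channels, and lag alignments at once, which is where the GPU speedup reported above comes from. The synthetic trace and injection point are illustrative.

```python
import numpy as np

def ncc_series(template, signal):
    """Normalized cross-correlation of a template against each window of
    a continuous recording, computed window by window (the sequential
    baseline that GPU implementations parallelize)."""
    m = len(template)
    t = (template - template.mean()) / (template.std() * m)
    out = np.empty(len(signal) - m + 1)
    for i in range(len(out)):
        w = signal[i:i + m]
        out[i] = np.sum(t * (w - w.mean())) / (w.std() + 1e-12)
    return out

rng = np.random.default_rng(7)
template = rng.normal(size=200)
signal = rng.normal(size=5000)
signal[3100:3300] += 0.8 * template        # bury a scaled copy of the template
ncc = ncc_series(template, signal)
print("best match at sample", int(np.argmax(ncc)),
      "with NCC =", round(float(np.max(ncc)), 2))
```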
Ah!Help: A generalized on-line help facility
NASA Technical Reports Server (NTRS)
Yu, Wong Nai; Mantooth, Charmiane; Soulahakil, Alex
1986-01-01
The idea behind the help facility discussed is relatively simple. It is made unique by the fact that it is written in Ada and uses aspects of the language which make information retrieval rapid and simple. Specifically, the DIRECT_IO facility allows for random access into the help files. It is necessary to discuss the advantages of random access over sequential access. The mere fact that the program is written in Ada implies a saving in terms of lines of code. This introduces the possibility of eventually adapting the program to run at the microcomputer level, a major consideration. Additionally, since the program uses only standard Ada generics, it is portable to other systems. This is another aspect which must always be taken into consideration in writing any software package in the modern day world of computer programming.
Mketo, Nomvano; Nomngongo, Philiswa N; Ngila, J Catherine
2018-05-15
A rapid three-step sequential extraction method was developed under microwave radiation, followed by inductively coupled plasma-optical emission spectroscopic (ICP-OES) and ion-chromatographic (IC) analysis, for the determination of sulphur forms in coal samples. The experimental conditions of the proposed microwave-assisted sequential extraction (MW-ASE) procedure were optimized by using multivariate mathematical tools. Pareto charts generated from a 2^3 full factorial design showed that extraction time has an insignificant effect on the extraction of sulphur species; therefore, all the sequential extraction steps were performed for 5 min. The optimum values according to the central composite designs and contour plots of the response surface methodology were 200 °C (microwave temperature) and 0.1 g (coal amount) for all the investigated extracting reagents (H2O, HCl and HNO3). When the optimum conditions of the proposed MW-ASE procedure were applied to coal CRMs, SARM 18 showed more organic sulphur (72%) and the other two coal CRMs (SARMs 19 and 20) were dominated by sulphide sulphur species (52-58%). The sums of the sulphur forms from the sequential extraction steps showed consistent agreement (95-96%) with the certified total sulphur values on the coal CRM certificates. This correlation, in addition to the good precision (1.7%) achieved by the proposed procedure, suggests that the sequential extraction method is reliable, accurate and reproducible. To safeguard against the destruction of pyritic and organic sulphur forms in extraction step 1, water was used instead of HCl. Additionally, the notorious acidic mixture (HCl/HNO3/HF) was replaced by a greener reagent (H2O2) in the last extraction step. Therefore, the proposed MW-ASE method can be applied in routine laboratories for the determination of sulphur forms in coal and coal-related matrices. Copyright © 2018 Elsevier B.V. All rights reserved.
Dinavahi, Saketh S; Noory, Mohammad A; Gowda, Raghavendra; Drabick, Joseph J; Berg, Arthur; Neves, Rogerio I; Robertson, Gavin P
2018-03-01
Drug combinations acting synergistically to kill cancer cells have become increasingly important in melanoma as an approach to manage the recurrent resistant disease. Protein kinase B (AKT) is a major target in this disease but its inhibitors are not effective clinically, which is a major concern. Targeting AKT in combination with WEE1 (mitotic inhibitor kinase) seems to have potential to make AKT-based therapeutics effective clinically. Since agents targeting AKT and WEE1 have been tested individually in the clinic, the quickest way to move the drug combination to patients would be to combine these agents sequentially, enabling the use of existing phase I clinical trial toxicity data. Therefore, a rapid preclinical approach is needed to evaluate whether simultaneous or sequential drug treatment has maximal therapeutic efficacy, which is based on a mechanistic rationale. To develop this approach, melanoma cell lines were treated with AKT inhibitor AZD5363 [4-amino-N-[(1S)-1-(4-chlorophenyl)-3-hydroxypropyl]-1-(7H-pyrrolo[2,3-d]pyrimidin-4-yl)piperidine-4-carboxamide] and WEE1 inhibitor AZD1775 [2-allyl-1-(6-(2-hydroxypropan-2-yl)pyridin-2-yl)-6-((4-(4-methylpiperazin-1-yl)phenyl)amino)-1H-pyrazolo[3,4-d]pyrimidin-3(2H)-one] using simultaneous and sequential dosing schedules. Simultaneous treatment synergistically reduced melanoma cell survival and tumor growth. In contrast, sequential treatment was antagonistic and had a minimal tumor inhibitory effect compared with individual agents. Mechanistically, simultaneous targeting of AKT and WEE1 enhanced deregulation of the cell cycle and DNA damage repair pathways by modulating transcription factors p53 and forkhead box M1, which was not observed with sequential treatment. Thus, this study identifies a rapid approach to assess drug combinations with a mechanistic basis for selection, which suggests that combining AKT and WEE1 inhibitors is needed for maximal efficacy. Copyright © 2018 by The American Society for Pharmacology and Experimental Therapeutics.
Computer Applications and Technology 105.
ERIC Educational Resources Information Center
Manitoba Dept. of Education and Training, Winnipeg.
Designed to promote Manitoba students' familiarity with computer technology and their ability to interact with that technology, the Computer Applications and Technology 105 course is a one-credit course presented in 15 topical, non-sequential units that require 110-120 hours of instruction time. It has been developed with the assumption that each…
The possibility of application of spiral brain computed tomography to traumatic brain injury.
Lim, Daesung; Lee, Soo Hoon; Kim, Dong Hoon; Choi, Dae Seub; Hong, Hoon Pyo; Kang, Changwoo; Jeong, Jin Hee; Kim, Seong Chun; Kang, Tae-Sin
2014-09-01
Spiral computed tomography (CT), with the advantages of low radiation dose, shorter scan time, and multidimensional reconstruction, is accepted as an essential diagnostic method for evaluating the degree of injury in severe trauma patients and establishing therapeutic plans. However, conventional sequential CT is preferred over spiral CT for the evaluation of traumatic brain injury (TBI) due to image noise and artifacts. We aimed to compare the diagnostic power of spiral facial CT for TBI to that of conventional sequential brain CT. We retrospectively evaluated the images of 315 traumatized patients who underwent both brain CT and facial CT simultaneously. Hemorrhagic traumatic brain injuries such as epidural hemorrhage, subdural hemorrhage, subarachnoid hemorrhage, and contusional hemorrhage were evaluated in both image sets. Statistics were performed using Cohen's κ to compare the agreement between the 2 imaging modalities and the sensitivity, specificity, positive predictive value, and negative predictive value of spiral facial CT relative to conventional sequential brain CT. Almost perfect agreement was noted regarding hemorrhagic traumatic brain injuries between spiral facial CT and conventional sequential brain CT (Cohen's κ coefficient, 0.912). Relative to conventional sequential brain CT, the sensitivity, specificity, positive predictive value, and negative predictive value of spiral facial CT were 92.2%, 98.1%, 95.9%, and 96.3%, respectively. In TBI, the diagnostic power of spiral facial CT was equal to that of conventional sequential brain CT. Therefore, expanded spiral facial CT covering the whole frontal lobe can be applied to evaluate TBI in the future. Copyright © 2014 Elsevier Inc. All rights reserved.
Boyde, A; Vesely, P; Gray, C; Jones, S J
1994-01-01
Chick and rat bone-derived cells were mounted in sealed coverslip-covered chambers; individual osteoclasts (but also osteoblasts) were selected and studied at 37 degrees C using three different types of high-speed scanning confocal microscopes: (1) A Noran Tandem Scanning Microscope (TSM) was used with a low light level, cooled CCD camera for image transfer to a Noran TN8502 frame store-based image analysing computer to make time lapse movie sequences using 0.1 s exposure periods, thus losing some of the advantage of the high frame rate of the TSM. Rapid focus adjustment using computer controlled piezo drivers permitted two or more focus planes to be imaged sequentially: thus (with additional light-source shuttering) the reflection confocal image could be alternated with the phase contrast image at a different focus. Individual cells were followed for up to 5 days, suggesting no significant irradiation problem. (2) Exceptional temporal and spatial resolution is available in video rate laser confocal scanning microscopes (VRCSLMs). We used the Noran Odyssey unitary beam VRCSLM with an argon ion laser at 488 nm and acousto-optic deflection (AOD) on the line axis: this instrument is truly and adjustably confocal in the reflection mode. (3) We also used the Lasertec 1LM11 line scan instrument, with an He-Ne laser at 633 nm, and AOD for the frame scan. We discuss the technical problems and merits of the different approaches. The VRCSLMs documented rapid, real-time oscillatory motion: all the methods used show rapid net movement of organelles within bone cells. The interference reflection mode gives particularly strong contrasts in confocal instruments. Phase contrast and other interference methods used in the microscopy of living cells can be used simultaneously in the TSM.
Nonlinear interferometry approach to photonic sequential logic
NASA Astrophysics Data System (ADS)
Mabuchi, Hideo
2011-10-01
Motivated by rapidly advancing capabilities for extensive nanoscale patterning of optical materials, I propose an approach to implementing photonic sequential logic that exploits circuit-scale phase coherence for efficient realizations of fundamental components such as a NAND-gate-with-fanout and a bistable latch. Kerr-nonlinear optical resonators are utilized in combination with interference effects to drive the binary logic. Quantum-optical input-output models are characterized numerically using design parameters that yield attojoule-scale energy separation between the latch states.
Peng, Shiyong; Liu, Suna; Zhang, Sai; Cao, Shengyu; Sun, Jiangtao
2015-10-16
Polyheteroaromatic compounds are potential optoelectronic conjugated materials due to their electro- and photochemical properties. Transition-metal-catalyzed multiple C-H activation and sequential oxidative annulation allow rapid assembly of these compounds from readily available starting materials. A rhodium-catalyzed cascade oxidative annulation of β-enamino esters or 4-aminocoumarins with internal alkynes is described to access these compounds, featuring multiple C-H/N-H bond cleavages and sequential C-C/C-N bond formations in one pot.
Nonvolatile reconfigurable sequential logic in a HfO2 resistive random access memory array.
Zhou, Ya-Xiong; Li, Yi; Su, Yu-Ting; Wang, Zhuo-Rui; Shih, Ling-Yi; Chang, Ting-Chang; Chang, Kuan-Chang; Long, Shi-Bing; Sze, Simon M; Miao, Xiang-Shui
2017-05-25
Resistive random access memory (RRAM) based reconfigurable logic provides a temporal programmable dimension to realize Boolean logic functions and is regarded as a promising route to build non-von Neumann computing architecture. In this work, a reconfigurable operation method is proposed to perform nonvolatile sequential logic in a HfO2-based RRAM array. Eight kinds of Boolean logic functions can be implemented within the same hardware fabrics. During the logic computing processes, the RRAM devices in an array are flexibly configured in a bipolar or complementary structure. The validity was demonstrated by experimentally implemented NAND and XOR logic functions and a theoretically designed 1-bit full adder. With the trade-off between temporal and spatial computing complexity, our method makes better use of limited computing resources, thus provides an attractive scheme for the construction of logic-in-memory systems.
A parallel computational model for GATE simulations.
Rannou, F R; Vega-Acevedo, N; El Bitar, Z
2013-12-01
GATE/Geant4 Monte Carlo simulations are computationally demanding applications, requiring thousands of processor hours to produce realistic results. The classical strategy of distributing the simulation of individual events does not apply efficiently for Positron Emission Tomography (PET) experiments, because it requires a centralized coincidence processing and large communication overheads. We propose a parallel computational model for GATE that handles event generation and coincidence processing in a simple and efficient way by decentralizing event generation and processing but maintaining a centralized event and time coordinator. The model is implemented with the inclusion of a new set of factory classes that can run the same executable in sequential or parallel mode. A Mann-Whitney test shows that the output produced by this parallel model in terms of number of tallies is equivalent (but not equal) to its sequential counterpart. Computational performance evaluation shows that the software is scalable and well balanced. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Unsteady, one-dimensional gas dynamics computations using a TVD type sequential solver
NASA Technical Reports Server (NTRS)
Thakur, Siddharth; Shyy, Wei
1992-01-01
The efficacy of high resolution convection schemes to resolve sharp gradients in unsteady, 1D flows is examined using the TVD concept based on a sequential solution algorithm. Two unsteady flow problems are considered: the interaction of the various waves in a shock tube with closed reflecting ends, and the unsteady gas dynamics in a tube with closed ends subject to an initial pressure perturbation. It is concluded that high accuracy convection schemes in a sequential solution framework are capable of resolving discontinuities in unsteady flows involving complex gas dynamics. However, a sufficient amount of dissipation is required to suppress oscillations near discontinuities in the sequential approach, which leads to smearing of the solution profiles.
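A minimal example of a TVD convection scheme of this family is second-order upwind (MUSCL) advection with a minmod limiter: the limiter supplies exactly the local dissipation needed to keep discontinuities free of oscillations, at the cost of some smearing. The pulse shape, CFL number, and grid below are illustrative, and linear advection stands in for the full gas dynamics.

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter: the dissipative ingredient that keeps the
    scheme TVD (no new extrema) near discontinuities."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_tvd(u, c, steps):
    """MUSCL-type TVD update for u_t + a u_x = 0 (a > 0, 0 < c <= 1),
    periodic boundaries; upwind flux with limited linear reconstruction."""
    for _ in range(steps):
        du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slopes
        u_face = u + 0.5 * (1 - c) * du                     # value at right face
        flux = c * u_face                                   # upwind flux (a > 0)
        u = u - (flux - np.roll(flux, 1))
    return u

x = np.linspace(0, 1, 400, endpoint=False)
u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)              # square pulse
u = advect_tvd(u0.copy(), c=0.5, steps=400)
print("solution stays bounded in [0, 1]:", float(u.min()), float(u.max()))
```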
Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko
2016-01-01
Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about it. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Target detection is therefore effective when observers' attention is captured by the onset signal of a target appearing suddenly among continuously moving distractors (i.e., a passive viewing strategy). This applies to stack viewing tasks, because lesions often show up as transient signals in medical images that are presented sequentially, simulating a dynamic, smoothly transforming progression of organ images. However, it is unclear whether observers can detect a target when it appears at the beginning of a sequential presentation, where the global apparent-motion onset signal (i.e., the signal marking the initiation of apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performance of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal location of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, does not affect the performance of radiologists, whereas it does affect the performance of novices. Novices have greater difficulty detecting a lesion that appears early rather than late in the image sequence. We suggest that radiologists have mechanisms, which novices lack, for detecting lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as stack viewing tasks. PMID:27774080
An algorithm for propagating the square-root covariance matrix in triangular form
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Choe, C. Y.
1976-01-01
A method for propagating the square root of the state error covariance matrix in lower triangular form is described. The algorithm can be combined with any triangular square-root measurement update algorithm to obtain a triangular square-root sequential estimation algorithm. The triangular square-root algorithm compares favorably with the conventional sequential estimation algorithm with regard to computation time.
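A common way to realize such a propagation step is the QR-based approach sketched below (a standard construction, not necessarily the exact algorithm of the paper):

import numpy as np

def propagate_sqrt_cov(S, Phi, Gq):
    # S: lower-triangular square root of the covariance (P = S @ S.T)
    # Phi: state transition matrix; Gq: square root of process noise Q.
    A = np.hstack([Phi @ S, Gq])       # propagated P' equals A @ A.T
    # QR of A.T gives A.T = Q R, hence A @ A.T = R.T @ R, so R.T is a
    # lower-triangular square root of P'.
    _, R = np.linalg.qr(A.T)
    S_new = R.T
    signs = np.sign(np.diag(S_new))    # resolve the QR sign ambiguity
    signs[signs == 0] = 1.0
    return S_new * signs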
Li, Guoqi; Deng, Lei; Wang, Dong; Wang, Wei; Zeng, Fei; Zhang, Ziyang; Li, Huanglong; Song, Sen; Pei, Jing; Shi, Luping
2016-01-01
Chunking refers to a phenomenon whereby individuals group items together when performing a memory task to improve the performance of sequential memory. In this work, we build a bio-plausible hierarchical chunking of sequential memory (HCSM) model to explain why such improvement happens. We address this issue by linking hierarchical chunking with synaptic plasticity and neuromorphic engineering. We find that a chunking mechanism relaxes the requirements on synaptic plasticity, since it allows synapses with a narrow dynamic range and low precision to perform a memory task. We validate a hardware version of the model through simulation, based on measured memristor behavior with narrow dynamic range in neuromorphic circuits, which reveals how chunking works and what role it plays in encoding sequential memory. Our work deepens the understanding of sequential memory and enables its incorporation into investigations of brain-inspired computing on neuromorphic architectures. PMID:28066223
Sequential decision making in computational sustainability via adaptive submodularity
Krause, Andreas; Golovin, Daniel; Converse, Sarah J.
2015-01-01
Many problems in computational sustainability require making a sequence of decisions in complex, uncertain environments. Such problems are notoriously difficult. In this article, we review the recently discovered notion of adaptive submodularity, an intuitive diminishing-returns condition that generalizes the classical notion of submodular set functions to sequential decision problems. Problems exhibiting the adaptive submodularity property can be efficiently and provably near-optimally solved using simple myopic policies. We illustrate this concept in several case studies of interest in computational sustainability: first, we demonstrate how it can be used to efficiently plan for resolving uncertainty in adaptive management scenarios; second, we show how it applies to dynamic conservation planning for protecting endangered species, a case study carried out in collaboration with the US Geological Survey and the US Fish and Wildlife Service.
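The myopic policy mentioned above is simple to state in code. The sketch below uses hypothetical callbacks (any real application would substitute its own gain model and observation process) and greedily selects, at each step, the item with the largest conditional expected marginal benefit:

def adaptive_greedy(items, expected_gain, observe, budget):
    # expected_gain(item, obs) -> conditional expected marginal benefit
    # observe(item) -> realized outcome after selecting the item
    obs = {}
    remaining = set(items)
    for _ in range(min(budget, len(items))):
        best = max(remaining, key=lambda x: expected_gain(x, obs))
        obs[best] = observe(best)   # act, then condition on the outcome
        remaining.remove(best)
    return obs

For adaptive submodular objectives, this greedy policy is provably near-optimal, achieving a (1 - 1/e) approximation under a cardinality constraint.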
Santra, Soumava; Andreana, Peter R
2011-04-01
A rapid, cascade reaction process has been developed to access biologically validated spiro-2,5-diketopiperazines. The facile and environmentally benign method capitalizes on commercially available starting reagents for a sequential Ugi/6-exo-trig aza-Michael reaction, water as a solvent, and microwave irradiation without any extraneous additives.
[Co-composting high moisture vegetable waste and flower waste in a sequential fed operation].
Zhang, Xiangfeng; Wang, Hongtao; Nie, Yongfeng
2003-11-01
Co-composting of high-moisture vegetable wastes (celery and cabbage) and flower wastes (carnation) was studied in a sequentially fed bed. The preliminary composting materials were celery and carnation wastes. The sequentially fed materials were cabbage wastes, added every 4 days. The moisture content of the mixed materials was between 60% and 70%. Composting was carried out in an aerobic static bed with temperature feedback and control via aeration-rate regulation. Aeration was ended when the pile temperature was about 40 degrees C. Changes in temperature, aeration rate, water content, organic matter, ash, pH, volume, NH4(+)-N, and NO3(-)-N during composting were studied. Results show that co-composting of high-moisture vegetable wastes and flower wastes in such a sequentially fed aerobic static bed can rapidly stabilize organic matter and remove water. The sequential feeding operation is effective in overcoming the difficulty that traditional composting faces where high-moisture vegetable wastes greatly exceed flower wastes, such as in the Dianchi coastal area.
Randleman, J Bradley; Su, Johnny P; Scarcelli, Giuliano
2017-06-01
To evaluate the biomechanical changes occurring after LASIK flap creation and rapid corneal cross-linking (CXL) measured with Brillouin light microscopy. Porcine eyes (n = 11) were evaluated by Brillouin light microscopy sequentially in the following order: virgin state, after LASIK flap creation, and after rapid CXL. Each eye served as its own control. The depth profile of the Brillouin frequency shift was computed to reveal depth-dependent changes in corneal stiffness. There was a statistically significant reduction of Brillouin shift (reduced corneal stiffness) after LASIK flap creation compared to virgin corneas across total corneal thickness (-0.035 GHz, P = .0195) and within the anterior stromal region (-0.104 GHz, P = .0039). Changes in the central (-0.029 GHz, P = .0391) and posterior (-0.005 GHz, P = .99) stromal regions were not significant. There was a small increase in Brillouin shift after rapid cross-linking that was not statistically or clinically significant, either across total corneal thickness (0.006 GHz, P = .4688) or within any specific stromal region (0.002 to 0.009 GHz, P > .46 for all). LASIK flap creation significantly reduced the Brillouin shift in the anterior third of the stroma in porcine eyes. Rapid corneal cross-linking had no significant effect on Brillouin shift after LASIK flap creation in porcine eyes. With further validation, non-contact, non-perturbative Brillouin microscopy could become a useful monitoring tool to evaluate the biomechanical impact of corneal refractive procedures and corneal cross-linking protocols. [J Refract Surg. 2017;33(6):408-414.]. Copyright 2017, SLACK Incorporated.
Computer retrieval of bibliographies using an editing program
Brethauer, G.E.; Brokaw, V.L.
1979-01-01
A simple program permits use of the text editor 'qedx,' part of many computer systems, to input bibliographic entries and to retrieve specific entries containing keywords of interest. Multiple keywords may be used sequentially to find specific entries.
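In a modern scripting language the same sequential keyword retrieval amounts to a few lines; the Python sketch below is illustrative, not the original qedx program, and narrows a list of entries by one keyword at a time:

import re

def retrieve(entries, keywords):
    # Apply keywords sequentially, keeping only entries that match all of them.
    hits = entries
    for kw in keywords:
        pat = re.compile(re.escape(kw), re.IGNORECASE)
        hits = [e for e in hits if pat.search(e)]
    return hits

# e.g. retrieve(bibliography, ["sequential", "orbit"])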
Schneider, Francine; de Vries, Hein; van Osch, Liesbeth ADM; van Nierop, Peter WM; Kremers, Stef PJ
2012-01-01
Background Unhealthy lifestyle behaviors often co-occur and are related to chronic diseases. One effective method to change multiple lifestyle behaviors is web-based computer tailoring. Dropout from Internet interventions, however, is rather high, and it is challenging to retain participants in web-based tailored programs, especially programs targeting multiple behaviors. To date, it is unknown how much information people can handle in one session while taking part in a multiple behavior change intervention, which could be presented either sequentially (one behavior at a time) or simultaneously (all behaviors at once). Objectives The first objective was to compare dropout rates of 2 computer-tailored interventions: a sequential and a simultaneous strategy. The second objective was to assess which personal characteristics are associated with completion rates of the 2 interventions. Methods Using an RCT design, demographics, health status, physical activity, vegetable consumption, fruit consumption, alcohol intake, and smoking were self-assessed through web-based questionnaires among 3473 adults, recruited through Regional Health Authorities in the Netherlands in the autumn of 2009. First, a health risk appraisal was offered, indicating whether respondents were meeting the 5 national health guidelines. Second, psychosocial determinants of the lifestyle behaviors were assessed and personal advice was provided, about one or more lifestyle behaviors. Results Our findings indicate a high non-completion rate for both types of intervention (71.0%; n = 2167), with more incompletes in the simultaneous intervention (77.1%; n = 1169) than in the sequential intervention (65.0%; n = 998). In both conditions, discontinuation was predicted by a lower age (sequential condition: OR = 1.04; P < .001; CI = 1.02-1.05; simultaneous condition: OR = 1.04; P < .001; CI = 1.02-1.05) and an unhealthy lifestyle (sequential condition: OR = 0.86; P = .01; CI = 0.76-0.97; simultaneous condition: OR = 0.49; P < .001; CI = 0.42-0.58). In the sequential intervention, being male (OR = 1.27; P = .04; CI = 1.01-1.59) also predicted dropout. When respondents failed to adhere to at least 2 of the guidelines, those receiving the simultaneous intervention were more inclined to drop out than were those receiving the sequential intervention. Conclusion Possible reasons for the higher dropout rate in our simultaneous intervention may be the amount of time required and information overload. Strategies to optimize program completion as well as continued use of computer-tailored interventions should be studied. Trial Registration Dutch Trial Register NTR2168 PMID:22403770
Jacobs, Ian E.; Aasen, Erik W.; Oliveira, Julia L.; ...
2016-03-23
Doping polymeric semiconductors often drastically reduces the solubility of the polymer, leading to difficulties in processing doped films. Here, we compare optical, electrical, and morphological properties of P3HT films doped with F4TCNQ, both from mixed solutions and using sequential solution processing with orthogonal solvents. We demonstrate that sequential doping occurs rapidly (<1 s), and that the film doping level can be precisely controlled by varying the concentration of the doping solution. Furthermore, the choice of sequential doping solvent controls whether dopant anions are included or excluded from polymer crystallites. Atomic force microscopy (AFM) reveals that sequential doping produces significantly more uniform films on the nanoscale than the mixed-solution method. In addition, we show that mixed-solution doping induces the formation of aggregates even at low doping levels, resulting in drastic changes to film morphology. Sequentially coated films show 3–15 times higher conductivities at a given doping level than solution-doped films, with sequentially doped films processed to exclude dopant anions from polymer crystallites showing the highest conductivities. In conclusion, we propose a mechanism for doping-induced aggregation in which the shift of the polymer HOMO level upon aggregation couples ionization and solvation energies. To show that the methodology is widely applicable, we demonstrate that several different polymer:dopant systems can be prepared by sequential doping.
Concurrent processing simulation of the space station
NASA Technical Reports Server (NTRS)
Gluck, R.; Hale, A. L.; Sunkel, John W.
1989-01-01
The development of a new capability for the time-domain simulation of multibody dynamic systems and its application to the study of large-angle rotational maneuvers of the Space Station is described. The effort was divided into three sequential tasks, each requiring significant advancement of the state of the art: (1) the development of an explicit mathematical model, via symbol manipulation, of a flexible multibody dynamic system; (2) the development of a methodology for balancing the computational load of an explicit mathematical model for concurrent processing; and (3) the implementation and successful simulation of the above on a prototype Custom Architectured Parallel Processing System (CAPPS) containing eight processors. The throughput rate achieved by the CAPPS, operating at only 70 percent efficiency, was 3.9 times greater than that obtained sequentially by the IBM 3090 supercomputer simulating the same problem. More significantly, analysis of the results leads to the conclusion that the relative cost effectiveness of concurrent versus sequential digital computation will grow substantially as the computational load increases. This is a welcome development in an era when very complex and cumbersome mathematical models of large space vehicles must be used as substitutes for full-scale testing, which has become impractical.
Parallelization of fine-scale computation in Agile Multiscale Modelling Methodology
NASA Astrophysics Data System (ADS)
Macioł, Piotr; Michalik, Kazimierz
2016-10-01
Nowadays, multiscale modelling of material behavior is an extensively developed area. An important obstacle to its wide application is its high computational demand. Among other solutions, the parallelization of multiscale computations is promising. Heterogeneous multiscale models are good candidates for parallelization, since communication between sub-models is limited. In this paper, the possibility of parallelizing multiscale models based on the Agile Multiscale Methodology framework is discussed. A sequential, FEM-based macroscopic model has been combined with concurrently computed fine-scale models employing the MatCalc thermodynamic simulator. The main issues investigated in this work are: (i) the speed-up of multiscale models, with special focus on fine-scale computations, and (ii) the decrease in computation quality caused by parallel execution. Speed-up has been evaluated on the basis of Amdahl's law. The problem of 'delay error', arising from the parallel execution of fine-scale sub-models controlled by the sequential macroscopic sub-model, is discussed. Some technical aspects of combining third-party commercial modelling software with an in-house multiscale framework and an MPI library are also discussed.
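For reference, an Amdahl's-law speed-up evaluation of the kind mentioned above takes the following form (a generic worked example with an assumed parallel fraction, not the paper's measured value):

def amdahl_speedup(p, n):
    # Amdahl's law: S(n) = 1 / ((1 - p) + p / n), where p is the fraction
    # of the workload that parallelizes and n is the number of processors.
    return 1.0 / ((1.0 - p) + p / n)

# If 95% of the fine-scale computation parallelizes, 8 workers give
# roughly a 5.9x speed-up rather than the ideal 8x.
print(amdahl_speedup(0.95, 8))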
Collaborative Brain-Computer Interface for Aiding Decision-Making
Poli, Riccardo; Valeriani, Davide; Cinel, Caterina
2014-01-01
We look at the possibility of integrating the percepts from multiple non-communicating observers as a means of achieving better joint perception and better group decisions. Our approach combines a brain-computer interface with human behavioural responses. To test these ideas in controlled conditions, we asked observers to perform a simple matching task involving the rapid sequential presentation of pairs of visual patterns and the subsequent decision as to whether the two patterns in a pair were the same or different. We recorded the response times of observers as well as a neural feature which predicts incorrect decisions and, thus, indirectly indicates the confidence of the decisions made by the observers. We then built a composite neuro-behavioural feature which optimally combines the two measures. For group decisions, we used a majority rule and three rules that weigh the decisions of each observer based on response times and our neural and neuro-behavioural features. Results indicate that the integration of behavioural responses and neural features can significantly improve accuracy when compared with the majority rule. An analysis of event-related potentials indicates that substantial differences are present in the proximity of the response for correct and incorrect trials, further corroborating the idea of using hybrids of brain-computer interfaces and traditional strategies for improving decision making. PMID:25072739
Using timed event sequential data in nursing research.
Pecanac, Kristen E; Doherty-King, Barbara; Yoon, Ju Young; Brown, Roger; Schiefelbein, Tony
2015-01-01
Measuring behavior is important in nursing research, and innovative technologies are needed to capture the "real-life" complexity of behaviors and events. The purpose of this article is to describe the use of timed event sequential data in nursing research and to demonstrate the use of this data in a research study. Timed event sequencing allows the researcher to capture the frequency, duration, and sequence of behaviors as they occur in an observation period and to link the behaviors to contextual details. Timed event sequential data can easily be collected with handheld computers, loaded with a software program designed for capturing observations in real time. Timed event sequential data add considerable strength to analysis of any nursing behavior of interest, which can enhance understanding and lead to improvement in nursing practice.
Rapid Multistep Synthesis of 1,2,4-Oxadiazoles in a Single Continuous Microreactor Sequence
Grant, Daniel; Dahl, Russell; Cosford, Nicholas D. P.
2009-01-01
A general method for the synthesis of bis-substituted 1,2,4-oxadiazoles from readily available arylnitriles and activated carbonyls in a single continuous microreactor sequence is described. The synthesis incorporates three sequential microreactors to produce 1,2,4-oxadiazoles in ~30 min in quantities (40–80 mg) sufficient for full characterization and rapid library supply. PMID:18687005
Niu, Shanzhou; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua
2016-01-01
Cerebral perfusion x-ray computed tomography (PCT) is an important functional imaging modality for evaluating cerebrovascular diseases and has been widely used in clinics over the past decades. However, because the PCT imaging protocol involves repeated dynamic sequential scans, the associated radiation dose unavoidably increases compared with conventional CT examinations. Minimizing the radiation exposure in PCT examination is a major task in the CT field. In this paper, exploiting the rich similarity redundancy among enhanced sequential PCT images, we propose a low-dose PCT image restoration model that incorporates the low-rank and sparse matrix characteristics of sequential PCT images. Specifically, the sequential PCT images are first stacked into a matrix (i.e., a low-rank matrix), and a non-convex spectral norm regularization and a spatio-temporal total variation regularization are then built on this matrix to describe the low rank and sparsity of the sequential PCT images, respectively. Subsequently, an improved split Bregman method is adopted to minimize the associated objective function with a reasonable convergence rate. Both qualitative and quantitative studies were conducted using a digital phantom and clinical cerebral PCT datasets to evaluate the present method. Experimental results show that the presented method achieves images with several noticeable advantages over existing methods in terms of noise reduction and universal quality index. More importantly, the present method can produce more accurate kinetic enhanced details and diagnostic hemodynamic parameter maps. PMID:27440948
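Two proximal steps typically sit at the core of such low-rank-plus-sparse solvers. The sketch below shows the generic versions (singular value thresholding for the low-rank term and soft thresholding for the sparse term); the paper's non-convex spectral norm would modify the shrinkage rule, so this is an illustration rather than the authors' exact update:

import numpy as np

def svt(M, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Soft thresholding: proximal operator of the l1 (sparsity) term.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)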
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods: one with a displacement formulation and one with a mixed formulation of displacements and momenta. These three methods broadly represent the two main approaches to trim analysis: adaptation of initial-value and of finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used, and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by the virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
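The damped Newton iteration at the heart of such schemes replaces the full Newton step with a fraction of it. Here is a minimal sketch in generic root-finding form, with an assumed fixed damping parameter rather than the optimally selected one used in the paper:

import numpy as np

def damped_newton(F, J, x0, lam=0.8, tol=1e-10, max_iter=100):
    # x <- x - lam * J(x)^{-1} F(x); lam in (0, 1] trades speed for robustness.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), F(x))
        x = x - lam * dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example: solve x^2 - 2 = 0 from a poor initial guess.
root = damped_newton(lambda x: np.array([x[0]**2 - 2.0]),
                     lambda x: np.array([[2.0 * x[0]]]), [10.0])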
Parallelization of NAS Benchmarks for Shared Memory Multiprocessors
NASA Technical Reports Server (NTRS)
Waheed, Abdul; Yan, Jerry C.; Saini, Subhash (Technical Monitor)
1998-01-01
This paper presents our experience of parallelizing the sequential implementation of the NAS benchmarks using compiler directives on an SGI Origin2000 distributed shared memory (DSM) system. Porting existing applications to new high-performance parallel and distributed computing platforms is a challenging task. Ideally, a user develops a sequential version of the application, leaving the task of porting to new generations of high-performance computing systems to parallelization tools and compilers. Owing to the simplicity of programming shared-memory multiprocessors, compiler developers have provided various facilities to allow users to exploit parallelism. Native compilers on the SGI Origin2000 support multiprocessing directives that allow users to exploit loop-level parallelism in their programs. Additionally, supporting tools can accomplish this process automatically and present the results of parallelization to the users. We experimented with these compiler directives and supporting tools by parallelizing the sequential implementation of the NAS benchmarks. Results reported in this paper indicate that, with minimal effort, the performance gain is comparable to that of hand-parallelized, carefully optimized, message-passing implementations of the same benchmarks.
Optical flip-flops and sequential logic circuits using a liquid crystal light valve
NASA Technical Reports Server (NTRS)
Fatehi, M. T.; Collins, S. A., Jr.; Wasmundt, K. C.
1984-01-01
This paper is concerned with the application of optics to digital computing. A Hughes liquid crystal light valve is used as an active optical element in which a weak light beam can control a strong light beam with either a positive or negative gain characteristic. With this device as the central element, the ability to produce bistable states from which different types of flip-flop can be implemented is demonstrated. Some general comments are first presented on digital computing as applied to optics, followed by a discussion of the optical implementation of various types of flip-flop. These flip-flops are then used in the design of optical equivalents of a few simple sequential circuits, such as shift registers and accumulators. As a typical sequential machine, a schematic layout for an optical binary temporal integrator is presented. Finally, a suggested experimental configuration for an optical master-slave flip-flop array is given.
Reengineering the Project Design Process
NASA Technical Reports Server (NTRS)
Casani, E.; Metzger, R.
1994-01-01
In response to NASA's goal of working faster, better and cheaper, JPL has developed extensive plans to minimize cost, maximize customer and employee satisfaction, and implement small- and moderate-size missions. These plans include improved management structures and processes, enhanced technical design processes, the incorporation of new technology, and the development of more economical space- and ground-system designs. The Laboratory's new Flight Projects Implementation Office has been chartered to oversee these innovations and the reengineering of JPL's project design process, including establishment of the Project Design Center and the Flight System Testbed. Reengineering at JPL implies a cultural change whereby the character of its design process will change from sequential to concurrent and from hierarchical to parallel. The Project Design Center will support missions offering high science return, design to cost, demonstrations of new technology, and rapid development. Its computer-supported environment will foster high-fidelity project life-cycle development and cost estimating.
Cosson, Steffen; Danial, Maarten; Saint-Amans, Julien Rosselgong; Cooper-White, Justin J
2017-04-01
Advanced polymerization methodologies, such as reversible addition-fragmentation transfer (RAFT), allow unprecedented control over star polymer composition, topology, and functionality. However, using RAFT to produce high throughput (HTP) combinatorial star polymer libraries remains, to date, impracticable due to several technical limitations. Herein, the methodology "rapid one-pot sequential aqueous RAFT" or "rosa-RAFT," in which well-defined homo-, copolymer, and mikto-arm star polymers can be prepared in very low to medium reaction volumes (50 µL to 2 mL) via an "arm-first" approach in air within minutes, is reported. Due to the high conversion of a variety of acrylamide/acrylate monomers achieved during each successive short reaction step (each taking 3 min), the requirement for intermediary purification is avoided, drastically facilitating and accelerating the star synthesis process. The presented methodology enables RAFT to be applied to HTP polymeric bio/nanomaterials discovery pipelines, in which hundreds of complex polymeric formulations can be rapidly produced, screened, and scaled up for assessment in a wide range of applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Brown, C. David; Ih, Charles S.; Arce, Gonzalo R.; Fertell, David A.
1987-01-01
Vision systems for mobile robots or autonomous vehicles navigating unknown terrain must provide a rapid and accurate method of segmenting the scene ahead into regions of pathway and background. A major distinguishing feature between the pathway and the background is the three-dimensional texture of these two regions. Typical methods of textural image segmentation are very computationally intensive, often lack the required robustness, and are incapable of sensing the three-dimensional texture of various regions of the scene. A method is presented in which scanned, laser-projected lines of structured light, viewed by a stereoscopically located single video camera, yield an image in which the three-dimensional characteristics of the scene are represented by the discontinuities of the projected lines. This image is conducive to processing with simple regional operators to classify regions as pathway or background. The design of some operators and application methods, and a demonstration on sample images, are presented. This method provides rapid and robust scene-segmentation capability that has been implemented on a microcomputer in near real time, and should result in higher-speed and more reliable robotic or autonomous navigation in unstructured environments.
Computer-Based Career Interventions.
ERIC Educational Resources Information Center
Mau, Wei-Cheng
The possible utilities and limitations of computer-assisted career guidance systems (CACG) have been widely discussed although the effectiveness of CACG has not been systematically considered. This paper investigates the effectiveness of a theory-based CACG program, integrating Sequential Elimination and Expected Utility strategies. Three types of…
2009-01-01
Background The rapid advancement of computer and information technology in recent years has resulted in the rise of e-learning technologies to enhance and complement traditional classroom teaching in many fields, including bioinformatics. This paper records the experience of implementing e-learning technology to support problem-based learning (PBL) in the teaching of two undergraduate bioinformatics classes at the National University of Singapore. Results Survey results further established the efficiency and suitability of e-learning tools to supplement PBL in bioinformatics education. 63.16% of year-three bioinformatics students showed a positive response regarding the usefulness of the Learning Activity Management System (LAMS) e-learning tool in guiding the learning and discussion process involved in PBL and in enhancing the learning experience by breaking down PBL activities into a sequential workflow. On the other hand, 89.81% of year-two bioinformatics students indicated that their revision process was positively impacted by the use of LAMS for guiding the learning process, while 60.19% agreed that the breakdown of activities into a sequential step-by-step workflow by LAMS enhances the learning experience. Conclusion We show that e-learning tools are useful for supplementing PBL in bioinformatics education. The results suggest that it is feasible to develop and adopt e-learning tools to supplement a variety of instructional strategies in the future. PMID:19958511
Schlegel, Marcel; Schneider, Christoph
2018-05-09
The first Sc(OTf)3-catalyzed dehydration of 2-hydroxy oxime ethers to generate benzylic stabilized 1-azaallyl cations, which are captured by 1,3-carbonyls, is described. A subsequent addition of primary amines in a sequential three-component reaction affords highly substituted and densely functionalized tetrahydroindeno[2,1-b]pyrroles as single diastereomers with up to quantitative yield. Thus, three new σ-bonds and two vicinal quaternary stereogenic centers are generated in a one-pot operation.
Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method
2015-01-05
rehearsal mechanism in the model. To evaluate the model we developed a simple new goodness-of-fit test based on analysis of variance that offers an...(repeated step). Sequential constraints are common in medicine, equipment maintenance, computer programming and technical support, data analysis...legal analysis, accounting, and many other home and workplace environments. Sequential constraints also play a role in such basic cognitive processes
ERIC Educational Resources Information Center
Mathinos, Debra A.; Leonard, Ann Scheier
The study examines the use of LOGO, a computer language, with 19 learning disabled (LD) and 19 non-LD students in grades 4-6. Ss were randomly assigned to one of two instructional groups: sequential or whole-task, each with 10 LD and 10 non-LD students. The sequential method features a carefully ordered plan for teaching LOGO commands; the…
TDRSS-user orbit determination using batch least-squares and sequential methods
NASA Astrophysics Data System (ADS)
Oza, D. H.; Jones, T. L.; Hakimi, M.; Samii, Mina V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.
1993-02-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the January 17-23, 1991, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. Independent assessments were made of the consistencies (overlap comparisons for the batch case; covariances and first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were less than 40 meters after the filter had reached steady state.
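For readers unfamiliar with the sequential approach, the filter cycle alternates a time update with a measurement update. A textbook Kalman step looks like the following generic sketch (not RTOD/E's actual formulation, which may use factorized covariances):

import numpy as np

def kalman_step(x, P, z, Phi, Q, H, R):
    # Time update: propagate state estimate and covariance.
    x = Phi @ x
    P = Phi @ P @ Phi.T + Q
    # Measurement update: blend in the new observation z.
    y = z - H @ x                        # measurement residual
    S = H @ P @ H.T + R                  # residual covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

A batch least-squares system such as GTDS instead fits one trajectory to all measurements in the arc at once, which is why overlap comparisons (batch) and covariance/residual consistency (sequential) are the natural respective quality checks.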
Comparison of ERBS orbit determination accuracy using batch least-squares and sequential methods
NASA Technical Reports Server (NTRS)
Oza, D. H.; Jones, T. L.; Fabien, S. M.; Mistretta, G. D.; Hart, R. C.; Doll, C. E.
1991-01-01
The Flight Dynamics Division (FDD) at NASA-Goddard commissioned a study to develop the Real Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination of spacecraft on a DOS-based personal computer (PC). An overview is presented of RTOD/E capabilities, together with the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E on a PC with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. RTOD/E was used to perform sequential orbit determination for the Earth Radiation Budget Satellite (ERBS), and GTDS was used to perform the batch least-squares orbit determination. The estimated ERBS ephemerides were obtained for the August 16-22, 1989, timeframe, during which intensive TDRSS tracking data for ERBS were available. Independent assessments were made to examine the consistency of results obtained by the batch and sequential methods. Comparisons were made between the forward-filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for ERBS; the solution differences were less than 40 meters after the filter had reached steady state.
Lee, M H; Ahn, H J; Park, J H; Park, Y J; Song, K
2011-02-01
This paper presents a quantitative and rapid method for the sequential separation of Pu, (90)Sr and (241)Am nuclides in environmental soil samples with an anion exchange resin and Sr Spec resin. After the sample solution was passed through an anion exchange column connected to a Sr Spec column, Pu isotopes were purified from the anion exchange column. Strontium-90 was separated from other interfering elements by the Sr Spec column. Americium-241 was purified from lanthanides by the anion exchange resin after oxalate co-precipitation. Measurement of Pu and Am isotopes was carried out using an α-spectrometer. Strontium-90 was measured by a low-level liquid scintillation counter. The radiochemical procedure for Pu, (90)Sr and (241)Am nuclides investigated in this study was validated by application to IAEA reference materials and environmental soil samples. Copyright © 2010 Elsevier Ltd. All rights reserved.
Verification of hypergraph states
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Takeuchi, Yuki; Hayashi, Masahito
2017-12-01
Hypergraph states are generalizations of graph states where controlled-Z gates on edges are replaced with generalized controlled-Z gates on hyperedges. Hypergraph states have several advantages over graph states. For example, certain hypergraph states, such as the Union Jack states, are universal resource states for measurement-based quantum computing with only Pauli measurements, while graph state measurement-based quantum computing needs non-Clifford basis measurements. Furthermore, it is impossible to classically efficiently sample measurement results on hypergraph states unless the polynomial hierarchy collapses to the third level. Although several protocols have been proposed to verify graph states with only sequential single-qubit Pauli measurements, there was no verification method for hypergraph states. In this paper, we propose a method for verifying a certain class of hypergraph states with only sequential single-qubit Pauli measurements. Importantly, no i.i.d. property of samples is assumed in our protocol: any artificial entanglement among samples cannot fool the verifier. As applications of our protocol, we consider verified blind quantum computing with hypergraph states, and quantum computational supremacy demonstrations with hypergraph states.
Learning Sequential Composition Control.
Najafi, Esmaeil; Babuska, Robert; Lopes, Gabriel A D
2016-11-01
Sequential composition is an effective supervisory control method for addressing control problems in nonlinear dynamical systems. It executes a set of controllers sequentially to achieve a control specification that cannot be realized by a single controller. As these controllers are designed offline, sequential composition cannot address unmodeled situations that might occur during runtime. This paper proposes a learning approach to augment the standard sequential composition framework by using online learning to handle unforeseen situations. New controllers are acquired via learning and added to the existing supervisory control structure. In the proposed setting, learning experiments are restricted to take place within the domain of attraction (DOA) of the existing controllers. This guarantees that the learning process is safe (i.e., the closed loop system is always stable). In addition, the DOA of the new learned controller is approximated after each learning trial. This keeps the learning process short as learning is terminated as soon as the DOA of the learned controller is sufficiently large. The proposed approach has been implemented on two nonlinear systems: 1) a nonlinear mass-damper system and 2) an inverted pendulum. The results show that in both cases a new controller can be rapidly learned and added to the supervisory control structure.
Veiga, Helena Perrut; Bianchini, Esther Mandelbaum Gonçalves
2012-01-01
To perform an integrative review of studies on liquid sequential swallowing, characterizing the methodology of the studies and the most important findings in young and elderly adults. A review of the literature in English and Portuguese from the past twenty years, available in full on the PubMed, LILACS, SciELO and MEDLINE databases, using the following uniterms: sequential swallowing, swallowing, dysphagia, cup, straw, in various combinations. Research articles with a methodological approach to the characterization of liquid sequential swallowing by young and/or elderly adults were included, regardless of health condition, excluding studies involving only the esophageal phase. The following research indicators were applied: objectives; number and gender of participants; age group; amount of liquid offered; intake instruction; utensil used; methods; and main findings. Eighteen studies met the established criteria. The articles were categorized according to the sample characterization and the methodology regarding volume intake, utensil used, and types of exams. Most studies investigated only healthy individuals with no swallowing complaints. Subjects were given different instructions as to the intake of the full volume: in the usual manner, continually, or as rapidly as possible. The findings on the characterization of sequential swallowing were varied and are described in accordance with the objectives of each study. Great variability was found in the methodology employed to characterize sequential swallowing. Some findings are not comparable, sequential swallowing is not examined in most swallowing protocols, and there is no consensus on the influence of the utensil.
Robust sequential working memory recall in heterogeneous cognitive networks
Rabinovich, Mikhail I.; Sokolov, Yury; Kozma, Robert
2014-01-01
Psychiatric disorders are often caused by partial heterogeneous disinhibition in cognitive networks, controlling sequential and spatial working memory (SWM). Such dynamic connectivity changes suggest that the normal relationship between the neuronal components within the network deteriorates. As a result, competitive network dynamics is qualitatively altered. This dynamics defines the robust recall of the sequential information from memory and, thus, the SWM capacity. To understand pathological and non-pathological bifurcations of the sequential memory dynamics, here we investigate the model of recurrent inhibitory-excitatory networks with heterogeneous inhibition. We consider the ensemble of units with all-to-all inhibitory connections, in which the connection strengths are monotonically distributed at some interval. Based on computer experiments and studying the Lyapunov exponents, we observed and analyzed the new phenomenon—clustered sequential dynamics. The results are interpreted in the context of the winnerless competition principle. Accordingly, clustered sequential dynamics is represented in the phase space of the model by two weakly interacting quasi-attractors. One of them is similar to the sequential heteroclinic chain—the regular image of SWM, while the other is a quasi-chaotic attractor. Coexistence of these quasi-attractors means that the recall of the normal information sequence is intermittently interrupted by episodes with chaotic dynamics. We indicate potential dynamic ways for augmenting damaged working memory and other cognitive functions. PMID:25452717
A wireless breathing-training support system for kinesitherapy.
Tawa, Hiroki; Yonezawa, Yoshiharu; Maki, Hiromichi; Ogawa, Hidekuni; Ninomiya, Ishio; Sada, Kouji; Hamada, Shingo; Caldwell, W Morton
2009-01-01
We have developed a new wireless breathing-training support system for kinesitherapy. The system consists of an optical sensor, an accelerometer, a microcontroller, a Bluetooth module and a laptop computer. The optical sensor, which is attached to the patient's chest, measures chest circumference. The low frequency components of circumference are mainly generated by breathing. The optical sensor outputs the circumference as serial digital data. The accelerometer measures the dynamic acceleration force produced by exercise, such as walking. The microcontroller sequentially samples this force. The acceleration force and chest circumference are sent sequentially via Bluetooth to a physical therapist's laptop computer, which receives and stores the data. The computer simultaneously displays these data so that the physical therapist can monitor the patient's breathing and acceleration waveforms and give instructions to the patient in real time during exercise. Moreover, the system enables a quantitative training evaluation and calculation the volume of air inspired and expired by the lungs.
Takita, Eiji; Kohda, Katsunori; Tomatsu, Hajime; Hanano, Shigeru; Moriya, Kanami; Hosouchi, Tsutomu; Sakurai, Nozomu; Suzuki, Hideyuki; Shinmyo, Atsuhiko; Shibata, Daisuke
2013-01-01
Ligation, the joining of DNA fragments, is a fundamental procedure in molecular cloning and is indispensable to the production of genetically modified organisms that can be used for basic research, the applied biosciences, or both. Given that many genes cooperate in various pathways, incorporating multiple gene cassettes in tandem in a transgenic DNA construct for the purpose of genetic modification is often necessary when generating organisms that produce multiple foreign gene products. Here, we describe a novel method, designated PRESSO (precise sequential DNA ligation on a solid substrate), for the tandem ligation of multiple DNA fragments. We amplified donor DNA fragments with non-palindromic ends and ligated the fragments to acceptor DNA fragments on solid beads. After the final donor DNA fragments, which included vector sequences, were joined to the construct containing the array of fragments, the ligation product (the construct) was released from the beads via digestion with a rare-cutting meganuclease; the freed linear construct was then circularized via an intra-molecular ligation. PRESSO allowed us to rapidly and efficiently join multiple genes in an optimized order and orientation. This method can overcome many technical challenges in functional genomics during the post-sequencing generation. PMID:23897972
DCS-Neural-Network Program for Aircraft Control and Testing
NASA Technical Reports Server (NTRS)
Jorgensen, Charles C.
2006-01-01
A computer program implements a dynamic-cell-structure (DCS) artificial neural network that can perform such tasks as learning selected aerodynamic characteristics of an airplane from wind-tunnel test data and computing real-time stability and control derivatives of the airplane for use in feedback linearized control. A DCS neural network is one of several types of neural networks that can incorporate additional nodes in order to rapidly learn increasingly complex relationships between inputs and outputs. In the DCS neural network implemented by the present program, the insertion of nodes is based on accumulated error. A competitive Hebbian learning rule (a supervised-learning rule in which connection weights are adjusted to minimize differences between actual and desired outputs for training examples) is used. A Kohonen-style learning rule (derived from a relatively simple training algorithm that implements a Delaunay triangulation layout of neurons) is used to adjust node positions during training. Neighborhood topology determines which nodes are used to estimate new values. The network learns starting with two nodes and adds new nodes sequentially, in locations chosen to maximize reductions in global error. At any given time during learning, the error becomes homogeneously distributed over all nodes.
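Error-driven node insertion of this general kind can be sketched compactly. The fragment below follows the growing-neural-gas convention (midpoint insertion between the highest-error node and its highest-error neighbor); the actual DCS program may differ in detail:

import numpy as np

def insert_node(positions, errors, edges):
    # positions: (n, d) node coordinates; errors: (n,) accumulated error;
    # edges: set of frozenset({i, j}) neighborhood links.
    q = int(np.argmax(errors))                     # node with largest error
    nbrs = [j for e in edges if q in e for j in e if j != q]
    f = max(nbrs, key=lambda j: errors[j])         # its worst neighbor
    positions = np.vstack([positions, 0.5 * (positions[q] + positions[f])])
    r = len(positions) - 1
    edges.discard(frozenset({q, f}))               # rewire q-f through r
    edges.update({frozenset({q, r}), frozenset({f, r})})
    errors[q] *= 0.5
    errors[f] *= 0.5                               # redistribute error
    errors = np.append(errors, errors[q])
    return positions, errors, edges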
Polymeric micelles for multi-drug delivery in cancer.
Cho, Hyunah; Lai, Tsz Chung; Tomoda, Keishiro; Kwon, Glen S
2015-02-01
Drug combinations are common in cancer treatment and are rapidly evolving, moving beyond chemotherapy combinations to combinations of signal transduction inhibitors. For the delivery of drug combinations, i.e., multi-drug delivery, major considerations are synergy, dose regimen (concurrent versus sequential), pharmacokinetics, toxicity, and safety. In this contribution, we review recent research on polymeric micelles for multi-drug delivery in cancer. In concurrent drug delivery, polymeric micelles deliver multi-poorly water-soluble anticancer agents, satisfying strict requirements in solubility, stability, and safety. In sequential drug delivery, polymeric micelles participate in pretreatment strategies that "prime" solid tumors and enhance the penetration of secondarily administered anticancer agent or nanocarrier. The improved delivery of multiple poorly water-soluble anticancer agents by polymeric micelles via concurrent or sequential regimens offers novel and interesting strategies for drug combinations in cancer treatment.
NASA Astrophysics Data System (ADS)
Ferrando, N.; Gosálvez, M. A.; Cerdá, J.; Gadea, R.; Sato, K.
2011-03-01
Presently, dynamic surface-based models are required to contain increasingly large numbers of points and to propagate them over longer time periods. For large numbers of surface points, the octree data structure can be used as a balance between low memory occupation and relatively rapid access to the stored data. For evolution rules that depend on neighborhood states, extended simulation periods can be obtained by using simplified atomistic propagation models, such as Cellular Automata (CA). This method, however, has an intrinsically parallel updating nature, and the corresponding simulations are highly inefficient when performed on classical Central Processing Units (CPUs), which are designed for the sequential execution of tasks. In this paper, a series of guidelines is presented for the efficient adaptation of octree-based CA simulations of complex, evolving surfaces to massively parallel computing hardware. A Graphics Processing Unit (GPU) is used as a cost-efficient example of such parallel architectures. For the actual simulations, we consider surface propagation during anisotropic wet chemical etching of silicon as a computationally challenging process with widespread use in microengineering applications. A continuous CA model that is intrinsically parallel in nature is used for the time evolution. Our study strongly indicates that parallel computations of dynamically evolving surfaces simulated using CA methods benefit significantly from the incorporation of octrees as support data structures, substantially decreasing the overall computational time and memory usage.
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-08-01
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information (contact: hyunseob.song@pnnl.gov). Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work was written by US Government employees and is in the public domain in the US.
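The alternating structure of AILP can be illustrated on a toy network. In this hedged sketch (requires SciPy), the "IP" step is brute-forced for clarity, where a real implementation would call an integer-programming solver, and the small cost vector merely keeps the LP solution at a vertex of the toy flux polytope.

```python
# Toy alternation of "IP" (minimal hitting set over known EM supports)
# and LP (steady-state flux with those reactions deleted).
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

S = np.array([[1.0, -1.0, -1.0]])   # toy network: r1 makes A; r2, r3 consume A
n = S.shape[1]

def lp_mode(deleted):
    """Normalized steady-state flux with the given reactions forced to zero."""
    bounds = [(0, 0) if j in deleted else (0, None) for j in range(n)]
    A_eq = np.vstack([S, np.ones((1, n))])        # S v = 0 and sum(v) = 1
    b_eq = np.array([0.0] * S.shape[0] + [1.0])
    c = np.arange(1, n + 1.0)                     # keeps the optimum at a vertex
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return frozenset(np.flatnonzero(res.x > 1e-9)) if res.success else None

def minimal_hitting_sets(supports):
    """All inclusion-minimal reaction sets intersecting every EM support."""
    for k in range(n + 1):
        for cand in combinations(range(n), k):
            c = set(cand)
            if all(c & s for s in supports) and \
               all(not all((c - {r}) & s for s in supports) for r in c):
                yield c

ems, mcss = [], []
while True:                                       # AILP-style alternation
    new_em = None
    for cut in minimal_hitting_sets([set(e) for e in ems]):
        mode = lp_mode(cut)                       # "LP" step
        if mode is not None and mode not in ems:
            new_em = mode                         # feasible: a distinct EM
            break
        if mode is None and cut not in mcss:
            mcss.append(cut)                      # infeasible: a minimal cut set
    if new_em is None:
        break
    ems.append(new_em)

print("EMs:", [sorted(e) for e in ems], "| MCSs:", [sorted(m) for m in mcss])
```

On this toy network the loop recovers both elementary modes ({r1, r2} and {r1, r3}) and both minimal cut sets ({r1} and {r2, r3}), mirroring the EM/MCS linkage described in the abstract.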
Parallel Algorithm Solves Coupled Differential Equations
NASA Technical Reports Server (NTRS)
Hayashi, A.
1987-01-01
Numerical methods adapted to concurrent processing. Algorithm solves set of coupled partial differential equations by numerical integration. Adapted to run on hypercube computer, algorithm separates problem into smaller problems solved concurrently. Increase in computing speed with concurrent processing over that achievable with conventional sequential processing appreciable, especially for large problems.
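A minimal sketch of the decomposition idea, assuming a 1D diffusion problem split into blocks: on a hypercube, each block would be advanced by its own node, with only the ghost values exchanged between neighbors.

```python
# Domain decomposition for an explicit diffusion step: blocks update
# independently each step, so they could run concurrently on separate nodes.
import numpy as np

N, BLOCKS, STEPS = 64, 4, 200
DT_OVER_DX2 = 0.2                                  # stable explicit step (assumed)
u = np.exp(-((np.arange(N) - N / 2) ** 2) / 20.0)  # initial temperature bump
init_mass = u.sum()
chunks = np.array_split(np.arange(N), BLOCKS)      # one block per "node"

for _ in range(STEPS):
    # boundary (ghost-cell) exchange: the only inter-node communication
    ghosts = [(u[c[0] - 1] if c[0] > 0 else u[c[0]],
               u[c[-1] + 1] if c[-1] < N - 1 else u[c[-1]]) for c in chunks]
    new = np.empty_like(u)
    for c, (gl, gr) in zip(chunks, ghosts):        # each block is independent,
        block = np.concatenate(([gl], u[c], [gr])) # so all can run concurrently
        new[c] = u[c] + DT_OVER_DX2 * (block[2:] - 2 * block[1:-1] + block[:-2])
    u = new

print("mass conserved:", np.isclose(u.sum(), init_mass))
```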
Dismal: A Spreadsheet for Sequential Data Analysis and HCI Experimentation
2002-01-24
Hambly, Alder, Wyatt-Millington, Shrayane, Crawshaw, et al., 1996). Table 2 provides some example data. An automatically generated header comes first... Shrayane, N. M., Crawshaw, C. M., & Hockey, G. R. J. (1996). Investigating the human-computer interface using the Datalogger. Behavior Research Methods, Instruments, & Computers, 28(4), 603-606.
Collaborative Filtering Based on Sequential Extraction of User-Item Clusters
NASA Astrophysics Data System (ADS)
Honda, Katsuhiro; Notsu, Akira; Ichihashi, Hidetomo
Collaborative filtering is a computational realization of "word-of-mouth" in a network community, in which the items preferred by "neighbors" are recommended. This paper proposes a new item-selection model for extracting user-item clusters from rectangular relation matrices, in which mutual relations between users and items are expressed as "liking or not". A technique for sequential co-cluster extraction from rectangular relational data is given by combining the structural balancing-based user-item clustering method with a sequential fuzzy cluster extraction approach. The technique is then applied to the collaborative filtering problem, in which some items may be shared by several user clusters.
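As a rough illustration of sequential co-cluster extraction (not the authors' fuzzy-clustering method), the following sketch greedily grows one dense user-item block, removes it, and repeats; the density threshold and planted data are assumptions.

```python
# Greedy stand-in for sequential co-cluster extraction from a
# rectangular user-item "liking" matrix.
import numpy as np

rng = np.random.default_rng(2)
R = (rng.random((20, 12)) < 0.15).astype(float)      # sparse background likes
R[2:7, 3:6] = 1.0                                    # planted co-cluster 1
R[10:16, 8:11] = 1.0                                 # planted co-cluster 2

def extract_one(R, min_density=0.8):
    users = {int(np.argmax(R.sum(axis=1)))}          # seed: densest user
    items = set(np.flatnonzero(R[next(iter(users))] > 0))
    grown = True
    while grown:
        grown = False
        for u in set(range(R.shape[0])) - users:     # add users dense on items
            if items and R[u, list(items)].mean() >= min_density:
                users.add(u); grown = True
        for i in set(range(R.shape[1])) - items:     # add items dense on users
            if R[list(users), i].mean() >= min_density:
                items.add(i); grown = True
    return sorted(users), sorted(items)

work = R.copy()
for _ in range(2):                                   # extract clusters one by one
    us, its = extract_one(work)
    print("users", us, "items", its)
    work[np.ix_(us, its)] = 0                        # remove and repeat
```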
Computational Particle Dynamic Simulations on Multicore Processors (CPDMu) Final Report Phase I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmalz, Mark S
2011-07-24
Statement of Problem - Department of Energy has many legacy codes for simulation of computational particle dynamics and computational fluid dynamics applications that are designed to run on sequential processors and are not easily parallelized. Emerging high-performance computing architectures employ massively parallel multicore architectures (e.g., graphics processing units) to increase throughput. Parallelization of legacy simulation codes is a high priority, to achieve compatibility, efficiency, accuracy, and extensibility. General Statement of Solution - A legacy simulation application designed for implementation on mainly-sequential processors has been represented as a graph G. Mathematical transformations, applied to G, produce a graph representation G′ for a high-performance architecture. Key computational and data movement kernels of the application were analyzed/optimized for parallel execution using the mapping G → G′, which can be performed semi-automatically. This approach is widely applicable to many types of high-performance computing systems, such as graphics processing units or clusters comprised of nodes that contain one or more such units. Phase I Accomplishments - Phase I research decomposed/profiled computational particle dynamics simulation code for rocket fuel combustion into low and high computational cost regions (respectively, mainly sequential and mainly parallel kernels), with analysis of space and time complexity. Using the research team's expertise in algorithm-to-architecture mappings, the high-cost kernels were transformed, parallelized, and implemented on Nvidia Fermi GPUs. Measured speedups (GPU with respect to single-core CPU) were approximately 20-32X for realistic model parameters, without final optimization. Error analysis showed no loss of computational accuracy. Commercial Applications and Other Benefits - The proposed research will constitute a breakthrough in solution of problems related to efficient parallel computation of particle and fluid dynamics simulations. These problems occur throughout DOE, military and commercial sectors: the potential payoff is high. We plan to license or sell the solution to contractors for military and domestic applications such as disaster simulation (aerodynamic and hydrodynamic), Government agencies (hydrological and environmental simulations), and medical applications (e.g., in tomographic image reconstruction). Keywords - High-performance Computing, Graphics Processing Unit, Fluid/Particle Simulation. Summary for Members of Congress - Department of Energy has many simulation codes that must compute faster, to be effective. The Phase I research parallelized particle/fluid simulations for rocket combustion, for high-performance computing systems.
Online sequential Monte Carlo smoother for partially observed diffusion processes
NASA Astrophysics Data System (ADS)
Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain
2018-12-01
This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. This method relies on a new sequential Monte Carlo method which allows such approximations to be computed online, i.e., as the observations are received, and with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity by an unbiased estimator obtained for instance using general Poisson estimators. This estimator is proved to be consistent and its performance is illustrated using data from two models.
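For orientation, here is the naive forward-only particle smoother of an additive functional for a linear-Gaussian model, which costs O(N²) per step; the paper's contribution is an O(N) online variant that also replaces the transition density, unknown for most SDEs, with an unbiased estimator. All model parameters below are assumptions.

```python
# Bootstrap particle filter with naive forward-only smoothing of the
# additive functional sum_t x_t for a linear-Gaussian state-space model.
import numpy as np

rng = np.random.default_rng(3)
PHI, SX, SY, T, N = 0.9, 0.5, 0.7, 50, 200        # assumed model/sampler sizes

x = np.zeros(T)                                   # simulate latent AR(1) state
for t in range(1, T):
    x[t] = PHI * x[t - 1] + SX * rng.normal()
y = x + SY * rng.normal(size=T)                   # noisy observations

def trans_logpdf(xp, xc):                         # log p(x_t | x_{t-1}), all pairs
    return -0.5 * ((xc[None, :] - PHI * xp[:, None]) / SX) ** 2

parts = rng.normal(size=N)                        # particles at t = 0
w = np.exp(-0.5 * ((y[0] - parts) / SY) ** 2); w /= w.sum()
stat = parts.copy()                               # per-particle smoothed statistic

for t in range(1, T):
    idx = rng.choice(N, N, p=w)                   # multinomial resampling
    new = PHI * parts[idx] + SX * rng.normal(size=N)
    lam = np.exp(trans_logpdf(parts, new)) * w[:, None]
    lam /= lam.sum(axis=0, keepdims=True)         # backward weights per new particle
    stat = (lam * (stat[:, None] + new[None, :])).sum(axis=0)  # forward recursion
    parts = new
    w = np.exp(-0.5 * ((y[t] - parts) / SY) ** 2); w /= w.sum()

print("online estimate of E[sum_t x_t | y]:", float(w @ stat))
```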
Rise and fall of political complexity in island South-East Asia and the Pacific.
Currie, Thomas E; Greenhill, Simon J; Gray, Russell D; Hasegawa, Toshikazu; Mace, Ruth
2010-10-14
There is disagreement about whether human political evolution has proceeded through a sequence of incremental increases in complexity, or whether larger, non-sequential increases have occurred. The extent to which societies have decreased in complexity is also unclear. These debates have continued largely in the absence of rigorous, quantitative tests. We evaluated six competing models of political evolution in Austronesian-speaking societies using phylogenetic methods. Here we show that in the best-fitting model political complexity rises and falls in a sequence of small steps. This is closely followed by another model in which increases are sequential but decreases can be either sequential or in bigger drops. The results indicate that large, non-sequential jumps in political complexity have not occurred during the evolutionary history of these societies. This suggests that, despite the numerous contingent pathways of human history, there are regularities in cultural evolution that can be detected using computational phylogenetic methods.
NASA Astrophysics Data System (ADS)
Shyu, Mei-Ling; Huang, Zifang; Luo, Hongli
In recent years, pervasive computing infrastructures have greatly improved the interaction between human and system. As we put more reliance on these computing infrastructures, we also face threats of network intrusion and/or any new forms of undesirable IT-based activities. Hence, network security has become an extremely important issue, which is closely connected with homeland security, business transactions, and people's daily life. Accurate and efficient intrusion detection technologies are required to safeguard the network systems and the critical information transmitted in the network systems. In this chapter, a novel network intrusion detection framework for mining and detecting sequential intrusion patterns is proposed. The proposed framework consists of a Collateral Representative Subspace Projection Modeling (C-RSPM) component for supervised classification, and an inter-transactional association rule mining method based on Layer Divided Modeling (LDM) for temporal pattern analysis. Experiments on the KDD99 data set and the traffic data set generated by a private LAN testbed show promising results with high detection rates, low processing time, and low false alarm rates in mining and detecting sequential intrusion detections.
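As a minimal stand-in for the inter-transactional pattern analysis (not the paper's LDM method), the sketch below counts how often one event follows another within a sliding window of connection records; the window size and event log are assumed.

```python
# Count ordered event pairs (a -> b) occurring within a window of W-1
# subsequent records, a crude form of inter-transactional pattern mining.
from collections import Counter

log = ["scan", "login", "scan", "probe", "login", "dos", "scan", "probe", "dos"]
W = 3                                    # look-ahead window (assumed)
pairs = Counter()
for i, a in enumerate(log):
    for b in log[i + 1:i + W]:           # events following a within W-1 steps
        pairs[(a, b)] += 1
for (a, b), c in pairs.most_common(3):
    print(f"{a} -> {b} within {W - 1} records: count={c}")
```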
STS-41 MS Shepherd uses DTO 1206 portable computer on OV-103's middeck
1990-10-10
STS-41 Mission Specialist (MS) William M. Shepherd uses Detailed Test Objective (DTO) Space Station Cursor Control Device Evaluation MACINTOSH portable computer on the middeck of Discovery, Orbiter Vehicle (OV) 103. The computer is velcroed to forward lockers MF71C and MF71E. Surrounding Shepherd are checklists, the field sequential (FS) crew cabin camera, and a lighting fixture.
Development of an ADP Training Program to Serve the EPA Data Processing Community.
1976-07-29
divide, compute, perform and alter statements; data representation and conversion; table processing; and indexed sequential and random access file... processing. The course workshop will include the testing of coded exercises and problems on a computer system. CLASS SIZE: Individualized. METHODS/CONDUCT... familiarization with computer concepts will be helpful. OBJECTIVES OF CURRICULUM: After completing this course, the student should have developed a working
Rapid prototyping in aortic surgery.
Bangeas, Petros; Voulalas, Grigorios; Ktenidis, Kiriakos
2016-04-01
3D printing provides the sequential addition of material layers and, thus, the opportunity to print parts and components made of different materials with variable mechanical and physical properties. It helps us create 3D anatomical models for the better planning of surgical procedures when needed, since it can reveal any complex anatomical feature. Images of abdominal aortic aneurysms received by computed tomographic angiography were converted into 3D images using the free Google SketchUp software and saved in stereolithography format. Using a 3D printer (Makerbot), a model made of polylactic acid material (thermoplastic filament) was printed. A 3D model of an abdominal aorta aneurysm was created in 138 min, and the model was a precise copy of the aorta visualized in the computed tomographic images. The total cost (including the initial cost of the printer) reached 1303.00 euros. 3D imaging and modelling using different materials can be very useful in cases when anatomical difficulties are recognized through the computed tomographic images and a tactile approach is demanded preoperatively. In this way, major complications during abdominal aorta aneurysm management can be predicted and prevented. Furthermore, the model can be used as a mould; the development of new, more biocompatible, less antigenic, and individualized models can become a challenge in the future. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
Trott, C M; Ouyang, J; El Fakhri, G
2010-11-21
Simultaneous rest perfusion/fatty-acid metabolism studies have the potential to replace sequential rest/stress perfusion studies for the assessment of cardiac function. Simultaneous acquisition has the benefits of increased signal and lack of need for patient stress, but is complicated by cross-talk between the two radionuclide signals. We consider a simultaneous rest (99m)Tc-sestamibi/(123)I-BMIPP imaging protocol in place of the commonly used sequential rest/stress (99m)Tc-sestamibi protocol. The theoretical precision with which the severity of a cardiac defect and the transmural extent of infarct can be measured is computed for simultaneous and sequential SPECT imaging, and their performance is compared for discriminating (1) degrees of defect severity and (2) sub-endocardial from transmural defects. We consider cardiac infarcts for which reduced perfusion and metabolism are observed. From an information perspective, simultaneous imaging is found to yield comparable or improved performance compared with sequential imaging for discriminating both severity of defect and transmural extent of infarct, for three defects of differing location and size.
Ji, Qiang; Shi, YunQing; Xia, LiMin; Ma, RunHua; Shen, JinQiang; Lai, Hao; Ding, WenJun; Wang, ChunSheng
2017-12-25
To evaluate in-hospital and mid-term outcomes of sequential vs. separate grafting of in situ skeletonized left internal mammary artery (LIMA) to the left coronary system in a single-center, propensity-matched study. Methods and Results: After propensity score-matching, 120 pairs of patients undergoing first scheduled isolated coronary artery bypass grafting (CABG) with in situ skeletonized LIMA grafting to the left anterior descending artery (LAD) territory were entered into a sequential group (sequential grafting of LIMA to the diagonal artery and then to the LAD) or a control group (separate grafting of LIMA to the LAD). The in-hospital and follow-up clinical outcomes and follow-up LIMA graft patency were compared. Both propensity score-matched groups had similar in-hospital and follow-up clinical outcomes. Sequential LIMA grafting was not found to be an independent predictor of adverse events. During a follow-up period of 27.0±7.3 months, 99.1% patency for the diagonal site and 98.3% for the LAD site were determined by coronary computed tomographic angiography after sequential LIMA grafting, both of which were similar to the graft patency of separate grafting of in situ skeletonized LIMA to the LAD. Revascularization of the left coronary system using sequential grafting of a skeletonized LIMA resulted in excellent in-hospital and mid-term clinical outcomes and graft patency.
Lin, Lu; Wang, Yi-Ning; Kong, Ling-Yan; Jin, Zheng-Yu; Lu, Guang-Ming; Zhang, Zhao-Qi; Cao, Jian; Li, Shuo; Song, Lan; Wang, Zhi-Wei; Zhou, Kang; Wang, Ming
2013-01-01
Objective To evaluate the image quality (IQ) and radiation dose of 128-slice dual-source computed tomography (DSCT) coronary angiography using prospectively electrocardiogram (ECG)-triggered sequential scan mode compared with ECG-gated spiral scan mode in a population with atrial fibrillation. Methods Thirty-two patients with suspected coronary artery disease and permanent atrial fibrillation referred for a second-generation 128-slice DSCT coronary angiography were included in the prospective study. Of them, 17 patients (sequential group) were randomly selected to use a prospectively ECG-triggered sequential scan, while the other 15 patients (spiral group) used a retrospectively ECG-gated spiral scan. The IQ was assessed by two readers independently, using a four-point grading scale from excellent (grade 1) to non-assessable (grade 4), based on the American Heart Association 15-segment model. IQ of each segment and effective dose of each patient were compared between the two groups. Results The mean heart rate (HR) of the sequential group was 96±27 beats per minute (bpm) with a variation range of 73±25 bpm, while the mean HR of the spiral group was 86±22 bpm with a variation range of 65±24 bpm. Both the mean HR (t=1.91, P=0.243) and HR variation range (t=0.950, P=0.350) showed no significant difference between the two groups. In per-segment analysis, IQ of the sequential group vs. spiral group was rated as excellent (grade 1) in 190/244 (78%) vs. 177/217 (82%) by reader 1 and 197/245 (80%) vs. 174/214 (81%) by reader 2, and as non-assessable (grade 4) in 4/244 (2%) vs. 2/217 (1%) by reader 1 and 6/245 (2%) vs. 4/214 (2%) by reader 2. Overall averaged IQ per patient in the sequential and spiral groups was equally good (1.27±0.19 vs. 1.25±0.22, Z=-0.834, P=0.404). The effective radiation dose of the sequential group was significantly reduced compared with the spiral group (4.88±1.77 mSv vs. 10.20±3.64 mSv; t=-5.372, P=0.000). Conclusion Compared with retrospectively ECG-gated spiral scan, prospectively ECG-triggered sequential DSCT coronary angiography provides similarly diagnostically valuable images in patients with atrial fibrillation and significantly reduces radiation dose.
Sols, Ignasi; DuBrow, Sarah; Davachi, Lila; Fuentemilla, Lluís
2017-11-20
Although everyday experiences unfold continuously over time, shifts in context, or event boundaries, can influence how those events come to be represented in memory [1-4]. Specifically, mnemonic binding across sequential representations is more challenging at context shifts, such that successful temporal associations are more likely to be formed within than across contexts [1, 2, 5-9]. However, in order to preserve a subjective sense of continuity, it is important that the memory system bridge temporally adjacent events, even if they occur in seemingly distinct contexts. Here, we applied pattern similarity analysis to scalp electroencephalographic (EEG) recordings acquired during a sequential learning task [2, 3] in humans and showed that the detection of event boundaries triggered a rapid memory reinstatement of the just-encoded sequence episode. Memory reactivation was detected rapidly (∼200-800 ms from the onset of the event boundary) and was specific to context shifts that were preceded by an event sequence with episodic content. Memory reinstatement was not observed during the sequential encoding of events within an episode, indicating that memory reactivation was induced specifically upon context shifts. Finally, the degree of neural similarity between neural responses elicited during sequence encoding and at event boundaries correlated positively with participants' ability to later link across sequences of events, suggesting a critical role in binding temporally adjacent events in long-term memory. Current results shed light onto the neural mechanisms that promote episodic encoding not only for information within the event, but also, importantly, in the ability to link across events to create a memory representation of continuous experience. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hypercube matrix computation task
NASA Technical Reports Server (NTRS)
Calalo, R.; Imbriale, W.; Liewer, P.; Lyons, J.; Manshadi, F.; Patterson, J.
1987-01-01
The Hypercube Matrix Computation (Year 1986-1987) task investigated the applicability of a parallel computing architecture to the solution of large-scale electromagnetic scattering problems. Two existing electromagnetic scattering codes were selected for conversion to the Mark III Hypercube concurrent computing environment. They were selected so that the underlying numerical algorithms utilized would be different, thereby providing a more thorough evaluation of the appropriateness of the parallel environment for these types of problems. The first code was a frequency domain method of moments solution, NEC-2, developed at Lawrence Livermore National Laboratory. The second code was a time domain finite difference solution of Maxwell's equations to solve for the scattered fields. Once the codes were implemented on the hypercube and verified to obtain correct solutions by comparing the results with those from sequential runs, several measures were used to evaluate the performance of the two codes. First, the problem size possible on the hypercube, with 128 megabytes of memory in a 32-node configuration, was compared with that available in a typical sequential user environment of 4 to 8 megabytes. Then, the performance of the codes was analyzed for the computational speedup attained by the parallel architecture.
Computer Technology-Integrated Projects Should Not Supplant Craft Projects in Science Education
ERIC Educational Resources Information Center
Klopp, Tabatha J.; Rule, Audrey C.; Schneider, Jean Suchsland; Boody, Robert M.
2014-01-01
The current emphasis on computer technology integration and narrowing of the curriculum has displaced arts and crafts. However, the hands-on, concrete nature of craft work in science modeling enables students to understand difficult concepts and to be engaged and motivated while learning spatial, logical, and sequential thinking skills. Analogy…
ERIC Educational Resources Information Center
Chapman, Dane M.; And Others
Three critical procedural skills in emergency medicine were evaluated using three assessment modalities--written, computer, and animal model. The effects of computer practice and previous procedure experience on skill competence were also examined in an experimental sequential assessment design. Subjects were six medical students, six residents,…
NASA Technical Reports Server (NTRS)
Smedes, H. W.; Linnerud, H. J.; Woolaver, L. B.; Su, M. Y.; Jayroe, R. R.
1972-01-01
Two clustering techniques were used for terrain mapping by computer of test sites in Yellowstone National Park. One test was made with multispectral scanner data using a composite technique which consists of (1) a strictly sequential statistical clustering, which is a sequential variance analysis, and (2) a generalized K-means clustering. In this composite technique, the output of (1) is a first approximation of the cluster centers; this is the input to (2), which improves the determination of cluster centers by iterative procedures. Another test was made using the three emulsion layers of color-infrared aerial film as a three-band spectrometer. Relative film densities were analyzed using a simple clustering technique in three-color space. Important advantages of the clustering technique over conventional supervised computer programs are that (1) human intervention, preparation time, and manipulation of data are reduced, (2) the computer map gives an unbiased indication of where best to select the reference ground control data, (3) easy-to-obtain, inexpensive film can be used, and (4) the geometric distortions can be easily rectified by simple standard photogrammetric techniques.
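The composite technique can be caricatured in a few lines: a strictly sequential one-pass scan proposes cluster centers, which then seed generalized K-means iterations. The radius, iteration count, and data below are assumptions for illustration.

```python
# Two-stage clustering: sequential "leader" pass for initial centers,
# then Lloyd/K-means refinement of those centers.
import numpy as np

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 0.3, size=(100, 3))      # three spectral clusters
               for m in ([0, 0, 0], [2, 2, 0], [0, 2, 2])])

def leader_pass(X, radius=1.0):
    centers = [X[0]]
    for x in X[1:]:                                    # strictly sequential scan
        if min(np.linalg.norm(x - c) for c in centers) > radius:
            centers.append(x)
    return np.array(centers)

def kmeans_refine(X, centers, iters=20):
    for _ in range(iters):                             # generalized K-means step
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(len(centers))])
    return centers, labels

c0 = leader_pass(X)
centers, labels = kmeans_refine(X, c0)
print(len(c0), "initial centers ->", len(centers), "refined centers")
```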
Multisensor surveillance data augmentation and prediction with optical multipath signal processing
NASA Astrophysics Data System (ADS)
Bush, G. T., III
1980-12-01
The spatial characteristics of an oil spill on the high seas are examined in the interest of determining whether linear-shift-invariant data processing implemented on an optical computer would be a useful tool in analyzing spill behavior. Simulations were performed on a digital computer using data obtained from a 25,000 gallon spill of soy bean oil in the open ocean. Marked changes occurred in the observed spatial frequencies when the oil spill was encountered. An optical detector may readily be developed to sound an alarm automatically when this happens. The average extent of oil spread between sequential observations was quantified by a simulation of non-holographic optical computation. Because a zero crossover was available in this computation, it may be possible to construct a system to measure automatically the amount of spread. Oil images were subjected to deconvolutional filtering to reveal the force field which acted upon the oil to cause spreading. Some features of spill-size prediction were observed. Calculations based on two sequential photos produced an image which exhibited characteristics of the third photo in that sequence.
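A digital stand-in for these optical computations, assuming synthetic imagery: the FFT gives the spatial-frequency signature of a slick, and Wiener deconvolution between two sequential observations recovers the spreading kernel; the regularization constant is an assumption.

```python
# FFT power spectrum of a toy "slick" image, and Wiener deconvolution
# to estimate the spreading kernel between two sequential observations.
import numpy as np

n = 128
yy, xx = np.mgrid[:n, :n]

def slick(sigma):                        # Gaussian blob as a toy slick image
    return np.exp(-((xx - n / 2) ** 2 + (yy - n / 2) ** 2) / (2 * sigma ** 2))

img1, img2 = slick(6.0), slick(10.0)     # two sequential observations

# 1) spatial-frequency signature: radially averaged power spectrum
P = np.abs(np.fft.fftshift(np.fft.fft2(img1))) ** 2
r = np.hypot(xx - n / 2, yy - n / 2).astype(int)
profile = np.bincount(r.ravel(), weights=P.ravel()) / np.bincount(r.ravel())
print("low-frequency power fraction:", profile[:5].sum() / profile.sum())

# 2) Wiener deconvolution: estimate spreading kernel K with img2 = K * img1
F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
eps = 1e-3 * np.abs(F1).max() ** 2       # regularization constant (assumed)
K = np.fft.ifft2(F2 * np.conj(F1) / (np.abs(F1) ** 2 + eps)).real
print("recovered kernel peaks at:", np.unravel_index(K.argmax(), K.shape))
```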
Enhancing battery efficiency for pervasive health-monitoring systems based on electronic textiles.
Zheng, Nenggan; Wu, Zhaohui; Lin, Man; Yang, Laurence Tianruo
2010-03-01
Electronic textiles are regarded as one of the most important computation platforms for future computer-assisted health-monitoring applications. In these novel systems, multiple batteries are used in order to prolong their operational lifetime, which is a significant metric for system usability. However, due to the nonlinear features of batteries, computing systems with multiple batteries cannot achieve the same battery efficiency as those powered by a monolithic battery of equal capacity. In this paper, we propose an algorithm aiming to maximize battery efficiency globally for computer-assisted health-care systems with multiple batteries. Based on an accurate analytical battery model, the concept of weighted battery fatigue degree is introduced and a novel battery-scheduling algorithm called predicted weighted fatigue degree least first (PWFDLF) is developed. We also discuss two alternatives explored while developing PWFDLF: a weighted round-robin (WRR) policy and a greedy algorithm that achieves the highest local battery efficiency, which reduces to the sequential discharging policy. Evaluation results show that a considerable improvement in battery efficiency can be obtained by PWFDLF under various battery configurations and current profiles compared to conventional sequential and WRR discharging policies.
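A schematic sketch of fatigue-aware scheduling in the spirit of PWFDLF; the fatigue model, weights, capacities, and load profile below are all assumptions for illustration, since the paper's analytical battery model is not reproduced here.

```python
# Each time slot, pick the battery whose predicted weighted fatigue
# degree after serving the load would be least.
import numpy as np

CAP = np.array([800.0, 600.0, 400.0])        # battery capacities (mAh, assumed)
charge = CAP.copy()
fatigue = np.zeros(3)                        # accumulated "fatigue degree"
WEIGHT = CAP.max() / CAP                     # smaller batteries tire faster

def predicted_wfd(i, current, dt):
    # assumed model: fatigue grows with discharge relative to capacity
    return fatigue[i] + WEIGHT[i] * (current * dt) / CAP[i]

rng = np.random.default_rng(6)
for slot in range(500):
    current, dt = rng.uniform(5, 20), 1.0    # mA demanded this slot (assumed)
    alive = np.flatnonzero(charge > current * dt)
    if alive.size == 0:
        print("system lifetime:", slot, "slots")
        break
    i = min(alive, key=lambda j: predicted_wfd(j, current, dt))  # PWFDLF-style
    fatigue[i] = predicted_wfd(i, current, dt)
    charge[i] -= current * dt
```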
SIMPLE: a sequential immunoperoxidase labeling and erasing method.
Glass, George; Papin, Jason A; Mandell, James W
2009-10-01
The ability to simultaneously visualize expression of multiple antigens in cells and tissues can provide powerful insights into cellular and organismal biology. However, standard methods are limited to the use of just two or three simultaneous probes and have not been widely adopted for routine use in paraffin-embedded tissue. We have developed a novel approach called sequential immunoperoxidase labeling and erasing (SIMPLE) that enables the simultaneous visualization of at least five markers within a single tissue section. Utilizing the alcohol-soluble peroxidase substrate 3-amino-9-ethylcarbazole, combined with a rapid non-destructive method for antibody-antigen dissociation, we demonstrate the ability to erase the results of a single immunohistochemical stain while preserving tissue antigenicity for repeated rounds of labeling. SIMPLE is greatly facilitated by the use of a whole-slide scanner, which can capture the results of each sequential stain without any information loss.
Sequential shrink photolithography for plastic microlens arrays
NASA Astrophysics Data System (ADS)
Dyer, David; Shreim, Samir; Jayadev, Shreshta; Lew, Valerie; Botvinick, Elliot; Khine, Michelle
2011-07-01
Endeavoring to push the boundaries of microfabrication with shrinkable polymers, we have developed a sequential shrink photolithography process. We demonstrate the utility of this approach by rapidly fabricating plastic microlens arrays. First, we create a mask out of the children's toy Shrinky Dinks by simply printing dots using a standard desktop printer. Upon retraction of this pre-stressed thermoplastic sheet, the dots shrink to a fraction of their original size, which we then lithographically transfer onto photoresist-coated commodity shrink wrap film. This shrink film reduces in area by 95% when briefly heated, creating smooth convex photoresist bumps down to 30 µm. Taken together, this sequential shrink process provides a complete process to create microlenses, with an almost 99% reduction in area from the original pattern size. Finally, with a lithography molding step, we emboss these bumps into optical grade plastics such as cyclic olefin copolymer for functional microlens arrays.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoupin, Stanislav, E-mail: sstoupin@aps.anl.gov; Shvyd’ko, Yuri; Trakhtenberg, Emil
2016-07-27
We report progress on implementation and commissioning of sequential X-ray diffraction topography at 1-BM Optics Testing Beamline of the Advanced Photon Source to accommodate growing needs of strain characterization in diffractive crystal optics and other semiconductor single crystals. The setup enables evaluation of strain in single crystals in the nearly-nondispersive double-crystal geometry. Si asymmetric collimator crystals of different crystallographic orientations were designed, fabricated and characterized using in-house capabilities. Imaging the exit beam using digital area detectors permits rapid sequential acquisition of X-ray topographs at different angular positions on the rocking curve of a crystal under investigation. Results on sensitivity and spatial resolution are reported based on experiments with high-quality Si and diamond crystals. The new setup complements laboratory-based X-ray topography capabilities of the Optics group at the Advanced Photon Source.
Grafe, Victor G.; Hoch, James E.
1993-01-01
A sequencing and data fanout mechanism provided for a dataflow processor is activated by an input token, which causes a sequence of operations to occur by initiating a first instruction to act on data contained within the token and then executing a sequential thread of instructions identified either by a repeat count and an offset within the token, or by an offset within each preceding instruction.
Zeelenberg, René; Pecher, Diane
2015-03-01
Counterbalanced designs are frequently used in the behavioral sciences. Studies often counterbalance either the order in which conditions are presented in the experiment or the assignment of stimulus materials to conditions. Occasionally, researchers need to simultaneously counterbalance both condition order and stimulus assignment to conditions. Lewis (1989; Behavior Research Methods, Instruments, & Computers 25:414-415, 1993) presented a method for constructing Latin squares that fulfill these requirements. The resulting Latin squares counterbalance immediate sequential effects, but not remote sequential effects. Here, we present a new method for generating Latin squares that simultaneously counterbalance both immediate and remote sequential effects and assignment of stimuli to conditions. An Appendix is provided to facilitate implementation of these Latin square designs.
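For contrast with the new method, the classic Williams construction below balances immediate sequential effects only: each condition follows every other exactly once when the number of conditions is even. The paper's squares additionally balance remote effects and stimulus assignment, which this sketch does not attempt.

```python
# Williams-style balanced Latin square: rows are participants, columns
# are serial positions, entries are condition indices.
def balanced_latin_square(n):
    rows = []
    for p in range(n):
        row, j, k = [], 0, 0
        for i in range(n):
            if i % 2 == 0:
                row.append((p + j) % n)       # step forward from p
                j += 1
            else:
                k += 1
                row.append((p + n - k) % n)   # step backward from p
        rows.append(row)
    return rows

for r in balanced_latin_square(4):
    print(r)
# Each column contains every condition once, and (for even n) every
# ordered pair of conditions occurs in adjacent positions exactly once.
```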
SMAC7; Sequential multi-channel analysis with computer-7; SMA7; Metabolic panel 7; CHEM-7 ... breathing problems, diabetes or diabetes-related complications, and medicine side effects.
A service-based BLAST command tool supported by cloud infrastructures.
Carrión, Abel; Blanquer, Ignacio; Hernández, Vicente
2012-01-01
Notwithstanding the benefits of distributed-computing infrastructures for empowering bioinformatics analysis tools with the needed computing and storage capability, the actual use of these infrastructures is still low. Learning curves and deployment difficulties have reduced the impact on the wide research community. This article presents a porting strategy for BLAST based on a multiplatform client and a service that provides the same interface as sequential BLAST, thus reducing the learning curve with minimal impact on integration into existing workflows. The porting has been done using the execution and data access components from the EC project Venus-C and the Windows Azure infrastructure provided in this project. The results obtained demonstrate a low overhead on the global execution framework and reasonable speed-up and cost-efficiency with respect to a sequential version.
2012-01-01
A computer numerical control (CNC) apparatus was used to perform droplet centrifugation, droplet DNA extraction, and rapid droplet thermocycling on a single superhydrophobic surface and a multi-chambered PCB heater. Droplets were manipulated using a "wire-guided" method (a pipette tip was used in this study). This methodology can be easily adapted to existing commercial robotic pipetting systems, while demonstrating added capabilities such as vibrational mixing, high-speed centrifuging of droplets, simple DNA extraction utilizing the hydrophobicity difference between the tip and the superhydrophobic surface, and rapid thermocycling with a moving droplet, all with wire-guided droplet manipulations on a superhydrophobic surface and a multi-chambered PCB heater (i.e., not on a 96-well plate). Serial dilutions were demonstrated for diluting sample matrix. Centrifuging was demonstrated by rotating a 10 μL droplet at 2300 revolutions per minute, concentrating E. coli by more than 3-fold within 3 min. DNA extraction was demonstrated from an E. coli sample utilizing the disposable pipette tip to attract the extracted DNA from the droplet residing on a superhydrophobic surface, which took less than 10 min. Following extraction, the 1500 bp sequence of Peptidase D from E. coli was amplified using rapid droplet thermocycling, which took 10 min for 30 cycles. The total assay time was 23 min, including droplet centrifugation, droplet DNA extraction and rapid droplet thermocycling. Evaporation from 10 μL droplets was not significant during these procedures, since the longest exposure to air and the vibrations was less than 5 min (during DNA extraction). The results of these sequentially executed processes were analyzed using gel electrophoresis. Thus, this work demonstrates the adaptability of the system to replace many common laboratory tasks on a single platform (through re-programmability), in rapid succession (using droplets), and with a high level of accuracy and automation. PMID:22947281
NASA Technical Reports Server (NTRS)
Johannes, J. D.
1974-01-01
Techniques, methods, and system requirements are reported for an onboard computerized communications system that provides on-line computing capability during manned space exploration. Communications between man and computer take place by sequential execution of each discrete step of a procedure, by interactive progression through a tree-type structure to initiate tasks or by interactive optimization of a task requiring man to furnish a set of parameters. Effective communication between astronaut and computer utilizes structured vocabulary techniques and a word recognition system.
Implementation of a 3D mixing layer code on parallel computers
NASA Technical Reports Server (NTRS)
Roe, K.; Thakur, R.; Dang, T.; Bogucz, E.
1995-01-01
This paper summarizes our progress and experience in the development of a Computational-Fluid-Dynamics code on parallel computers to simulate three-dimensional spatially-developing mixing layers. In this initial study, the three-dimensional time-dependent Euler equations are solved using a finite-volume explicit time-marching algorithm. The code was first programmed in Fortran 77 for sequential computers. The code was then converted for use on parallel computers using the conventional message-passing technique, although we have not been able to compile the code with the present version of HPF compilers.
Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin
2011-01-01
This article presents a new computerized scheme that aims to accurately and robustly separate left and right lungs on computed tomography (CT) examinations. We developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, designed to handle especially severe and multiple connections. The scheme successfully identified and separated all 827 connections on the total of 4034 CT images in an independent testing data set of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming and avoided the permeation of the separation boundary into normal lung tissue. The proposed method is able to robustly and accurately disconnect all connections between left and right lungs, and the guided dynamic programming algorithm is able to remove redundant processing.
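The core of such a scheme is a least-cost path computed by dynamic programming. This minimal sketch traces a vertical separation path through a synthetic cost image, restricting the search to a column band as a crude stand-in for the guided start/end-point selection; all sizes and costs are assumptions.

```python
# Least-cost vertical path (8-connected) through a cost image by
# dynamic programming, confined to a guidance band of columns.
import numpy as np

rng = np.random.default_rng(7)
H, W = 64, 64
cost = rng.random((H, W))
cost[:, 30:34] *= 0.1                 # cheap corridor ~ the junction line

start_col, end_col, band = 32, 32, 8  # adaptively chosen in the real scheme
lo = max(0, min(start_col, end_col) - band)
hi = min(W, max(start_col, end_col) + band)

acc = np.full((H, W), np.inf)
acc[0, start_col] = cost[0, start_col]
for y in range(1, H):                 # forward DP over rows
    for x in range(lo, hi):
        prev = acc[y - 1, max(lo, x - 1):min(hi, x + 2)]
        acc[y, x] = cost[y, x] + prev.min()

path = [end_col]                      # backtrack the separation path
for y in range(H - 1, 0, -1):
    x = path[-1]
    xs = np.arange(max(lo, x - 1), min(hi, x + 2))
    path.append(int(xs[np.argmin(acc[y - 1, xs])]))
print("path stays within band:", all(lo <= x < hi for x in path))
```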
Sequential episodes of ethylene glycol poisoning in the same person.
Sugunaraj, Jaya Prakash; Thakur, Lokendra Kumar; Jha, Kunal Kishor; Bucaloiu, Ion Dan
2017-05-27
Ethylene glycol is a common alcohol found in many household products such as household hard surface cleaner, paints, varnish, auto glass cleaner and antifreeze. While extremely toxic and often fatal on ingestion, few cases with early presentation by the patient have resulted in death; thus, rapid diagnosis is paramount to effectively treating ethylene glycol poisoning. In this study, we compare two sequential cases of ethylene glycol poisoning in a single individual, which resulted in strikingly different outcomes. © BMJ Publishing Group Ltd (unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Rapid code acquisition algorithms employing PN matched filters
NASA Technical Reports Server (NTRS)
Su, Yu T.
1988-01-01
The performance of four algorithms using pseudonoise matched filters (PNMFs), for direct-sequence spread-spectrum systems, is analyzed. They are: parallel search with fixed-dwell detector (PL-FDD), parallel search with sequential detector (PL-SD), parallel-serial search with fixed-dwell detector (PS-FDD), and parallel-serial search with sequential detector (PS-SD). The operating characteristic for each detector and the mean acquisition time for each algorithm are derived. All the algorithms are studied in conjunction with the noncoherent integration technique, which enables the system to operate in the presence of data modulation. Several previous proposals using PNMFs are seen as special cases of the present algorithms.
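A bare-bones PNMF acquisition sketch, assuming a random ±1 code as a stand-in for an m-sequence: the matched filter correlates the received chips against every code phase, and a noncoherent statistic picks the phase. A sequential detector would instead accumulate the statistic over dwells until a threshold is crossed.

```python
# Parallel code-phase acquisition with a matched filter and a
# noncoherent (squared-correlation) decision statistic.
import numpy as np

rng = np.random.default_rng(8)
N = 127
pn = rng.choice([-1.0, 1.0], size=N)             # stand-in for an m-sequence
true_phase, snr = 37, 0.5                        # assumed scenario
rx = np.roll(pn, true_phase) + rng.normal(0, 1 / snr ** 0.5, N)

# matched filter: correlate rx against all N cyclic shifts at once
corr = np.array([np.dot(rx, np.roll(pn, k)) for k in range(N)])
stat = corr ** 2                                 # noncoherent decision statistic
phase = int(np.argmax(stat))
print("estimated phase:", phase, "| true phase:", true_phase)
```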
Method for rapid base sequencing in DNA and RNA with two base labeling
Jett, J.H.; Keller, R.A.; Martin, J.C.; Posner, R.G.; Marrone, B.L.; Hammond, M.L.; Simpson, D.J.
1995-04-11
A method is described for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand. 4 figures.
Method for rapid base sequencing in DNA and RNA with two base labeling
Jett, James H.; Keller, Richard A.; Martin, John C.; Posner, Richard G.; Marrone, Babetta L.; Hammond, Mark L.; Simpson, Daniel J.
1995-01-01
Method for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand.
ERIC Educational Resources Information Center
Jeong, Allan
2005-01-01
This paper proposes a set of methods and a framework for evaluating, modeling, and predicting group interactions in computer-mediated communication. The method of sequential analysis is described along with specific software tools and techniques to facilitate the analysis of message-response sequences. In addition, the Dialogic Theory and its…
Optical Computing Based on Neuronal Models
1988-05-01
walking, and cognition are far too complex for existing sequential digital computers. Therefore new architectures, hardware, and algorithms modeled... collective behavior, and iterative processing into optical processing and artificial neurodynamical systems. Another intriguing promise of neural nets is... with architectures, implementations, and programming; and materials research is called for. Our future research in neurodynamics will continue to
Dissociating hippocampal and striatal contributions to sequential prediction learning
Bornstein, Aaron M.; Daw, Nathaniel D.
2011-01-01
Behavior may be generated on the basis of many different kinds of learned contingencies. For instance, responses could be guided by the direct association between a stimulus and response, or by sequential stimulus-stimulus relationships (as in model-based reinforcement learning or goal-directed actions). However, the neural architecture underlying sequential predictive learning is not well-understood, in part because it is difficult to isolate its effect on choice behavior. To track such learning more directly, we examined reaction times (RTs) in a probabilistic sequential picture identification task. We used computational learning models to isolate trial-by-trial effects of two distinct learning processes in behavior, and used these as signatures to analyze the separate neural substrates of each process. RTs were best explained via the combination of two delta rule learning processes with different learning rates. To examine neural manifestations of these learning processes, we used functional magnetic resonance imaging to seek correlates of timeseries related to expectancy or surprise. We observed such correlates in two regions, hippocampus and striatum. By estimating the learning rates best explaining each signal, we verified that they were uniquely associated with one of the two distinct processes identified behaviorally. These differential correlates suggest that complementary anticipatory functions drive each region's effect on behavior. Our results provide novel insights as to the quantitative computational distinctions between medial temporal and basal ganglia learning networks and enable experiments that exploit trial-by-trial measurement of the unique contributions of both hippocampus and striatum to response behavior. PMID:22487032
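A minimal sketch of the dual delta-rule idea: two learners with different (assumed) learning rates track outcome probability, and their combined expectancy speeds or slows a simulated reaction time. Parameters are illustrative, not fitted values from the study.

```python
# Two delta-rule learners with distinct learning rates; their mixture
# modulates a simulated RT (expected events produce faster responses).
import numpy as np

rng = np.random.default_rng(9)
T = 500
outcome = (rng.random(T) < 0.7).astype(float)    # probabilistic sequence

def delta_rule(outcome, alpha):
    v, vs = 0.5, []
    for o in outcome:
        vs.append(v)
        v += alpha * (o - v)                     # delta-rule update
    return np.array(vs)

fast = delta_rule(outcome, alpha=0.4)            # fast-learning process
slow = delta_rule(outcome, alpha=0.02)           # slow-learning process
expect = 0.5 * fast + 0.5 * slow                 # assumed equal mixture

base, gain = 450.0, 120.0                        # ms, assumed
rt = (base - gain * np.where(outcome == 1, expect, 1 - expect)
      + rng.normal(0, 15, T))
print("mean RT, expected vs rare outcomes: %.0f vs %.0f ms"
      % (rt[outcome == 1].mean(), rt[outcome == 0].mean()))
```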
NASA Technical Reports Server (NTRS)
Oza, D. H.; Jones, T. L.; Feiertag, R.; Samii, M. V.; Doll, C. E.; Mistretta, G. D.; Hart, R. C.
1993-01-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD) commissioned Applied Technology Associates, Incorporated, to develop the Real-Time Orbit Determination/Enhanced (RTOD/E) system on a Disk Operating System (DOS)-based personal computer (PC) as a prototype system for sequential orbit determination of spacecraft. This paper presents the results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite (TDRS) System (TDRSS) user spacecraft, Landsat-4, obtained using RTOD/E, operating on a PC, with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), operating on a mainframe computer. The results of Landsat-4 orbit determination will provide useful experience for the Earth Observing System (EOS) series of satellites. The Landsat-4 ephemerides were estimated for the May 18-24, 1992, timeframe, during which intensive TDRSS tracking data for Landsat-4 were available. During this period, there were two separate orbit-adjust maneuvers on one of the TDRSS spacecraft (TDRS-East) and one small orbit-adjust maneuver for Landsat-4. Independent assessments were made of the consistencies (overlap comparisons for the batch case and covariances and the first measurement residuals for the sequential case) of solutions produced by the batch and sequential methods. The forward-filtered RTOD/E orbit solutions were compared with the definitive GTDS orbit solutions for Landsat-4; the solution differences were generally less than 30 meters after the filter had reached steady state.
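The batch-versus-sequential contrast can be reduced to a toy tracking problem: a Kalman filter processes measurements one at a time, while a batch least-squares fit estimates the initial state over the whole arc. This is only a stand-in for the far richer orbit-determination setting, with all noise levels assumed.

```python
# Sequential (Kalman) vs batch least-squares estimation of a
# constant-velocity track from noisy position measurements.
import numpy as np

rng = np.random.default_rng(10)
T, q, r = 100, 1e-4, 0.5                        # assumed noise levels
F = np.array([[1.0, 1.0], [0.0, 1.0]])          # state transition
H = np.array([[1.0, 0.0]])                      # measure position only

x = np.array([0.0, 0.1]); xs, zs = [], []
for _ in range(T):                              # simulate truth + measurements
    x = F @ x + rng.normal(0, q ** 0.5, 2)
    xs.append(x.copy()); zs.append(float(H @ x + rng.normal(0, r ** 0.5)))
xs, zs = np.array(xs), np.array(zs)

xk, P, est = np.zeros(2), np.eye(2), []         # sequential: Kalman filter
for z in zs:
    xk, P = F @ xk, F @ P @ F.T + q * np.eye(2)           # predict
    S = H @ P @ H.T + r
    K = P @ H.T / S                                        # gain
    xk = xk + (K * (z - H @ xk)).ravel()                   # update
    P = (np.eye(2) - K @ H) @ P
    est.append(xk.copy())
est = np.array(est)

# batch: least-squares fit of the initial state over the whole arc
A = np.array([np.linalg.matrix_power(F, k + 1)[0] for k in range(T)])
x0 = np.linalg.lstsq(A, zs, rcond=None)[0]
batch = np.array([np.linalg.matrix_power(F, k + 1) @ x0 for k in range(T)])

print("filter RMS err: %.3f | batch RMS err: %.3f"
      % (np.sqrt(((est[:, 0] - xs[:, 0]) ** 2).mean()),
         np.sqrt(((batch[:, 0] - xs[:, 0]) ** 2).mean())))
```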
Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.
Brezis, Noam; Bronfman, Zohar Z; Usher, Marius
2015-06-04
We investigated the mechanisms by which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and an RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.
Two-proton capture on the 68Se nucleus with a new self-consistent cluster model
NASA Astrophysics Data System (ADS)
Hove, D.; Garrido, E.; Jensen, A. S.; Sarriguren, P.; Fynbo, H. O. U.; Fedorov, D. V.; Zinner, N. T.
2018-07-01
We investigate the two-proton capture reaction of the prominent rapid proton capture waiting point nucleus, 68Se, that produces the borromean nucleus 70Kr (68Se + p + p). We apply a recently formulated general model where the core nucleus, 68Se, is treated in the mean-field approximation and the three-body problem of the two valence protons and the core is solved exactly. We compare two popular Skyrme interactions, SLy4 and SkM*. We calculate E2 electromagnetic two-proton dissociation and capture cross sections, and derive the temperature-dependent capture rates. We vary the unknown 2+ resonance energy without changing any of the structures computed self-consistently for both core and valence particles. We find rates that increase quickly with temperature below 2-4 GK, above which they vary by about a factor of two, independent of the 2+ resonance energy. The capture mechanism is sequential through the f5/2 proton-core resonance, but the continuum background contributes significantly.
An algorithm of discovering signatures from DNA databases on a computer cluster.
Lee, Hsiao Ping; Sheu, Tzu-Fang
2014-10-05
Signatures are short sequences that are unique and not similar to any other sequence in a database, and that can be used as the basis to identify different species. Even though several signature discovery algorithms have been proposed in the past, these algorithms require the entirety of a database to be loaded in memory, thus restricting the amount of data that they can process and making them unable to process large databases. Also, those algorithms use sequential models and have slower discovery speeds, meaning that the efficiency can be improved. In this research, we debut the use of a divide-and-conquer strategy in signature discovery and propose a parallel signature discovery algorithm on a computer cluster. The algorithm applies the divide-and-conquer strategy to solve the problem posed to the existing algorithms, where they are unable to process large databases, and uses a parallel computing mechanism to effectively improve the efficiency of signature discovery. Even when run with just the memory of regular personal computers, the algorithm can still process large databases such as the human whole-genome EST database, which the existing algorithms were previously unable to process. The algorithm proposed in this research is not limited by the amount of usable memory and can rapidly find signatures in large databases, making it useful in applications such as Next Generation Sequencing and other large database analysis and processing. The implementation of the proposed algorithm is available at http://www.cs.pu.edu.tw/~fang/DDCSDPrograms/DDCSD.htm.
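A toy version of divide-and-conquer signature discovery, assuming signatures are simply k-mers unique to one sequence: the database is scanned in chunks so that each pass reads only a bounded slice of the input. Real tools handle genome-scale data and a richer dissimilarity criterion.

```python
# Scan a sequence database in chunks; a "signature" here is a k-mer
# that occurs in exactly one sequence (simplified uniqueness).
from collections import Counter
from itertools import islice

db = {"sp1": "ACGTACGGAC", "sp2": "TTGACCAGTA", "sp3": "ACGTTTGACA"}
K = 4

def kmers(seq, k=K):
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def chunks(items, size):
    it = iter(items)
    while batch := list(islice(it, size)):
        yield batch

# "divide": count per-sequence k-mer occurrences chunk by chunk
counts, owner = Counter(), {}
for batch in chunks(db.items(), size=2):          # bounded per-pass input
    for name, seq in batch:
        for km in set(kmers(seq)):
            counts[km] += 1
            owner[km] = name

# "conquer": keep k-mers seen in exactly one sequence
signatures = {}
for km, c in counts.items():
    if c == 1:
        signatures.setdefault(owner[km], []).append(km)
for name in db:
    print(name, sorted(signatures.get(name, [])))
```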
Wendt, D; Schmidt, D; Wasserfuhr, D; Osswald, B; Thielmann, M; Tossios, P; Kühl, H; Jakob, H; Massoudy, P
2010-09-01
The superiority of left internal thoracic artery (LITA) grafting to the left anterior descending artery (LAD) is well established. Patency rates of 80%-90% have been reported at 10-year follow-up. However, the superiority of sequential LITA grafting has not been proven. Our aim was to compare patency rates after sequential LITA grafting to a diagonal branch and the LAD with patency rates of LITA grafting to the LAD and separate vein grafting to a diagonal branch. A total of 58 coronary artery bypass graft (CABG) patients, operated on between 01/2000 and 12/2002, underwent multi-slice computed tomography (MSCT) between 2006 and 2008. Of these patients, 29 had undergone sequential LITA grafting to a diagonal branch and to the LAD ("Sequential" Group), while in 29 the LAD and a diagonal branch were separately grafted with LITA and vein ("Separate" Group). Patencies of all anastomoses were investigated. Mean follow-up was 1958±208 days. The patency rate of the LAD anastomosis was 100% in the Sequential Group and 93% in the Separate Group (p=0.04). The patency rate of the diagonal branch anastomosis was 100% in the Sequential Group and 89% in the Separate Group (p=0.04). Mean intraoperative flow on LITA graft was not different between groups (69±8ml/min in the Sequential Group and 68±9ml/min in the Separate Group, p=n.s.). Patency rates of both the LAD and the diagonal branch anastomoses were higher after sequential arterial grafting compared with separate arterial and venous grafting at 5-year follow-up. This indicates that, with regard to the antero-lateral wall of the left ventricle, there is an advantage to sequential arterial grafting compared with separate arterial and venous grafting.
Exact parallel algorithms for some members of the traveling salesman problem family
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pekny, J.F.
1989-01-01
The traveling salesman problem and its many generalizations comprise one of the best known combinatorial optimization problem families. Most members of the family are NP-complete problems, so that exact algorithms require an unpredictable and sometimes large computational effort. Parallel computers offer hope for providing the power required to meet these demands. A major barrier to applying parallel computers is the lack of parallel algorithms. The contributions presented in this thesis center around new exact parallel algorithms for the asymmetric traveling salesman problem (ATSP), prize collecting traveling salesman problem (PCTSP), and resource constrained traveling salesman problem (RCTSP). The RCTSP is a particularly difficult member of the family since finding a feasible solution is an NP-complete problem. An exact sequential algorithm is also presented for the directed hamiltonian cycle problem (DHCP). The DHCP algorithm is superior to current heuristic approaches and represents the first exact method applicable to large graphs. Computational results presented for each of the algorithms demonstrate the effectiveness of combining efficient algorithms with parallel computing methods. Performance statistics are reported for randomly generated ATSPs with 7,500 cities, PCTSPs with 200 cities, RCTSPs with 200 cities, DHCPs with 3,500 vertices, and assignment problems of size 10,000. Sequential results were collected on a Sun 4/260 engineering workstation, while parallel results were collected using a 14 and 100 processor BBN Butterfly Plus computer. The computational results represent the largest instances ever solved to optimality on any type of computer.
Precise algorithm to generate random sequential adsorption of hard polygons at saturation
NASA Astrophysics Data System (ADS)
Zhang, G.
2018-04-01
Random sequential adsorption (RSA) is a time-dependent packing process, in which particles of certain shapes are randomly and sequentially placed into an empty space without overlap. In the infinite-time limit, the density approaches a "saturation" limit. Although this limit has attracted particular research interest, the majority of past studies could only probe this limit by extrapolation. We have previously found an algorithm to reach this limit using finite computational time for spherical particles and could thus determine the saturation density of spheres with high accuracy. In this paper, we generalize this algorithm to generate saturated RSA packings of two-dimensional polygons. We also calculate the saturation density for regular polygons of three to ten sides and obtain results that are consistent with previous, extrapolation-based studies.
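For intuition, here is a naive finite-time RSA simulation of hard disks (not the polygons of the paper, and not the exact saturation algorithm, which tracks the remaining available space rather than blindly rejecting). It shows the process whose infinite-time density the authors compute exactly; all parameters are illustrative.

```python
import math, random

def rsa_disks(radius, n_attempts, box=1.0, seed=0):
    """Naive random sequential adsorption of hard disks in a periodic box.
    Density only approaches saturation as attempts grow without bound,
    which is exactly why extrapolation-free methods are valuable."""
    rng = random.Random(seed)
    placed, d2 = [], (2 * radius) ** 2
    for _ in range(n_attempts):
        x, y = rng.random() * box, rng.random() * box
        for px, py in placed:
            dx = min(abs(x - px), box - abs(x - px))   # periodic distance
            dy = min(abs(y - py), box - abs(y - py))
            if dx * dx + dy * dy < d2:
                break                                  # overlap: reject
        else:
            placed.append((x, y))
    return placed

pts = rsa_disks(radius=0.05, n_attempts=20000)
print(len(pts), "disks, covered fraction ~", round(len(pts) * math.pi * 0.05**2, 3))
```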
Fast-responding liquid crystal light-valve technology for color-sequential display applications
NASA Astrophysics Data System (ADS)
Janssen, Peter J.; Konovalov, Victor A.; Muravski, Anatoli A.; Yakovenko, Sergei Y.
1996-04-01
A color sequential projection system has some distinct advantages over conventional systems which make it uniquely suitable for consumer TV as well as high-performance professional applications such as computer monitors and electronic cinema. A fast-responding light-valve is, clearly, essential for a well-performing system. Response speed of transmissive LC light-valves has been marginal thus far for good color rendition. Recently, Sevchenko Institute has made some very fast reflective LC cells which were evaluated at Philips Labs. These devices showed sub-millisecond large-signal response times, even at room temperature, and produced good color in a projector emulation testbed. In our presentation we describe our highly efficient color sequential projector and demonstrate its operation on video tape. Next we discuss light-valve requirements and reflective light-valve test results.
Bursts and heavy tails in temporal and sequential dynamics of foraging decisions.
Jung, Kanghoon; Jang, Hyeran; Kralik, Jerald D; Jeong, Jaeseung
2014-08-01
A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices.
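A toy rendering of the two proposed mechanisms may help: a dual-state timing process (short exponential gaps inside bursts, heavy-tailed pauses between them) and preferential-attachment choice among four foods. The parameter values below are illustrative assumptions, not fits to the rat data.

```python
import random

def simulate_foraging(n_events=1000, seed=1):
    """Dual-state timing plus preferential-attachment choice among
    four foods; a caricature of the paper's dual models."""
    rng = random.Random(seed)
    counts = [1, 1, 1, 1]                    # pseudo-counts per food
    t, times, choices = 0.0, [], []
    for _ in range(n_events):
        if rng.random() < 0.8:               # stay in the active (burst) state
            t += rng.expovariate(1.0)        # short within-burst gap
        else:                                # inactive state between bursts
            t += 10.0 * rng.paretovariate(1.5)   # heavy-tailed pause
        r, acc = rng.random() * sum(counts), 0.0
        for food, c in enumerate(counts):    # choose proportionally to history
            acc += c
            if r <= acc:
                counts[food] += 1
                choices.append(food)
                break
        times.append(t)
    return times, choices

times, choices = simulate_foraging()
print("choice distribution:", [choices.count(f) for f in range(4)])
```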
Attentional Capture by Emotional Stimuli Is Modulated by Semantic Processing
ERIC Educational Resources Information Center
Huang, Yang-Ming; Baddeley, Alan; Young, Andrew W.
2008-01-01
The attentional blink paradigm was used to examine whether emotional stimuli always capture attention. The processing requirement for emotional stimuli in a rapid sequential visual presentation stream was manipulated to investigate the circumstances under which emotional distractors capture attention, as reflected in an enhanced attentional blink…
Lagator, Mato; Colegrave, Nick; Neve, Paul
2014-11-07
In rapidly changing environments, selection history may impact the dynamics of adaptation. Mutations selected in one environment may result in pleiotropic fitness trade-offs in subsequent novel environments, slowing the rates of adaptation. Epistatic interactions between mutations selected in sequential stressful environments may slow or accelerate subsequent rates of adaptation, depending on the nature of that interaction. We explored the dynamics of adaptation during sequential exposure to herbicides with different modes of action in Chlamydomonas reinhardtii. Evolution of resistance to two of the herbicides was largely independent of selection history. For carbetamide, previous adaptation to other herbicide modes of action positively impacted the likelihood of adaptation to this herbicide. Furthermore, while adaptation to all individual herbicides was associated with pleiotropic fitness costs in stress-free environments, we observed that accumulation of resistance mechanisms was accompanied by a reduction in overall fitness costs. We suggest that antagonistic epistasis may be a driving mechanism that enables populations to more readily adapt in novel environments. These findings highlight the potential for sequences of xenobiotics to facilitate the rapid evolution of multiple-drug and -pesticide resistance, as well as the potential for epistatic interactions between adaptive mutations to facilitate evolutionary rescue in rapidly changing environments. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Gautam, Pawan; Valiathan, Ashima; Adhikari, Raviraj
2007-07-01
The purpose of this finite element study was to evaluate stress distribution along craniofacial sutures and displacement of various craniofacial structures with rapid maxillary expansion (RME) therapy. The analytic model for this study was developed from sequential computed tomography scan images taken at 2.5-mm intervals of a dry young human skull. Subsequently, a finite element method model was developed from computed tomography images by using AutoCAD software (2004 version, Autodesk, Inc, San Rafael, Calif) and ANSYS software (version 10, Belcan Engineering Group, Downers Grove, Ill). The maxilla moved anteriorly and downward and rotated clockwise in response to RME. The pterygoid plates were displaced laterally. The distant structures of the craniofacial skeleton--zygomatic bone, temporal bone, and frontal bone--were also affected by transverse orthopedic forces. The center of rotation of the maxilla in the X direction was somewhere between the lateral and the medial pterygoid plates. In the frontal plane, the center of rotation of the maxilla was approximately at the superior orbital fissure. The maximum von Mises stresses were found along the frontomaxillary, nasomaxillary, and frontonasal sutures. Both tensile and compressive stresses could be demonstrated along the same suture. RME facilitates expansion of the maxilla in both the molar and the canine regions. It also causes downward and forward displacement of the maxilla and thus can contribute to the correction of mild Class III malocclusion. The downward displacement and backward rotation of the maxilla could be a concern in patients with excessive lower anterior facial height. High stresses along the deep structures and the various sutures of the craniofacial skeleton signify the role of the circummaxillary sutural system in downward and forward displacement of the maxilla after RME.
Gong, Bo; Wu, Yuhao; O'Keeffe, Michael E; Berger, Ferco H; McLaughlin, Patrick D; Nicolaou, Savvas; Khosa, Faisal
2017-01-01
This study aims to identify the 50 most highly cited articles on dual energy computed tomography (DECT) in abdominal radiology. Thomson Reuters Web of Science All Databases was queried without year or language restriction. Only original research articles with a primary focus on abdominal radiology using DECT were selected. Review articles, meta-analyses, and studies without human subjects were excluded. Fifty articles with the highest average yearly citation were identified. These articles were published between 2007 and 2017 in 12 journals, with the most in Radiology (12 articles). Articles had a median of 7 authors, with all first authors but one primarily affiliated with radiology departments. The United States of America produced the most articles (16), followed by Germany (13 articles), and China (7 articles). Most studies used Dual Source DECT technology (35 articles), followed by Rapid Kilovoltage Switching (14 articles), and Sequential Scanning (1 article). The top three scanned organs were the liver (24%), kidney (16%), and urinary tract (15%). The most commonly studied pathologies were urinary calculi (28%), renal lesions/tumors (23%), and hepatic lesions/tumors (20%). Our study identifies intellectual milestones in the applications of DECT in abdominal radiology. The diversity of the articles reflects the characteristics and quality of the most influential publications related to DECT.
Formalizing Neurath's ship: Approximate algorithms for online causal learning.
Bramley, Neil R; Dayan, Peter; Griffiths, Thomas L; Lagnado, David A
2017-04-01
Higher-level cognition depends on the ability to learn models of the world. We can characterize this at the computational level as a structure-learning problem with the goal of best identifying the prevailing causal relationships among a set of relata. However, the computational cost of performing exact Bayesian inference over causal models grows rapidly as the number of relata increases. This implies that the cognitive processes underlying causal learning must be substantially approximate. A powerful class of approximations that focuses on the sequential absorption of successive inputs is captured by the Neurath's ship metaphor in philosophy of science, where theory change is cast as a stochastic and gradual process shaped as much by people's limited willingness to abandon their current theory when considering alternatives as by the ground truth they hope to approach. Inspired by this metaphor and by algorithms for approximating Bayesian inference in machine learning, we propose an algorithmic-level model of causal structure learning under which learners represent only a single global hypothesis that they update locally as they gather evidence. We propose a related scheme for understanding how, under these limitations, learners choose informative interventions that manipulate the causal system to help elucidate its workings. We find support for our approach in the analysis of 3 experiments. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
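A minimal sketch of the algorithmic idea, a learner that carries a single global hypothesis and edits it locally as evidence accumulates, might look as follows. This stand-in scores linear-Gaussian structures with BIC and only blocks two-node cycles (a real learner would enforce full acyclicity); it is not the authors' model.

```python
import numpy as np

def bic_score(data, parents):
    """BIC of a linear-Gaussian network: regress each node on its parents."""
    n, score = data.shape[0], 0.0
    for node, pa in parents.items():
        X = np.column_stack([data[:, pa], np.ones(n)]) if pa else np.ones((n, 1))
        beta = np.linalg.lstsq(X, data[:, node], rcond=None)[0]
        resid = data[:, node] - X @ beta
        score += -0.5 * n * np.log(resid.var() + 1e-12) \
                 - 0.5 * X.shape[1] * np.log(n)
    return score

def local_search(data, n_steps=300, seed=0):
    """Keep one global hypothesis; propose single-edge edits, accept
    improvements. Only 2-cycles are blocked in this sketch."""
    rng = np.random.default_rng(seed)
    k = data.shape[1]
    parents = {i: [] for i in range(k)}          # start from the empty graph
    score = bic_score(data, parents)
    for _ in range(n_steps):
        i, j = rng.choice(k, size=2, replace=False)
        prop = {node: list(p) for node, p in parents.items()}
        if i in prop[j]:
            prop[j].remove(i)                    # delete edge i -> j
        elif j not in prop[i]:
            prop[j].append(i)                    # add edge i -> j
        new = bic_score(data, prop)
        if new > score:
            parents, score = prop, new
    return parents

rng = np.random.default_rng(3)
x = rng.normal(size=500)
y = 2.0 * x + rng.normal(size=500)
z = y - x + rng.normal(size=500)
print(local_search(np.column_stack([x, y, z])))
```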
Acetylcholine-modulated plasticity in reward-driven navigation: a computational study.
Zannone, Sara; Brzosko, Zuzanna; Paulsen, Ole; Clopath, Claudia
2018-06-21
Neuromodulation plays a fundamental role in the acquisition of new behaviours. In previous experimental work, we showed that acetylcholine biases hippocampal synaptic plasticity towards depression, and the subsequent application of dopamine can retroactively convert depression into potentiation. We also demonstrated that incorporating this sequentially neuromodulated Spike-Timing-Dependent Plasticity (STDP) rule in a network model of navigation yields effective learning of changing reward locations. Here, we employ computational modelling to further characterize the effects of cholinergic depression on behaviour. We find that acetylcholine, by allowing learning from negative outcomes, enhances exploration over the action space. We show that this results in a variety of effects, depending on the structure of the model, the environment and the task. Interestingly, sequentially neuromodulated STDP also yields flexible learning, surpassing the performance of other reward-modulated plasticity rules.
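The plasticity rule can be caricatured in a few lines: cholinergic pairings leave a negative (depressing) eligibility trace, and dopamine arriving afterwards flips what remains of the trace into potentiation. The time constant, learning rate, and conversion factor below are illustrative assumptions, not the paper's fitted values.

```python
import math

def neuromodulated_stdp(pairing_times, dopamine_time, w0=0.5,
                        eta=0.01, tau=20.0):
    """Toy sequentially neuromodulated STDP: under acetylcholine each
    pre/post pairing adds a negative (depressing) eligibility trace;
    dopamine arriving later converts the remaining trace into potentiation."""
    trace, last_t = 0.0, 0.0
    for t in pairing_times:                 # cholinergic pairing phase
        trace = trace * math.exp(-(t - last_t) / tau) - eta
        last_t = t
    if dopamine_time is not None:           # retroactive conversion by DA
        trace = -trace * math.exp(-(dopamine_time - last_t) / tau)
    return max(0.0, w0 + trace)             # weight stays non-negative

print(neuromodulated_stdp(range(10), dopamine_time=None))  # net depression
print(neuromodulated_stdp(range(10), dopamine_time=12.0))  # net potentiation
```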
Wald Sequential Probability Ratio Test for Space Object Conjunction Assessment
NASA Technical Reports Server (NTRS)
Carpenter, James R.; Markley, F Landis
2014-01-01
This paper shows how satellite owner/operators may use sequential estimates of collision probability, along with a prior assessment of the base risk of collision, in a compound hypothesis ratio test to inform decisions concerning collision risk mitigation maneuvers. The compound hypothesis test reduces to a simple probability ratio test, which appears to be a novel result. The test satisfies tolerances related to targeted false alarm and missed detection rates. This result is independent of the method one uses to compute the probability density that one integrates to compute collision probability. A well-established test case from the literature shows that this test yields acceptable results within the constraints of a typical operational conjunction assessment decision timeline. Another example illustrates the use of the test in a practical conjunction assessment scenario based on operations of the International Space Station.
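The core of a Wald test is small enough to sketch. Here the stream of collision-probability estimates is reduced to a hypothetical Bernoulli form (1 if an update exceeds a threshold), with boundaries set from the targeted false-alarm rate alpha and missed-detection rate beta; the paper's compound-hypothesis machinery is not reproduced.

```python
import math

def sprt(samples, p_collision, p_base, alpha=0.01, beta=0.01):
    """Wald sequential probability ratio test on a binary evidence stream.
    H0: event probability p_base; H1: event probability p_collision."""
    a = math.log(beta / (1.0 - alpha))       # accept-H0 boundary
    b = math.log((1.0 - beta) / alpha)       # accept-H1 boundary
    llr = 0.0
    for n, x in enumerate(samples, 1):
        p1 = p_collision if x else 1.0 - p_collision
        p0 = p_base if x else 1.0 - p_base
        llr += math.log(p1 / p0)
        if llr <= a:
            return "no maneuver needed", n
        if llr >= b:
            return "plan mitigation maneuver", n
    return "undecided", len(samples)

print(sprt([1, 1, 0, 1, 1, 1, 1], p_collision=0.5, p_base=0.1))
```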
NASA Astrophysics Data System (ADS)
Svensson, Andreas; Schön, Thomas B.; Lindsten, Fredrik
2018-05-01
Probabilistic (or Bayesian) modeling and learning offers interesting possibilities for systematic representation of uncertainty using probability theory. However, probabilistic learning often leads to computationally challenging problems. Some problems of this type that were previously intractable can now be solved on standard personal computers thanks to recent advances in Monte Carlo methods. In particular, for learning of unknown parameters in nonlinear state-space models, methods based on the particle filter (a Monte Carlo method) have proven very useful. A notoriously challenging problem, however, still occurs when the observations in the state-space model are highly informative, i.e., when there is very little or no measurement noise present relative to the amount of process noise. The particle filter will then struggle in estimating one of the basic components for probabilistic learning, namely the likelihood p(data | parameters). To this end we suggest an algorithm which initially assumes that there is a substantial amount of artificial measurement noise present. The variance of this noise is sequentially decreased in an adaptive fashion such that we, in the end, recover the original problem or possibly a very close approximation of it. The main component in our algorithm is a sequential Monte Carlo (SMC) sampler, which gives our proposed method a clear resemblance to the SMC² method. Another natural link is also made to the ideas underlying approximate Bayesian computation (ABC). We illustrate the method with numerical examples, and in particular show promising results for a challenging Wiener-Hammerstein benchmark problem.
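The key idea, estimating the likelihood with a particle filter while an artificial measurement-noise variance r is annealed toward zero, can be sketched on a toy model. The state-space model, schedule, and constants below are assumptions; the paper embeds this in an SMC sampler over parameters with an adaptive schedule.

```python
import numpy as np

def particle_loglik(y, theta, r, n_particles=500, seed=0):
    """Bootstrap particle filter log-likelihood for a toy model:
    x' = theta x + N(0,1),  y = x^2 / 20 + artificial noise of variance r."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=n_particles)
    ll = 0.0
    for yt in y:
        x = theta * x + rng.normal(size=n_particles)        # propagate
        w = np.exp(-0.5 * (yt - x ** 2 / 20.0) ** 2 / r)    # Gaussian weight
        w = w / np.sqrt(2.0 * np.pi * r) + 1e-300
        ll += np.log(w.mean())
        x = rng.choice(x, size=n_particles, p=w / w.sum())  # resample
    return ll

# Simulate (nearly) noise-free observations, then anneal the artificial
# measurement-noise variance r toward the original problem.
rng = np.random.default_rng(1)
xs = [0.0]
for _ in range(50):
    xs.append(0.9 * xs[-1] + rng.normal())
y = np.array([s ** 2 / 20.0 for s in xs[1:]])
for r in (1.0, 0.3, 0.1, 0.03):
    print(f"r = {r:4.2f}  loglik(theta=0.9) ~ {particle_loglik(y, 0.9, r):7.1f}")
```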
Progressive data transmission for anatomical landmark detection in a cloud.
Sofka, M; Ralovich, K; Zhang, J; Zhou, S K; Comaniciu, D
2012-01-01
In the concept of cloud-computing-based systems, various authorized users have secure access to patient records from a number of care delivery organizations from any location. This creates a growing need for remote visualization, advanced image processing, state-of-the-art image analysis, and computer-aided diagnosis. This paper proposes a system of algorithms for automatic detection of anatomical landmarks in 3D volumes in the cloud computing environment. The system addresses the inherent problem of limited bandwidth between a (thin) client, data center, and data analysis server. The problem of limited bandwidth is solved by a hierarchical sequential detection algorithm that obtains data by progressively transmitting only image regions required for processing. The client sends a request to detect a set of landmarks for region visualization or further analysis. The algorithm running on the data analysis server obtains a coarse level image from the data center and generates landmark location candidates. The candidates are then used to obtain image neighborhood regions at a finer resolution level for further detection. This way, the landmark locations are hierarchically and sequentially detected and refined. Only image regions surrounding landmark location candidates need to be transmitted during detection. Furthermore, the image regions are lossy compressed with JPEG 2000. Together, these properties amount to at least 30 times bandwidth reduction while achieving similar accuracy when compared to an algorithm using the original data. The hierarchical sequential algorithm with progressive data transmission considerably reduces bandwidth requirements in cloud-based detection systems.
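A toy version of the hierarchical sequential idea: locate a candidate on a coarse, subsampled image, then request only a small full-resolution neighborhood around it. The fetch_region callable is a hypothetical stand-in for a progressive-transmission request to the data center; the real system uses trained detectors, not an argmax.

```python
import numpy as np

def detect_coarse_to_fine(fetch_region, shape, factor=8):
    """Find a candidate on the coarse image, then refine it using only a
    small full-resolution neighborhood (the only region transmitted)."""
    coarse = fetch_region(0, 0, shape[0], shape[1], step=factor)
    cy, cx = np.unravel_index(np.argmax(coarse), coarse.shape)
    y0, x0 = cy * factor, cx * factor            # candidate, full-res coords
    h = 2 * factor                               # refinement window half-size
    ys, xs = max(0, y0 - h), max(0, x0 - h)
    fine = fetch_region(ys, xs, min(shape[0], y0 + h),
                        min(shape[1], x0 + h), step=1)
    fy, fx = np.unravel_index(np.argmax(fine), fine.shape)
    return ys + fy, xs + fx

# Toy "data center": a 512x512 slice with one smooth landmark at (311, 145).
yy, xx = np.mgrid[0:512, 0:512]
img = np.exp(-((yy - 311) ** 2 + (xx - 145) ** 2) / 72.0)
img += np.random.default_rng(0).normal(0.0, 0.02, img.shape)
fetch = lambda y0, x0, y1, x1, step: img[y0:y1:step, x0:x1:step]
print(detect_coarse_to_fine(fetch, img.shape))   # approximately (311, 145)
```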
Microfluidic Capillaric Circuit for Rapid and Facile Bacteria Detection.
Olanrewaju, Ayokunle Oluwafemi; Ng, Andy; DeCorwin-Martin, Philippe; Robillard, Alessandra; Juncker, David
2017-06-20
Urinary tract infections (UTI) are one of the most common bacterial infections and would greatly benefit from a rapid point-of-care diagnostic test. Although significant progress has been made in developing microfluidic systems for nucleic acid and whole bacteria immunoassay tests, their practical application is limited by complex protocols, bulky peripherals, and slow operation. Here we present a microfluidic capillaric circuit (CC) optimized for rapid and automated detection of bacteria in urine. Molds for CCs were constructed using previously established design rules, then 3D-printed and replicated into poly(dimethylsiloxane). CCs autonomously and sequentially performed all liquid delivery steps required for the assay. For efficient bacteria capture, on-the-spot packing of antibody-functionalized microbeads was completed in <20 s, followed by autonomous sequential delivery of 100 μL of bacteria sample, biotinylated detection antibodies, fluorescent streptavidin conjugate, and wash buffer for a total volume ≈115 μL. The assay was completed in <7 min. Fluorescence images of the microbead column revealed captured bacteria as bright spots that were easily counted manually or using an automated script for user-independent assay readout. The limit of detection of E. coli in synthetic urine was 1.2 × 10² colony-forming units per mL (CFU/mL), which is well below the clinical diagnostic criterion (>10⁵ CFU/mL) for UTI. The self-powered, peripheral-free CC presented here has potential for use in rapid point-of-care UTI screening.
Rapid Sequential in Situ Multiplexing with DNA Exchange Imaging in Neuronal Cells and Tissues.
Wang, Yu; Woehrstein, Johannes B; Donoghue, Noah; Dai, Mingjie; Avendaño, Maier S; Schackmann, Ron C J; Zoeller, Jason J; Wang, Shan Shan H; Tillberg, Paul W; Park, Demian; Lapan, Sylvain W; Boyden, Edward S; Brugge, Joan S; Kaeser, Pascal S; Church, George M; Agasti, Sarit S; Jungmann, Ralf; Yin, Peng
2017-10-11
To decipher the molecular mechanisms of biological function, it is critical to map the molecular composition of individual cells, or even more importantly tissue samples, in the context of their biological environment in situ. Immunofluorescence (IF) provides specific labeling for molecular profiling. However, conventional IF methods have finite multiplexing capabilities due to spectral overlap of the fluorophores. Various sequential imaging methods have been developed to circumvent this spectral limit but are not widely adopted due to the common limitation of requiring multiple rounds of slow (typically over 2 h at room temperature to overnight at 4 °C in practice) immunostaining. We present here a practical and robust method, which we call DNA Exchange Imaging (DEI), for rapid in situ spectrally unlimited multiplexing. This technique overcomes speed restrictions by allowing for single-round immunostaining with DNA-barcoded antibodies, followed by rapid (less than 10 min) buffer exchange of fluorophore-bearing DNA imager strands. The programmability of DEI allows us to apply it to diverse microscopy platforms (with Exchange Confocal, Exchange-SIM, Exchange-STED, and Exchange-PAINT demonstrated here) at multiple desired resolution scales (from ∼300 nm down to sub-20 nm). We optimized and validated the use of DEI in complex biological samples, including primary neuron cultures and tissue sections. These results collectively suggest DNA exchange as a versatile, practical platform for rapid, highly multiplexed in situ imaging, potentially enabling new applications ranging from basic science, to drug discovery, and to clinical pathology.
Treating convection in sequential solvers
NASA Technical Reports Server (NTRS)
Shyy, Wei; Thakur, Siddharth
1992-01-01
The treatment of the convection terms in sequential solvers, a standard procedure found in virtually all pressure-based algorithms, is investigated for computing flow problems with sharp gradients and source terms. Both scalar model problems and the one-dimensional gas dynamics equations have been used to study the various issues involved. Different approaches, including the use of nonlinear filtering techniques and the adoption of TVD-type schemes, have been investigated. Special treatments of the source terms, such as pressure gradients and heat release, have also been devised, yielding insight and improved accuracy of the numerical procedure adopted.
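As a concrete instance of the TVD-type treatment mentioned, here is a MUSCL/minmod scheme for 1D linear advection of a step profile. It is a generic textbook scheme, not the authors' solver, but it shows how limiting keeps a sharp gradient free of over- and undershoots.

```python
import numpy as np

def minmod(a, b):
    """Slope limiter: zero at extrema, the smaller slope elsewhere."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_tvd(u, c, n_steps):
    """1D linear advection u_t + a u_x = 0 (a > 0, periodic boundary)
    with a MUSCL/minmod TVD scheme at Courant number c <= 1."""
    for _ in range(n_steps):
        du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
        face = u + 0.5 * (1.0 - c) * du          # value at each right face
        u = u - c * (face - np.roll(face, 1))    # conservative upwind update
    return u

u0 = np.where(np.arange(200) < 50, 1.0, 0.0)     # step profile, sharp gradient
u = advect_tvd(u0.copy(), c=0.5, n_steps=100)
print("min", round(float(u.min()), 4), "max", round(float(u.max()), 4))
```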
A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.
Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L
2016-03-01
Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015 Cognitive Science Society, Inc.
Fast Acceleration of 2D Wave Propagation Simulations Using Modern Computational Accelerators
Wang, Wei; Xu, Lifan; Cavazos, John; Huang, Howie H.; Kay, Matthew
2014-01-01
Recent developments in modern computational accelerators like Graphics Processing Units (GPUs) and coprocessors provide great opportunities for making scientific applications run faster than ever before. However, efficient parallelization of scientific code using new programming tools like CUDA requires a high level of expertise that is not available to many scientists. This, plus the fact that parallelized code is usually not portable to different architectures, creates major challenges for exploiting the full capabilities of modern computational accelerators. In this work, we sought to overcome these challenges by studying how to achieve both automated parallelization using OpenACC and enhanced portability using OpenCL. We applied our parallelization schemes using GPUs as well as an Intel Many Integrated Core (MIC) coprocessor to reduce the run time of wave propagation simulations. We used a well-established 2D cardiac action potential model as a specific case study. To the best of our knowledge, we are the first to study auto-parallelization of 2D cardiac wave propagation simulations using OpenACC. Our results identify several approaches that provide substantial speedups. The OpenACC-generated GPU code achieved substantial speedup over the sequential implementation and required the addition of only a few OpenACC pragmas to the code. An OpenCL implementation provided speedups on GPUs over both the sequential implementation and a parallelized OpenMP implementation. An implementation of OpenMP on the Intel MIC coprocessor provided speedups with only a few code changes to the sequential implementation. We highlight that OpenACC provides an automatic, efficient, and portable approach to achieve parallelization of 2D cardiac wave simulations on GPUs. Our approach of using OpenACC, OpenCL, and OpenMP to parallelize this particular model on modern computational accelerators should be applicable to other computational models of wave propagation in multi-dimensional media. PMID:24497950
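For reference, the sequential heart of such a simulation is a stencil sweep like the NumPy sketch below (a plain 2D wave equation, not the cardiac action potential model). This is the loop nest that the OpenACC pragmas, OpenCL kernels, or OpenMP directives would parallelize; all sizes and constants are illustrative.

```python
import numpy as np

def wave_step(u, u_prev, c2dt2):
    """One explicit leapfrog step of u_tt = c^2 (u_xx + u_yy) on a periodic
    grid; the five-point Laplacian is the hot loop an accelerator targets."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
    return 2.0 * u - u_prev + c2dt2 * lap

n = 256
u = np.zeros((n, n))
u[n // 2, n // 2] = 1.0                    # point disturbance at the center
u_prev = u.copy()
for _ in range(200):                       # sequential time stepping
    u, u_prev = wave_step(u, u_prev, c2dt2=0.25), u
print("energy proxy:", float((u ** 2).sum()))
```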
Evolving binary classifiers through parallel computation of multiple fitness cases.
Cagnoni, Stefano; Bergenti, Federico; Mordonini, Monica; Adorni, Giovanni
2005-06-01
This paper describes two versions of a novel approach to developing binary classifiers, based on two evolutionary computation paradigms: cellular programming and genetic programming. Such an approach achieves high computation efficiency both during evolution and at runtime. Evolution speed is optimized by allowing multiple solutions to be computed in parallel. Runtime performance is optimized explicitly using parallel computation in the case of cellular programming or implicitly taking advantage of the intrinsic parallelism of bitwise operators on standard sequential architectures in the case of genetic programming. The approach was tested on a digit recognition problem and compared with a reference classifier.
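The "intrinsic parallelism of bitwise operators" can be made concrete: pack one boolean fitness case per bit of a machine word, and a single AND/OR/XOR then evaluates a candidate classifier on 64 cases at once. The problem and encoding below are made up for illustration; the paper evolves the programs rather than enumerating them.

```python
import random

random.seed(0)
N = 64                                    # fitness cases per machine word
a = random.getrandbits(N)                 # bit-column for feature a
b = random.getrandbits(N)                 # bit-column for feature b
target = a ^ b                            # hidden rule the classifier must learn

def fitness(program):
    """Count fitness cases (bits) the candidate gets right, all at once."""
    wrong = (program(a, b) ^ target) & ((1 << N) - 1)
    return N - bin(wrong).count("1")

candidates = {
    "a AND b": lambda a, b: a & b,
    "a OR b":  lambda a, b: a | b,
    "a XOR b": lambda a, b: a ^ b,
}
for name, prog in candidates.items():
    print(name, fitness(prog))            # the XOR program scores 64/64
```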
Domingues, Carla Magda Allan S.; de Fátima Pereira, Sirlene; Marreiros, Ana Carolina Cunha; Menezes, Nair; Flannery, Brendan
2015-01-01
In August 2012, the Brazilian Ministry of Health introduced inactivated polio vaccine (IPV) as part of sequential polio vaccination schedule for all infants beginning their primary vaccination series. The revised childhood immunization schedule included 2 doses of IPV at 2 and 4 months of age followed by 2 doses of oral polio vaccine (OPV) at 6 and 15 months of age. One annual national polio immunization day was maintained to provide OPV to all children aged 6 to 59 months. The decision to introduce IPV was based on preventing rare cases of vaccine-associated paralytic polio, financially sustaining IPV introduction, ensuring equitable access to IPV, and preparing for future OPV cessation following global eradication. Introducing IPV during a national multivaccination campaign led to rapid uptake, despite challenges with local vaccine supply due to high wastage rates. Continuous monitoring is required to achieve high coverage with the sequential polio vaccine schedule. PMID:25316829
NASA Astrophysics Data System (ADS)
Heisler, Morgan; Lee, Sieun; Mammo, Zaid; Jian, Yifan; Ju, Myeong Jin; Miao, Dongkai; Raposo, Eric; Wahl, Daniel J.; Merkur, Andrew; Navajas, Eduardo; Balaratnasingam, Chandrakumar; Beg, Mirza Faisal; Sarunic, Marinko V.
2017-02-01
High quality visualization of the retinal microvasculature can improve our understanding of the onset and development of retinal vascular diseases, which are a major cause of visual morbidity and are increasing in prevalence. Optical Coherence Tomography Angiography (OCT-A) images are acquired over multiple seconds and are particularly susceptible to motion artifacts, which are more prevalent when imaging patients with pathology whose ability to fixate is limited. The acquisition of multiple OCT-A images sequentially can be performed for the purpose of removing motion artifact and increasing the contrast of the vascular network through averaging. Due to the motion artifacts, a robust registration pipeline is needed before feature preserving image averaging can be performed. In this report, we present a novel method for a GPU-accelerated pipeline for acquisition, processing, segmentation, and registration of multiple, sequentially acquired OCT-A images to correct for the motion artifacts in individual images for the purpose of averaging. High performance computing, blending CPU and GPU, was introduced to accelerate processing in order to provide high quality visualization of the retinal microvasculature and to enable a more accurate quantitative analysis in a clinically useful time frame. Specifically, image discontinuities caused by rapid micro-saccadic movements and image warping due to smoother reflex movements were corrected by strip-wise affine registration estimated using Scale Invariant Feature Transform (SIFT) keypoints and subsequent local similarity-based non-rigid registration. These techniques improve the image quality, increasing the value for clinical diagnosis and increasing the range of patients for whom high quality OCT-A images can be acquired.
Perkins, R; Williamson, C; Lavaud, J; Mouget, J-L; Campbell, D A
2018-04-16
Photoacclimation by strains of Haslea "blue" diatom species H. ostrearia and H. silbo sp. nov. ined. was investigated with rapid light curves and induction-recovery curves using fast repetition rate fluorescence. Cultures were grown to exponential phase under 50 µmol m⁻² s⁻¹ photosynthetically available radiation (PAR) and then exposed to non-sequential rapid light curves where, once electron transport rate (ETR) had reached saturation, light intensity was decreased and then further increased prior to returning to near growth light intensity. The non-sequential rapid light curve revealed that ETR was not proportional to the instantaneously applied light intensity, due to rapid photoacclimation. Changes in the effective absorption cross sections for open PSII reaction centres (σPSII′) or reaction centre connectivity (ρ) did not account for the observed increases in ETR under extended high light. σPSII′ in fact decreased as a function of a time-dependent induction of regulated excitation dissipation Y(NPQ), once cells were at or above a PAR coinciding with saturation of ETR. Instead, the observed increases in ETR under extended high light were explained by an increase in the rate of PSII reopening, i.e. QA⁻ oxidation. This acceleration of electron transport was strictly light dependent and relaxed within seconds after a return to low light or darkness. The time-dependent nature of ETR upregulation and regulated NPQ induction was verified using induction-recovery curves. Our findings show a time-dependent induction of excitation dissipation, in parallel with very rapid photoacclimation of electron transport, which combine to make ETR independent of short-term changes in PAR. This supports a selective advantage for these diatoms when exposed to fluctuating light in their environment.
ERIC Educational Resources Information Center
Kabadayi, Abdulkadir
2006-01-01
Language, as is known, is acquired under certain conditions: rapid and sequential brain maturation and cognitive development, the need to exchange information and to control others' actions, and an exposure to appropriate speech input. This research aims at analyzing preschoolers' overgeneralizations of the object labeling process in different…
Lecture Recording: Structural and Symbolic Information vs. Flexibility of Presentation
ERIC Educational Resources Information Center
Stolzenberg, Daniel; Pforte, Stefan
2007-01-01
Rapid eLearning is an ongoing trend which enables flexible and cost-effective creation of learning materials. Especially, lecture recording has turned out to be a lightweight method particularly suited for existing lectures and blended learning strategies. In order to not only sequentially play back but offer full-fledged navigation, search and…
On the error probability of general tree and trellis codes with applications to sequential decoding
NASA Technical Reports Server (NTRS)
Johannesson, R.
1973-01-01
An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.
Genetic Parallel Programming: design and implementation.
Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong
2006-01-01
This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializes it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
NASA Technical Reports Server (NTRS)
Oza, D. H.; Jones, T. L.; Hodjatzadeh, M.; Samii, M. V.; Doll, C. E.; Hart, R. C.; Mistretta, G. D.
1991-01-01
The development of the Real-Time Orbit Determination/Enhanced (RTOD/E) system as a prototype system for sequential orbit determination on a Disk Operating System (DOS) based Personal Computer (PC) is addressed. The results of a study to compare the orbit determination accuracy for a Tracking and Data Relay Satellite System (TDRSS) user spacecraft obtained using RTOD/E with the accuracy of an established batch least-squares system, the Goddard Trajectory Determination System (GTDS), are presented. Independent assessments were made to examine the consistency of results obtained by the batch and sequential methods. Comparisons were made between the forward-filtered RTOD/E orbit solutions and definitive GTDS orbit solutions for the Earth Radiation Budget Satellite (ERBS); the maximum solution differences were less than 25 m after the filter had reached steady state.
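The batch-versus-sequential comparison can be miniaturized to a toy constant-velocity track: batch least squares solves for the epoch state from all data at once (GTDS-style), while a Kalman filter ingests one measurement at a time (RTOD/E-style). Dynamics, noise levels, and priors below are illustrative assumptions, not the actual orbit models.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n, sigma = 1.0, 50, 0.5
truth = 3.0 + 0.7 * np.arange(n) * dt             # position = x0 + v t
z = truth + rng.normal(0.0, sigma, n)             # noisy position measurements

# Batch least squares: solve for the epoch state from all data at once.
A = np.column_stack([np.ones(n), np.arange(n) * dt])
x0_hat, v_hat = np.linalg.lstsq(A, z, rcond=None)[0]

# Sequential Kalman filter: ingest one measurement at a time.
x = np.array([0.0, 0.0])                          # [position, velocity]
P = np.eye(2) * 100.0                             # diffuse initial covariance
F = np.array([[1.0, dt], [0.0, 1.0]])             # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                        # position-only observations
R = sigma ** 2
for zk in z:
    x, P = F @ x, F @ P @ F.T                     # predict (no process noise)
    S = H @ P @ H.T + R
    K = P @ H.T / S                               # Kalman gain (2x1)
    x = x + (K * (zk - H @ x)).ravel()            # update state
    P = (np.eye(2) - K @ H) @ P                   # update covariance

# Batch recovers the epoch state; the filter tracks the *current* state.
print("batch epoch state:", round(x0_hat, 2), round(v_hat, 2))
print("filtered state at final time:", np.round(x, 2))
```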
Accurate Reading with Sequential Presentation of Single Letters
Price, Nicholas S. C.; Edwards, Gemma L.
2012-01-01
Rapid, accurate reading is possible when isolated, single words from a sentence are sequentially presented at a fixed spatial location. We investigated if reading of words and sentences is possible when single letters are rapidly presented at the fovea under user-controlled or automatically controlled rates. When tested with complete sentences, trained participants achieved reading rates of over 60 wpm and accuracies of over 90% with the single letter reading (SLR) method and naive participants achieved average reading rates over 30 wpm with greater than 90% accuracy. Accuracy declined as individual letters were presented for shorter periods of time, even when the overall reading rate was maintained by increasing the duration of spaces between words. Words in the lexicon that occur more frequently were identified with higher accuracy and more quickly, demonstrating that trained participants have lexical access. In combination, our data strongly suggest that comprehension is possible and that SLR is a practicable form of reading under conditions in which normal scanning of text is not possible, or for scenarios with limited spatial and temporal resolution such as patients with low vision or prostheses. PMID:23115548
An apparatus for sequentially combining microvolumes of reagents by infrasonic mixing.
Camien, M N; Warner, R C
1984-05-01
A method employing high-speed infrasonic mixing for obtaining timed samples for following the progress of a moderately rapid chemical reaction is described. Drops of 10 to 50 microliter each of two reagents are mixed to initiate the reaction, followed, after a measured time interval, by mixing with a drop of a third reagent to quench the reaction. The method was developed for measuring the rate of denaturation of covalently closed, circular DNA in NaOH at several temperatures. For this purpose the timed samples were analyzed by analytical ultracentrifugation. The apparatus was tested by determination of the rate of hydrolysis of 2,4-dinitrophenyl acetate in an alkaline buffer. The important characteristics of the method are (i) it requires very small volumes of sample and reagents; (ii) the components of the reaction mixture are pre-equilibrated and mixed with no transfer outside the prescribed constant temperature environment; (iii) the mixing is very rapid; and (iv) satisfactorily precise measurements of relatively short time intervals (approximately 2 sec minimum) between sequential mixings of the components are readily obtainable.
A system for the input and storage of data in the Besm-6 digital computer
NASA Technical Reports Server (NTRS)
Schmidt, K.; Blenke, L.
1975-01-01
Computer programs used for the decoding and storage of large volumes of data on the BESM-6 computer are described. The following factors are discussed: the programming control language allows the programs to be run as part of a modular programming system used in data processing; data control is executed in a hierarchically built file on magnetic tape with sequential index storage; and the programs are not dependent on the structure of the data.
A Characterization of t/s-Diagnosability and Sequential t-Diagnosability in Designs
1990-10-01
Computer simulation of a space SAR using a range-sequential processor for soil moisture mapping
NASA Technical Reports Server (NTRS)
Fujita, M.; Ulaby, F. (Principal Investigator)
1982-01-01
The ability of a spaceborne synthetic aperture radar (SAR) to detect soil moisture was evaluated by means of a computer simulation technique. The computer simulation package includes coherent processing of the SAR data using a range-sequential processor, which can be set up through hardware implementations, thereby reducing the amount of telemetry involved. With such a processing approach, it is possible to monitor the earth's surface on a continuous basis, since data storage requirements can be easily met through the use of currently available technology. The development of the simulation package is described, followed by an examination of the application of the technique to actual environments. The results indicate that in estimating soil moisture content with a four-look processor, the difference between the assumed and estimated values of soil moisture is within ±20% of field capacity for 62% of the pixels for agricultural terrain and for 53% of the pixels for hilly terrain. The estimation accuracy for soil moisture may be improved by reducing the effect of fading through non-coherent averaging.
Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the Sequential Unconstrained Minimization Technique, SUMT) outperformed the others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and its alleviation can improve the efficiency of optimizers.
A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.
Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F
2018-03-01
Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high dynamic range or floating-point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that the current parallel algorithms perform poorly already with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to compute efficiently the final max-tree and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance both on simulated and actual 2D images and 3D volumes. Execution times improve on the fastest sequential algorithm, and speed-up increases with up to 64 threads.
Moving Sound Source Localization Based on Sequential Subspace Estimation in Actual Room Environments
NASA Astrophysics Data System (ADS)
Tsuji, Daisuke; Suyama, Kenji
This paper presents a novel method for moving sound source localization and its performance evaluation in actual room environments. The method is based on MUSIC (MUltiple SIgnal Classification), one of the highest-resolution localization methods. When using MUSIC, the eigenvectors of a correlation matrix must be computed for the estimation, which often incurs a high computational cost. This becomes a crucial drawback for a moving source, because the estimation must be repeated at every observation time. Moreover, since the characteristics of the correlation matrix vary due to spatio-temporal non-stationarity, the matrix has to be estimated from only a few observed samples, which degrades the estimation accuracy. In this paper, the PAST (Projection Approximation Subspace Tracking) algorithm is applied to sequentially estimate the eigenvectors spanning the subspace. Because PAST does not require an eigen-decomposition, the computational cost is reduced. Several experimental results in actual room environments are shown to demonstrate the superior performance of the proposed method.
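For concreteness, one PAST step (Yang's RLS-style recursion, written here for real-valued data) updates the subspace basis without any eigendecomposition, which is the cost saving exploited above. The snapshot model in the demo is a hypothetical stand-in for microphone-array data.

```python
import numpy as np

def past_update(W, P, x, beta=0.97):
    """One PAST step: recursively update the subspace basis W and the
    auxiliary matrix P from a single snapshot x, with forgetting factor beta."""
    y = W.T @ x
    h = P @ y
    g = h / (beta + y @ h)
    P = (P - np.outer(g, h)) / beta
    W = W + np.outer(x - W @ y, g)
    return W, P

# Track the dominant 2-D subspace of snapshots built from two fixed
# directions plus a little noise.
rng = np.random.default_rng(0)
d1, d2 = rng.normal(size=8), rng.normal(size=8)
W = np.linalg.qr(rng.normal(size=(8, 2)))[0]
P = np.eye(2)
for _ in range(2000):
    x = rng.normal() * d1 + rng.normal() * d2 + 0.05 * rng.normal(size=8)
    W, P = past_update(W, P, x)
Q = np.linalg.qr(W)[0]                    # orthonormalize for the check
for d in (d1, d2):                        # residual outside span(W): small
    print(round(np.linalg.norm(d - Q @ (Q.T @ d)) / np.linalg.norm(d), 4))
```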
Performance Trend of Different Algorithms for Structural Design Optimization
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.
1996-01-01
Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique SUMT) outperformed the others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and its alleviation can improve the efficiency of optimizers.
Rispin, Amy; Farrar, David; Margosches, Elizabeth; Gupta, Kailash; Stitzel, Katherine; Carr, Gregory; Greene, Michael; Meyer, William; McCall, Deborah
2002-01-01
The authors have developed an improved version of the up-and-down procedure (UDP) as one of the replacements for the traditional acute oral toxicity test formerly used by the Organisation for Economic Co-operation and Development member nations to characterize industrial chemicals, pesticides, and their mixtures. This method improves the performance of acute testing for applications that use the median lethal dose (classic LD50) test while achieving significant reductions in animal use. It uses sequential dosing, together with sophisticated computer-assisted computational methods during the execution and calculation phases of the test. Staircase design, a form of sequential test design, can be applied to acute toxicity testing with its binary experimental endpoints (yes/no outcomes). The improved UDP provides a point estimate of the LD50 and approximate confidence intervals in addition to observed toxic signs for the substance tested. It does not provide information about the dose-response curve. Computer simulation was used to test performance of the UDP without the need for additional laboratory validation.
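A bare-bones staircase illustrates the sequential dosing logic: dose one animal at a time, step the dose down after a death and up after survival. The log-normal tolerances, step factor, and crude reversal-phase average below are illustrative; the actual UDP adds stopping rules and a maximum-likelihood LD50 estimate.

```python
import random, statistics

def up_and_down(true_ld50, start_dose, n_animals=9, step=3.2, seed=0):
    """Staircase sketch: divide the next dose by `step` after a death,
    multiply by it after survival. Tolerances are log-normal around the
    true LD50 (illustrative assumptions only)."""
    rng = random.Random(seed)
    dose, used = start_dose, []
    for _ in range(n_animals):
        used.append(dose)
        tolerance = true_ld50 * 10.0 ** rng.gauss(0.0, 0.25)
        dose = dose / step if dose >= tolerance else dose * step
    return statistics.geometric_mean(used[2:])   # crude reversal-phase average

print(round(up_and_down(true_ld50=100.0, start_dose=500.0), 1))
```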
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.
1974-01-01
An approach to the simultaneous interpretation of objects in complex structures, so as to maximize a combined utility function, is presented. Results are demonstrated for a computer software system that assigns meaning to regions in a segmented image based on the principles described in this paper and on a special interactive sequential classification learning system, which is referenced.
Sequential visibility-graph motifs
NASA Astrophysics Data System (ADS)
Iacovacci, Jacopo; Lacasa, Lucas
2016-04-01
Visibility algorithms transform time series into graphs and encode dynamical information in their topology, paving the way for graph-theoretical time series analysis as well as building a bridge between nonlinear dynamics and network science. In this work we introduce and study the concept of sequential visibility-graph motifs, smaller substructures of n consecutive nodes that appear with characteristic frequencies. We develop a theory to compute in an exact way the motif profiles associated with general classes of deterministic and stochastic dynamics. We find that this simple property is indeed a highly informative and computationally efficient feature capable of distinguishing among different dynamics and robust against noise contamination. We finally confirm that it can be used in practice to perform unsupervised learning, by extracting motif profiles from experimental heart-rate series and being able, accordingly, to disentangle meditative from other relaxation states. Applications of this general theory include the automatic classification and description of physical, biological, and financial time series.
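A small sketch of the construct, using the horizontal visibility criterion: slide a window of n = 4 consecutive samples and record which non-adjacent pairs see each other (consecutive samples are always mutually visible), giving an empirical motif profile. This follows the paper's definition in spirit; the exact enumeration and theory are in the original.

```python
import random
from collections import Counter

def hvg_visible(x, i, j):
    """Horizontal visibility: i and j see each other iff every sample
    strictly between them is lower than both."""
    return all(x[k] < min(x[i], x[j]) for k in range(i + 1, j))

def sequential_motif_profile(x, n=4):
    """Frequencies of the visibility patterns among the non-adjacent pairs
    inside every window of n consecutive nodes."""
    counts = Counter()
    for s in range(len(x) - n + 1):
        key = tuple(hvg_visible(x, s + a, s + b)
                    for a in range(n) for b in range(a + 2, n))
        counts[key] += 1
    total = sum(counts.values())
    return {k: round(c / total, 4) for k, c in counts.items()}

random.seed(0)
white = [random.random() for _ in range(5000)]
print(sequential_motif_profile(white))    # characteristic i.i.d.-noise profile
```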
The impact of fillers on lineup performance.
Wetmore, Stacy A; McAdoo, Ryan M; Gronlund, Scott D; Neuschatz, Jeffrey S
2017-01-01
Filler siphoning theory posits that the presence of fillers (known innocents) in a lineup protects an innocent suspect from being chosen by siphoning choices away from that innocent suspect. This mechanism has been proposed as an explanation for why simultaneous lineups (viewing all lineup members at once) induce better performance than showups (one-person identification procedures). We implemented filler siphoning in a computational model (WITNESS, Clark, Applied Cognitive Psychology 17:629-654, 2003), and explored the impact of the number of fillers (lineup size) and filler quality on simultaneous and sequential lineups (viewing lineup members in sequence), and compared both to showups. In limited situations, we found that filler siphoning can produce a simultaneous lineup performance advantage, but one that is insufficient in magnitude to explain empirical data. However, the magnitude of the empirical simultaneous lineup advantage can be approximated once criterial variability is added to the model. But this modification works by negatively impacting showups rather than promoting more filler siphoning. In sequential lineups, fillers were found to harm performance. Filler siphoning fails to clarify the relationship between simultaneous lineups and sequential lineups or showups. By incorporating constructs like filler siphoning and criterial variability into a computational model, and trying to approximate empirical data, we can sort through explanations of eyewitness decision-making, a prerequisite for policy recommendations.
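Filler siphoning is easy to caricature: give every lineup member a noisy match value, let the witness pick the best match if it clears a criterion, and compare suspect identifications with and without fillers. The distributions below are assumptions, not the fitted WITNESS parameters, and criterial variability is omitted.

```python
import random

def suspect_id_rate(n_fillers, guilty, n_trials=20000, criterion=1.0, seed=0):
    """Rate at which the suspect is identified: the witness chooses the
    best-matching member only if it exceeds the decision criterion."""
    rng = random.Random(seed)
    mu = 1.5 if guilty else 0.0           # innocent suspect looks like a foil
    hits = 0
    for _ in range(n_trials):
        suspect = rng.gauss(mu, 1.0)
        fillers = [rng.gauss(0.0, 1.0) for _ in range(n_fillers)]
        if suspect > criterion and all(suspect > f for f in fillers):
            hits += 1
    return hits / n_trials

for k in (0, 5):                          # showup vs. six-person lineup
    print(f"{k} fillers: guilty {suspect_id_rate(k, True):.3f}, "
          f"innocent {suspect_id_rate(k, False):.3f}")
```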
Novel Designs of Quantum Reversible Counters
NASA Astrophysics Data System (ADS)
Qi, Xuemei; Zhu, Haihong; Chen, Fulong; Zhu, Junru; Zhang, Ziyang
2016-11-01
Reversible logic, as an interesting and important issue, has been widely used in designing combinational and sequential circuits for low-power and high-speed computation. Though a significant amount of work has been done on reversible combinational logic, the realization of reversible sequential circuits is still at a premature stage. Reversible counters are not only an important part of sequential circuits but also an essential part of quantum circuit systems. In this paper, we designed two kinds of novel reversible counters. In order to construct the counters, an innovative reversible T Flip-flop Gate (TFG), a T flip-flop block (T_FF) and a JK flip-flop block (JK_FF) are proposed. Based on these blocks and some existing reversible gates, a 4-bit binary-coded decimal (BCD) counter and a controlled Up/Down synchronous counter are designed. With the help of the Verilog hardware description language (Verilog HDL), these counters have been modeled and verified. According to the simulation results, the logic structures of our circuits are validated. Compared to existing designs in terms of quantum cost (QC), delay (DL) and garbage outputs (GBO), our designs perform better than the others. They can therefore serve as important storage components in future low-power computing systems.
Bursts and Heavy Tails in Temporal and Sequential Dynamics of Foraging Decisions
Jung, Kanghoon; Jang, Hyeran; Kralik, Jerald D.; Jeong, Jaeseung
2014-01-01
A fundamental understanding of behavior requires predicting when and what an individual will choose. However, the actual temporal and sequential dynamics of successive choices made among multiple alternatives remain unclear. In the current study, we tested the hypothesis that there is a general bursting property in both the timing and sequential patterns of foraging decisions. We conducted a foraging experiment in which rats chose among four different foods over a continuous two-week time period. Regarding when choices were made, we found bursts of rapidly occurring actions, separated by time-varying inactive periods, partially based on a circadian rhythm. Regarding what was chosen, we found sequential dynamics in affective choices characterized by two key features: (a) a highly biased choice distribution; and (b) preferential attachment, in which the animals were more likely to choose what they had previously chosen. To capture the temporal dynamics, we propose a dual-state model consisting of active and inactive states. We also introduce a satiation-attainment process for bursty activity, and a non-homogeneous Poisson process for longer inactivity between bursts. For the sequential dynamics, we propose a dual-control model consisting of goal-directed and habit systems, based on outcome valuation and choice history, respectively. This study provides insights into how the bursty nature of behavior emerges from the interaction of different underlying systems, leading to heavy tails in the distribution of behavior over time and choices. PMID:25122498
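A generic sketch of such a dual-state process is below; the exponential within-burst rate and Pareto between-burst waiting times are illustrative assumptions standing in for the paper's fitted satiation-attainment and non-homogeneous Poisson components.

```python
import random

def simulate_bursty_events(n_bursts=200, rate_in_burst=1.0,
                           mean_burst_len=10, pareto_alpha=1.5, seed=0):
    """Generate event times from a dual-state process: exponential
    inter-event times inside bursts, Pareto (heavy-tailed) waiting
    times between bursts."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n_bursts):
        for _ in range(rng.randint(1, 2 * mean_burst_len)):
            t += rng.expovariate(rate_in_burst)   # rapid events within a burst
            times.append(t)
        t += rng.paretovariate(pareto_alpha)      # long inactive period
    return times

times = simulate_bursty_events()
gaps = [b - a for a, b in zip(times, times[1:])]
print(f"max/mean inter-event gap: {max(gaps) / (sum(gaps) / len(gaps)):.1f}")
```

The large max-to-mean gap ratio is the signature of the heavy-tailed inactivity the paper describes.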
Devaluation and sequential decisions: linking goal-directed and model-based behavior
Friedel, Eva; Koch, Stefan P.; Wendt, Jean; Heinz, Andreas; Deserno, Lorenz; Schlagenhauf, Florian
2014-01-01
In experimental psychology, different experiments have been developed to assess goal-directed as compared to habitual control over instrumental decisions. Similar to animal studies, selective devaluation procedures have been used. More recently, sequential decision-making tasks have been designed to assess the degree of goal-directed vs. habitual choice behavior in terms of an influential computational theory of model-based compared to model-free behavioral control. As recently suggested, these different measurements are thought to reflect the same construct. Yet, there has been no attempt to directly assess the construct validity of these different measurements. In the present study, we used a devaluation paradigm and a sequential decision-making task to address this question of construct validity in a sample of 18 healthy male human participants. Correlational analysis revealed a positive association between model-based choices during sequential decisions and goal-directed behavior after devaluation, suggesting a single framework underlying both operationalizations and speaking in favor of the construct validity of both measurement approaches. Up to now, this has been merely assumed but never directly tested in humans. PMID:25136310
On mining complex sequential data by means of FCA and pattern structures
NASA Astrophysics Data System (ADS)
Buzmakov, Aleksey; Egho, Elias; Jay, Nicolas; Kuznetsov, Sergei O.; Napoli, Amedeo; Raïssi, Chedy
2016-02-01
Nowadays, datasets are available in very complex and heterogeneous forms. Mining such data collections is essential to support many real-world applications, ranging from healthcare to marketing. In this work, we focus on the analysis of "complex" sequential data by means of interesting sequential patterns. We approach the problem using the elegant mathematical framework of formal concept analysis (FCA) and its extension based on "pattern structures". Pattern structures are used for mining complex data (such as sequences or graphs) and are based on a subsumption operation, which in our case is defined with respect to the partial order on sequences. We show how pattern structures along with projections (i.e., a data reduction of sequential structures) are able to enumerate more meaningful patterns and increase the computing efficiency of the approach. Finally, we show the applicability of the presented method for discovering and analysing interesting patient patterns from a French healthcare dataset on cancer. The quantitative and qualitative results (with annotations and analysis from a physician) are reported in this use case, which is the main motivation for this work.
Forecasting daily streamflow using online sequential extreme learning machines
NASA Astrophysics Data System (ADS)
Lima, Aranildo R.; Cannon, Alex J.; Hsieh, William W.
2016-06-01
While nonlinear machine learning methods have been widely used in environmental forecasting, in situations where new data arrive continually, the need to make frequent model updates can become cumbersome and computationally costly. To alleviate this problem, an online sequential learning algorithm for single hidden layer feedforward neural networks - the online sequential extreme learning machine (OSELM) - can be updated inexpensively as new data arrive (and the new data can then be discarded). OSELM was applied to forecast daily streamflow at two small watersheds in British Columbia, Canada, at lead times of 1-3 days. Predictors used were weather forecast data generated by the NOAA Global Ensemble Forecasting System (GEFS) and local hydro-meteorological observations. OSELM forecasts were tested with daily, monthly or yearly model updates. More frequent updating gave smaller forecast errors, including errors for data above the 90th percentile. Larger datasets used in the initial training of OSELM helped to find better parameters (number of hidden nodes) for the model, yielding better predictions. With online sequential multiple linear regression (OSMLR) as a benchmark, we concluded that OSELM is an attractive approach, as it easily outperformed OSMLR in forecast accuracy.
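For readers unfamiliar with OSELM, the following is a compact sketch of its standard recursive least-squares update (after Liang et al.'s formulation), with a random sigmoid hidden layer; the hidden-layer size and initialization are illustrative assumptions.

```python
import numpy as np

class OSELM:
    """Online sequential extreme learning machine (single hidden layer)."""

    def __init__(self, n_inputs, n_hidden=30, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_inputs, n_hidden))  # fixed random weights
        self.b = rng.normal(size=n_hidden)              # fixed random biases

    def _H(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid features

    def fit_initial(self, X0, t0):
        """Batch initialization; needs at least n_hidden training samples."""
        H0 = self._H(X0)
        self.P = np.linalg.inv(H0.T @ H0)
        self.beta = self.P @ H0.T @ t0

    def update(self, X, t):
        """Recursive update on a new chunk; the chunk can then be discarded."""
        H = self._H(X)
        S = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.P -= self.P @ H.T @ S @ H @ self.P
        self.beta += self.P @ H.T @ (t - H @ self.beta)

    def predict(self, X):
        return self._H(X) @ self.beta
```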
Parallelization and implementation of approximate root isolation for nonlinear system by Monte Carlo
NASA Astrophysics Data System (ADS)
Khosravi, Ebrahim
1998-12-01
This dissertation addresses the fundamental problem of isolating the real roots of nonlinear systems of equations by the Monte Carlo method published by Bush Jones. This algorithm requires only function values and can be applied readily to complicated systems of transcendental functions. The implementation of this sequential algorithm provides scientists with the means to utilize function analysis in mathematics and other fields of science. The algorithm, however, is so computationally intensive that it is limited to a very small set of variables, making it infeasible for large systems of equations. A computational technique was also needed to investigate a methodology for preventing the algorithm from converging to the same root along different paths of computation. The research provides techniques for improving the efficiency and correctness of the algorithm. The sequential algorithm for this technique was corrected and a parallel algorithm is presented. This parallel method has been formally analyzed and is compared with other known methods of root isolation. The effectiveness, efficiency, and enhanced overall performance of parallel processing in comparison to sequential processing are discussed. The message-passing model was used for this parallel processing, and it is presented and implemented on an Intel i860 MIMD architecture. The parallel processing proposed in this research has been implemented in an ongoing high-energy physics experiment: the algorithm has been used to track neutrinos in the Super-K detector. This experiment is located in Japan, and data can be processed on-line or off-line, locally or remotely.
Synthesizing genetic sequential logic circuit with clock pulse generator.
Chuang, Chia-Hua; Lin, Chun-Liang
2014-05-28
Rhythmic clocks occur widely in biological systems, where they control several aspects of cell physiology; different cell types run at various rhythmic frequencies. How to synthesize a specific clock signal is a preliminary but necessary step toward the future development of a biological computer. This paper presents a genetic sequential logic circuit with a clock pulse generator based on a synthesized genetic oscillator, which generates a consecutive clock signal whose frequency is an integer fraction of that of the genetic oscillator. In analogy to an electronic waveform-shaping circuit, a series of genetic buffers is constructed to shape logic high/low levels of an oscillation input in a basic sinusoidal cycle and generate a pulse-width-modulated (PWM) output with various duty cycles. By controlling the threshold level of the genetic buffer, a genetic clock pulse signal with its frequency consistent with that of the genetic oscillator is synthesized. A synchronous genetic counter circuit, based on the topology of digital sequential logic circuits, is triggered by the clock pulse to synthesize a clock signal whose frequency is an integer fraction of that of the genetic oscillator. The function acts like a frequency divider in electronic circuits, which plays a key role in sequential logic circuits with specific operational frequencies. A cascaded genetic logic circuit generating clock pulse signals is proposed. By analogy with digital sequential logic circuits, genetic sequential logic circuits can be constructed by the proposed approach to generate various clock signals from an oscillation signal.
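The electronic analogy can be made concrete with a short sketch: threshold a sinusoidal "oscillator" into a PWM-like clock train, then toggle a T flip-flop on rising edges to halve the frequency. The signal parameters below are illustrative assumptions.

```python
import numpy as np

t = np.linspace(0, 10, 10_000)
oscillation = np.sin(2 * np.pi * 1.0 * t)        # 1 Hz "genetic oscillator"

# Buffer/comparator: shape the sinusoid into a clock pulse train (PWM-like);
# raising the threshold shortens the duty cycle.
threshold = 0.3
clock = (oscillation > threshold).astype(int)

# T flip-flop: toggle state on each rising clock edge -> half the frequency,
# i.e. a divide-by-2 frequency divider.
state, divided = 0, []
for prev, cur in zip(clock, clock[1:]):
    if prev == 0 and cur == 1:
        state ^= 1
    divided.append(state)

print("clock rising edges:", int(np.sum(np.diff(clock) == 1)),
      "| divided rising edges:", int(np.sum(np.diff(divided) == 1)))
```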
Posterior error probability in the Mu-2 Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit, as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and falsely indicated errors in 0.2% of the acquisitions.
Testing New Programming Paradigms with NAS Parallel Benchmarks
NASA Technical Reports Server (NTRS)
Jin, H.; Frumkin, M.; Schultz, M.; Yan, J.
2000-01-01
Over the past decade, high performance computing has evolved rapidly, not only in hardware architectures but also in the increasing complexity of real applications. Technologies have been developed that aim at scaling up to thousands of processors on both distributed and shared memory systems. Development of parallel programs on these computers is always a challenging task. Today, writing parallel programs with message passing (e.g., MPI) is the most popular way of achieving scalability and high performance. However, writing message passing programs is difficult and error prone. In recent years, new efforts have been made to define new parallel programming paradigms. The best examples are HPF (based on data parallelism) and OpenMP (based on shared memory parallelism). Both provide simple and clear extensions to sequential programs, thus greatly simplifying the tedious tasks encountered in writing message passing programs. HPF is independent of the memory hierarchy; however, due to the immaturity of compiler technology, its performance is still questionable. Although the use of parallel compiler directives is not new, OpenMP offers a portable solution in the shared-memory domain. Another important development involves the tremendous progress of the internet and its associated technology. Although still in its infancy, Java promises portability in a heterogeneous environment and offers the possibility to "compile once and run anywhere." To test these new technologies, we implemented new parallel versions of the NAS Parallel Benchmarks (NPBs) with HPF and OpenMP directives, and extended the work with Java and Java threads. The purpose of this study is to examine the effectiveness of alternative programming paradigms. NPBs consist of five kernels and three simulated applications that mimic the computation and data movement of large scale computational fluid dynamics (CFD) applications. We started with the serial version included in NPB2.3. Optimization of memory and cache usage was applied to several benchmarks, noticeably BT and SP, resulting in better sequential performance. In order to overcome the lack of an HPF performance model and guide the development of the HPF codes, we employed an empirical performance model for several primitives found in the benchmarks. We encountered a few limitations of HPF, such as the lack of support for the "REDISTRIBUTION" directive and no easy way to handle irregular computation. The parallelization with OpenMP directives was done at the outermost loop level to achieve the largest granularity. The performance of six HPF and OpenMP benchmarks is compared with their MPI counterparts for the Class-A problem size in the figure on the next page. These results were obtained on an SGI Origin2000 (195 MHz) with the MIPSpro-f77 compiler 7.2.1 for OpenMP and MPI codes and the PGI pghpf-2.4.3 compiler with MPI interface for HPF programs.
Rapid Onboard Trajectory Design for Autonomous Spacecraft in Multibody Systems
NASA Astrophysics Data System (ADS)
Trumbauer, Eric Michael
This research develops automated, on-board trajectory planning algorithms in order to support current and new mission concepts. These include orbiter missions to Phobos or Deimos, Outer Planet Moon orbiters, and robotic and crewed missions to small bodies. The challenges stem from the limited on-board computing resources which restrict full trajectory optimization with guaranteed convergence in complex dynamical environments. The approach taken consists of leveraging pre-mission computations to create a large database of pre-computed orbits and arcs. Such a database is used to generate a discrete representation of the dynamics in the form of a directed graph, which acts to index these arcs. This allows the use of graph search algorithms on-board in order to provide good approximate solutions to the path planning problem. Coupled with robust differential correction and optimization techniques, this enables the determination of an efficient path between any boundary conditions with very little time and computing effort. Furthermore, the optimization methods developed here based on sequential convex programming are shown to have provable convergence properties, as well as generating feasible major iterates in case of a system interrupt -- a key requirement for on-board application. The outcome of this project is thus the development of an algorithmic framework which allows the deployment of this approach in a variety of specific mission contexts. Test cases related to missions of interest to NASA and JPL such as a Phobos orbiter and a Near Earth Asteroid interceptor are demonstrated, including the results of an implementation on the RAD750 flight processor. This method fills a gap in the toolbox being developed to create fully autonomous space exploration systems.
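A minimal sketch of the on-board search step is shown below: Dijkstra's algorithm over a toy directed graph of pre-computed arcs weighted by delta-v; the node names and costs are invented for illustration.

```python
import heapq

def cheapest_path(arcs, start, goal):
    """Dijkstra over a directed graph of pre-computed arcs.

    arcs: {node: [(neighbor, delta_v_cost), ...]}
    Returns (total_cost, path) or None if the goal is unreachable.
    """
    frontier, best = [(0.0, start, [start])], {}
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if cost >= best.get(node, float("inf")):
            continue                      # already reached more cheaply
        best[node] = cost
        for nxt, dv in arcs.get(node, []):
            heapq.heappush(frontier, (cost + dv, nxt, path + [nxt]))
    return None

arcs = {"parking": [("transfer1", 0.8), ("transfer2", 1.1)],
        "transfer1": [("phobos_orbit", 0.5)],
        "transfer2": [("phobos_orbit", 0.1)]}
print(cheapest_path(arcs, "parking", "phobos_orbit"))  # 1.2 via transfer2
```

The approximate path found this way would then seed the differential correction and sequential convex programming refinement the text describes.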
Programming Cell Adhesion for On-Chip Sequential Boolean Logic Functions.
Qu, Xiangmeng; Wang, Shaopeng; Ge, Zhilei; Wang, Jianbang; Yao, Guangbao; Li, Jiang; Zuo, Xiaolei; Shi, Jiye; Song, Shiping; Wang, Lihua; Li, Li; Pei, Hao; Fan, Chunhai
2017-08-02
Programmable remodelling of cell surfaces enables high-precision regulation of cell behavior. In this work, we developed in vitro constructed DNA-based chemical reaction networks (CRNs) to program on-chip cell adhesion. We found that the RGD-functionalized DNA CRNs are entirely noninvasive when interfaced with the fluid mosaic membrane of living cells. DNA toeholds of different lengths can tunably alter the release kinetics of cells, showing rapid release within minutes with the use of a 6-base toehold. We further demonstrated the realization of Boolean logic functions by using DNA strand displacement reactions, including multi-input and sequential cell logic gates (AND, OR, XOR, and AND-OR). This study provides a highly generic tool for the self-organization of biological systems.
Hemodynamic analysis of sequential graft from right coronary system to left coronary system.
Wang, Wenxin; Mao, Boyan; Wang, Haoran; Geng, Xueying; Zhao, Xi; Zhang, Huixia; Xie, Jinsheng; Zhao, Zhou; Lian, Bo; Liu, Youjun
2016-12-28
Sequential and single grafting are two surgical procedures of coronary artery bypass grafting. However, it remains unclear whether a sequential graft can be used between the right and left coronary artery systems. The purpose of this paper is to clarify the feasibility of anastomosing the right coronary artery system to the left coronary system. A patient-specific 3D model was first reconstructed based on coronary computed tomography angiography (CCTA) images. Two different grafts, the normal multi-graft (Model 1) and the novel multi-graft (Model 2), were then implemented on this patient-specific model using virtual surgery techniques. In Model 1, the single graft was anastomosed to the right coronary artery (RCA) and the sequential graft was adopted to anastomose the left anterior descending (LAD) and left circumflex artery (LCX). In Model 2, the single graft was anastomosed to the LAD and the sequential graft was adopted to anastomose the RCA and LCX. A zero-dimensional/three-dimensional (0D/3D) coupling method was used to realize the multi-scale simulation of both the pre-operative and the two post-operative models. Flow rates in the coronary artery and grafts were obtained. Hemodynamic parameters were also computed, including wall shear stress (WSS) and oscillatory shear index (OSI). The area of low WSS and OSI in Model 1 was much less than that in Model 2. Model 1 shows favorable hemodynamic modifications which may enhance the long-term patency of grafts. The anterior segments of a sequential graft have better long-term patency than the posterior segments. With a rational spatial position of the heart vessels, the last anastomosis of a sequential graft should be connected to the main branch.
Program For Parallel Discrete-Event Simulation
NASA Technical Reports Server (NTRS)
Beckman, Brian C.; Blume, Leo R.; Geiselman, John S.; Presley, Matthew T.; Wedel, John J., Jr.; Bellenot, Steven F.; Diloreto, Michael; Hontalas, Philip J.; Reiher, Peter L.; Weiland, Frederick P.
1991-01-01
The user does not have to add any special logic to aid in synchronization. The Time Warp Operating System (TWOS) computer program is a special-purpose operating system designed to support parallel discrete-event simulation. It is a complete implementation of the Time Warp mechanism and supports only simulations and other computations designed for virtual time. The Time Warp Simulator (TWSIM) subdirectory contains a sequential simulation engine that is interface-compatible with TWOS. TWOS and TWSIM are written in, and support simulations in, the C programming language.
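Time Warp itself is an optimistic, rollback-based parallel mechanism; as a point of reference, the sketch below shows only the sequential event-queue core that an engine in TWSIM's role provides, with invented event types.

```python
import heapq

def run_simulation(initial_events, handlers, until=100.0):
    """Minimal sequential discrete-event engine: pop the earliest event,
    let its handler schedule future events, repeat in virtual-time order."""
    queue = list(initial_events)          # events are (time, kind, data)
    heapq.heapify(queue)
    while queue:
        time, kind, data = heapq.heappop(queue)
        if time > until:
            break
        for new_event in handlers[kind](time, data):
            heapq.heappush(queue, new_event)

def arrival(time, n):
    """Toy handler: log an arrival and schedule the next one."""
    print(f"t={time:.2f}: arrival {n}")
    return [(time + 7.5, "arrival", n + 1)] if n < 5 else []

run_simulation([(0.0, "arrival", 0)], {"arrival": arrival})
```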
1990-03-01
knowledge covering problems of this type is called calculus of variations or optimal control theory (Refs. 1-8). As stated before, applications occur... to the optimality conditions and the feasibility equations of Problem (GP), respectively. Clearly, after the transformation (26) is applied, the... trajectories, the primal sequential gradient-restoration algorithm (PSGRA) is applied to compute optimal trajectories for aeroassisted orbital transfer
ERIC Educational Resources Information Center
Heath, Steve M.; Hogben, John H.
2004-01-01
Background: Claims that children with reading and oral language deficits have impaired perception of sequential sounds are usually based on psychophysical measures of auditory temporal processing (ATP) designed to characterise group performance. If we are to use these measures (e.g., the Tallal, 1980, Repetition Test) as the basis for intervention…
ERIC Educational Resources Information Center
Ferry, Alissa L.; Fló, Ana; Brusini, Perrine; Cattarossi, Luigi; Macagno, Francesco; Nespor, Marina; Mehler, Jacques
2016-01-01
To understand language, humans must encode information from rapid, sequential streams of syllables--tracking their order and organizing them into words, phrases, and sentences. We used Near-Infrared Spectroscopy (NIRS) to determine whether human neonates are born with the capacity to track the positions of syllables in multisyllabic sequences.…
Mining sequential patterns for protein fold recognition.
Exarchos, Themis P; Papaloukas, Costas; Lampros, Christos; Fotiadis, Dimitrios I
2008-02-01
Protein data contain discriminative patterns that can be used in many beneficial applications if they are defined correctly. In this work sequential pattern mining (SPM) is utilized for sequence-based fold recognition. Protein classification in terms of fold recognition plays an important role in computational protein analysis, since it can contribute to the determination of the function of a protein whose structure is unknown. Specifically, one of the most efficient SPM algorithms, cSPADE, is employed for the analysis of protein sequences. A classifier uses the extracted sequential patterns to classify proteins into the appropriate fold category. For training and evaluating the proposed method we used the protein sequences from the Protein Data Bank and the annotation of the SCOP database. The method exhibited an overall accuracy of 25% in a classification problem with 36 candidate categories. The classification performance reaches up to 56% when the five most probable protein folds are considered.
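As background, the following is a minimal support-counting sketch for sequential patterns (in-order subsequence matching over a sequence database); it illustrates what SPM computes but is not the cSPADE algorithm itself, which uses id-lists and lattice decomposition.

```python
def is_subsequence(pattern, sequence):
    """True if the pattern's items appear in the sequence in order
    (gaps between matched items are allowed)."""
    it = iter(sequence)
    return all(item in it for item in pattern)

def support(pattern, database):
    """Fraction of database sequences that contain the pattern."""
    return sum(is_subsequence(pattern, seq) for seq in database) / len(database)

# Toy "protein" database over a reduced alphabet.
db = ["ACDEFG", "ACDFG", "CDEAG", "AADFG"]
print(support("ADF", db))   # 0.75: matched by all but "CDEAG"
```

A frequent-pattern miner would enumerate candidate patterns and keep those whose support exceeds a threshold; the patterns then serve as classifier features, as in the paper.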
Lichtenhan, J T; Hartsock, J; Dornhoffer, J R; Donovan, K M; Salt, A N
2016-11-01
Administering pharmaceuticals to the scala tympani of the inner ear is a common approach to study cochlear physiology and mechanics. We present here a novel method for in vivo drug delivery in a controlled manner to sealed ears. Injections of ototoxic solutions were applied from a pipette sealed into a fenestra in the cochlear apex, progressively driving solutions along the length of scala tympani toward the cochlear aqueduct at the base. Drugs can be delivered rapidly or slowly. In this report we focus on slow delivery, in which the injection rate is automatically adjusted to account for the varying cross-sectional area of the scala tympani, therefore driving a solution front at a uniform rate. Objective measurements originating from finely spaced, low- to high-characteristic cochlear frequency places were sequentially affected. Compared with existing methods, controlled administration of pharmaceuticals into the cochlear apex overcomes a number of serious limitations of previously established methods, such as cochlear perfusion with an injection pipette in the cochlear base: the drug concentration achieved is more precisely controlled, drug concentrations remain in scala tympani and are not rapidly washed out by cerebrospinal fluid flow, and the entire length of the cochlear spiral can be treated quickly or slowly with time. Controlled administration of solutions into the cochlear apex can be a powerful approach to sequentially affect objective measurements originating from finely spaced cochlear regions and allows, for the first time, the spatial origin of CAPs to be objectively defined. Copyright © 2016 Elsevier B.V. All rights reserved.
Lemons, B; Khaing, H; Ward, A; Thakur, P
2018-06-01
A new sequential separation method for the determination of polonium and actinides (Pu, Am and U) in drinking water samples has been developed that can be used for emergency response or routine water analyses. For the first time, the application of a TEVA chromatography column in the sequential separation of polonium and plutonium has been studied. This method utilizes a rapid Fe3+ co-precipitation step to remove matrix interferences, followed by plutonium oxidation state adjustment to Pu4+ and an incubation period of ~1 h at 50-60 °C to allow Po2+ to oxidize to Po4+. The polonium and plutonium were then separated on a TEVA column, while separation of americium from uranium was performed on a TRU column. After separation, polonium was micro-precipitated with copper sulfide (CuS), while actinides were micro co-precipitated using neodymium fluoride (NdF3) for counting by alpha spectrometry. The method is simple, robust and can be performed quickly, with excellent removal of interferences, high chemical recovery and very good alpha peak resolution. The efficiency and reliability of the procedures were tested using spiked samples. The effects of several transition metals (Cu2+, Pb2+, Fe3+, Fe2+, and Ni2+) on the performance of this method were also assessed to evaluate potential matrix effects. Studies indicate that the presence of up to 25 mg of these cations in the samples had no adverse effect on the recovery or the resolution of the polonium alpha peaks. Copyright © 2018 Elsevier Ltd. All rights reserved.
Porting Gravitational Wave Signal Extraction to Parallel Virtual Machine (PVM)
NASA Technical Reports Server (NTRS)
Thirumalainambi, Rajkumar; Thompson, David E.; Redmon, Jeffery
2009-01-01
Laser Interferometer Space Antenna (LISA) is a planned NASA-ESA mission to be launched around 2012. Gravitational wave detection is fundamentally the determination of frequency, source parameters, and waveform amplitude, derived in a specific order from the interferometric time series of the rotating LISA spacecraft. The LISA Science Team has developed a Mock LISA Data Challenge intended to promote the testing of complicated nested search algorithms to detect the 100-1 millihertz frequency signals at amplitudes of 10E-21. However, it has become clear that sequential search of the parameters is very time consuming and ultra-sensitive; hence, a new strategy has been developed. Parallelization of existing sequential search algorithms for gravitational wave signal identification consists of decomposing sequential search loops, beginning with the outermost loops and working inward. In this process, the main challenge is to detect interdependencies among loops and to partition the loops so as to preserve concurrency. Existing parallel programs are based upon either shared memory or distributed memory paradigms. In PVM, master and node programs are used to execute parallelization and process spawning. PVM can handle process management and process addressing schemes using a virtual machine configuration. Task scheduling, messaging, and signaling can be implemented efficiently for the LISA gravitational wave search process using a master and 6 nodes. This approach is accomplished using a server available at NASA Ames Research Center that has been dedicated to the LISA Data Challenge Competition. Historically, extraction of gravitational wave and source identification parameters has taken around 7 days on this dedicated single-threaded Linux-based server. Using the PVM approach, the parameter extraction problem can be reduced to within a day. The low-frequency computation and a proxy signal-to-noise ratio are calculated in separate nodes that are controlled by the master using message and data-vector passing. The message passing among nodes follows a pattern of synchronous and asynchronous send-and-receive protocols. The communication model and the message buffers are allocated dynamically to support rapid search of gravitational wave source information in the Mock LISA data sets.
Yoon, Sang-Young; Ko, Jeonghan; Jung, Myung-Chul
2016-07-01
The aim of this study is to suggest a job rotation schedule, developed from a mathematical model, that reduces the cumulative workload from successive use of the same body region. Workload assessment using rapid entire body assessment (REBA) was performed for the model in three automotive assembly lines (chassis, trim, and finishing) to identify which body parts were exposed to relatively high workloads at each workstation. The workloads were incorporated into the model to develop a job rotation schedule. The proposed schedules prevent successive exposure of the same body region to high workloads and minimize between-worker variance in cumulative daily workload, whereas under no job rotation and serial job rotation some workers were successively assigned to high-workload workstations. This model would help reduce the potential for work-related musculoskeletal disorders (WMSDs) without additional cost for engineering work, although it may need more computational time and relatively complex job rotation sequences. Copyright © 2016 Elsevier Ltd and The Ergonomics Society. All rights reserved.
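A brute-force sketch of the scheduling idea follows: enumerate Latin-square rotation assignments, penalize back-to-back high-load periods for the same worker, and minimize between-worker variance of cumulative daily workload. The REBA-style load numbers are invented for illustration.

```python
from itertools import permutations
from statistics import pvariance

# Invented workload of each workstation on one body region, per work period.
load = {"chassis": [9, 8, 7], "trim": [5, 6, 4], "finishing": [2, 3, 3]}
stations = list(load)

def cost(assignment):
    """Between-worker variance of cumulative daily workload, plus a
    penalty whenever a worker faces two high-load periods back to back."""
    totals = [sum(load[s][p] for p, s in enumerate(rot)) for rot in assignment]
    penalty = sum(10 for rot in assignment
                  for p in range(len(rot) - 1)
                  if load[rot[p]][p] >= 6 and load[rot[p + 1]][p + 1] >= 6)
    return pvariance(totals) + penalty

# Pick 3 rotation orders (one per worker) forming a Latin square, so every
# period has each station covered exactly once.
candidates = (combo for combo in permutations(permutations(stations), 3)
              if all(len({rot[p] for rot in combo}) == 3 for p in range(3)))
best = min(candidates, key=cost)
for worker, rot in enumerate(best):
    print(f"worker {worker}: {' -> '.join(rot)}")
```

The paper's model is solved with mathematical programming rather than enumeration, but the objective structure (balance plus successive-exposure penalty) is the same idea.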
A secure and efficiently searchable health information architecture.
Yasnoff, William A
2016-06-01
Patient-centric repositories of health records are an important component of health information infrastructure. However, patient information in a single repository is potentially vulnerable to loss of the entire dataset from a single unauthorized intrusion. A new health record storage architecture, the personal grid, eliminates this risk by separately storing and encrypting each person's record. The tradeoff for this improved security is that a personal grid repository must be sequentially searched, since each record must be individually accessed and decrypted. To allow reasonable search times for large numbers of records, parallel processing with hundreds (or even thousands) of on-demand virtual servers (now available in cloud computing environments) is used. Estimated search times for a 10 million record personal grid using 500 servers vary from 7 to 33 minutes depending on the complexity of the query. Since extremely rapid searching is not a critical requirement of health information infrastructure, the personal grid may provide a practical and useful alternative architecture that eliminates the large-scale security vulnerabilities of traditional databases by sacrificing unnecessary searching speed. Copyright © 2016 Elsevier Inc. All rights reserved.
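The quoted search times can be sanity-checked with simple arithmetic, assuming the per-record decrypt-and-match cost is the only unknown; the 21-99 ms per-record range below is back-solved from the paper's 7-33 minute figures and is our assumption, not a measured value.

```python
def search_minutes(n_records, n_servers, ms_per_record):
    """Wall-clock minutes for an embarrassingly parallel sequential scan:
    each server scans its share of records independently."""
    per_server = n_records / n_servers          # records per server
    return per_server * ms_per_record / 1000 / 60

# Assumed per-record decrypt+query costs bracketing the paper's 7-33 min.
for ms in (21, 99):
    print(f"{ms} ms/record -> {search_minutes(10_000_000, 500, ms):.0f} min")
```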
Reengineering the project design process
NASA Astrophysics Data System (ADS)
Kane Casani, E.; Metzger, Robert M.
1995-01-01
In response to the National Aeronautics and Space Administration's goal of working faster, better, and cheaper, the Jet Propulsion Laboratory (JPL) has developed extensive plans to minimize cost, maximize customer and employee satisfaction, and implement small- and moderate-size missions. These plans include improved management structures and processes, enhanced technical design processes, the incorporation of new technology, and the development of more economical space- and ground-system designs. The Laboratory's new Flight Projects Implementation Development Office has been chartered to oversee these innovations and the reengineering of JPL's project design process, including establishment of the Project Design Center (PDC) and the Flight System Testbed (FST). Reengineering at JPL implies a cultural change whereby the character of the Laboratory's design process will change from sequential to concurrent and from hierarchical to parallel. The Project Design Center will support missions offering high science return, design to cost, demonstrations of new technology, and rapid development. Its computer-supported environment will foster high-fidelity project life-cycle development and more accurate cost estimating. These improvements signal JPL's commitment to meeting the challenges of space exploration in the next century.
ParticleCall: A particle filter for base calling in next-generation sequencing systems
2012-01-01
Background Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data. Results In this paper, we consider Illumina’s sequencing-by-synthesis platform which relies on reversible terminator chemistry and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina’s Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best performing unsupervised method currently available, while achieving the same accuracy. Conclusions The proposed ParticleCall provides more accurate calls than the Illumina’s base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling. ParticleCall is freely available at https://sourceforge.net/projects/particlecall. PMID:22776067
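For orientation, here is a generic bootstrap particle filter for a scalar random-walk state-space model; it shows the sequential Monte Carlo machinery that ParticleCall builds on, not its actual HMM of the sequencing chemistry.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500,
                              trans_std=1.0, obs_std=1.0, seed=0):
    """Track a latent random-walk state from noisy scalar observations."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Propagate through the transition model, then weight by likelihood.
        particles = particles + rng.normal(0.0, trans_std, n_particles)
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))
        # Resample to avoid weight degeneracy.
        particles = rng.choice(particles, size=n_particles, p=weights)
    return estimates

truth = np.cumsum(np.random.default_rng(1).normal(size=50))
obs = truth + np.random.default_rng(2).normal(size=50)
est = np.array(bootstrap_particle_filter(obs))
print("mean absolute tracking error:", np.mean(np.abs(est - truth)))
```

In base calling, the latent state would instead encode the nucleotide sequence and chemistry parameters, and the likelihood would come from the fluorescence-signal model.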
Millroth, Philip; Guath, Mona; Juslin, Peter
2018-06-07
The rationality of decision making under risk is of central concern in psychology and other behavioral sciences. In real life, the information relevant to a decision often arrives sequentially or changes over time, implying nontrivial demands on memory. Yet little is known about how this affects the ability to make rational decisions, and a default assumption is rather that information about outcomes and probabilities is simultaneously available at the time of the decision. In 4 experiments, we show that participants receiving probability and outcome information sequentially report substantially (29 to 83%) higher certainty equivalents than participants with simultaneous presentation. This holds also for monetarily incentivized participants with perfect recall of the information. Participants in the sequential conditions often violate stochastic dominance in the sense that they pay more for a lottery with a low probability of an outcome than participants in the simultaneous condition pay for a high probability of the same outcome. Computational modeling demonstrates that Cumulative Prospect Theory (Tversky & Kahneman, 1992) fails to account for the effects of sequential presentation, but a model assuming anchoring-and-adjustment constrained by memory can account for the data. By implication, established assumptions of rationality may need to be reconsidered to account for the effects of memory in many real-life tasks. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
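As a worked reference point, the sketch below computes CPT certainty equivalents for single-gain lotteries using Tversky and Kahneman's (1992) functional forms with their median parameter estimates (alpha = 0.88, gamma = 0.61); treating these as fixed values is our simplifying assumption.

```python
def cpt_certainty_equivalent(x, p, alpha=0.88, gamma=0.61):
    """Certainty equivalent of 'win x with probability p, else 0' under CPT.

    Value function v(x) = x**alpha for gains; probability weighting
    w(p) = p**g / (p**g + (1-p)**g)**(1/g). The certainty equivalent is
    the sure amount with the same value: CE = V**(1/alpha).
    """
    w = p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)
    return (w * x**alpha) ** (1 / alpha)

# Probability weighting overvalues small p and undervalues large p:
print(cpt_certainty_equivalent(100, 0.05))  # ~10, well above the EV of 5
print(cpt_certainty_equivalent(100, 0.95))  # ~77, well below the EV of 95
```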
Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang
2016-04-12
Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on the number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.
In-situ sequential laser transfer and laser reduction of graphene oxide films
NASA Astrophysics Data System (ADS)
Papazoglou, S.; Petridis, C.; Kymakis, E.; Kennou, S.; Raptis, Y. S.; Chatzandroulis, S.; Zergioti, I.
2018-04-01
Achieving high-quality transfer of graphene onto selected substrates is a priority in device fabrication, especially where drop-on-demand applications are involved. In this work, we report an in-situ, fast, simple, one-step process that results in the reduction, transfer, and fabrication of reduced graphene oxide-based humidity sensors using picosecond laser pulses. By tuning the laser illumination parameters, we implemented the sequential printing and reduction of graphene oxide flakes. The overall process lasts only a few seconds, compared to the few hours required by the approach our group previously published. DC current measurements, X-ray photoelectron spectroscopy, X-ray diffraction, and Raman spectroscopy were employed to assess the efficiency of our approach. To demonstrate the applicability and potential of the technique, laser-printed reduced graphene oxide humidity sensors with a limit of detection of 1700 ppm are presented. The results demonstrated in this work provide a selective, rapid, and low-cost approach for the sequential transfer and photochemical reduction of graphene oxide micro-patterns onto various substrates for flexible electronics and sensor applications.
Approximations for Quantitative Feedback Theory Designs
NASA Technical Reports Server (NTRS)
Henderson, D. K.; Hess, R. A.
1997-01-01
The computational requirements for obtaining the results summarized in the preceding section were very modest and were easily accomplished using computer-aided control system design software. Of special significance is the ability of the PDT to indicate a loop closure sequence for MIMO QFT designs that employ sequential loop closure. Although discussed as part of a 2 x 2 design, the PDT is obviously applicable to designs with a greater number of inputs and system responses.
2010-10-01
bodies becomes greater as surface asperities wear down (Hutchings, 1992). We characterize friction damage by a change in the friction coefficient... points are such a set, and satisfy an additional constraint in which the skew (third moment) is minimized, which reduces the average error for a... On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10, 197-208. Hutchings, I. M. (1992). Tribology: Friction
Patterns and Practices for Future Architectures
2014-08-01
Subject terms: computing architecture, graph algorithms, high-performance computing, big data, GPU. [List-of-figures residue; recoverable captions: "Data Structures Created by Kernel 1 of Single CPU, List Implementation Using the Graph in the Example from Section 1.2"; "Kernel 2 of Graph500 BFS Reference Implementation: Single CPU, List"; "Data Structures for Sequential CSR Algorithm".]
Matsuura, Kaoru; Jin, Wei Wei; Liu, Hao; Matsumiya, Goro
2018-04-01
The objective of this study was to evaluate the haemodynamic patterns of each anastomosis configuration using a computational fluid dynamics study in a native coronary occlusion model. Fluid dynamics computations were carried out with ANSYS CFX (ANSYS Inc., Canonsburg, PA, USA) software. The incision lengths for parallel and diamond anastomoses were fixed at 2 mm. Native vessels were set to be totally occluded. The diameter of both the native and graft vessels was set to 2 mm. The inlet boundary condition was set using a sample transit time flow measurement acquired intraoperatively. The diamond anastomosis was observed to reduce flow to the native outlet and increase flow to the bypass outlet; the opposite was observed for the parallel anastomosis. Total energy efficiency was higher in the diamond anastomosis than in the parallel anastomosis. Wall shear stress was higher in the diamond anastomosis than in the parallel anastomosis; it was highest at the top of the outlet. A high oscillatory shear index was observed at the bypass inlet in the parallel anastomosis and at the native inlet in the diamond anastomosis. The diamond sequential anastomosis would be an effective option for multiple sequential bypasses because of the better flow to the bypass outlet than with the parallel anastomosis. However, flow competition should be kept in mind when using the diamond anastomosis for moderately stenotic vessels because of the worsened flow to the native outlet. Care should be taken to ensure that the fluid dynamics patterns are optimal and prevent future native and bypass vessel disease progression.
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate the analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods used. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely, the extrema points of the current metamodel and the minimum points of a density function, yielding progressively more accurate metamodels. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples. PMID:25133206
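A minimal sketch of the sequential idea follows, under our own simplifying assumptions: a 1-D invented test function, infill at the metamodel's grid minimizer, and scipy's RBFInterpolator as the radial-basis metamodel.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def expensive_sim(x):                      # stand-in for a costly simulation
    return np.sin(3 * x) + 0.5 * x**2

X = np.linspace(-2, 2, 5)[:, None]         # initial space-filling samples
y = expensive_sim(X[:, 0])
grid = np.linspace(-2, 2, 401)[:, None]

for _ in range(10):                        # sequential infill iterations
    model = RBFInterpolator(X, y)          # fit the RBF metamodel
    x_new = grid[np.argmin(model(grid))]   # metamodel extremum -> new sample
    if np.min(np.abs(X[:, 0] - x_new[0])) < 1e-3:
        break                              # converged: point already sampled
    X = np.vstack([X, [x_new]])
    y = np.append(y, expensive_sim(x_new[0]))

print(f"approximate minimizer: x = {x_new[0]:.3f}")
```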
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. The next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large and small sample behavior of the sequential adaptive procedure.
Effect of Variations in IRU Integration Time Interval On Accuracy of Aqua Attitude Estimation
NASA Technical Reports Server (NTRS)
Natanson, G. A.; Tracewell, Dave
2003-01-01
During Aqua launch support, attitude analysts noticed several anomalies in Onboard Computer (OBC) rates and in rates computed by the ground Attitude Determination System (ADS). These included: 1) periodic jumps in the OBC pitch rate every 2 minutes; 2) spikes in the ADS pitch rate every 4 minutes; 3) close agreement between pitch rates computed by the ADS and those derived from telemetered OBC quaternions (in contrast to the step-wise pattern observed for telemetered OBC rates); 4) spikes of +/- 10 milliseconds in telemetered IRU integration time every 4 minutes (despite the fact that telemetered time tags of any two sequential IRU measurements were always 1 second apart). An analysis presented in the paper explains this anomalous behavior by a small average offset of about 0.5 +/- 0.05 microseconds in the time interval between two sequential accumulated angle measurements. It is shown that errors in the estimated pitch angle due to the OBC's neglect of the aforementioned variations in the integration time interval are within +/- 2 arcseconds. Ground attitude solutions are found to be accurate enough to see the effect of the variations on the accuracy of the estimated pitch angle.
Comparison of Sequential and Variational Data Assimilation
NASA Astrophysics Data System (ADS)
Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht
2017-04-01
Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential in using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e., they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to the lack of direct comparison between the two techniques. We contribute to filling this gap and present results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise into precipitation and temperature to produce better initial estimates of an HBV model. The results are computed for a hindcast period and assessed using lead-time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages and disadvantages in hydrological applications.
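For reference, a minimal stochastic Ensemble Kalman Filter analysis step (perturbed observations, linear observation operator) looks as follows; the dimensions and noise levels are illustrative assumptions.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, obs_std, rng):
    """Stochastic Ensemble Kalman Filter analysis step.

    ensemble: (n_members, n_state) forecast states
    y_obs:    (n_obs,) observations; H: (n_obs, n_state) observation operator
    """
    X = ensemble
    A = X - X.mean(axis=0)                        # ensemble anomalies
    P = A.T @ A / (len(X) - 1)                    # sample forecast covariance
    R = obs_std**2 * np.eye(len(y_obs))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    # Perturb observations so the analysis spread stays consistent.
    Y = y_obs + rng.normal(0, obs_std, size=(len(X), len(y_obs)))
    return X + (Y - X @ H.T) @ K.T

rng = np.random.default_rng(0)
ens = rng.normal(5.0, 2.0, size=(100, 1))         # prior: state ~ N(5, 4)
post = enkf_update(ens, np.array([3.0]), np.array([[1.0]]), 1.0, rng)
print(post.mean(), post.std())                    # pulled toward the obs (3.0)
```

The variational alternative would instead minimize a misfit-plus-noise objective over the whole window, which is where the algorithmic-differentiation requirement mentioned above comes in.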
New methods, algorithms, and software for rapid mapping of tree positions in coordinate forest plots
A. Dan Wilson
2000-01-01
The theories and methodologies for two new tree mapping methods, the Sequential-target method and the Plot-origin radial method, are described. The methods accommodate the use of any conventional distance measuring device and compass to collect horizontal distance and azimuth data between source or reference positions (origins) and target trees. Conversion equations...
Analysis of Dibenzothiophene Desulfurization in a Recombinant Pseudomonas putida Strain
Calzada, Javier; Zamarro, María T.; Alcón, Almudena; Santos, Victoria E.; Díaz, Eduardo; García, José L.; Garcia-Ochoa, Felix
2009-01-01
Biodesulfurization was monitored in a recombinant Pseudomonas putida CECT5279 strain. DszB desulfinase activity reached a sharp maximum at the early exponential phase, but it rapidly decreased at later growth phases. A model two-step resting-cell process combining sequentially P. putida cells from the late and early exponential growth phases was designed to significantly increase biodesulfurization. PMID:19047400
A Markov-Based Recommendation Model for Exploring the Transfer of Learning on the Web
ERIC Educational Resources Information Center
Huang, Yueh-Min; Huang, Tien-Chi; Wang, Kun-Te; Hwang, Wu-Yuin
2009-01-01
The ability to apply existing knowledge in new situations and settings is clearly a vital skill that all students need to develop. Nowhere is this truer than in the rapidly developing world of Web-based learning, which is characterized by non-sequential courses and the absence of an effective cross-subject guidance system. As a result, questions…
Context-dependent decision-making: a simple Bayesian model
Lloyd, Kevin; Leslie, David S.
2013-01-01
Many phenomena in animal learning can be explained by a context-learning process whereby an animal learns about different patterns of relationship between environmental variables. Differentiating between such environmental regimes or ‘contexts’ allows an animal to rapidly adapt its behaviour when context changes occur. The current work views animals as making sequential inferences about current context identity in a world assumed to be relatively stable but also capable of rapid switches to previously observed or entirely new contexts. We describe a novel decision-making model in which contexts are assumed to follow a Chinese restaurant process with inertia and full Bayesian inference is approximated by a sequential-sampling scheme in which only a single hypothesis about current context is maintained. Actions are selected via Thompson sampling, allowing uncertainty in parameters to drive exploration in a straightforward manner. The model is tested on simple two-alternative choice problems with switching reinforcement schedules and the results compared with rat behavioural data from a number of T-maze studies. The model successfully replicates a number of important behavioural effects: spontaneous recovery, the effect of partial reinforcement on extinction and reversal, the overtraining reversal effect, and serial reversal-learning effects. PMID:23427101
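A minimal Beta-Bernoulli Thompson sampling sketch for a two-alternative task with a mid-session reversal is shown below; it omits the paper's Chinese-restaurant-process context inference, which is precisely what speeds adaptation after such switches.

```python
import random

def thompson_two_choice(n_trials=400, reversal=200, seed=0):
    """Thompson sampling on a two-armed Bernoulli bandit whose reward
    probabilities swap halfway through (a serial-reversal schedule)."""
    rng = random.Random(seed)
    alpha, beta = [1, 1], [1, 1]            # Beta(1,1) priors per arm
    choices = []
    for trial in range(n_trials):
        p_true = (0.8, 0.2) if trial < reversal else (0.2, 0.8)
        # Sample a success probability per arm and pick the larger one;
        # exploration is driven entirely by posterior uncertainty.
        arm = max((0, 1), key=lambda a: rng.betavariate(alpha[a], beta[a]))
        reward = rng.random() < p_true[arm]
        alpha[arm] += reward
        beta[arm] += 1 - reward
        choices.append(arm)
    return choices

c = thompson_two_choice()
print("pre-reversal arm-0 rate:", sum(x == 0 for x in c[:200]) / 200)
print("post-reversal arm-1 rate:", sum(x == 1 for x in c[200:]) / 200)
```

With plain accumulating Beta counts, re-adaptation after the reversal is sluggish; inferring a context switch lets an agent jump to a previously learned regime instead, which is the behavioral pattern the model targets.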
NASA Astrophysics Data System (ADS)
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2017-03-01
Digital holographic on-chip microscopy achieves large space-bandwidth products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them. The data acquisition efficiency of this sequential illumination process can be improved 3-fold by using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, together with a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex the three wavelength channels. This demultiplexing step is conventionally combined with interpolation-based Bayer demosaicing methods. However, because the pixels of different color channels on a Bayer image sensor chip are not at the same physical location, the conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, and these become even more pronounced through the digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves color imaging performance very similar to conventional sequential R, G, B illumination, with a 3-fold improvement in image acquisition time and data efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy in resource-limited settings and point-of-care offices.
CT fluoroscopy-assisted puncture of thoracic and abdominal masses: a randomized trial.
Kirchner, Johannes; Kickuth, Ralph; Laufer, Ulf; Schilling, Esther Maria; Adams, Stephan; Liermann, Dieter
2002-03-01
We investigated the benefit of real-time guidance of interventional punctures by means of computed tomography fluoroscopy (CTF) compared with conventional sequential-acquisition guidance. In a prospective randomized trial, 75 patients underwent either CTF-guided (group A, n = 50) or sequential CT-guided (group B, n = 25) punctures of thoracic (n = 29) or abdominal (n = 46) masses. CTF was performed on the CT machine (Somatom Plus 4 Power, Siemens Corp., Forchheim, Germany) equipped with the C.A.R.E. Vision application (tube voltage 120 kV, tube current 50 mA, rotation time 0.75 s, slice thickness 10 mm, 8 frames/s). The average procedure time showed a statistically significant difference between the two study groups (group A: 564 s, group B: 795 s; P = 0.0032). The mean total tube current-time product was 7089 mAs for the CTF-guided and 4856 mAs for the sequential image-guided intervention, respectively. The sensitivity, specificity, positive predictive value and negative predictive value were 71%, 100%, 100% and 60% for the CTF-guided puncture, and 68%, 100%, 100% and 50% for sequential CT, respectively. CTF guidance saves time but increases the radiation exposure.
"Application of Tunable Diode Laser Spectrometry to Isotopic Studies for Exobiology"
NASA Technical Reports Server (NTRS)
Sauke, Todd B.
1999-01-01
Computer-controlled, electrically activated valves for rapid gas handling have been incorporated into the Stable Isotope Laser Spectrometer (SILS), which now permits rapid filling and evacuating of the sample and reference gas cells. Experimental protocols have been developed to take advantage of the fast gas-handling capabilities of the instrument and to achieve the increased accuracy that results from reduced instrumental drift during rapid isotopic ratio measurements. Using these protocols, accuracies of 0.5 del (0.05%) have been achieved in measurements of 13C/12C in carbon dioxide. Using the small stable isotope laser spectrometer developed in a related PIDDP project of the Co-I, protocols for acquisition of rapid sequential calibration spectra were developed, which resulted in 0.5 del accuracy also being achieved in this less complex instrument. An initial version of software for automatic characterization of tunable diode lasers has been developed, and diodes have been characterized in order to establish their spectral output properties. A new state-of-the-art high-operating-temperature (200 K) mid-infrared diode laser was purchased (through NASA procurement) and characterized. A thermo-electrically cooled mid-infrared tunable diode laser system for use with high-temperature-operation lasers was developed. In addition to isotopic ratio measurements of carbon and oxygen, measurements of a third biologically important element (15N/14N in N2O gas) have been achieved to a preliminary accuracy of about 0.2%. Transfer of the basic SILS technology to the commercial sector is proceeding under an unfunded Space Act Agreement between NASA and SpiraMed, a medical diagnostic instrument company. Two patents have been issued. Foreign patents based on these two US patents have been applied for and are expected to be issued. A preliminary design was developed for a thermo-electrically cooled SILS instrument for application to planetary space flight exploration missions.
Protocol Analysis as a Tool in Function and Task Analysis
1999-10-01
Autocontingency. The use of log-linear and logistic regression methods to analyse sequential data seems appealing, and is strongly advocated by…
ERIC Educational Resources Information Center
LEBEDEV, P.D.
On the premises that the development of programed learning by research teams of subject and technique specialists is indisputable, and that the experienced teacher in the role of individual tutor is indispensable, the technology to support programed instruction must be advanced. Automated devices employing sequential and branching techniques for…
Simultaneous or sequential exposure to multiple chemicals may cause interactions in the pharmacokinetics (PK) and/or pharmacodynamics (PD) of the individual chemicals. Such interactions can cause modification of the internal or target dose/response of one chemical in the mixture ...
Hennes, M; Schuler, V; Weng, X; Buchwald, J; Demaille, D; Zheng, Y; Vidal, F
2018-04-26
We employ kinetic Monte-Carlo simulations to study the growth process of metal-oxide nanocomposites obtained via sequential pulsed laser deposition. Using Ni-SrTiO3 (Ni-STO) as a model system, we reduce the complexity of the computational problem by choosing a coarse-grained approach mapping Sr, Ti and O atoms onto a single effective STO pseudo-atom species. With this ansatz, we scrutinize the kinetics of the sequential synthesis process, governed by alternating deposition and relaxation steps, and analyze the self-organization propensity of Ni atoms into straight vertically aligned nanowires embedded in the surrounding STO matrix. We finally compare the predictions of our binary toy model with experiments and demonstrate that our computational approach captures fundamental aspects of self-assembled nanowire synthesis. Despite its simplicity, our modeling strategy successfully describes the impact of relevant parameters like the concentration or laser frequency on the final nanoarchitecture of metal-oxide thin films grown via pulsed laser deposition.
NASA Astrophysics Data System (ADS)
Bilionis, I.; Koutsourelakis, P. S.
2012-05-01
The present paper proposes an adaptive biasing potential technique for the computation of free energy landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy function, under the same objective of minimizing the Kullback-Leibler divergence between appropriately selected densities. It offers rigorous convergence diagnostics even though history dependent, non-Markovian dynamics are employed. It makes use of a greedy optimization scheme in order to obtain sparse representations of the free energy function which can be particularly useful in multidimensional cases. It employs embarrassingly parallelizable sampling schemes that are based on adaptive Sequential Monte Carlo and can be readily coupled with legacy molecular dynamics simulators. The sequential nature of the learning and sampling scheme enables the efficient calculation of free energy functions parametrized by the temperature. The characteristics and capabilities of the proposed method are demonstrated in three numerical examples.
NASA Technical Reports Server (NTRS)
Charlesworth, Arthur
1990-01-01
The nondeterministic divide partitions a vector into two non-empty slices by allowing the point of division to be chosen nondeterministically. Support for high-level divide-and-conquer programming provided by the nondeterministic divide is investigated. A diva algorithm is a recursive divide-and-conquer sequential algorithm on one or more vectors of the same range, whose division point for a new pair of recursive calls is chosen nondeterministically before any computation is performed and whose recursive calls are made immediately after the choice of division point; also, access to vector components is only permitted during activations in which the vector parameters have unit length. The notion of diva algorithm is formulated precisely as a diva call, a restricted call on a sequential procedure. Diva calls are proven to be intimately related to associativity. Numerous applications of diva calls are given and strategies are described for translating a diva call into code for a variety of parallel computers. Thus diva algorithms separate logical correctness concerns from implementation concerns.
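The relationship between the nondeterministic divide and associativity can be made concrete: for an associative combining operation, every choice of division point yields the same result. A minimal sketch, with random choice standing in for nondeterminism (the names and the sum example are illustrative, not the paper's notation):

```python
import random

def diva_reduce(v, lo, hi, combine):
    # unit-length slice: the only point at which elements may be touched
    if hi - lo == 1:
        return v[lo]
    mid = random.randint(lo + 1, hi - 1)        # nondeterministic divide
    left = diva_reduce(v, lo, mid, combine)      # pair of recursive calls made
    right = diva_reduce(v, mid, hi, combine)     # immediately after the choice
    return combine(left, right)

data = [3, 1, 4, 1, 5, 9, 2, 6]
# associativity of + makes the answer independent of the split points chosen
assert diva_reduce(data, 0, len(data), lambda a, b: a + b) == sum(data)
```

Because the split point is irrelevant to the result, a parallel implementation is free to pick whichever division balances the work across processors.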
Chaisangmongkon, Warasinee; Swaminathan, Sruthi K.; Freedman, David J.; Wang, Xiao-Jing
2017-01-01
Decision making involves dynamic interplay between internal judgements and external perception, which has been investigated in delayed match-to-category (DMC) experiments. Our analysis of neural recordings shows that, during DMC tasks, LIP and PFC neurons demonstrate mixed, time-varying, and heterogeneous selectivity, but previous theoretical work has not established the link between these neural characteristics and population-level computations. We trained a recurrent network model to perform DMC tasks and found that the model can remarkably reproduce key features of neuronal selectivity at the single-neuron and population levels. Analysis of the trained networks elucidates that robust transient trajectories of the neural population are the key driver of sequential categorical decisions. The directions of trajectories are governed by the network's self-organized connectivity, defining a 'neural landscape' consisting of a task-tailored arrangement of slow states and dynamical tunnels. With this model, we can identify functionally relevant circuit motifs and generalize the framework to solve other categorization tasks. PMID:28334612
Concurrent computation of attribute filters on shared memory parallel machines.
Wilkinson, Michael H F; Gao, Hui; Hesselink, Wim H; Jonker, Jan-Eppo; Meijster, Arnold
2008-10-01
Morphological attribute filters have not previously been parallelized, mainly because they are both global and non-separable. We propose a parallel algorithm that achieves efficient parallelism for a large class of attribute filters, including attribute openings, closings, thinnings and thickenings, based on Salembier's Max-trees and Min-trees. The image or volume is first partitioned into multiple slices. We then compute the Max-tree of each slice using any sequential Max-tree algorithm. Subsequently, the Max-trees of the slices can be merged to obtain the Max-tree of the image. A C implementation yielded good speed-ups on both a 16-processor MIPS 14000 parallel machine and a dual-core Opteron-based machine. It is shown that the speed-up of the parallel algorithm is a direct measure of the gain with respect to the sequential algorithm used. Furthermore, the concurrent algorithm shows a speed gain of up to 72 percent on a single-core processor, due to reduced cache thrashing.
Shin, Yong-Uk; Yoo, Ha-Young; Kim, Seonghun; Chung, Kyung-Mi; Park, Yong-Gyun; Hwang, Kwang-Hyun; Hong, Seok Won; Park, Hyunwoong; Cho, Kangwoo; Lee, Jaesang
2017-09-19
A two-stage sequential electro-Fenton (E-Fenton) oxidation followed by electrochemical chlorination (EC) was demonstrated to concomitantly treat high concentrations of organic carbon and ammonium nitrogen (NH4+-N) in real anaerobically digested food wastewater (ADFW). The anodic Fenton process caused the rapid mineralization of phenol as a model substrate through the production of hydroxyl radical as the main oxidant. The electrochemical oxidation of NH4+ by a dimensionally stable anode (DSA) resulted in temporal concentration profiles of combined and free chlorine species that were analogous to those during the conventional breakpoint chlorination of NH4+. Together with the minimal production of nitrate, this confirmed that the conversion of NH4+ to nitrogen gas was electrochemically achievable. The monitoring of treatment performance with varying key parameters (e.g., current density, H2O2 feeding rate, pH, NaCl loading, and DSA type) led to the optimization of the two component systems. The comparative evaluation of the two sequentially combined systems (i.e., the E-Fenton-EC system versus the EC-E-Fenton system) using the mixture of phenol and NH4+ under the predetermined optimal conditions suggested the superiority of the E-Fenton-EC system in terms of treatment efficiency and energy consumption. Finally, the sequential E-Fenton-EC process effectively mineralized organic carbon and decomposed NH4+-N in the real ADFW without an external supply of NaCl.
De Britto, R L; Vanamail, P; Sankari, T; Vijayalakshmi, G; Das, L K; Pani, S P
2015-06-01
To date, there is no effective treatment protocol for the complete clearance of Wuchereria bancrofti (W.b) infection, which causes secondary lymphoedema. In a double-blind randomized controlled trial (RCT), 146 asymptomatic W.b-infected individuals were randomly assigned to one of four regimens for 12 days: DEC 300 mg + doxycycline 100 mg co-administration, DEC 300 mg + albendazole 400 mg co-administration, DEC 300 mg + albendazole 400 mg sequential administration, or the control regimen of DEC 300 mg alone; participants were followed up at 13, 26 and 52 weeks post-treatment for the clearance of infection. At intake, there was no significant variation in mf counts (F(3,137)=0.044; P=0.988) or antigen levels (F(3,137)=1.433; P=0.236) between the regimens. Primary outcome analysis showed that DEC + albendazole sequential administration has enhanced efficacy over DEC + albendazole co-administration (80.6% vs. 64.7%) in clearing microfilariae at 13 weeks, and this regimen differed significantly from DEC + doxycycline co-administration and control (P<0.05). Secondary outcome analysis showed that all the trial regimens were comparable to the control regimen in clearing antigen (F(3,109)=0.405; P=0.750). Therefore, DEC + albendazole sequential administration appears to be the better option for rapid clearance of W.b microfilariae within 13 weeks. (ClinicalTrials.gov identifier: NCT02005653).
NASA Astrophysics Data System (ADS)
Qin, Cheng-Zhi; Zhan, Lijun
2012-06-01
As one of the important tasks in digital terrain analysis, the calculation of flow accumulations from gridded digital elevation models (DEMs) usually involves two steps in a real application: (1) using an iterative DEM preprocessing algorithm to remove the depressions and flat areas commonly contained in real DEMs, and (2) using a recursive flow-direction algorithm to calculate the flow accumulation for every cell in the DEM. Because both algorithms are computationally intensive, quick calculation of the flow accumulations from a DEM (especially for a large area) presents a practical challenge to personal computer (PC) users. In recent years, rapid increases in hardware capacity of the graphics processing units (GPUs) provided in modern PCs have made it possible to meet this challenge in a PC environment. Parallel computing on GPUs using a compute-unified-device-architecture (CUDA) programming model has been explored to speed up the execution of the single-flow-direction algorithm (SFD). However, the parallel implementation on a GPU of the multiple-flow-direction (MFD) algorithm, which generally performs better than the SFD algorithm, has not been reported. Moreover, GPU-based parallelization of the DEM preprocessing step in the flow-accumulation calculations has not been addressed. This paper proposes a parallel approach to calculate flow accumulations (including both iterative DEM preprocessing and a recursive MFD algorithm) on a CUDA-compatible GPU. For the parallelization of an MFD algorithm (MFD-md), two different parallelization strategies using a GPU are explored. The first parallelization strategy, which has been used in the existing parallel SFD algorithm on GPU, has the problem of computing redundancy. Therefore, we designed a parallelization strategy based on graph theory. The application results show that the proposed parallel approach to calculate flow accumulations on a GPU performs much faster than either sequential algorithms or other parallel GPU-based algorithms based on existing parallelization strategies.
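For intuition, the second step (flow accumulation) can be sketched on a tiny, depression-free DEM using the simpler single-flow-direction D8 rule; the paper's MFD variant instead splits flow among all downslope neighbours, and its contribution is the GPU parallelization of this traversal. A minimal sequential sketch, with all values illustrative:

```python
import numpy as np

dem = np.array([[9., 8., 7.],
                [8., 6., 5.],
                [7., 5., 3.]])       # tiny depression-free DEM (step 1 assumed done)
rows, cols = dem.shape
acc = np.ones_like(dem)              # every cell contributes its own area

# visiting cells from highest to lowest is equivalent to the recursive pass
order = zip(*np.unravel_index(np.argsort(dem, axis=None)[::-1], dem.shape))
for r, c in order:
    best_drop, target = 0.0, None
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            nr, nc = r + dr, c + dc
            if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                drop = (dem[r, c] - dem[nr, nc]) / np.hypot(dr, dc)
                if drop > best_drop:
                    best_drop, target = drop, (nr, nc)
    if target is not None:           # route all accumulated flow downslope (D8)
        acc[target] += acc[r, c]

print(acc)                           # the outlet at (2, 2) collects all 9 cells
```

The data dependency visible here (a cell cannot finish before all of its upslope contributors) is exactly what makes a naive GPU port redundant and motivates the graph-theory-based strategy described above.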
A stochastic method for computing hadronic matrix elements
Alexandrou, Constantia; Constantinou, Martha; Dinter, Simon; ...
2014-01-24
In this study, we present a stochastic method for the calculation of baryon 3-point functions which is an alternative to the typically used sequential method offering more versatility. We analyze the scaling of the error of the stochastically evaluated 3-point function with the lattice volume and find a favorable signal to noise ratio suggesting that the stochastic method can be extended to large volumes providing an efficient approach to compute hadronic matrix elements and form factors.
Accelerating Sequential Gaussian Simulation with a constant path
NASA Astrophysics Data System (ADS)
Nussbaumer, Raphaël; Mariethoz, Grégoire; Gravey, Mathieu; Gloaguen, Erwan; Holliger, Klaus
2018-03-01
Sequential Gaussian Simulation (SGS) is a stochastic simulation technique commonly employed for generating realizations of Gaussian random fields. Arguably, the main limitation of this technique is the high computational cost associated with determining the kriging weights. This problem is compounded by the fact that often many realizations are required to allow for an adequate uncertainty assessment. A seemingly simple way to address this problem is to keep the same simulation path for all realizations. This results in identical neighbourhood configurations and hence the kriging weights only need to be determined once and can then be re-used in all subsequent realizations. This approach is generally not recommended because it is expected to result in correlation between the realizations. Here, we challenge this common preconception and make the case for the use of a constant path approach in SGS by systematically evaluating the associated benefits and limitations. We present a detailed implementation, particularly regarding parallelization and memory requirements. Extensive numerical tests demonstrate that using a constant path allows for substantial computational gains with very limited loss of simulation accuracy. This is especially the case for a constant multi-grid path. The computational savings can be used to increase the neighbourhood size, thus allowing for a better reproduction of the spatial statistics. The outcome of this study is a recommendation for an optimal implementation of SGS that maximizes accurate reproduction of the covariance structure as well as computational efficiency.
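The computational argument is easy to demonstrate in one dimension: with a fixed path, each node's kriging system depends only on the (fixed) geometry of previously simulated nodes, so it is solved once and reused across all realizations. A minimal sketch assuming simple kriging, a global neighbourhood, and an exponential covariance (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_real, corr_len = 100, 20, 15.0
x = np.arange(n, dtype=float)
cov = lambda h: np.exp(-np.abs(h) / corr_len)    # assumed exponential covariance

path = rng.permutation(n)            # ONE path shared by all realizations
weights, sigmas = [], []
for i in range(n):                   # kriging systems solved exactly once
    if i == 0:
        weights.append(None); sigmas.append(1.0); continue
    prev = x[path[:i]]
    C = cov(prev[:, None] - prev[None, :])       # data-to-data covariances
    c0 = cov(prev - x[path[i]])                  # data-to-target covariances
    w = np.linalg.solve(C, c0)                   # simple-kriging weights
    weights.append(w)
    sigmas.append(np.sqrt(max(1.0 - w @ c0, 1e-12)))

reals = np.empty((n_real, n))
for r in range(n_real):              # each realization now costs only dot products
    z = np.empty(n)
    for i, p in enumerate(path):
        mean = 0.0 if weights[i] is None else weights[i] @ z[path[:i]]
        z[p] = mean + sigmas[i] * rng.standard_normal()
    reals[r] = z
```

The savings grow with the number of realizations, which is the regime the paper targets; its recommendation of a constant multi-grid path refines this basic idea.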
PredSTP: a highly accurate SVM based model to predict sequential cystine stabilized peptides.
Islam, S M Ashiqul; Sajed, Tanvir; Kearney, Christopher Michel; Baker, Erich J
2015-07-05
Numerous organisms have evolved a wide range of toxic peptides for self-defense and predation. Their effective interstitial and macro-environmental use requires energetic and structural stability. One successful group of these peptides includes a tri-disulfide domain arrangement that offers toxicity and high stability. Sequential tri-disulfide connectivity variants create highly compact disulfide folds capable of withstanding a variety of environmental stresses. Their combination of toxicity and stability makes these peptides remarkably valuable for their potential as bio-insecticides, antimicrobial peptides and peptide drug candidates. However, the wide sequence variation, sources and modalities of group members impose serious limitations on our ability to rapidly identify potential members. As a result, there is a need for automated high-throughput member classification approaches that leverage their demonstrated tertiary and functional homology. We developed an SVM-based model to predict sequential tri-disulfide peptide (STP) toxins from peptide sequences. One optimized model, called PredSTP, predicted STPs from the training set with sensitivity, specificity, precision, accuracy and a Matthews correlation coefficient of 94.86%, 94.11%, 84.31%, 94.30% and 0.86, respectively, using 200-fold cross-validation. The same model outperforms existing prediction approaches on three independent out-of-sample test sets derived from the PDB. PredSTP can accurately identify a wide range of cystine-stabilized peptide toxins directly from sequences in a species-agnostic fashion. The ability to rapidly filter sequences for potential bioactive peptides can greatly compress the time between peptide identification and testing structural and functional properties for possible antimicrobial and insecticidal candidates. A web interface is freely available to predict STP toxins from http://crick.ecs.baylor.edu/.
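The general recipe, fixed-length sequence features plus an SVM, can be sketched as follows; the dipeptide (2-mer) features, toy sequences, and labels are illustrative assumptions, not PredSTP's actual feature set or training data, and scikit-learn stands in for the authors' SVM machinery:

```python
from itertools import product
import numpy as np
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"
KMERS = ["".join(p) for p in product(AA, repeat=2)]   # all 400 dipeptides

def kmer_features(seq, k=2):
    # normalized k-mer composition: a fixed-length vector for any sequence
    counts = dict.fromkeys(KMERS, 0)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:
            counts[kmer] += 1
    v = np.array(list(counts.values()), dtype=float)
    return v / max(v.sum(), 1.0)

# toy placeholders: 1 = putative STP, 0 = negative (illustrative labels only)
toy_seqs = ["GCCSDPRCNMNNPDYCGG", "ACDEFGHIKLMNPQRSTV",
            "CKGKGAKCSRLMYDCCTGSC", "MKTAYIAKQRQISFVKSH"]
toy_labels = [1, 0, 1, 0]

X = np.array([kmer_features(s) for s in toy_seqs])
clf = SVC(kernel="rbf").fit(X, toy_labels)
print(clf.predict(X))
```

A real classifier of this kind would be trained on curated positive and negative peptide sets and validated by cross-validation, as reported above.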
Exposure Control Using Adaptive Multi-Stage Item Bundles.
ERIC Educational Resources Information Center
Luecht, Richard M.
This paper presents a multistage adaptive testing test development paradigm that promises to handle content balancing and other test development needs, psychometric reliability concerns, and item exposure. The bundled multistage adaptive testing (BMAT) framework is a modification of the computer-adaptive sequential testing framework introduced by…
NASA Technical Reports Server (NTRS)
Rudy, D. H.; Morris, D. J.; Blanchard, D. K.; Cooke, C. H.; Rubin, S. G.
1975-01-01
The status of an investigation of four numerical techniques for the time-dependent compressible Navier-Stokes equations is presented. Results for free shear layer calculations in the Reynolds number range from 1000 to 81000 indicate that a sequential alternating-direction implicit (ADI) finite-difference procedure requires longer computing times to reach steady state than a low-storage hopscotch finite-difference procedure. A finite-element method with cubic approximating functions was found to require excessive computer storage and computation times. A fourth method, an alternating-direction cubic spline technique which is still being tested, is also described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rebay, S.
This work is devoted to the description of an efficient unstructured mesh generation method entirely based on the Delaunay triangulation. The distinctive characteristic of the proposed method is that point positions and connections are computed simultaneously. This result is achieved by taking advantage of the sequential way in which the Bowyer-Watson algorithm computes the Delaunay triangulation. Two methods are proposed which have great geometrical flexibility, in that they allow us to treat domains of arbitrary shape and topology and to generate arbitrarily nonuniform meshes. The methods are computationally efficient and are applicable both in two and three dimensions. 11 refs., 20 figs., 1 tab.
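The sequential character being exploited, points inserted one at a time into an incrementally maintained Delaunay triangulation so that placement can react to the current mesh, can be illustrated with SciPy's incremental mode. This is a stand-in for the authors' own Bowyer-Watson implementation, and the centroid placement rule below is an illustrative simplification of their sizing criteria:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(8)
pts = rng.random((4, 2))                      # initial coarse point set
tri = Delaunay(pts, incremental=True)         # Qhull's incremental mode

for _ in range(20):
    # toy placement rule: refine by adding the centroid of the largest
    # triangle, i.e. positions and connections evolve together
    verts = tri.points[tri.simplices]
    d1 = verts[:, 1] - verts[:, 0]
    d2 = verts[:, 2] - verts[:, 0]
    areas = 0.5 * np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])
    new_pt = verts[np.argmax(areas)].mean(axis=0)
    tri.add_points([new_pt])                  # sequential Bowyer-Watson insertion

print(len(tri.points), "points,", len(tri.simplices), "triangles")
tri.close()                                   # release incremental resources
```

Replacing the centroid rule with a spacing function is what yields the arbitrarily nonuniform meshes described above.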
Habitual control of goal selection in humans
Cushman, Fiery; Morris, Adam
2015-01-01
Humans choose actions based on both habit and planning. Habitual control is computationally frugal but adapts slowly to novel circumstances, whereas planning is computationally expensive but can adapt swiftly. Current research emphasizes the competition between habits and plans for behavioral control, yet many complex tasks instead favor their integration. We consider a hierarchical architecture that exploits the computational efficiency of habitual control to select goals while preserving the flexibility of planning to achieve those goals. We formalize this mechanism in a reinforcement learning setting, illustrate its costs and benefits, and experimentally demonstrate its spontaneous application in a sequential decision-making task. PMID:26460050
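The proposed division of labor, habitual (model-free) selection of goals with model-based planning to achieve them, can be caricatured in a few lines. In this hedged toy, a trivial planner stands in for real search and the two-goal task is a stand-in for the authors' experimental paradigm:

```python
import numpy as np

rng = np.random.default_rng(7)
goal_values = np.zeros(2)               # habitual (cached) values over goals
reward_probs = [0.8, 0.2]               # latent payoffs of the two goals (assumed)
alpha, eps = 0.1, 0.1                   # learning rate; exploration rate

def plan_to(goal):
    # stand-in for model-based planning: return an action sequence that
    # reaches the chosen goal (trivial here; a real task would search a model)
    return [f"step_toward_goal_{goal}"]

for trial in range(500):
    # habit selects the goal: computationally frugal epsilon-greedy choice
    goal = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(goal_values))
    plan_to(goal)                       # flexible planning achieves the goal
    r = float(rng.random() < reward_probs[goal])
    goal_values[goal] += alpha * (r - goal_values[goal])   # habitual update

print("learned goal values:", goal_values)   # approaches [0.8, 0.2]
```

The point of the architecture is visible even in this caricature: the expensive planner is only ever invoked once per trial, on the goal the cheap habit system has already selected.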
Interfacing Computer Aided Parallelization and Performance Analysis
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit; Biegel, Bryan A. (Technical Monitor)
2003-01-01
When porting sequential applications to parallel computer architectures, the program developer will typically go through several cycles of source code optimization and performance analysis. We have started a project to develop an environment where the user can jointly navigate through program structure and performance data information in order to make efficient optimization decisions. In a prototype implementation we have interfaced the CAPO computer aided parallelization tool with the Paraver performance analysis tool. We describe both tools and their interface and give an example for how the interface helps within the program development cycle of a benchmark code.
ERIC Educational Resources Information Center
Nyasulu, Frazier; Moehring, Michael; Arthasery, Phyllis; Barlag, Rebecca
2011-01-01
The acid ionization constant, K[subscript a], of acetic acid and the base ionization constant, K[subscript b], of ammonia are determined easily and rapidly using a datalogger, a pH sensor, and a conductivity sensor. To decrease sample preparation time and to minimize waste, sequential aliquots of a concentrated standard are added to a known volume…
X.M. Zoua; H.H. Ruanc; Y. Fua; X.D. Yanga; L.Q. Sha
2005-01-01
Labile carbon is the fraction of soil organic carbon with the most rapid turnover times, and its oxidation drives the flux of CO2 between soils and the atmosphere. Available chemical and physical fractionation methods for estimating soil labile organic carbon are indirect and lack a clear biological definition. We have modified the well-established Jenkinson and Powlson's…
Bedell, T Aaron; Hone, Graham A B; Valette, Damien; Yu, Jin-Quan; Davies, Huw M L; Sorensen, Erik J
2016-07-11
Methods for functionalizing carbon-hydrogen bonds are featured in a new synthesis of the tricyclic core architecture that characterizes the indoxamycin family of secondary metabolites. A unique collaboration between three laboratories has engendered a design for synthesis featuring two sequential C-H functionalization reactions, namely a diastereoselective dirhodium carbene insertion followed by an ester-directed oxidative Heck cyclization, to rapidly assemble the congested tricyclic core of the indoxamycins. This project exemplifies how multi-laboratory collaborations can foster conceptually novel approaches to challenging problems in chemical synthesis. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xu; Tuo, Rui; Jeff Wu, C. F.
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
de Almeida, Araci Malagodi; Ozawa, Terumi Okada; Alves, Arthur César de Medeiros; Janson, Guilherme; Lauris, José Roberto Pereira; Ioshida, Marilia Sayako Yatabe; Garib, Daniela Gamba
2017-06-01
The purpose of this "two-arm parallel" trial was to compare the orthopedic, dental, and alveolar bone plate changes of slow (SME) and rapid (RME) maxillary expansion in patients with complete bilateral cleft lip and palate (BCLP). Forty-six patients with BCLP and maxillary arch constriction in the late mixed dentition were randomly and equally allocated into two groups. Computer-generated randomization was used. Allocation was concealed with sequentially numbered, sealed, opaque envelopes. The SME and RME groups comprised patients treated with quad-helix and Haas/Hyrax-type expanders, respectively. Cone-beam computed tomography (CBCT) exams were performed before expansion and 4 to 6 months post-expansion. Nasal cavity width, maxillary width, alveolar crest width, arch width, palatal cleft width, inclination of posterior teeth, alveolar crest level, and buccal and lingual bone plate thickness were assessed. Blinding was applicable for outcome assessment only. Interphase and intergroup comparisons were performed using paired t tests and t tests, respectively (p < 0.05). SME and RME similarly promoted a significant increase in all the maxillary transverse dimensions at molar and premolar regions, with a decreasing expanding effect from the dental arch to the nasal cavity. Palatal cleft width had a significant increase in both groups. Significant buccal inclination of posterior teeth was only observed for RME. Additionally, both expansion procedures promoted a slight reduction of the alveolar crest level and the buccal bone plate thickness. No difference was found between the orthopedic, dental, and alveolar bone plate changes of SME and RME in children with BCLP. Both appliances produced significant skeletal transverse gains with negligible periodontal bone changes. Treatment time for SME, however, was longer than that observed for RME. SME and RME can be similarly indicated to correct maxillary arch constriction in patients with BCLP in the mixed dentition.
Efficient Controls for Finitely Convergent Sequential Algorithms
Chen, Wei; Herman, Gabor T.
2010-01-01
Finding a feasible point that satisfies a set of constraints is a common task in scientific computing: examples are the linear feasibility problem and the convex feasibility problem. Finitely convergent sequential algorithms can be used for solving such problems; an example of such an algorithm is ART3, which is defined in such a way that its control is cyclic in the sense that during its execution it repeatedly cycles through the given constraints. Previously we found a variant of ART3 whose control is no longer cyclic, but which is still finitely convergent and in practice it usually converges faster than ART3 does. In this paper we propose a general methodology for automatic transformation of finitely convergent sequential algorithms in such a way that (i) finite convergence is retained and (ii) the speed of convergence is improved. The first of these two properties is proven by mathematical theorems, the second is illustrated by applying the algorithms to a practical problem. PMID:20953327
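To make "cyclic control" concrete, here is a minimal sequential projection method for the linear feasibility problem. This is a simpler relative of ART3, shown only to illustrate cyclic control; the reflection rules that give ART3 its finite-convergence guarantee are not reproduced:

```python
import numpy as np

def cyclic_feasibility(A, b, x0, sweeps=100, tol=1e-9):
    # find x with A @ x <= b by cyclically projecting onto violated half-spaces
    x = x0.astype(float).copy()
    for _ in range(sweeps):                 # cyclic control: repeat all constraints
        satisfied = True
        for a_i, b_i in zip(A, b):
            viol = a_i @ x - b_i
            if viol > tol:                  # project onto the violated half-space
                x -= viol / (a_i @ a_i) * a_i
                satisfied = False
        if satisfied:                       # a full clean sweep means feasibility
            return x
    return x

A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])               # feasible set: the unit simplex
print(cyclic_feasibility(A, b, np.array([5.0, 5.0])))   # lands inside the simplex
```

The paper's transformation can be read against this template: it reorders which constraint is visited next, keeping finite convergence while usually reducing the number of sweeps.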
Proposed hardware architectures of particle filter for object tracking
NASA Astrophysics Data System (ADS)
Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED
2012-12-01
In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weight, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise-linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture. This second architecture targets a balance between hardware resources and the speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource-reduction and speed-up advantages of our architectures.
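The stages the architectures implement (sample, weight, output, resample) map directly onto the standard software SIRF loop, sketched below for an assumed scalar random-walk tracking model. The exponential weight is shown for clarity; as noted above, the hardware replaces it with a piecewise-linear approximation:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 500, 50                   # particles, time steps (illustrative sizes)
q_std, r_std = 0.5, 1.0          # process and measurement noise (assumed)

true_x = np.cumsum(rng.normal(0, q_std, T))          # random-walk target
obs = true_x + rng.normal(0, r_std, T)               # noisy observations

particles = rng.normal(0, 1, N)
estimates = []
for z in obs:
    particles += rng.normal(0, q_std, N)             # sampling (prediction) step
    w = np.exp(-0.5 * ((z - particles) / r_std) ** 2)  # weight step
    w /= w.sum()
    estimates.append(w @ particles)                  # output calculation
    idx = rng.choice(N, size=N, p=w)                 # multinomial resampling
    particles = particles[idx]

print("final estimate %.2f vs truth %.2f" % (estimates[-1], true_x[-1]))
```

The resampling line is the serial bottleneck the second and third architectures attack: every other stage is embarrassingly parallel across particles.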
Win-Stay, Lose-Sample: a simple sequential algorithm for approximating Bayesian inference.
Bonawitz, Elizabeth; Denison, Stephanie; Gopnik, Alison; Griffiths, Thomas L
2014-11-01
People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might use a similar algorithm. These studies use a "mini-microgenetic method", investigating how people sequentially update their beliefs as they encounter new evidence. Experiment 1 investigates a deterministic causal learning scenario and Experiments 2 and 3 examine how people make inferences in a stochastic scenario. The behavior of adults and preschoolers in these experiments is consistent with our Bayesian version of the WSLS principle. This algorithm provides both a practical method for performing Bayesian inference and a new way to understand people's judgments. Copyright © 2014 Elsevier Inc. All rights reserved.
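The algorithm itself is simple to state: keep the current hypothesis after data it predicts well ("win"); otherwise resample a hypothesis from the posterior ("lose"). A minimal sketch on an assumed toy problem (inferring a coin's bias from three candidate values), not the authors' causal-learning tasks:

```python
import numpy as np

rng = np.random.default_rng(3)
biases = np.array([0.1, 0.5, 0.9])      # hypothesis space: three coin biases
true_bias = 0.9
heads = tails = 0
current = int(rng.integers(3))          # the single maintained hypothesis

for t in range(100):
    flip = rng.random() < true_bias     # observe one coin flip
    heads, tails = heads + flip, tails + (not flip)
    like = biases if flip else 1.0 - biases
    # "win" (stay) with probability equal to the predictive probability of
    # the new datum under the current hypothesis; otherwise "lose" and sample
    if rng.random() > like[current]:
        post = biases**heads * (1.0 - biases)**tails   # posterior (uniform prior)
        post /= post.sum()
        current = int(rng.choice(3, p=post))

print("final hypothesis:", biases[current])
```

Averaged over runs, the distribution of maintained hypotheses tracks the full Bayesian posterior, which is what makes the heuristic a usable approximation to exact inference.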
Identifying protein complexes in PPI network using non-cooperative sequential game.
Maulik, Ujjwal; Basu, Srinka; Ray, Sumanta
2017-08-21
Identifying protein complexes from protein-protein interaction (PPI) networks is an important and challenging task in computational biology, as it helps in better understanding the cellular mechanisms of various organisms. In this paper we propose a noncooperative sequential game-based model for protein complex detection from PPI networks. The key hypothesis is that protein complex formation is driven by a mechanism that eventually optimizes the number of interactions within the complex, leading to a dense subgraph. The hypothesis is drawn from the observed network property named small world. The proposed multi-player game model translates the hypothesis into game strategies. The Nash equilibrium of the game corresponds to a network partition in which each protein either belongs to a complex or forms a singleton cluster. We further propose an algorithm to find the Nash equilibrium of the sequential game. Exhaustive experiments on synthetic benchmarks and real-life yeast networks evaluate the structural as well as biological significance of the network partitions.
One-way quantum computing in superconducting circuits
NASA Astrophysics Data System (ADS)
Albarrán-Arriagada, F.; Alvarado Barrios, G.; Sanz, M.; Romero, G.; Lamata, L.; Retamal, J. C.; Solano, E.
2018-03-01
We propose a method for the implementation of one-way quantum computing in superconducting circuits. Measurement-based quantum computing is a universal quantum computation paradigm in which an initial cluster state provides the quantum resource, while the iteration of sequential measurements and local rotations encodes the quantum algorithm. Up to now, technical constraints have limited a scalable approach to this quantum computing alternative. The initial cluster state can be generated with available controlled-phase gates, while the quantum algorithm makes use of high-fidelity readout and coherent feedforward. With current technology, we estimate that quantum algorithms with above 20 qubits may be implemented in the path toward quantum supremacy. Moreover, we propose an alternative initial state with properties of maximal persistence and maximal connectedness, reducing the required resources of one-way quantum computing protocols.
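The measurement-based primitive underlying the proposal can be checked numerically: entangle an input qubit with |+> via a controlled-phase gate, measure the first qubit in a rotated basis, and apply the outcome-dependent Pauli correction; the surviving qubit then carries H·Rz(theta) applied to the input. A minimal numpy sketch (a two-qubit toy, not the proposed superconducting implementation):

```python
import numpy as np

rng = np.random.default_rng(9)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
CZ = np.diag([1, 1, 1, -1])                    # controlled-phase gate

def one_way_rotation(psi, theta):
    # cluster-state resource: input qubit entangled with |+> via CZ
    state = CZ @ np.kron(psi, np.array([1, 1]) / np.sqrt(2))
    # measure qubit 1 in the basis (|0> +/- e^{-i theta}|1>)/sqrt(2)
    plus = np.array([1, np.exp(-1j * theta)]) / np.sqrt(2)
    minus = np.array([1, -np.exp(-1j * theta)]) / np.sqrt(2)
    branches = [np.kron(v.conj(), np.eye(2)) @ state for v in (plus, minus)]
    probs = np.array([np.linalg.norm(b) ** 2 for b in branches])
    s = int(rng.choice(2, p=probs / probs.sum()))
    out = branches[s] / np.linalg.norm(branches[s])
    return np.linalg.matrix_power(X, s) @ out   # coherent feedforward correction

theta = 0.7
psi = np.array([0.6, 0.8j])                     # arbitrary normalized input
expected = H @ np.diag([1, np.exp(1j * theta)]) @ psi
result = one_way_rotation(psi, theta)
print("fidelity:", abs(np.vdot(expected, result)))   # ~1.0 up to global phase
```

Chaining such measurements across a larger cluster state, with the corrections fed forward, is what encodes a full algorithm in this paradigm.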
Efficient sequential and parallel algorithms for record linkage.
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
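Two of the paper's ingredients, cheap key-based pruning before expensive comparisons and linking similar records by taking connected components, can be sketched as follows. The blocking key, edit-distance threshold, and records are illustrative; the actual pipeline described above uses radix sorting and hierarchical clustering at much larger scale:

```python
from collections import defaultdict

records = [
    {"id": 0, "name": "jon smith",  "dob": "1980-01-02"},
    {"id": 1, "name": "john smith", "dob": "1980-01-02"},
    {"id": 2, "name": "jane doe",   "dob": "1975-05-30"},
]

parent = list(range(len(records)))          # union-find for connected components
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]       # path halving
        i = parent[i]
    return i
def union(i, j):
    parent[find(i)] = find(j)

def edit_distance(a, b):                    # classic O(|a||b|) dynamic program
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

blocks = defaultdict(list)                  # prune: compare only within dob blocks
for r in records:
    blocks[r["dob"]].append(r)

for block in blocks.values():
    for i in range(len(block)):
        for j in range(i + 1, len(block)):
            if edit_distance(block[i]["name"], block[j]["name"]) <= 2:
                union(block[i]["id"], block[j]["id"])   # link similar records

clusters = defaultdict(list)
for r in records:
    clusters[find(r["id"])].append(r["id"])
print(list(clusters.values()))              # [[0, 1], [2]]
```

Each connected component is then treated as one real-world entity, which is the graph formulation the paper credits for its speed.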
Liu, Zhao; Zhu, Yunhong; Wu, Chenxue
2016-01-01
Spatial-temporal k-anonymity has become a mainstream approach among techniques for the protection of users' privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from the mined single-step sequential rules and normalize the transition probabilities. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former estimates the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments that verify the correctness and flexibility of our proposed algorithm. PMID:27508502
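The n-step machinery here reduces to matrix algebra: normalize the single-step transition counts into a row-stochastic matrix, then raise it to the n-th power. A minimal sketch with assumed counts standing in for the mined sequential rules:

```python
import numpy as np

counts = np.array([[4., 1., 0.],        # single-step transition counts between
                   [2., 2., 2.],        # three locations (illustrative values)
                   [0., 1., 3.]])
P = counts / counts.sum(axis=1, keepdims=True)   # normalize rows to probabilities

n = 3
Pn = np.linalg.matrix_power(P, n)       # n-step transition probabilities
print(Pn[0])                            # P(location j after n steps | start at 0)
```

Row i of Pn is exactly the "rough prediction" quantity described above; the accurate method refines it by tracking the intermediate locations along each detailed path.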
Method of Real-Time Principal-Component Analysis
NASA Technical Reports Server (NTRS)
Duong, Tuan; Duong, Vu
2005-01-01
Dominant-element-based gradient descent and dynamic initial learning rate (DOGEDYN) is a method of sequential principal-component analysis (PCA) that is well suited for such applications as data compression and extraction of features from sets of data. In comparison with a prior method of gradient-descent-based sequential PCA, this method offers a greater rate of learning convergence. Like the prior method, DOGEDYN can be implemented in software. However, the main advantage of DOGEDYN over the prior method lies in the facts that it requires less computation and can be implemented in simpler hardware. It should be possible to implement DOGEDYN in compact, low-power, very-large-scale integrated (VLSI) circuitry that could process data in real time.
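DOGEDYN's own update rules are not reproduced here; as a hedged stand-in, the sketch below shows the classical gradient-descent style of sequential PCA it builds on (Oja's rule for the first principal component), with a simple decaying schedule standing in for the dynamic initial learning rate:

```python
import numpy as np

rng = np.random.default_rng(4)
C = np.array([[3.0, 1.0], [1.0, 1.0]])           # true covariance (illustrative)
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

w = rng.normal(size=2)
w /= np.linalg.norm(w)
for t, x in enumerate(X, 1):
    eta = 1.0 / (100 + t)                        # decaying learning rate (assumed)
    y = w @ x                                    # projection onto current estimate
    w += eta * y * (x - y * w)                   # Oja's sequential PCA update

evals, evecs = np.linalg.eigh(C)
print("learned:", w / np.linalg.norm(w))
print("true (up to sign):", evecs[:, -1])        # dominant eigenvector of C
```

The appeal for hardware is visible in the update line: it needs only multiplies and adds on the current sample, which is the property that makes a compact VLSI realization plausible.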
Constrained multiple indicator kriging using sequential quadratic programming
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Erhan Tercan, A.
2012-11-01
Multiple indicator kriging (MIK) is a nonparametric method used to estimate conditional cumulative distribution functions (CCDF). Indicator estimates produced by MIK may not satisfy the order relations of a valid CCDF, which is ordered and bounded between 0 and 1. In this paper, a new method is presented that guarantees the order relations of the cumulative distribution functions estimated by multiple indicator kriging. The method is based on minimizing the sum of kriging variances for each cutoff, under unbiasedness and order-relation constraints, and solving the constrained indicator kriging system by sequential quadratic programming. A computer code is written in the Matlab environment to implement the developed algorithm, and the method is applied to thickness data.
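The essence, enforcing order relations on per-cutoff estimates by solving a small constrained program, can be sketched with SciPy's SLSQP routine (a sequential quadratic programming method). The least-squares objective below is a simplification: the paper minimizes the summed kriging variances, and the raw values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

raw = np.array([0.15, 0.42, 0.38, 0.70, 1.03])    # order-violating CCDF estimates

objective = lambda f: np.sum((f - raw) ** 2)       # simplified stand-in objective
cons = [{"type": "ineq", "fun": lambda f, i=i: f[i + 1] - f[i]}
        for i in range(len(raw) - 1)]              # monotone: F(z_{k+1}) >= F(z_k)

res = minimize(objective, np.clip(raw, 0, 1), method="SLSQP",
               bounds=[(0.0, 1.0)] * len(raw), constraints=cons)
print(res.x)                                       # a valid, ordered, bounded CCDF
```

Solving the constrained problem jointly across cutoffs, rather than patching violations afterwards, is what distinguishes this approach from conventional order-relation corrections.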
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
Comparative Implementation of High Performance Computing for Power System Dynamic Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng
Dynamic simulation for transient stability assessment is one of the most important, but intensive, computations for power system planning and operation. Present commercial software is mainly designed for sequential computation to run a single simulation, which is very time consuming on a single processor. The application of High Performance Computing (HPC) to dynamic simulations is very promising in accelerating the computing process by parallelizing its kernel algorithms while maintaining the same level of computation accuracy. This paper describes the comparative implementation of four parallel dynamic simulation schemes in two state-of-the-art HPC environments: Message Passing Interface (MPI) and Open Multi-Processing (OpenMP). These implementations serve to match the application with dedicated multi-processor computing hardware and maximize the utilization and benefits of HPC during the development process.
Many cases of environmental contamination result in concurrent or sequential exposure to more than one chemical. However, limitations of available resources make it unlikely that experimental toxicology will provide health risk information about all the possible mixtures to which...
Scalable Kernel Methods and Algorithms for General Sequence Analysis
ERIC Educational Resources Information Center
Kuksa, Pavel
2011-01-01
Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…
New Testing Methods to Assess Technical Problem-Solving Ability.
ERIC Educational Resources Information Center
Hambleton, Ronald K.; And Others
Tests to assess problem-solving ability being provided for the Air Force are described, and some details on the development and validation of these computer-administered diagnostic achievement tests are discussed. Three measurement approaches were employed: (1) sequential problem solving; (2) context-free assessment of fundamental skills and…
Learning in Reverse: Eight-Month-Old Infants Track Backward Transitional Probabilities
ERIC Educational Resources Information Center
Pelucchi, Bruna; Hay, Jessica F.; Saffran, Jenny R.
2009-01-01
Numerous recent studies suggest that human learners, including both infants and adults, readily track sequential statistics computed between adjacent elements. One such statistic, transitional probability, is typically calculated as the likelihood that one element predicts another. However, little is known about whether listeners are sensitive to…
The Use of Tailored Testing with Instructional Programs. Final Report.
ERIC Educational Resources Information Center
Reckase, Mark D.
A computerized testing system was implemented in conjunction with the Radar Technician Training Course at the Naval Training Center, Great Lakes, Illinois. The feasibility of the system and students' attitudes toward it were examined. The system, a multilevel, microprocessor-based computer network, administered tests in a sequential, fixed length…
A Computational Model of Event Segmentation from Perceptual Prediction
ERIC Educational Resources Information Center
Reynolds, Jeremy R.; Zacks, Jeffrey M.; Braver, Todd S.
2007-01-01
People tend to perceive ongoing continuous activity as series of discrete events. This partitioning of continuous activity may occur, in part, because events correspond to dynamic patterns that have recurred across different contexts. Recurring patterns may lead to reliable sequential dependencies in observers' experiences, which then can be used…
Computers for symbolic processing
NASA Technical Reports Server (NTRS)
Wah, Benjamin W.; Lowrie, Matthew B.; Li, Guo-Jie
1989-01-01
A detailed survey on the motivations, design, applications, current status, and limitations of computers designed for symbolic processing is provided. Symbolic processing computations are performed at the word, relation, or meaning levels, and the knowledge used in symbolic applications may be fuzzy, uncertain, indeterminate, and ill represented. Various techniques for knowledge representation and processing are discussed from both the designers' and users' points of view. The design and choice of a suitable language for symbolic processing and the mapping of applications into a software architecture are then considered. The process of refining the application requirements into hardware and software architectures is treated, and state-of-the-art sequential and parallel computers designed for symbolic processing are discussed.
Chess games: a model for RNA based computation.
Cukras, A R; Faulhammer, D; Lipton, R J; Landweber, L F
1999-10-01
Here we develop the theory of RNA computing and a method for solving the 'knight problem' as an instance of a satisfiability (SAT) problem. Using only biological molecules and enzymes as tools, we developed an algorithm for solving the knight problem (3 x 3 chess board) using a 10-bit combinatorial pool and sequential RNase H digestions. The results of preliminary experiments presented here reveal that the protocol recovers far more correct solutions than expected at random, but the persistence of errors still presents the greatest challenge.
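The underlying computation can be rerun in silico: enumerate the combinatorial pool of board configurations and discard those violating a knight-attack constraint, which is what the sequential RNase H digestions accomplish chemically. A minimal sketch of that logical filter (the pool and constraints only; none of the wet-lab protocol is modeled):

```python
from itertools import product

# build the set of attacking square pairs on a 3 x 3 board
moves = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]
attacks = set()
for r, c in product(range(3), repeat=2):
    for dr, dc in moves:
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            attacks.add((3 * r + c, 3 * nr + nc))

# the combinatorial pool: every bitstring assigning knight/no-knight per square;
# keep only configurations in which no two knights attack each other
solutions = [bits for bits in product((0, 1), repeat=9)
             if not any(bits[i] and bits[j] for i, j in attacks)]
print(len(solutions), "legal boards")
```

Each digestion in the laboratory protocol removes the strands encoding one violated clause, so the surviving pool converges on exactly this solution set, up to the experimental error rates reported above.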
The cost and cost-effectiveness of rapid testing strategies for yaws diagnosis and surveillance.
Fitzpatrick, Christopher; Asiedu, Kingsley; Sands, Anita; Gonzalez Pena, Tita; Marks, Michael; Mitja, Oriol; Meheus, Filip; Van der Stuyft, Patrick
2017-10-01
Yaws is a non-venereal treponemal infection caused by Treponema pallidum subspecies pertenue. The disease is targeted by WHO for eradication by 2020. Rapid diagnostic tests (RDTs) are envisaged for confirmation of clinical cases during treatment campaigns and for certification of the interruption of transmission. Yaws testing requires both treponemal (trep) and non-treponemal (non-trep) assays for diagnosis of current infection. We evaluate a sequential testing strategy (using a treponemal RDT before a trep/non-trep RDT) in terms of cost and cost-effectiveness, relative to a single-assay combined testing strategy (using the trep/non-trep RDT alone), for two use cases: individual diagnosis and community surveillance. We use cohort decision analysis to examine the diagnostic and cost outcomes. We estimate cost and cost-effectiveness of the alternative testing strategies at different levels of prevalence of past/current infection and current infection under each use case. We take the perspective of the global yaws eradication programme. We calculate the total number of correct diagnoses for each strategy over a range of plausible prevalences. We employ probabilistic sensitivity analysis (PSA) to account for uncertainty and report 95% intervals. At current prices of the treponemal and trep/non-trep RDTs, the sequential strategy is cost-saving for individual diagnosis at prevalence of past/current infection less than 85% (81-90); it is cost-saving for surveillance at less than 100%. The threshold price of the trep/non-trep RDT (below which the sequential strategy would no longer be cost-saving) is US$ 1.08 (1.02-1.14) for individual diagnosis at high prevalence of past/current infection (51%) and US$ 0.54 (0.52-0.56) for community surveillance at low prevalence (15%). We find that the sequential strategy is cost-saving for both diagnosis and surveillance in most relevant settings. In the absence of evidence assessing relative performance (sensitivity and specificity), cost-effectiveness is uncertain. However, the conditions under which the combined test only strategy might be more cost-effective than the sequential strategy are limited. A cheaper trep/non-trep RDT is needed, costing no more than US$ 0.50-1.00, depending on the use case. Our results will help enhance the cost-effectiveness of yaws programmes in the 13 countries known to be currently endemic. It will also inform efforts in the much larger group of 71 countries with a history of yaws, many of which will have to undertake surveillance to confirm the interruption of transmission.
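The sequential-versus-combined comparison rests on simple expected-cost arithmetic: the sequential strategy buys the cheap treponemal RDT for everyone and the dearer trep/non-trep RDT only for treponemal-positives. A back-of-envelope sketch with assumed prices (the full analysis above adds diagnostic performance and probabilistic sensitivity analysis):

```python
def expected_cost_sequential(prev_past_or_current, c_trep, c_combo):
    # everyone gets the treponemal RDT; only treponemal-positives
    # (roughly, those with past/current infection) get the trep/non-trep RDT
    return c_trep + prev_past_or_current * c_combo

def expected_cost_combined(c_combo):
    return c_combo                      # one trep/non-trep RDT per person

c_trep, c_combo = 0.50, 2.00            # assumed unit prices in US$ (illustrative)
for prev in (0.15, 0.51, 0.85):
    seq = expected_cost_sequential(prev, c_trep, c_combo)
    print(f"prevalence {prev:.0%}: sequential ${seq:.2f} vs combined ${c_combo:.2f}")
```

Under these toy prices the break-even prevalence is (c_combo - c_trep) / c_combo = 75%, which reproduces the qualitative finding that sequential testing saves money except at very high prevalence of past/current infection.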
NASA Technical Reports Server (NTRS)
Sankararaman, Shankar
2016-01-01
This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all at once. Instead, only one variable is first identified, and then Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all four variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
Liu, Rong
2017-01-01
Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. We then applied a decision-making model, the sequential probability ratio test (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this proposed classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our proposed power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than those with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, compared with 82.3% for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
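The SPRT decision rule is compact: accumulate log-likelihood ratios of incoming feature samples and stop at thresholds fixed by the target error rates. A minimal sketch assuming Gaussian class-conditional features (the EEG power projective feature extraction itself is not reproduced; all parameters are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
alpha = beta = 0.05                     # target false-positive / false-negative rates
A = np.log((1 - beta) / alpha)          # Wald's upper threshold
B = np.log(beta / (1 - alpha))          # Wald's lower threshold

mu0, mu1, sd = -0.3, 0.3, 1.0           # assumed class-conditional feature models
true_class = 1
llr = 0.0
for t in range(1, 1000):
    x = rng.normal(mu1 if true_class else mu0, sd)   # next feature sample
    llr += norm.logpdf(x, mu1, sd) - norm.logpdf(x, mu0, sd)
    if llr >= A:
        print(f"decide class 1 after {t} samples"); break
    if llr <= B:
        print(f"decide class 0 after {t} samples"); break
```

Tightening alpha and beta widens the thresholds and lengthens the average stopping time, which is the explicit time-accuracy trade-off the abstract highlights.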
Acceleration of discrete stochastic biochemical simulation using GPGPU.
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936
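For reference, the sequential baseline being accelerated is the Gillespie direct method; one realization of an assumed toy birth-death process is sketched below. The paper's contribution is running many such realizations concurrently on a GPU, one per thread, with optimized memory access and time-course recording:

```python
import numpy as np

rng = np.random.default_rng(6)

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0):
    # Gillespie direct method for:  0 -> X (rate k_birth), X -> 0 (rate k_death*x)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a = np.array([k_birth, k_death * x])    # reaction propensities
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)          # waiting time to next reaction
        x += 1 if rng.random() < a[0] / a0 else -1   # pick which reaction fires
        times.append(t); states.append(x)       # the recorded time course
    return times, states

# many independent realizations -> statistics (the part the GPU parallelizes)
finals = [ssa_birth_death()[1][-1] for _ in range(100)]
print("mean copy number:", np.mean(finals))     # approaches k_birth/k_death = 100
```

Because each realization is independent, the speed-ups reported above come almost entirely from mapping realizations to threads; the remaining gains come from the memory and data-transfer optimizations described in the abstract.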
Acceleration of discrete stochastic biochemical simulation using GPGPU
Sumiyoshi, Kei; Hirata, Kazuki; Hiroi, Noriko; Funahashi, Akira
2015-01-01
For systems made up of a small number of molecules, such as a biochemical network in a single cell, a simulation requires a stochastic approach, instead of a deterministic approach. The stochastic simulation algorithm (SSA) simulates the stochastic behavior of a spatially homogeneous system. Since stochastic approaches produce different results each time they are used, multiple runs are required in order to obtain statistical results; this results in a large computational cost. We have implemented a parallel method for using SSA to simulate a stochastic model; the method uses a graphics processing unit (GPU), which enables multiple realizations at the same time, and thus reduces the computational time and cost. During the simulation, for the purpose of analysis, each time course is recorded at each time step. A straightforward implementation of this method on a GPU is about 16 times faster than a sequential simulation on a CPU with hybrid parallelization; each of the multiple simulations is run simultaneously, and the computational tasks within each simulation are parallelized. We also implemented an improvement to the memory access and reduced the memory footprint, in order to optimize the computations on the GPU. We also implemented an asynchronous data transfer scheme to accelerate the time course recording function. To analyze the acceleration of our implementation on various sizes of model, we performed SSA simulations on different model sizes and compared these computation times to those for sequential simulations with a CPU. When used with the improved time course recording function, our method was shown to accelerate the SSA simulation by a factor of up to 130. PMID:25762936
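For readers unfamiliar with the simulated algorithm, here is a minimal sequential Gillespie SSA for a toy birth-death system; the paper's contribution is running many such independent realizations concurrently on a GPU. The rates and species are illustrative only.

```python
# Direct-method SSA: draw exponential waiting times and pick reactions by propensity.
import numpy as np

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0, rng=None):
    rng = rng or np.random.default_rng()
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        a = np.array([k_birth, k_death * x])   # reaction propensities
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)         # time to next reaction
        x += 1 if rng.random() < a[0] / a0 else -1
        times.append(t); states.append(x)
    return np.array(times), np.array(states)

# Multiple independent realizations (the part a GPU parallelizes):
runs = [ssa_birth_death(rng=np.random.default_rng(s)) for s in range(16)]
final_counts = [s[-1] for _, s in runs]
print("mean copy number at t_end:", np.mean(final_counts))  # ~ k_birth/k_death
```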
Bao, Penghui; Wu, Qi-Jia; Yin, Ping; Jiang, Yanfei; Wang, Xu; Xie, Mao-Hua; Sun, Tao; Huang, Lin; Mo, Ding-Ding; Zhang, Yi
2008-01-01
Self-splicing of group I introns is accomplished by two sequential ester-transfer reactions mediated by sequential binding of two different guanosine ligands, but it is still unclear how the binding is coordinated at a single G-binding site. Using a three-piece trans-splicing system derived from the Candida intron, we studied the effect of the prior GTP binding on the later ωG binding by assaying the ribozyme activity in the second reaction. We showed that adding GTP simultaneously with and prior to the esterified ωG in a substrate strongly accelerated the second reaction, suggesting that the early binding of GTP facilitates the subsequent binding of ωG. GTP-mediated facilitation requires the C2 amino and C6 carbonyl groups on the Watson–Crick edge of the base but not the phosphate or sugar groups, suggesting that the base triple interactions between GTP and the binding site are important for the subsequent ωG binding. Strikingly, GTP binding loosens a few local structures of the ribozyme, including that adjacent to the base triple, providing a structural basis for a rapid exchange of ωG for bound GTP. PMID:18978026
Domingues, Carla Magda Allan S; de Fátima Pereira, Sirlene; Cunha Marreiros, Ana Carolina; Menezes, Nair; Flannery, Brendan
2014-11-01
In August 2012, the Brazilian Ministry of Health introduced inactivated polio vaccine (IPV) as part of a sequential polio vaccination schedule for all infants beginning their primary vaccination series. The revised childhood immunization schedule included 2 doses of IPV at 2 and 4 months of age followed by 2 doses of oral polio vaccine (OPV) at 6 and 15 months of age. One annual national polio immunization day was maintained to provide OPV to all children aged 6 to 59 months. The decision to introduce IPV was based on preventing rare cases of vaccine-associated paralytic polio, financially sustaining IPV introduction, ensuring equitable access to IPV, and preparing for future OPV cessation following global eradication. Introducing IPV during a national multivaccination campaign led to rapid uptake, despite challenges with local vaccine supply due to high wastage rates. Continuous monitoring is required to achieve high coverage with the sequential polio vaccine schedule. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
NASA Astrophysics Data System (ADS)
Liou, Jyun-you; Smith, Elliot H.; Bateman, Lisa M.; McKhann, Guy M., II; Goodman, Robert R.; Greger, Bradley; Davis, Tyler S.; Kellis, Spencer S.; House, Paul A.; Schevon, Catherine A.
2017-08-01
Objective. Epileptiform discharges, an electrophysiological hallmark of seizures, can propagate across cortical tissue in a manner similar to traveling waves. Recent work has focused attention on the origination and propagation patterns of these discharges, yielding important clues to their source location and mechanism of travel. However, systematic studies of methods for measuring propagation are lacking. Approach. We analyzed epileptiform discharges in microelectrode array recordings of human seizures. The array records multiunit activity and local field potentials at 400 micron spatial resolution, from a small cortical site free of obstructions. We evaluated several computationally efficient statistical methods for calculating traveling wave velocity, benchmarking them to analyses of associated neuronal burst firing. Main results. Over 90% of discharges met statistical criteria for propagation across the sampled cortical territory. Detection rate, direction and speed estimates derived from a multiunit estimator were compared to four field potential-based estimators: negative peak, maximum descent, high gamma power, and cross-correlation. Interestingly, the methods that were computationally simplest and most efficient (negative peak and maximal descent) offer non-inferior results in predicting neuronal traveling wave velocities compared to the other two, more complex methods. Moreover, the negative peak and maximal descent methods proved to be more robust against reduced spatial sampling challenges. Using least absolute deviation in place of least squares error minimized the impact of outliers, and reduced the discrepancies between local field potential-based and multiunit estimators. Significance. Our findings suggest that ictal epileptiform discharges typically take the form of exceptionally strong, rapidly traveling waves, with propagation detectable across millimeter distances. The sequential activation of neurons in space can be inferred from clinically-observable EEG data, with a variety of straightforward computation methods available. This opens possibilities for systematic assessments of ictal discharge propagation in clinical and research settings.
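A minimal sketch of the "negative peak" estimator on synthetic data: the negative-peak time on each electrode is fit with a plane t = a·x + b·y + c by least absolute deviation (the robust loss the abstract recommends), and the time gradient is inverted to give speed and direction. The array geometry, noise level, and true wave here are all made up.

```python
# Fit arrival times over electrode positions with an L1 loss; velocity = 1/|grad t|.
import numpy as np
from scipy.optimize import minimize

pitch = 0.4                            # hypothetical 4x4 array, 0.4 mm pitch
xs, ys = np.meshgrid(np.arange(4) * pitch, np.arange(4) * pitch)
xs, ys = xs.ravel(), ys.ravel()
true_speed = 0.25                      # mm/ms, synthetic wave along +x
t_peak = xs / true_speed + np.random.default_rng(2).normal(0, 0.2, xs.size)

def lad_loss(p):
    a, b, c = p
    return np.abs(t_peak - (a * xs + b * ys + c)).sum()

a, b, _ = minimize(lad_loss, x0=[1.0, 0.0, 0.0], method="Nelder-Mead").x
grad = np.array([a, b])                # ms/mm, gradient of arrival time
speed = 1.0 / np.linalg.norm(grad)     # mm/ms
direction = grad / np.linalg.norm(grad)
print(f"speed ~ {speed:.2f} mm/ms toward {direction.round(2)}")
```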
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; ...
2016-08-24
Here, we study the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level $h_L$. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels $\infty > h_0 > h_1 > \cdots > h_L$. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. In conclusion, it is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context.
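A minimal sketch of the plain MLMC telescoping estimator on a toy problem (E[u_h] for an Euler-discretised ODE with a random coefficient), to make the identity concrete; the paper's contribution replaces the i.i.d. sampling of each level with an SMC sampler, which is not reproduced here.

```python
# MLMC: E[u_{h_L}] = E[u_{h_0}] + sum_l E[u_{h_l} - u_{h_{l-1}}], sampled per level.
import numpy as np

rng = np.random.default_rng(3)

def u_h(h, omega):
    # Toy "PDE solve": Euler integration of du/dt = -omega*u, u(0)=1, to t=1.
    n = int(round(1.0 / h))
    u = 1.0
    for _ in range(n):
        u -= h * omega * u
    return u

def mlmc(levels=(0.5, 0.25, 0.125, 0.0625), n_per_level=(8000, 4000, 2000, 1000)):
    est = 0.0
    for ell, (h, n) in enumerate(zip(levels, n_per_level)):
        omegas = rng.exponential(1.0, size=n)   # random coefficient
        fine = np.array([u_h(h, w) for w in omegas])
        if ell == 0:
            est += fine.mean()                  # coarsest-level expectation
        else:
            coarse = np.array([u_h(levels[ell - 1], w) for w in omegas])
            est += (fine - coarse).mean()       # correction term, cheap variance
    return est                                  # telescopes to E[u_{h_L}]

print("MLMC estimate:", mlmc())  # compare with E[exp(-omega)] = 1/2 for Exp(1)
```

The level differences have small variance, so far fewer samples are needed on the expensive fine levels, which is where the computational saving comes from.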
Raja, Muhammad Asif Zahoor; Kiani, Adiqa Kausar; Shehzad, Azam; Zameer, Aneela
2016-01-01
In this study, bio-inspired computing is exploited for solving systems of nonlinear equations using variants of genetic algorithms (GAs) as a tool for global search, hybridized with sequential quadratic programming (SQP) for efficient local search. The fitness function is constructed by defining the error function for systems of nonlinear equations in the mean square sense. The design parameters of the mathematical models are trained by exploiting the competency of GAs, and refinement is carried out by the viable SQP algorithm. Twelve versions of the memetic approach GA-SQP are designed by taking a different set of reproduction routines in the optimization process. Performance of the proposed variants is evaluated on six numerical problems comprising systems of nonlinear equations arising in the interval arithmetic benchmark model, kinematics, neurophysiology, combustion, and chemical equilibrium. Comparative studies of the proposed results in terms of accuracy, convergence, and complexity are performed with the help of statistical performance indices to establish the worth of the schemes. Accuracy and convergence of the memetic computing GA-SQP are found better in each case of the simulation study, and the effectiveness of the scheme is further established through results of statistics based on different performance indices for accuracy and complexity.
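A minimal sketch of the memetic GA-SQP idea: a tiny genetic algorithm performs the global search on a mean-square-error fitness built from a system of nonlinear equations, and scipy's SLSQP refines the best individual. The two-equation system is illustrative, not one of the paper's six benchmarks.

```python
# GA for global search + SQP (SLSQP) for local refinement on an MSE fitness.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)

def residuals(x):
    # Toy system: x0^2 + x1^2 = 4, x0*x1 = 1.
    return np.array([x[0] ** 2 + x[1] ** 2 - 4.0, x[0] * x[1] - 1.0])

def fitness(x):
    return np.mean(residuals(x) ** 2)      # error in the mean square sense

pop = rng.uniform(-3, 3, (40, 2))
for gen in range(100):                     # plain GA: elitism + crossover + mutation
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[:10]]
    parents = elite[rng.integers(10, size=(30, 2))]
    children = 0.5 * (parents[:, 0] + parents[:, 1])   # arithmetic crossover
    children += rng.normal(0, 0.1, children.shape)     # Gaussian mutation
    pop = np.vstack([elite, children])

best = pop[np.argmin([fitness(p) for p in pop])]
refined = minimize(fitness, best, method="SLSQP")      # local SQP refinement
print("GA best:", best.round(4), "-> refined:", refined.x.round(6))
```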
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb-Robertson, Bobbie-Jo M.; Jarman, Kristin H.; Harvey, Scott D.
2005-05-28
A fundamental problem in analysis of highly multivariate spectral or chromatographic data is reduction of dimensionality. Principal components analysis (PCA), concerned with explaining the variance-covariance structure of the data, is a commonly used approach to dimension reduction. Recently an attractive alternative to PCA, sequential projection pursuit (SPP), has been introduced. Designed to elicit clustering tendencies in the data, SPP may be more appropriate when performing clustering or classification analysis. However, the existing genetic algorithm (GA) implementation of SPP has two shortcomings: computation time and inability to determine the number of factors necessary to explain the majority of the structure in the data. We address both these shortcomings. First, we introduce a new SPP algorithm, a random scan sampling algorithm (RSSA), that significantly reduces computation time. We compare the computational burden of the RSSA and GA implementations for SPP on a dataset containing Raman spectra of twelve organic compounds. Second, we propose a Bayes factor criterion, BFC, as an effective measure for selecting the number of factors needed to explain the majority of the structure in the data. We compare SPP to PCA on two datasets varying in type, size, and difficulty; in both cases SPP achieves a higher accuracy with a lower number of latent variables.
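A minimal sketch of projection pursuit by random scan, assuming a simple kurtosis-based projection index (clustered projections tend to be platykurtic); it illustrates only the random-scan idea, not the paper's RSSA or its Bayes factor criterion.

```python
# Random-scan projection pursuit: perturb one random coordinate of the
# projection vector at a time and keep only improving moves.
import numpy as np

rng = np.random.default_rng(4)

def projection_index(z):
    z = (z - z.mean()) / z.std()
    return -np.mean(z ** 4)          # negative kurtosis: higher = more clustered

def random_scan_pp(X, n_iter=2000, step=0.2):
    n, d = X.shape
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    best = projection_index(X @ w)
    for _ in range(n_iter):
        trial = w.copy()
        i = rng.integers(d)          # scan a randomly chosen coordinate
        trial[i] += step * rng.normal()
        trial /= np.linalg.norm(trial)
        score = projection_index(X @ trial)
        if score > best:
            w, best = trial, score
    return w, best

# Two well-separated clusters in 10-D; pursuit should find the separating axis.
X = np.vstack([rng.normal(0, 1, (100, 10)),
               rng.normal(0, 1, (100, 10)) + 4 * np.eye(10)[0]])
w, score = random_scan_pp(X)
print("projection weights:", np.round(w, 2))
```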
Development of New Lipid-Based Paclitaxel Nanoparticles Using Sequential Simplex Optimization
Dong, Xiaowei; Mattingly, Cynthia A.; Tseng, Michael; Cho, Moo; Adams, Val R.; Mumper, Russell J.
2008-01-01
The objective of these studies was to develop Cremophor-free lipid-based paclitaxel (PX) nanoparticle formulations prepared from warm microemulsion precursors. To identify and optimize new nanoparticles, experimental design was performed combining a Taguchi array and sequential simplex optimization. The combination of the Taguchi array and sequential simplex optimization efficiently directed the design of paclitaxel nanoparticles. Two optimized paclitaxel nanoparticles (NPs) were obtained: G78 NPs composed of glyceryl tridodecanoate (GT) and polyoxyethylene 20-stearyl ether (Brij 78), and BTM NPs composed of Miglyol 812, Brij 78, and D-alpha-tocopheryl polyethylene glycol 1000 succinate (TPGS). Both nanoparticles successfully entrapped paclitaxel at a final concentration of 150 μg/ml (over 6% drug loading) with particle sizes less than 200 nm and over 85% entrapment efficiency. These novel paclitaxel nanoparticles were physically stable at 4°C over three months and in PBS at 37°C over 102 hours. Release of paclitaxel was slow and sustained without initial burst release. Cytotoxicity studies in MDA-MB-231 cancer cells showed that both nanoparticles have anticancer activities similar to Taxol®. Interestingly, PX BTM nanocapsules could be lyophilized without cryoprotectants. The lyophilized powder, comprising only PX BTM NPs in water, could be rapidly rehydrated with complete retention of original physicochemical properties, in-vitro release properties, and cytotoxicity profile. Sequential simplex optimization has been utilized to identify promising new lipid-based paclitaxel nanoparticles having useful attributes. PMID:19111929
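A minimal sketch of sequential simplex optimization for a formulation problem, using scipy's Nelder-Mead simplex on a hypothetical two-variable response surface (surfactant and oil fractions against a made-up desirability score); the actual Taguchi screening and nanoparticle responses are not modeled.

```python
# Nelder-Mead is the classic sequential simplex: reflect/expand/contract the
# simplex toward better formulations, one experiment per vertex move.
import numpy as np
from scipy.optimize import minimize

def desirability(x):
    surfactant, oil = x
    # Hypothetical smooth response peaking near (0.35, 0.55); minimized sign-flipped.
    return -np.exp(-((surfactant - 0.35) ** 2 + (oil - 0.55) ** 2) / 0.02)

res = minimize(desirability, x0=[0.2, 0.3], method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-4})
print("optimal composition:", res.x.round(3))
```

In a real formulation campaign each function evaluation is a prepared batch, which is why the simplex's frugal, sequential sampling is attractive.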
A technique for sequential segmental neuromuscular stimulation with closed loop feedback control.
Zonnevijlle, Erik D H; Abadia, Gustavo Perez; Somia, Naveen N; Kon, Moshe; Barker, John H; Koenig, Steven; Ewert, D L; Stremel, Richard W
2002-01-01
In dynamic myoplasty, dysfunctional muscle is assisted or replaced with skeletal muscle from a donor site. Electrical stimulation is commonly used to train and animate the skeletal muscle to perform its new task. Due to simultaneous tetanic contractions of the entire myoplasty, muscles are deprived of perfusion and fatigue rapidly, causing long-term problems such as excessive scarring and muscle ischemia. Sequential stimulation contracts part of the muscle while other parts rest, thus significantly improving blood perfusion. However, the muscle still fatigues. In this article, we report a test of the feasibility of using closed-loop control to economize the contractions of the sequentially stimulated myoplasty. A simple stimulation algorithm was developed and tested on a sequentially stimulated neo-sphincter designed from a canine gracilis muscle. Pressure generated in the lumen of the myoplasty neo-sphincter was used as feedback to regulate the stimulation signal via three control parameters, thereby optimizing the performance of the myoplasty. Additionally, we investigated and compared the efficiency of amplitude and frequency modulation techniques. Closed-loop feedback enabled us to maintain target pressures within 10% deviation using amplitude modulation and optimized control parameters (correction frequency = 4 Hz, correction threshold = 4%, and transition time = 0.3 s). The large-scale stimulation/feedback setup was unfit for chronic experimentation, but can be used as a blueprint for a small-scale version to unveil the theoretical benefits of closed-loop control in chronic experimentation.
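A minimal sketch of the closed-loop idea: hold lumen pressure at a target by correcting stimulation amplitude at 4 Hz whenever the relative error exceeds the 4% correction threshold (the paper's optimized parameters). The plant model below is a made-up first-order pressure response with a slow fatigue drift, not a muscle model from the paper.

```python
# Threshold-gated amplitude modulation around a pressure setpoint.
import numpy as np

target, amp = 50.0, 0.5        # target pressure (arbitrary units), initial amplitude
threshold = 0.04               # 4% correction threshold
pressure = 0.0
for step in range(200):        # corrections at 4 Hz -> 50 s of closed-loop control
    pressure += 0.3 * (80.0 * amp - pressure) - 0.05   # hypothetical plant + fatigue
    error = (target - pressure) / target
    if abs(error) > threshold:                         # amplitude modulation step
        amp = float(np.clip(amp * (1.0 + 0.5 * error), 0.0, 1.0))
print(f"final pressure {pressure:.1f} vs target {target:.1f}")
```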
Gaudry, Adam J; Nai, Yi Heng; Guijt, Rosanne M; Breadmore, Michael C
2014-04-01
A dual-channel sequential injection microchip capillary electrophoresis system with pressure-driven injection is demonstrated for simultaneous separations of anions and cations from a single sample. The poly(methyl methacrylate) (PMMA) microchips feature integral in-plane contactless conductivity detection electrodes. A novel, hydrodynamic "split-injection" method utilizes background electrolyte (BGE) sheathing to gate the sample flows, while control over the injection volume is achieved by balancing hydrodynamic resistances using external hydrodynamic resistors. Injection is realized by a unique flow-through interface, allowing for automated, continuous sampling for sequential injection analysis by microchip electrophoresis. The developed system was very robust, with individual microchips used for up to 2000 analyses with lifetimes limited by irreversible blockages of the microchannels. The unique dual-channel geometry was demonstrated by the simultaneous separation of three cations and three anions in individual microchannels in under 40 s with limits of detection (LODs) ranging from 1.5 to 24 μM. From a series of 100 sequential injections the %RSDs were determined for every fifth run, resulting in %RSDs for migration times that ranged from 0.3 to 0.7 (n = 20) and 2.3 to 4.5 for peak area (n = 20). This system offers low LODs and a high degree of reproducibility and robustness while the hydrodynamic injection eliminates electrokinetic bias during injection, making it attractive for a wide range of rapid, sensitive, and quantitative online analytical applications.
Online optimal experimental re-design in robotic parallel fed-batch cultivation facilities.
Cruz Bournazou, M N; Barz, T; Nickel, D B; Lopez Cárdenas, D C; Glauche, F; Knepper, A; Neubauer, P
2017-03-01
We present an integrated framework for the online optimal experimental re-design applied to parallel nonlinear dynamic processes that aims to precisely estimate the parameter set of macro kinetic growth models with minimal experimental effort. This provides a systematic solution for rapid validation of a specific model to new strains, mutants, or products. In biosciences, this is especially important as model identification is a long and laborious process which is continuing to limit the use of mathematical modeling in this field. The strength of this approach is demonstrated by fitting a macro-kinetic differential equation model for Escherichia coli fed-batch processes after 6 h of cultivation. The system includes two fully-automated liquid handling robots; one containing eight mini-bioreactors and another used for automated at-line analyses, which allows for the immediate use of the available data in the modeling environment. As a result, the experiment can be continually re-designed while the cultivations are running using the information generated by periodical parameter estimations. The advantages of an online re-computation of the optimal experiment are proven by a 50-fold lower average coefficient of variation on the parameter estimates compared to the sequential method (4.83% instead of 235.86%). The success obtained in such a complex system is a further step towards a more efficient computer aided bioprocess development. Biotechnol. Bioeng. 2017;114: 610-619. © 2016 Wiley Periodicals, Inc.
Array-based photoacoustic spectroscopy
Autrey, S. Thomas; Posakony, Gerald J.; Chen, Yu
2005-03-22
Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. A photoacoustic spectroscopy sample array including a body having at least three recesses or affinity masses connected thereto is used in conjunction with a photoacoustic spectroscopy system. At least one acoustic detector is positioned near the recesses or affinity masses for detection of acoustic waves emitted from species of interest within the recesses or affinity masses.
Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.
2006-02-14
Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically positioned near the sample cells. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.
Wf/pc Cycle 2 Calibration: Rapid Internal Monitor - Part 2
NASA Astrophysics Data System (ADS)
MacKenty, John
1991-07-01
This test is to take repeated internal flats to test for contamination buildup on the optical surfaces or the reappearance of QEH. Part 1: INTFLATS in F555W are obtained every 4 days in both WFC and PC to check for measles or daisies and to monitor scattered light. Part 2: Sequential INTFLATS in F439W with PC are obtained every 7 days to check for QEH.
Wf/pc Cycle 1 Calibration: Rapid Internal Monitor
NASA Astrophysics Data System (ADS)
MacKenty, John
1990-12-01
This test is to take repeated internal flats to test for contamination buildup on the optical surfaces or the reappearance of QEH. Part 1: INTFLATS in F555W are obtained every 4 days in both WFC and PC to check for measles or daisies and to monitor scattered light. Part 2: Sequential INTFLATS in F439W with PC are obtained every 7 days to check for QEH.
Wf/pc Cycle 3 Calibration: Rapid Internal Monitor
NASA Astrophysics Data System (ADS)
MacKenty, John
1992-06-01
This test is to take repeated internal flats to test for contamination buildup on the optical surfaces or the reappearance of QEH. Part 1: INTFLATS in F555W are obtained every 4 days in both WFC and PC to check for measles or daisies and to monitor scattered light. Part 2: Sequential INTFLATS in F439W with PC are obtained every 7 days to check for QEH.
Wf/pc Cycle 2 Calibration: Rapid Internal Monitor
NASA Astrophysics Data System (ADS)
MacKenty, John
1991-07-01
This test is to take repeated internal flats to test for contamination buildup on the optical surfaces or the reappearance of QEH. Part 1: INTFLATS in F555W are obtained every 4 days in both WFC and PC to check for measles or daisies and to monitor scattered light. Part 2: Sequential INTFLATS in F439W with PC are obtained every 7 days to check for QEH.
Wf/pc Cycle 3 Calibration: Rapid Internal Monitor - Part 2
NASA Astrophysics Data System (ADS)
MacKenty, John
1992-06-01
This test is to take repeated internal flats to test for contamination buildup on the optical surfaces or the reappearance of QEH. Part 1: INTFLATS in F555W are obtained every 4 days in both WFC and PC to check for measles or daisies and to monitor scattered light. Part 2: Sequential INTFLATS in F439W with PC are obtained every 7 days to check for QEH.
Titanium(IV) isopropoxide mediated synthesis of pyrimidin-4-ones.
Ramanjulu, Joshi M; Demartino, Michael P; Lan, Yunfeng; Marquis, Robert
2010-05-21
A novel, one-step method for the synthesis of tri- and tetrasubstituted pyrimidin-4-ones is reported. This method involves a titanium(IV)-mediated cyclization proceeding through two sequential condensations of primary and beta-ketoamides. The reaction is operationally facile, readily scalable, and offers rapid entry into differentially substituted pyrimidin-4-one scaffolds. The high functional group compatibility allows for substantial diversification in the products generated from this transformation.
Sequential Self-Folding Structures by 3D Printed Digital Shape Memory Polymers
NASA Astrophysics Data System (ADS)
Mao, Yiqi; Yu, Kai; Isakov, Michael S.; Wu, Jiangtao; Dunn, Martin L.; Jerry Qi, H.
2015-09-01
Folding is ubiquitous in nature with examples ranging from the formation of cellular components to winged insects. It finds technological applications including packaging of solar cells and space structures, deployable biomedical devices, and self-assembling robots and airbags. Here we demonstrate sequential self-folding structures realized by thermal activation of spatially-variable patterns that are 3D printed with digital shape memory polymers, which are digital materials with different shape memory behaviors. The time-dependent behavior of each polymer allows the temporal sequencing of activation when the structure is subjected to a uniform temperature. This is demonstrated via a series of 3D printed structures that respond rapidly to a thermal stimulus, and self-fold to specified shapes in controlled shape changing sequences. Measurements of the spatial and temporal nature of self-folding structures are in good agreement with the companion finite element simulations. A simplified reduced-order model is also developed to rapidly and accurately describe the self-folding physics. An important aspect of self-folding is the management of self-collisions, where different portions of the folding structure contact and then block further folding. A metric is developed to predict collisions and is used together with the reduced-order model to design self-folding structures that lock themselves into stable desired configurations.
Terris-Prestholt, Fern; Vickerman, Peter; Torres-Rueda, Sergio; Santesso, Nancy; Sweeney, Sedona; Mallma, Patricia; Shelley, Katharine D.; Garcia, Patricia J.; Bronzan, Rachel; Gill, Michelle M.; Broutet, Nathalie; Wi, Teodora; Watts, Charlotte; Mabey, David; Peeling, Rosanna W.; Newman, Lori
2015-01-01
Objective Rapid plasma reagin (RPR) is frequently used to test women for maternal syphilis. Rapid syphilis immunochromatographic strip tests detecting only Treponema pallidum antibodies (single RSTs) or both treponemal and non-treponemal antibodies (dual RSTs) are now available. This study assessed the cost-effectiveness of algorithms using these tests to screen pregnant women. Methods Observed costs of maternal syphilis screening and treatment using clinic-based RPR and single RSTs in 20 clinics across Peru, Tanzania, and Zambia were used to model the cost-effectiveness of algorithms using combinations of RPR, single, and dual RSTs, and no and mass treatment. Sensitivity analyses determined drivers of key results. Results Although this analysis found screening using RPR to be relatively cheap, most (> 70%) true cases went untreated. Algorithms using single RSTs were the most cost-effective in all observed settings, followed by dual RSTs, which became the most cost-effective if dual RST costs were halved. Single test algorithms dominated most sequential testing algorithms, although sequential algorithms reduced overtreatment. Mass treatment was relatively cheap and effective in the absence of screening supplies, though treated many uninfected women. Conclusion This analysis highlights the advantages of introducing RSTs in three diverse settings. The results should be applicable to other similar settings. PMID:25963907
NASA Technical Reports Server (NTRS)
Schumann, H. H.
1981-01-01
Ground surveys and aerial observations were used to monitor rapidly changing moisture conditions in the Salt-Verde watershed. Repetitive satellite snow cover observations greatly reduce the necessity for routine aerial snow reconnaissance flights over the mountains. High resolution, multispectral imagery provided by the LANDSAT satellite series enabled rapid and accurate mapping of snow-cover distributions for small- to medium-sized subwatersheds; however, the imagery provided only one observation every 9 days of about a third of the watershed. Low resolution imagery acquired by the ITOS and SMS/GOES meteorological satellite series provides the daily synoptic observations necessary to monitor the rapid changes in snow-covered area in the entire watershed. Short term runoff volumes can be predicted from daily sequential snow cover observations.
The application of rapid prototyping technique in chin augmentation.
Li, Min; Lin, Xin; Xu, Yongchen
2010-04-01
This article discusses the application of computer-aided design and rapid prototyping techniques in prosthetic chin augmentation for mild microgenia. Nine cases of mild microgenia underwent an electron-beam computed tomography scan. Then we performed three-dimensional reconstruction and operative design using computer software. According to the design, we determined the shape and size of the prostheses and made an individualized prosthesis for each chin augmentation with the rapid prototyping technique. With the application of computer-aided design and a rapid prototyping technique, we could determine the shape, size, and embedding location accurately. Prefabricating the individual prosthesis model is useful in improving the accuracy of treatment. In the nine cases of mild microgenia, three received a silicone implant, four received an ePTFE implant, and two received a Medpor implant. All patients were satisfied with the results. During follow-up at 6-12 months, all patients remained satisfied. The application of computer-aided design and rapid prototyping techniques can offer surgeons the ability to design an individualized ideal prosthesis for each patient.
Hughes, Rachel R; Scown, David; Lenehan, Claire E
2015-01-01
Plant extracts containing high levels of antioxidants are desirable due to their reported health benefits. Most techniques capable of determining the antioxidant activity of plant extracts are unsuitable for rapid at-line analysis as they require extensive sample preparation and/or long analysis times. Therefore, analytical techniques capable of real-time or pseudo real-time at-line monitoring of plant extractions, and determination of extraction endpoints, would be useful to manufacturers of antioxidant-rich plant extracts. To develop a reliable method for the rapid at-line extraction monitoring of antioxidants in plant extracts. Calendula officinalis extracts were prepared from dried flowers and analysed for antioxidant activity using sequential injection analysis (SIA) with chemiluminescence (CL) detection. The intensity of CL emission from the reaction of acidic potassium permanganate with antioxidants within the extract was used as the analytical signal. The SIA-CL method was applied to monitor the extraction of C. officinalis over the course of a batch extraction to determine the extraction endpoint. Results were compared with those from ultra high performance liquid chromatography (UHPLC). Pseudo real-time, at-line monitoring showed the level of antioxidants in a batch extract of Calendula officinalis plateaued after 100 min of extraction. These results correlated well with those of an offline UHPLC study. SIA-CL was found to be a suitable method for pseudo real-time monitoring of plant extractions and determination of extraction endpoints with respect to antioxidant concentrations. The method was applied at-line in the manufacturing industry. Copyright © 2015 John Wiley & Sons, Ltd.
Nguyen, Mary -Anne; Srijanto, Bernadeta; Collier, C. Patrick; ...
2016-08-02
The droplet interface bilayer (DIB) is a modular technique for assembling planar lipid membranes between water droplets in oil. The DIB method thus provides a unique capability for developing digital, droplet-based membrane platforms for rapid membrane characterization, drug screening and ion channel recordings. This paper demonstrates a new, low-volume microfluidic system that automates droplet generation, sorting, and sequential trapping in designated locations to enable the rapid assembly of arrays of DIBs. The channel layout of the device is guided by an equivalent circuit model, which predicts that a serial arrangement of hydrodynamic DIB traps enables sequential droplet placement and minimizes the hydrodynamic pressure developed across filled traps to prevent squeeze-through of trapped droplets. Furthermore, the incorporation of thin-film electrodes fabricated via evaporation metal deposition onto the glass substrate beneath the channels allows for the first time in situ, simultaneous electrical interrogation of multiple DIBs within a sealed device. Combining electrical measurements with imaging enables measurements of membrane capacitance and resistance and bilayer area, and our data show that DIBs formed in different trap locations within the device exhibit similar sizes and transport properties. Simultaneous, single channel recordings of ion channel gating in multiple membranes are obtained when alamethicin peptides are incorporated into the captured droplets, qualifying the thin-film electrodes as a means for measuring stimuli-responsive functions of membrane-bound biomolecules. Furthermore, this novel microfluidic-electrophysiology platform provides a reproducible, high throughput method for performing electrical measurements to study transmembrane proteins and biomembranes in low-volume, droplet-based membranes.
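A minimal sketch of the equivalent-circuit reasoning mentioned above: each channel segment is modeled as a hydraulic resistor using the standard wide-rectangular-channel approximation R = 12·mu·L/(w·h^3), and a serial resistor downstream of a trap reduces the fraction of pressure that develops locally across an already-filled trap. All dimensions and flow values are hypothetical, not the device's.

```python
# Hydraulic "circuit" estimate for a trap/bypass layout with a serial resistor.
mu = 1.0e-3            # water viscosity, Pa*s

def resistance(L, w, h):
    # Hagen-Poiseuille-type approximation for a shallow rectangular channel (w >> h).
    return 12.0 * mu * L / (w * h ** 3)

Q = 1.0e-11            # volumetric flow rate, m^3/s (hypothetical)
bypass = resistance(L=800e-6, w=60e-6, h=50e-6)       # bypass around the trap
downstream = resistance(L=2000e-6, w=60e-6, h=50e-6)  # added serial resistor

# With the trap filled, flow is forced through the bypass; the pressure across
# the trapped droplet equals the drop over the bypass branch.
dp_filled_trap = Q * bypass
# For a pressure-driven source, a large serial resistance downstream shrinks
# the share of total pressure appearing locally across the filled trap.
frac_local = bypass / (bypass + downstream)
print(f"dP across filled trap: {dp_filled_trap:.1f} Pa; "
      f"local pressure fraction with serial resistor: {frac_local:.2f}")
```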
Methodology of modeling and measuring computer architectures for plasma simulations
NASA Technical Reports Server (NTRS)
Wang, L. P. T.
1977-01-01
A brief introduction to plasma simulation using computers and the difficulties on currently available computers is given. Through the use of an analyzing and measuring methodology - SARA, the control flow and data flow of a particle simulation model REM2-1/2D are exemplified. After recursive refinements the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization could be configured to achieve the computation bound of an application problem. A sequential type simulation model, an array/pipeline type simulation model, and a fully parallel simulation model of a code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems which have implicitly parallel nature.
Kawano, Tomonori; Bouteau, François; Mancuso, Stefano
2012-01-01
The automata theory is the mathematical study of abstract machines commonly studied in the theoretical computer science and highly interdisciplinary fields that combine the natural sciences and the theoretical computer science. In the present review article, as the chemical and biological basis for natural computing or informatics, some plants, plant cells or plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, the Mealy machines or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between the computational data processing and plant decision-making processes became obvious. Finally, their putative roles as the parts for plant-based computing or robotic systems are discussed. PMID:23336016
NASA Technical Reports Server (NTRS)
Razzaq, Zia; Prasad, Venkatesh
1988-01-01
The results of a detailed investigation of the distribution of stresses in aluminum and composite panels subjected to uniform end shortening are presented. The focus problem is a rectangular panel with two longitudinal stiffeners, and an inner stiffener discontinuous at a central hole in the panel. The influence of the stiffeners on the stresses is evaluated through a two-dimensional global finite element analysis in the absence or presence of the hole. Contrary to physical intuition, it is found that the maximum stresses from the global analysis for both stiffened aluminum and composite panels are greater than the corresponding stresses for the unstiffened panels. The inner discontinuous stiffener causes a greater increase in stresses than the reduction provided by the two outer stiffeners. A detailed layer-by-layer study of stresses around the hole is also presented for both unstiffened and stiffened composite panels. A parallel equation solver is used for the global system of equations since the computational time is far less than that using a sequential scheme. A parallel Choleski method with up to 16 processors is used on the Flex/32 Multicomputer at NASA Langley Research Center. The parallel computing results are summarized and include the computational times, speedups, bandwidths, and their inter-relationships for the panel problems. It is found that the computational time for the Choleski method decreases with a decrease in bandwidth, and better speedups result as the bandwidth increases.
NASA Astrophysics Data System (ADS)
Dickens, J. K.
1991-04-01
The organic scintillation detector response code SCINFUL has been used to compute secondary-particle energy spectra, dσ/dE, following nonelastic neutron interactions with C-12 for incident neutron energies between 15 and 60 MeV. The resulting spectra are compared with published similar spectra computed by Brenner and Prael, who used an intranuclear cascade code including alpha clustering, a particle pickup mechanism, and a theoretical approach to sequential decay via intermediate particle-unstable states. The similarities of and the differences between the results of the two approaches are discussed.
ERIC Educational Resources Information Center
Hsu, Ting-Chia
2018-01-01
To stimulate classroom interactions, this study employed two different smartphone application modes, providing an additional instant interaction channel in a flipped classroom teaching fundamental computer science concepts. One instant interaction mode provided the students (N = 36) with anonymous feedback in chronological time sequence, while the…
A Computational Study of the Modification of Raindrop Size Distributions in Subcloud Downdrafts.
1981-08-01
Schumann (1939) and Findeisen (1939). It is characterized by a raindrop distribution consisting of many small droplets and a few large drops.
Support for Debugging Automatically Parallelized Programs
NASA Technical Reports Server (NTRS)
Hood, Robert; Jost, Gabriele
2001-01-01
This viewgraph presentation provides information on support sources available for the automatic parallelization of computer program. CAPTools, a support tool developed at the University of Greenwich, transforms, with user guidance, existing sequential Fortran code into parallel message passing code. Comparison routines are then run for debugging purposes, in essence, ensuring that the code transformation was accurate.
Smart Occupancy Sensor Debuts - Continuum Magazine | NREL
To detect occupancy, IPOS uses sequential image subtractions for extracting and analyzing motion, improving building energy performance. It was noted as one of the 100 most significant innovations of 2013 by R&D Magazine.
Self-Controlled Practice Enhances Motor Learning in Introverts and Extroverts
ERIC Educational Resources Information Center
Kaefer, Angélica; Chiviacowsky, Suzete; Meira, Cassio de Miranda, Jr.; Tani, Go
2014-01-01
Purpose: The purpose of the present study was to investigate the effects of self-controlled feedback on the learning of a sequential-timing motor task in introverts and extroverts. Method: Fifty-six university students were selected by the Eysenck Personality Questionnaire. They practiced a motor task consisting of pressing computer keyboard keys…
Robustness of Ability Estimation to Multidimensionality in CAST with Implications to Test Assembly
ERIC Educational Resources Information Center
Zhang, Yanwei; Nandakumar, Ratna
2006-01-01
Computer Adaptive Sequential Testing (CAST) is a test delivery model that combines features of the traditional conventional paper-and-pencil testing and item-based computerized adaptive testing (CAT). The basic structure of CAST is a panel composed of multiple testlets adaptively administered to examinees at different stages. Current applications…
Landscape analysis software tools
Don Vandendriesche
2008-01-01
Recently, several new computer programs have been developed to assist in landscape analysis. The "Sequential Processing Routine for Arraying Yields" (SPRAY) program was designed to run a group of stands with particular treatment activities to produce vegetation yield profiles for forest planning. SPRAY uses existing Forest Vegetation Simulator (FVS) software coupled...
Neural Bases of Sequence Processing in Action and Language
ERIC Educational Resources Information Center
Carota, Francesca; Sirigu, Angela
2008-01-01
Real-time estimation of what we will do next is a crucial prerequisite of purposive behavior. During the planning of goal-oriented actions, for instance, the temporal and causal organization of upcoming subsequent moves needs to be predicted based on our knowledge of events. A forward computation of sequential structure is also essential for…
Parallel Implementation of MAFFT on CUDA-Enabled Graphics Hardware.
Zhu, Xiangyuan; Li, Kenli; Salah, Ahmad; Shi, Lin; Li, Keqin
2015-01-01
Multiple sequence alignment (MSA) constitutes an extremely powerful tool for many biological applications including phylogenetic tree estimation, secondary structure prediction, and critical residue identification. However, aligning large biological sequences with popular tools such as MAFFT requires long runtimes on sequential architectures. Due to the ever increasing sizes of sequence databases, there is increasing demand to accelerate this task. In this paper, we demonstrate how graphics processing units (GPUs), powered by the compute unified device architecture (CUDA), can be used as an efficient computational platform to accelerate the MAFFT algorithm. To fully exploit the GPU's capabilities for accelerating MAFFT, we have optimized the sequence data organization to eliminate the bandwidth bottleneck of memory access, designed a memory allocation and reuse strategy to make full use of the limited memory of GPUs, proposed a new modified run-length encoding (MRLE) scheme to reduce memory consumption, and used high-performance shared memory to speed up I/O operations. Our implementation, tested on three NVIDIA GPUs, achieves a speedup of up to 11.28 on a Tesla K20m GPU compared to the sequential MAFFT 7.015.
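A minimal run-length encoding sketch in the spirit of the memory-reduction idea above: aligned biological sequences contain long runs (for example, gap stretches) that compress well. The paper's modified RLE (MRLE) is more involved; this shows only the baseline encode/decode round trip.

```python
# Baseline RLE for sequence data: store (symbol, run length) pairs.
from itertools import groupby

def rle_encode(seq):
    # "AAAGG--" -> [("A", 3), ("G", 2), ("-", 2)]
    return [(ch, len(list(run))) for ch, run in groupby(seq)]

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

aligned = "MKV---LLLLAAAAGT"
encoded = rle_encode(aligned)
assert rle_decode(encoded) == aligned   # lossless round trip
print(encoded)  # 16 characters shrink to 8 (symbol, count) pairs
```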
Computer vision for driver assistance systems
NASA Astrophysics Data System (ADS)
Handmann, Uwe; Kalinke, Thomas; Tzomakas, Christos; Werner, Martin; von Seelen, Werner
1998-07-01
Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, research has reached a high level of performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut fur Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear view mirror in a car. The approach combines sequential and parallel sensor and information processing. Three main tasks, namely the initial segmentation (object detection), the object tracking, and the object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach comes from the integrative coupling of different algorithms providing partly redundant information.
Lanczos eigensolution method for high-performance computers
NASA Technical Reports Server (NTRS)
Bostic, Susan W.
1991-01-01
The theory, computational analysis, and applications are presented of a Lanczos algorithm on high performance computers. The computationally intensive steps of the algorithm are identified as: the matrix factorization, the forward/backward equation solution, and the matrix-vector multiplies. These computational steps are optimized to exploit the vector and parallel capabilities of high performance computers. The savings in computational time from applying optimization techniques such as variable band and sparse data storage and access, loop unrolling, use of local memory, and compiler directives are presented. Two large scale structural analysis applications are described: the buckling of a composite blade stiffened panel with a cutout, and the vibration analysis of a high speed civil transport. The sequential computational time of 181.6 seconds for the panel problem executed on a CONVEX computer was decreased to 14.1 seconds with the optimized vector algorithm. The best computational time of 23 seconds for the transport problem with 17,000 degrees of freedom was on the Cray-YMP using an average of 3.63 processors.
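A minimal dense Lanczos sketch for context: tridiagonalize a symmetric matrix with the three-term recurrence (with full reorthogonalization for numerical safety) and read approximate extreme eigenvalues from the small tridiagonal matrix. Production structural-analysis solvers add shifts, restarts, and the sparse factorizations discussed above.

```python
# Lanczos: the expensive kernels are exactly the matrix-vector products and
# (in shift-invert variants) the factorization/solve steps named in the abstract.
import numpy as np

def lanczos(A, k, rng=np.random.default_rng(5)):
    n = A.shape[0]
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    q = rng.normal(size=n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = A @ Q[:, j]                        # matrix-vector multiply
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # full reorthogonalization
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)               # Ritz values

A = np.random.default_rng(6).normal(size=(200, 200))
A = A + A.T                                    # symmetric test matrix
print("largest Ritz value:", lanczos(A, 30)[-1])
print("true largest eigenvalue:", np.linalg.eigvalsh(A)[-1])
```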
Method for rapid base sequencing in DNA and RNA
Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.
1987-10-07
A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.
Progressive outer retinal necrosis presenting as cherry red spot.
Yiu, Glenn; Young, Lucy H
2012-10-01
To report a case of progressive outer retinal necrosis (PORN) presenting as a cherry red spot. Case report. A 53-year-old woman with recently diagnosed HIV and varicella-zoster virus (VZV) aseptic meningitis developed rapid sequential vision loss in both eyes over 2 months. Her exam showed a "cherry red spot" in both maculae with peripheral atrophy and pigmentary changes, consistent with PORN. Due to her late presentation and the rapid progression of her condition, she quickly developed end-stage vision loss in both eyes. PORN should be considered within the differential diagnosis of a "cherry red spot." Immune-deficient patients with a history of herpetic infection who present with visual loss warrant prompt ophthalmological evaluation.
Method for rapid base sequencing in DNA and RNA
Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.
1990-10-09
A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.
Method for rapid base sequencing in DNA and RNA
Jett, James H.; Keller, Richard A.; Martin, John C.; Moyzis, Robert K.; Ratliff, Robert L.; Shera, E. Brooks; Stewart, Carleton C.
1990-01-01
A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed.
Device for rapid quantification of human carotid baroreceptor-cardiac reflex responses
NASA Technical Reports Server (NTRS)
Sprenkle, J. M.; Eckberg, D. L.; Goble, R. L.; Schelhorn, J. J.; Halliday, H. C.
1986-01-01
A new device has been designed, constructed, and evaluated to characterize the human carotid baroreceptor-cardiac reflex response relation rapidly. This system was designed for study of reflex responses of astronauts before, during, and after space travel. The system comprises a new tightly sealing silicon rubber neck chamber, a stepping motor-driven electrodeposited nickel bellows pressure system, capable of delivering sequential R-wave-triggered neck chamber pressure changes between +40 and -65 mmHg, and a microprocessor-based electronics system for control of pressure steps and analysis and display of responses. This new system provokes classic sigmoid baroreceptor-cardiac reflex responses with threshold, linear, and saturation ranges in most human volunteers during one held expiration.
Efficient sequential and parallel algorithms for record linkage
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Background and objective Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large time needs or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Methods Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Results Our sequential and parallel algorithms have been tested on a real dataset of 1 083 878 records and synthetic datasets ranging in size from 50 000 to 9 000 000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). Conclusions We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm. PMID:24154837
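A minimal sketch of the linkage pipeline described above: eliminate exact duplicates first (standing in for the radix-sort step), block records to cut comparisons, link similar records by edit distance, and take connected components with union-find, mirroring the graph idea in the abstract. Attribute choices and thresholds are illustrative only.

```python
# Dedup + blocking + edit-distance linking + union-find connected components.
from collections import defaultdict

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance with a rolling row.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def link(records, threshold=2):
    records = list(dict.fromkeys(records))   # drop exact duplicates up front
    parent = list(range(len(records)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    blocks = defaultdict(list)               # block on first character
    for i, r in enumerate(records):
        blocks[r[0]].append(i)
    for ids in blocks.values():
        for a in range(len(ids)):
            for b in range(a + 1, len(ids)):
                if edit_distance(records[ids[a]], records[ids[b]]) <= threshold:
                    parent[find(ids[a])] = find(ids[b])   # union similar records
    clusters = defaultdict(list)
    for i, r in enumerate(records):
        clusters[find(i)].append(r)
    return list(clusters.values())

print(link(["jon smith", "john smith", "john smith", "jane doe", "jane d0e"]))
```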
Hou, Kun; Zhao, Jinchuan; Zhang, Yang; Zhu, Xiaobo; Zhao, Yan; Li, Guichen
2016-05-01
Simultaneous or early sequential rupture of multiple intracranial aneurysms (MIAs) is encountered rarely, with no more than 10 cases having been reported. As a result of its rarity, there are a lot of questions concerning this entity need to be answered. A 67-year-old woman was admitted to the First Hospital of Jilin University (Eastern Division) from a local hospital after a sudden onset of severe headache, nausea, and vomiting. Head computed tomography (CT) at the local hospital revealed diffuse subarachnoid hemorrhage (SAH) that was concentrated predominately in the suprasellar cistern and interhemispheric fissure. During her transfer to our hospital, she experienced another episode of sudden headache. CT on admission to our hospital revealed that the SAH was increased with 2 isolated hematomas both in the interhemispheric fissure and the left paramedian frontal lobe. Further CT angiography and intraoperative findings were in favor of early sequential rupture of 2 intracranial aneurysms. To further elucidate the characteristics, mechanism, management, and prognosis of this specific entity, we conducted a comprehensive review of the literature. The mechanism of simultaneous or early sequential rupture of MIAs is still obscure. Transient elevation of blood pressure might play a role in the process, and preventing the sudden elevation of blood pressure might be beneficial for patients with aneurysmal SAH and MIAs. The management of simultaneously or early sequentially ruptured aneurysms is more complex for its difficulty in responsible aneurysm determination, urgency in treatment, toughness in intraoperative manipulation and poorness in prognosis. Copyright © 2016 Elsevier Inc. All rights reserved.
The application of neural networks to myoelectric signal analysis: a preliminary study.
Kelly, M F; Parker, P A; Scott, R N
1990-03-01
Two neural network implementations are applied to myoelectric signal (MES) analysis tasks. The motivation behind this research is to explore more reliable methods of deriving control for multi-degree-of-freedom arm prostheses. A discrete Hopfield network is used to calculate the time series parameters for a moving average MES model. It is demonstrated that the Hopfield network is capable of generating the same time series parameters as those produced by the conventional sequential least squares (SLS) algorithm. Furthermore, it can be extended to applications utilizing larger amounts of data, and possibly to higher order time series models, without significant degradation in computational efficiency. The second neural network implementation involves using a two-layer perceptron for classifying a single-site MES based on two features, specifically the first time series parameter, and the signal power. Using these features, the perceptron is trained to distinguish between four separate arm functions. The two-dimensional decision boundaries used by the perceptron classifier are delineated. It is also demonstrated that the perceptron is able to rapidly compensate for variations when new data are incorporated into the training set. This adaptive quality suggests that perceptrons may provide a useful tool for future MES analysis.
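As a rough illustration of the second implementation, the Python sketch below trains a small two-layer perceptron on synthetic stand-ins for the two features named above. The data, architecture, and learning rate are invented for the example, not the authors' settings.

    # Illustrative sketch (not the authors' implementation): a small
    # two-layer perceptron mapping two MES features (first time-series
    # parameter, signal power) to one of four arm functions.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))                      # stand-in feature vectors
    y = (X[:, 0] > 0).astype(int) * 2 + (X[:, 1] > 0)  # four synthetic classes

    W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(scale=0.5, size=(8, 4)); b2 = np.zeros(4)

    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    for _ in range(500):                               # plain gradient descent
        h = np.tanh(X @ W1 + b1)
        p = softmax(h @ W2 + b2)
        g = p.copy(); g[np.arange(len(y)), y] -= 1; g /= len(y)
        dW2, db2 = h.T @ g, g.sum(0)
        dh = (g @ W2.T) * (1 - h ** 2)
        dW1, db1 = X.T @ dh, dh.sum(0)
        W1 -= 0.5 * dW1; b1 -= 0.5 * db1; W2 -= 0.5 * dW2; b2 -= 0.5 * db2

    pred = softmax(np.tanh(X @ W1 + b1) @ W2 + b2).argmax(1)
    print("training accuracy:", (pred == y).mean())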
Benchmarking Ligand-Based Virtual High-Throughput Screening with the PubChem Database
Butkiewicz, Mariusz; Lowe, Edward W.; Mueller, Ralf; Mendenhall, Jeffrey L.; Teixeira, Pedro L.; Weaver, C. David; Meiler, Jens
2013-01-01
With the rapidly increasing availability of High-Throughput Screening (HTS) data in the public domain, such as the PubChem database, methods for ligand-based computer-aided drug discovery (LB-CADD) have the potential to accelerate and reduce the cost of probe development and drug discovery efforts in academia. We assemble nine data sets from realistic HTS campaigns representing major families of drug target proteins for benchmarking LB-CADD methods. Each data set is public domain through PubChem and carefully collated through confirmation screens validating active compounds. These data sets provide the foundation for benchmarking a new cheminformatics framework BCL::ChemInfo, which is freely available for non-commercial use. Quantitative structure activity relationship (QSAR) models are built using Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), Decision Trees (DTs), and Kohonen networks (KNs). Problem-specific descriptor optimization protocols are assessed including Sequential Feature Forward Selection (SFFS) and various information content measures. Measures of predictive power and confidence are evaluated through cross-validation, and a consensus prediction scheme is tested that combines orthogonal machine learning algorithms into a single predictor. Enrichments ranging from 15 to 101 for a TPR cutoff of 25% are observed. PMID:23299552
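The descriptor-selection protocol named above follows a generic greedy pattern, sketched below in Python under the assumption of a user-supplied score callback; the callback and its arguments are illustrative, not part of BCL::ChemInfo.

    # Plain greedy forward selection (omitting any floating/backtracking
    # refinements); `score` is a user-supplied callback such as the mean
    # cross-validated accuracy of a QSAR model built on those features.
    def forward_select(all_features, score, k):
        selected = []
        while len(selected) < k:
            best_f, best_s = None, float("-inf")
            for f in all_features:
                if f in selected:
                    continue
                s = score(selected + [f])   # evaluate the candidate subset
                if s > best_s:
                    best_f, best_s = f, s
            selected.append(best_f)
        return selected

    # e.g. selected = forward_select(descriptor_names, score, k=10)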
Fast Edge Detection and Segmentation of Terrestrial Laser Scans Through Normal Variation Analysis
NASA Astrophysics Data System (ADS)
Che, E.; Olsen, M. J.
2017-09-01
Terrestrial Laser Scanning (TLS) utilizes light detection and ranging (lidar) to effectively and efficiently acquire point cloud data for a wide variety of applications. Segmentation is a common procedure of post-processing to group the point cloud into a number of clusters to simplify the data for the sequential modelling and analysis needed for most applications. This paper presents a novel method to rapidly segment TLS data based on edge detection and region growing. First, by computing the projected incidence angles and performing the normal variation analysis, the silhouette edges and intersection edges are separated from the smooth surfaces. Then a modified region growing algorithm groups the points lying on the same smooth surface. The proposed method efficiently exploits the gridded scan pattern utilized during acquisition of TLS data from most sensors and takes advantage of parallel programming to process approximately 1 million points per second. Moreover, the proposed segmentation does not require estimation of the normal at each point, which limits the errors in normal estimation propagating to segmentation. Both an indoor and outdoor scene are used for an experiment to demonstrate and discuss the effectiveness and robustness of the proposed segmentation method.
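The grouping stage can be sketched independently of the paper's normal-variation edge detector. The Python fragment below grows regions over a gridded scan by breadth-first search, joining neighbours whose precomputed unit normals agree within a threshold; note that the actual method avoids per-point normal estimation, so this illustrates only the region-growing logic.

    # Region growing over a gridded scan, assuming per-point unit normals
    # are already available as an H x W x 3 array (an assumption made for
    # this sketch, not by the paper itself).
    import numpy as np
    from collections import deque

    def grow_regions(normals, angle_thresh_deg=5.0):
        H, W, _ = normals.shape
        cos_t = np.cos(np.radians(angle_thresh_deg))
        labels = -np.ones((H, W), dtype=int)
        next_label = 0
        for si in range(H):
            for sj in range(W):
                if labels[si, sj] != -1:
                    continue
                labels[si, sj] = next_label
                queue = deque([(si, sj)])
                while queue:                      # breadth-first growth
                    i, j = queue.popleft()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < H and 0 <= nj < W and labels[ni, nj] == -1
                                and np.dot(normals[i, j], normals[ni, nj]) > cos_t):
                            labels[ni, nj] = next_label
                            queue.append((ni, nj))
                next_label += 1
        return labels

    flat = np.zeros((4, 4, 3)); flat[..., 2] = 1.0   # a perfectly flat patch
    print(grow_regions(flat))                        # one region, all zeros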
Computer-generated holograms by multiple wavefront recording plane method with occlusion culling.
Symeonidou, Athanasia; Blinder, David; Munteanu, Adrian; Schelkens, Peter
2015-08-24
We propose a novel fast method for full parallax computer-generated holograms with occlusion processing, suitable for volumetric data such as point clouds. A novel light wave propagation strategy relying on the sequential use of the wavefront recording plane method is proposed, which employs look-up tables in order to reduce the computational complexity in the calculation of the fields. Also, a novel technique for occlusion culling with little additional computation cost is introduced. Additionally, the method applies a Gaussian distribution to the individual points in order to improve visual quality. Performance tests show that for a full-parallax high-definition CGH a speedup factor of more than 2,500 compared to the ray-tracing method can be achieved without hardware acceleration.
Introduction of Parallel GPGPU Acceleration Algorithms for the Solution of Radiative Transfer
NASA Technical Reports Server (NTRS)
Godoy, William F.; Liu, Xu
2011-01-01
General-purpose computing on graphics processing units (GPGPU) is a recent technique that allows the parallel graphics processing unit (GPU) to accelerate calculations performed sequentially by the central processing unit (CPU). To introduce GPGPU to radiative transfer, the Gauss-Seidel solution of the well-known expressions for 1-D and 3-D homogeneous, isotropic media is selected as a test case. Different algorithms are introduced to balance memory and GPU-CPU communication, critical aspects of GPGPU. Results show that speed-ups of one to two orders of magnitude are obtained when compared to sequential solutions. The underlying value of GPGPU is its potential extension in radiative solvers (e.g., Monte Carlo, discrete ordinates) at a minimal learning curve.
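For reference, the sequential baseline such GPU kernels are compared against is the textbook Gauss-Seidel sweep; a minimal Python version for a generic linear system Ax = b follows (illustrative only, not the radiative-transfer code itself).

    # Sequential Gauss-Seidel iteration: each unknown is updated in turn
    # using the freshest available values, which is what makes the sweep
    # inherently serial and a natural CPU baseline.
    import numpy as np

    def gauss_seidel(A, b, iters=100):
        x = np.zeros_like(b, dtype=float)
        n = len(b)
        for _ in range(iters):
            for i in range(n):
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
        return x

    A = np.array([[4.0, 1.0], [2.0, 3.0]])   # diagonally dominant, so it converges
    b = np.array([1.0, 2.0])
    print(gauss_seidel(A, b))                # approaches [0.1, 0.6]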
NASA Astrophysics Data System (ADS)
Sandhu, Amit
A sequential quadratic programming method is proposed for solving nonlinear optimal control problems subject to general path constraints, including mixed state-control and state-only constraints. The proposed algorithm further develops the approach proposed in [1], with the objective of eliminating the need for a large number of time intervals to arrive at an optimal solution. This is done by introducing an adaptive time discretization that allows a desirable control profile to form without using many intervals. The use of fewer time intervals reduces the computation time considerably. This algorithm is applied in this thesis to a trajectory planning problem for higher-elevation Mars landing.
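Sequential quadratic programming itself is available off the shelf; the toy Python example below uses SciPy's SLSQP solver on a 10-interval control vector with a path constraint and a boundary condition. The objective and constraints are invented placeholders, not the thesis problem.

    # Toy use of an off-the-shelf SQP-type solver (SciPy's SLSQP).
    import numpy as np
    from scipy.optimize import minimize

    objective = lambda u: np.sum(u ** 2)                           # control effort
    path_con = {"type": "ineq", "fun": lambda u: 1.0 - np.abs(u)}  # |u_k| <= 1
    boundary = {"type": "eq", "fun": lambda u: np.sum(u) - 3.0}    # hit a target

    res = minimize(objective, np.zeros(10), method="SLSQP",
                   constraints=[path_con, boundary])
    print(res.x)   # effort spreads evenly: each u_k is about 0.3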
An analog scrambler for speech based on sequential permutations in time and frequency
NASA Astrophysics Data System (ADS)
Cox, R. V.; Jayant, N. S.; McDermott, B. J.
Permutation of speech segments is an operation that is frequently used in the design of scramblers for analog speech privacy. In this paper, a sequential procedure for segment permutation is considered. This procedure can be extended to two-dimensional permutation of time segments and frequency bands. By subjective testing, it is shown that this combination gives a residual intelligibility for spoken digits of 20 percent with a delay of 256 ms. (A lower bound for this test would be 10 percent). The complexity of implementing such a system is considered and the issues of synchronization and channel equalization are addressed. The computer simulation results for the system using both real and simulated channels are examined.
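The time-segment half of the scheme reduces to a keyed permutation and its inverse, as in this Python toy; the segment length, key, and signal are placeholders, and the real system also permutes frequency bands.

    # Keyed time-segment permutation scrambling and its inverse.
    import numpy as np

    def scramble(signal, seg_len, key):
        n_seg = len(signal) // seg_len
        segs = signal[:n_seg * seg_len].reshape(n_seg, seg_len)
        perm = np.random.default_rng(key).permutation(n_seg)
        return segs[perm].ravel(), perm

    def descramble(scrambled, seg_len, perm):
        segs = scrambled.reshape(len(perm), seg_len)
        out = np.empty_like(segs)
        out[perm] = segs                    # invert the permutation
        return out.ravel()

    x = np.arange(12.0)
    y, perm = scramble(x, seg_len=3, key=42)
    assert np.array_equal(descramble(y, seg_len=3, perm=perm), x)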
Unsupervised classification of remote multispectral sensing data
NASA Technical Reports Server (NTRS)
Su, M. Y.
1972-01-01
The new unsupervised classification technique for classifying multispectral remote sensing data, which can come either from a multispectral scanner or from digitized color-separation aerial photographs, consists of two parts: (a) a sequential statistical clustering which is a one-pass sequential variance analysis and (b) a generalized K-means clustering. In this composite clustering technique, the output of (a) is a set of initial clusters which are input to (b) for further improvement by an iterative scheme. Applications of the technique using an IBM-7094 computer on multispectral data sets over Purdue's Flight Line C-1 and the Yellowstone National Park test site have been accomplished. Comparisons between the classification maps by the unsupervised technique and the supervised maximum likelihood technique indicate that the classification accuracies are in agreement.
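The two-part composite can be sketched as follows in Python: a one-pass sequential assignment forms initial cluster means, which then seed an iterative K-means refinement. The distance threshold and iteration count are illustrative, not the values used in the study.

    # One-pass sequential clustering to seed an iterative K-means refinement.
    import numpy as np

    def one_pass_clusters(X, dist_thresh):
        means, counts = [X[0].astype(float).copy()], [1]
        for x in X[1:]:
            d = [np.linalg.norm(x - m) for m in means]
            k = int(np.argmin(d))
            if d[k] < dist_thresh:          # absorb into the nearest cluster
                counts[k] += 1
                means[k] += (x - means[k]) / counts[k]
            else:                           # open a new cluster
                means.append(x.astype(float).copy()); counts.append(1)
        return np.array(means)

    def kmeans(X, means, iters=10):
        for _ in range(iters):
            labels = ((X[:, None] - means[None]) ** 2).sum(-1).argmin(1)
            means = np.array([X[labels == k].mean(0) if np.any(labels == k)
                              else means[k] for k in range(len(means))])
        return labels, means

    X = np.random.default_rng(1).normal(size=(100, 4))
    labels, means = kmeans(X, one_pass_clusters(X, dist_thresh=2.5))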
NASA Astrophysics Data System (ADS)
Li, Shuang; Zhu, Yongsheng; Wang, Yukai
2014-02-01
Asteroid deflection techniques are essential in order to protect the Earth from catastrophic impacts by hazardous asteroids. Rapid design and optimization of low-thrust rendezvous/interception trajectories is considered as one of the key technologies to successfully deflect potentially hazardous asteroids. In this paper, we address a general framework for the rapid design and optimization of low-thrust rendezvous/interception trajectories for future asteroid deflection missions. The design and optimization process includes three closely associated steps. Firstly, shape-based approaches and genetic algorithm (GA) are adopted to perform preliminary design, which provides a reasonable initial guess for subsequent accurate optimization. Secondly, Radau pseudospectral method is utilized to transcribe the low-thrust trajectory optimization problem into a discrete nonlinear programming (NLP) problem. Finally, sequential quadratic programming (SQP) is used to efficiently solve the nonlinear programming problem and obtain the optimal low-thrust rendezvous/interception trajectories. The rapid design and optimization algorithms developed in this paper are validated by three simulation cases with different performance indexes and boundary constraints.
NASA Astrophysics Data System (ADS)
Ramgraber, M.; Schirmer, M.
2017-12-01
As computational power grows and wireless sensor networks find their way into common practice, it becomes increasingly feasible to pursue on-line numerical groundwater modelling. The reconciliation of model predictions with sensor measurements often necessitates the application of Sequential Monte Carlo (SMC) techniques, most prominently represented by the Ensemble Kalman Filter. In the pursuit of on-line predictions it seems advantageous to transcend the scope of pure data assimilation and incorporate on-line parameter calibration as well. Unfortunately, the interplay between shifting model parameters and transient states is non-trivial. Several recent publications (e.g. Chopin et al., 2013, Kantas et al., 2015) in the field of statistics discuss potential algorithms addressing this issue. However, most of these are computationally intractable for on-line application. In this study, we investigate to what extent compromises between mathematical rigour and computational restrictions can be made within the framework of on-line numerical modelling of groundwater. Preliminary studies are conducted in a synthetic setting, with the goal of transferring the conclusions drawn into application in a real-world setting. To this end, a wireless sensor network has been established in the valley aquifer around Fehraltorf, characterized by a highly dynamic groundwater system and located about 20 km to the east of Zürich, Switzerland. By providing continuous probabilistic estimates of the state and parameter distribution, a steady base for branched-off predictive scenario modelling could be established, providing water authorities with advanced tools for assessing the impact of groundwater management practices. Chopin, N., Jacob, P.E. and Papaspiliopoulos, O. (2013): SMC2: an efficient algorithm for sequential analysis of state space models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75 (3), p. 397-426. Kantas, N., Doucet, A., Singh, S.S., Maciejowski, J., and Chopin, N. (2015): On Particle Methods for Parameter Estimation in State-Space Models. Statistical Science, 30 (3), p. 328-351.
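For concreteness, the analysis step of the Ensemble Kalman Filter mentioned above can be written in a few lines of Python; the dimensions, the linear observation operator H, and the noise levels are placeholders for illustration.

    # Bare-bones Ensemble Kalman Filter analysis step with a linear
    # observation operator H.
    import numpy as np

    def enkf_update(ensemble, H, y_obs, obs_cov, rng):
        # ensemble: n_members x n_state array of model states
        X = ensemble - ensemble.mean(axis=0)
        P = X.T @ X / (len(ensemble) - 1)                   # sample covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + obs_cov)  # Kalman gain
        # perturbed observations preserve the analysis ensemble spread
        y_pert = y_obs + rng.multivariate_normal(
            np.zeros(len(y_obs)), obs_cov, size=len(ensemble))
        return ensemble + (y_pert - ensemble @ H.T) @ K.T

    rng = np.random.default_rng(0)
    ens = rng.normal(size=(50, 3))        # 50 members, 3 state variables
    H = np.array([[1.0, 0.0, 0.0]])       # observe the first variable only
    print(enkf_update(ens, H, np.array([0.5]), 0.1 * np.eye(1), rng).mean(axis=0))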
A sequential coalescent algorithm for chromosomal inversions
Peischl, S; Koch, E; Guerrero, R F; Kirkpatrick, M
2013-01-01
Chromosomal inversions are common in natural populations and are believed to be involved in many important evolutionary phenomena, including speciation, the evolution of sex chromosomes and local adaptation. While recent advances in sequencing and genotyping methods are leading to rapidly increasing amounts of genome-wide sequence data that reveal interesting patterns of genetic variation within inverted regions, efficient simulation methods to study these patterns are largely missing. In this work, we extend the sequential Markovian coalescent, an approximation to the coalescent with recombination, to include the effects of polymorphic inversions on patterns of recombination. Results show that our algorithm is fast, memory-efficient and accurate, making it feasible to simulate large inversions in large populations for the first time. The SMC algorithm enables studies of patterns of genetic variation (for example, linkage disequilibria) and tests of hypotheses (using simulation-based approaches) that were previously intractable. PMID:23632894
Amyloid Precursor Protein Processing and Alzheimer’s Disease
O’Brien, Richard J.; Wong, Philip C.
2011-01-01
Alzheimer’s disease (AD), the leading cause of dementia worldwide, is characterized by the accumulation of the β-amyloid peptide (Aβ) within the brain along with hyperphosphorylated and cleaved forms of the microtubule-associated protein tau. Genetic, biochemical, and behavioral research suggest that physiologic generation of the neurotoxic Aβ peptide from sequential amyloid precursor protein (APP) proteolysis is the crucial step in the development of AD. APP is a single-pass transmembrane protein expressed at high levels in the brain and metabolized in a rapid and highly complex fashion by a series of sequential proteases, including the intramembranous γ-secretase complex, which also process other key regulatory molecules. Why Aβ accumulates in the brains of elderly individuals is unclear but could relate to changes in APP metabolism or Aβ elimination. Lessons learned from biochemical and genetic studies of APP processing will be crucial to the development of therapeutic targets to treat AD. PMID:21456963
NASA Astrophysics Data System (ADS)
Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang
2016-08-01
Path planning plays an important role in aircraft guidance systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternating line segments and circular arcs, in order to reformulate the problem as a static optimization over the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects with them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
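The geometric test for a circular no-fly zone is simple to state: a straight path segment violates the zone exactly when its closest point to the centre lies within the radius. A Python sketch follows (assuming distinct endpoints p and q; values in the example are invented).

    # Does the straight segment from waypoint p to waypoint q pass within
    # radius r of centre c?
    import numpy as np

    def segment_hits_circle(p, q, c, r):
        p, q, c = map(np.asarray, (p, q, c))
        d = q - p
        t = np.clip(np.dot(c - p, d) / np.dot(d, d), 0.0, 1.0)
        return np.linalg.norm(p + t * d - c) <= r   # nearest point vs. radius

    print(segment_hits_circle((0, 0), (10, 0), (5, 2), 3))  # True: zone violated
    print(segment_hits_circle((0, 0), (10, 0), (5, 5), 3))  # False: path is clear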
Schöfer, Helmut; Tatti, Silvio; Lynde, Charles W; Skerlev, Mihael; Hercogová, Jana; Rotaru, Maria; Ballesteros, Juan; Calzavara-Pinton, Piergiacomo
2017-12-01
This review of the proactive sequential therapy (PST) of external genital and perianal warts (EGW) is based on the most current available clinical literature and on the broad clinical experience of a group of international experts, physicians who are well versed in the treatment of human papillomavirus-associated diseases. It provides a practical guide for the treatment of EGW, including epidemiology, etiology, clinical appearance, and diagnostic procedures for these viral infections. Furthermore, the treatment goals and current treatment options, elucidating provider- and patient-applied therapies, and the parameters driving treatment decisions are summarized. Specifically, the mode of action of the topical treatments sinecatechins and imiquimod, as well as the PST for EGW to achieve rapid and sustained clearance, is discussed. The group of experts has developed a treatment algorithm giving healthcare providers a practical tool for the treatment of EGW, which is very valuable in the presence of many different treatment options.
Chen, Xi; Chen, Xiuxia; Wan, Xianwei; Weng, Boqi; Huang, Qin
2010-12-01
Both live plants and dried straw of water hyacinth were applied to a sequential treatment of swine wastewater for nitrogen and phosphorus reduction. In the facultative tank, the straw behaved as a kind of adsorbent toward phosphorus. Its phosphorus removal rate varied considerably with contact time between the straw and the influent. In the laboratory, the straw displayed a rapid total phosphorus reduction in a KH(2)PO(4) solution. The adsorption efficiency was about 36% upon saturation. At the same time, the water hyacinth straw in the facultative tank enhanced NH(3)-N removal efficiency as well. However, no adsorption was evident. This study demonstrated an economically feasible means of applying water hyacinth straw for phosphorus removal in swine wastewater treatment. The sequential system employed significantly reduced the land use, as compared to the wastewater stabilization pond treatment, for pollution amelioration of swine waste. 2010 Elsevier Ltd. All rights reserved.
Increasing processor utilization during parallel computation rundown
NASA Technical Reports Server (NTRS)
Jones, W. H.
1986-01-01
Some parallel processing environments provide for asynchronous execution and completion of general purpose parallel computations from a single computational phase. When all the computations from such a phase are complete, a new parallel computational phase is begun. Depending upon the granularity of the parallel computations to be performed, there may be a shortage of available work as a particular computational phase draws to a close (computational rundown). This can result in the waste of computing resources and the delay of the overall problem. In many practical instances, strict sequential ordering of phases of parallel computation is not totally required. In such cases, the beginning of one phase can be correctly computed before the end of a previous phase is completed. This allows additional work to be generated somewhat earlier to keep computing resources busy during each computational rundown. The conditions under which this can occur are identified and the frequency of occurrence of such overlapping in an actual parallel Navier-Stokes code is reported. A language construct is suggested and possible control strategies for the management of such computational phase overlapping are discussed.
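A toy Python illustration of the overlap: tasks of the next phase that are known not to depend on the current phase's results are released during rundown, so idle workers stay busy. The task bodies and the independence partition are invented for the example.

    # Overlapping two computational phases with a thread pool: phase-2
    # tasks flagged independent of phase 1 start during phase 1's rundown;
    # dependent ones wait for phase 1 to complete.
    import time
    from concurrent.futures import ThreadPoolExecutor, wait

    def run_phases(phase1, phase2_indep, phase2_dep, workers=4):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            f1 = [pool.submit(t) for t in phase1]
            f2a = [pool.submit(t) for t in phase2_indep]  # released early
            wait(f1)                     # strict ordering only where required
            f2b = [pool.submit(t) for t in phase2_dep]
            wait(f2a + f2b)

    slow = lambda: time.sleep(0.1)
    run_phases([slow] * 6, [slow] * 4, [slow] * 2)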
PETSc Users Manual Revision 3.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Brown, J.; Buschelman, K.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication. PETSc includes an expanding suite of parallel linear, nonlinear equation solvers and time integrators that may be used in application codes written in Fortran, C, C++, Python, and MATLAB (sequential). PETSc provides many of the mechanisms needed within parallel application codes, such as parallel matrix and vector assembly routines. The library is organized hierarchically, enabling users to employ the level of abstraction that is most appropriate for a particular problem. By using techniques of object-oriented programming, PETSc provides enormous flexibility for users. PETSc is a sophisticated set of software tools; as such, for some users it initially has a much steeper learning curve than a simple subroutine library. In particular, for individuals without some computer science background, experience programming in C, C++ or Fortran and experience using a debugger such as gdb or dbx, it may require a significant amount of time to take full advantage of the features that enable efficient software use. However, the power of the PETSc design and the algorithms it incorporates may make the efficient implementation of many application codes simpler than “rolling them” yourself. For many tasks a package such as MATLAB is often the best tool; PETSc is not intended for the classes of problems for which effective MATLAB code can be written. PETSc also has a MATLAB interface, so portions of your code can be written in MATLAB to “try out” the PETSc solvers. The resulting code will not be scalable, however, because currently MATLAB is inherently not scalable; and PETSc should not be used to attempt to provide a “parallel linear solver” in an otherwise sequential code. Certainly all parts of a previously sequential code need not be parallelized, but the matrix generation portion must be parallelized to expect any kind of reasonable performance. Do not expect to generate your matrix sequentially and then “use PETSc” to solve the linear system in parallel. Since PETSc is under continued development, small changes in usage and calling sequences of routines will occur. PETSc is supported; see the web site http://www.mcs.anl.gov/petsc for information on contacting support. At http://www.mcs.anl.gov/petsc/publications may be found a list of publications and web sites that feature work involving PETSc. We welcome any reports of corrections for this document.
PETSc Users Manual Revision 3.4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Brown, J.; Buschelman, K.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication. PETSc includes an expanding suite of parallel linear, nonlinear equation solvers and time integrators that may be used in application codes written in Fortran, C, C++, Python, and MATLAB (sequential). PETSc provides many of the mechanisms needed within parallel application codes, such as parallel matrix and vector assembly routines. The library is organized hierarchically, enabling users to employ the level of abstraction that is most appropriate for a particular problem. By using techniques of object-oriented programming, PETSc provides enormous flexibility for users. PETSc is a sophisticated set of software tools; as such, for some users it initially has a much steeper learning curve than a simple subroutine library. In particular, for individuals without some computer science background, experience programming in C, C++ or Fortran and experience using a debugger such as gdb or dbx, it may require a significant amount of time to take full advantage of the features that enable efficient software use. However, the power of the PETSc design and the algorithms it incorporates may make the efficient implementation of many application codes simpler than “rolling them” yourself. For many tasks a package such as MATLAB is often the best tool; PETSc is not intended for the classes of problems for which effective MATLAB code can be written. PETSc also has a MATLAB interface, so portions of your code can be written in MATLAB to “try out” the PETSc solvers. The resulting code will not be scalable, however, because currently MATLAB is inherently not scalable; and PETSc should not be used to attempt to provide a “parallel linear solver” in an otherwise sequential code. Certainly all parts of a previously sequential code need not be parallelized, but the matrix generation portion must be parallelized to expect any kind of reasonable performance. Do not expect to generate your matrix sequentially and then “use PETSc” to solve the linear system in parallel. Since PETSc is under continued development, small changes in usage and calling sequences of routines will occur. PETSc is supported; see the web site http://www.mcs.anl.gov/petsc for information on contacting support. At http://www.mcs.anl.gov/petsc/publications may be found a list of publications and web sites that feature work involving PETSc. We welcome any reports of corrections for this document.
PETSc Users Manual Revision 3.5
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balay, S.; Abhyankar, S.; Adams, M.
This manual describes the use of PETSc for the numerical solution of partial differential equations and related problems on high-performance computers. The Portable, Extensible Toolkit for Scientific Computation (PETSc) is a suite of data structures and routines that provide the building blocks for the implementation of large-scale application codes on parallel (and serial) computers. PETSc uses the MPI standard for all message-passing communication. PETSc includes an expanding suite of parallel linear, nonlinear equation solvers and time integrators that may be used in application codes written in Fortran, C, C++, Python, and MATLAB (sequential). PETSc provides many of the mechanisms needed within parallel application codes, such as parallel matrix and vector assembly routines. The library is organized hierarchically, enabling users to employ the level of abstraction that is most appropriate for a particular problem. By using techniques of object-oriented programming, PETSc provides enormous flexibility for users. PETSc is a sophisticated set of software tools; as such, for some users it initially has a much steeper learning curve than a simple subroutine library. In particular, for individuals without some computer science background, experience programming in C, C++ or Fortran and experience using a debugger such as gdb or dbx, it may require a significant amount of time to take full advantage of the features that enable efficient software use. However, the power of the PETSc design and the algorithms it incorporates may make the efficient implementation of many application codes simpler than “rolling them” yourself. For many tasks a package such as MATLAB is often the best tool; PETSc is not intended for the classes of problems for which effective MATLAB code can be written. PETSc also has a MATLAB interface, so portions of your code can be written in MATLAB to “try out” the PETSc solvers. The resulting code will not be scalable, however, because currently MATLAB is inherently not scalable; and PETSc should not be used to attempt to provide a “parallel linear solver” in an otherwise sequential code. Certainly all parts of a previously sequential code need not be parallelized, but the matrix generation portion must be parallelized to expect any kind of reasonable performance. Do not expect to generate your matrix sequentially and then “use PETSc” to solve the linear system in parallel. Since PETSc is under continued development, small changes in usage and calling sequences of routines will occur. PETSc is supported; see the web site http://www.mcs.anl.gov/petsc for information on contacting support. At http://www.mcs.anl.gov/petsc/publications may be found a list of publications and web sites that feature work involving PETSc. We welcome any reports of corrections for this document.
Borman, Andrew M; Fraser, Mark; Linton, Christopher J; Palmer, Michael D; Johnson, Elizabeth M
2010-06-01
Here, we present a significantly improved version of our previously published method for the extraction of fungal genomic DNA from pure cultures using Whatman FTA filter paper matrix technology. This modified protocol is extremely rapid, significantly more cost effective than our original method, and importantly, substantially reduces the problem of potential cross-contamination between sequential filters when employing FTA technology.
Synthesis of substituted isoquinolines utilizing palladium-catalyzed α-arylation of ketones
Donohoe, Timothy J.; Pilgrim, Ben S.; Jones, Geraint R.; Bassuto, José A.
2012-01-01
The utilization of sequential palladium-catalyzed α-arylation and cyclization reactions provides a general approach to an array of isoquinolines and their corresponding N-oxides. This methodology allows the convergent combination of readily available precursors in a regioselective manner and in excellent overall yields. This powerful route to polysubstituted isoquinolines, which is not limited to electron rich moieties, also allows rapid access to analogues of biologically active compounds. PMID:22753504
The Advanced Software Development and Commercialization Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallopoulos, E.; Canfield, T.R.; Minkoff, M.
1990-09-01
This is the first of a series of reports pertaining to progress in the Advanced Software Development and Commercialization Project, a joint collaborative effort between the Center for Supercomputing Research and Development of the University of Illinois and the Computing and Telecommunications Division of Argonne National Laboratory. The purpose of this work is to apply techniques of parallel computing that were pioneered by University of Illinois researchers to mature computational fluid dynamics (CFD) and structural dynamics (SD) computer codes developed at Argonne. The collaboration in this project will bring this unique combination of expertise to bear, for the first time, on industrially important problems. By so doing, it will expose the strengths and weaknesses of existing techniques for parallelizing programs and will identify those problems that need to be solved in order to enable widespread production use of parallel computers. Secondly, the increased efficiency of the CFD and SD codes themselves will enable the simulation of larger, more accurate engineering models that involve fluid and structural dynamics. In order to realize the above two goals, we are considering two production codes that have been developed at ANL and are widely used by both industry and Universities. These are COMMIX and WHAMS-3D. The first is a computational fluid dynamics code that is used for both nuclear reactor design and safety and as a design tool for the casting industry. The second is a three-dimensional structural dynamics code used in nuclear reactor safety as well as crashworthiness studies. These codes are currently available for both sequential and vector computers only. Our main goal is to port and optimize these two codes on shared memory multiprocessors. In so doing, we shall establish a process that can be followed in optimizing other sequential or vector engineering codes for parallel processors.
Inertial navigation sensor integrated obstacle detection system
NASA Technical Reports Server (NTRS)
Bhanu, Bir (Inventor); Roberts, Barry A. (Inventor)
1992-01-01
A system that incorporates inertial sensor information into optical flow computations to detect obstacles and to provide alternative navigational paths free from obstacles. The system is a maximally passive obstacle detection system that makes selective use of an active sensor. The active detection typically utilizes a laser. The passive sensor suite includes binocular stereo, motion stereo, and variable fields-of-view. Optical flow computations involve extraction, derotation and matching of interest points from sequential frames of imagery, for range interpolation of the sensed scene, which in turn provides obstacle information for purposes of safe navigation.
A model for architectural comparison
NASA Astrophysics Data System (ADS)
Ho, Sam; Snyder, Larry
1988-04-01
Recently, architectures for sequential computers became a topic of much discussion and controversy. At the center of this storm is the Reduced Instruction Set Computer, or RISC, first described at Berkeley in 1980. While the merits of the RISC architecture cannot be ignored, its opponents have tried to do just that, while its proponents have expanded and frequently exaggerated them. This state of affairs has persisted to this day. No attempt is made to settle this controversy, since indeed there is likely no one answer. A qualitative framework is provided for a rational discussion of the issues.
Implementation of logic functions and computations by chemical kinetics
NASA Astrophysics Data System (ADS)
Hjelmfelt, A.; Ross, J.
We review our work on the computational functions of the kinetics of chemical networks. We examine spatially homogeneous networks which are based on prototypical reactions occurring in living cells and show the construction of logic gates and sequential and parallel networks. This work motivates the study of an important biochemical pathway, glycolysis, and we demonstrate that the switch that controls the flux in the direction of glycolysis or gluconeogenesis may be described as a fuzzy AND operator. We also study a spatially inhomogeneous network which shares features of theoretical and biological neural networks.
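The fuzzy AND reading is easy to make concrete: the switch's output responds gradually to two graded inputs rather than to crisp booleans, and min and product are the standard fuzzy conjunctions, e.g. in Python (values invented for illustration):

    # Min and product, the two standard fuzzy conjunctions.
    fuzzy_and_min = lambda a, b: min(a, b)
    fuzzy_and_prod = lambda a, b: a * b

    print(fuzzy_and_min(0.8, 0.3), fuzzy_and_prod(0.8, 0.3))  # 0.3 0.24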
Recursive solution of number of reachable states of a simple subclass of FMS
NASA Astrophysics Data System (ADS)
Chao, Daniel Yuh
2014-03-01
This paper aims to compute the number of reachable (forbidden, live and deadlock) states for flexible manufacturing systems (FMS) without the construction of reachability graph. The problem is nontrivial and takes, in general, an exponential amount of time to solve. Hence, this paper focusses on a simple version of Systems of Simple Sequential Processes with Resources (S3PR), called kth-order system, where each resource place holds one token to be shared between two processes. The exact number of reachable (forbidden, live and deadlock) states can be computed recursively.
Bayesian design of decision rules for failure detection
NASA Technical Reports Server (NTRS)
Chow, E. Y.; Willsky, A. S.
1984-01-01
The formulation of the decision-making process of a failure detection algorithm as a Bayes sequential decision problem provides a simple conceptualization of the decision rule design problem. Because the optimal Bayes rule is not computable, a methodology based on the Bayesian approach and aimed at reduced computational requirements is developed for designing suboptimal rules. A numerical algorithm is constructed to facilitate the design and performance evaluation of these suboptimal rules. The result of applying this design methodology to an example shows that this approach is potentially useful.
Sequential Modular Position and Momentum Measurements of a Trapped Ion Mechanical Oscillator
NASA Astrophysics Data System (ADS)
Flühmann, C.; Negnevitsky, V.; Marinelli, M.; Home, J. P.
2018-04-01
The noncommutativity of position and momentum observables is a hallmark feature of quantum physics. However, this incompatibility does not extend to observables that are periodic in these base variables. Such modular-variable observables have been suggested as tools for fault-tolerant quantum computing and enhanced quantum sensing. Here, we implement sequential measurements of modular variables in the oscillatory motion of a single trapped ion, using state-dependent displacements and a heralded nondestructive readout. We investigate the commutative nature of modular variable observables by demonstrating no-signaling in time between successive measurements, using a variety of input states. Employing a different periodicity, we observe signaling in time. This also requires wave-packet overlap, resulting in quantum interference that we enhance using squeezed input states. The sequential measurements allow us to extract two-time correlators for modular variables, which we use to violate a Leggett-Garg inequality. Signaling in time and Leggett-Garg inequalities serve as efficient quantum witnesses, which we probe here with a mechanical oscillator, a system that has a natural crossover from the quantum to the classical regime.
NASA Astrophysics Data System (ADS)
Noda, Akemi; Takahama, Tsutomu; Kawasato, Takeshi; Matsu'ura, Mitsuhiro
2018-02-01
On 11 March 2011, a megathrust event, called the Tohoku-oki earthquake, occurred at the North American-Pacific plate interface off northeast Japan. Transient crustal movements following this earthquake were clearly observed by a dense GPS network (GEONET) on land and a sparse GPS/Acoustic positioning network on the seafloor. The observed crustal movements are in accordance with ordinary expectations on land, but not on the seafloor; that is, slowly decaying landward movements above the main rupture area and rapidly decaying trench-ward movements in its southern extension. To reveal the cause of such curious offshore crustal movements, we analyzed the coseismic and postseismic GPS array data on land with a sequential stepwise inversion method considering viscoelastic stress relaxation in the asthenosphere, and obtained the following results: The afterslip of the Tohoku-oki earthquake proceeds rapidly for the first year on a high-angle downdip extension of the main rupture, which occurred on the low-angle offshore plate interface. The theoretical patterns of seafloor horizontal movements due to the afterslip and the viscoelastic relaxation of coseismic stress changes in the asthenosphere are essentially different both in space and time; inshore trench-ward movements and offshore landward movements for the afterslip, while overall landward movements for the viscoelastic stress relaxation. General agreement between the computed horizontal movements and the GPS/Acoustic observations demonstrates that the curious postseismic offshore crustal movements can be ascribed to the combined effect of afterslip on a high-angle downdip extension of the main rupture and viscoelastic stress relaxation in the asthenosphere.
Iqbal, Zafar; Alsudir, Samar; Miah, Musharraf; Lai, Edward P C
2011-08-01
Hazardous compounds and bacteria in water have an adverse impact on human health and environmental ecology. Polydopamine (or polypyrrole)-coated magnetic nanoparticles and polymethacrylic acid-co-ethylene glycol dimethacrylate submicron particles were investigated for their fast binding kinetics with bisphenol A, proflavine, naphthalene acetic acid, and Escherichia coli. A new method was developed for the rapid determination of % binding by sequential injection of particles first and compounds (or E. coli) next into a fused-silica capillary for overlap binding during electrophoretic migration. Only nanolitre volumes of compounds and particles were sufficient to complete a rapid binding test. After heterogeneous binding, separation of the compounds from the particles was afforded by capillary electrophoresis. % binding was influenced by applied voltage but not current flow. In-capillary coating of particles affected the % binding of compounds. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Shen, J Q; Ji, Q; Ding, W J; Xia, L M; Wei, L; Wang, C S
2018-03-13
Objective: To evaluate in-hospital and mid-term outcomes of sequential versus separate grafting of in situ skeletonized left internal mammary artery (LIMA) to the left coronary system in a single-center, propensity-matched study. Methods: After propensity score matching, 120 pairs of patients undergoing first, scheduled, isolated coronary artery bypass grafting (CABG) with in situ skeletonized LIMA grafting to the left anterior descending artery (LAD) territory were entered into a sequential group (sequential grafting of LIMA to the diagonal artery and then to the LAD) or a control group (separate grafting of LIMA to the LAD). The in-hospital and follow-up clinical outcomes and follow-up LIMA graft patency were compared. Results: The two propensity score-matched groups had similar in-hospital and follow-up clinical outcomes. The number of bypass conduits ranged from 3 to 6 (with a mean of 3.5), and 91.3%(219/240)of the included patients received off-pump CABG surgery. No significant differences were found between the two propensity score-matched groups in the in-hospital outcomes, including in-hospital death and the incidence of complications associated with CABG (prolonged ventilation, perioperative stroke, re-operation before discharge, and deep sternal wound infection). During follow-up, 9 patients (4 patients from the sequential group and 5 patients from the control group) died, and the all-cause mortality rate was 3.9%. No significant difference was found in the all-cause mortality rate between the 2 groups[3.4% (4/116) vs 4.3% (5/115), P =0.748]. During the follow-up period, 99.1% (115/116) patency for the diagonal site and 98.3% (114/116) for the LAD site were determined by coronary computed tomographic angiography after sequential LIMA grafting, both of which were similar to the graft patency of separate grafting of in situ skeletonized LIMA to the LAD. Conclusions: Revascularization of the left coronary system using a skeletonized LIMA resulted in excellent in-hospital and mid-term clinical outcomes and graft patency using sequential grafting.
Computer, Video, and Rapid-Cycling Plant Projects in an Undergraduate Plant Breeding Course.
ERIC Educational Resources Information Center
Michaels, T. E.
1993-01-01
Studies the perceived effectiveness of four student projects involving videotape production, computer conferencing, microcomputer simulation, and rapid-cycling Brassica breeding for undergraduate plant breeding students in two course offerings in consecutive years. Linking of the computer conferencing and video projects improved the rating of the…
Paralex: An Environment for Parallel Programming in Distributed Systems
1991-12-07
distributed systems is comparable to assembly language programming for traditional sequential systems: the user must resort to low-level primitives to accomplish data encoding/decoding, communication, remote execution, synchronization, failure detection and recovery. It is our belief that... synchronization. Finally, composing parallel programs by interconnecting sequential computations allows automatic support for heterogeneity and fault tolerance
Research on Automatic Programming
1975-12-31
Sequential processes, deadlocks, and semaphore primitives, Ph.D. Thesis, Harvard University, November 1974; Center for Research in Computing... verified. Code generated to effect the synchronization makes use of the ECL control extension facility (Prenner’s CI, see [Prenner]). The... semaphore operations [Dijkstra] is being developed. Initial results for this code generator are very encouraging; in many cases generated code is
LUNSORT list of lunar orbiter data by LAC area
NASA Technical Reports Server (NTRS)
Hixon, S.
1976-01-01
Lunar orbiter (missions 1-5) photographic data are listed sequentially according to the numbered (1 to 147) LAC (Lunar Aeronautical Chart) areas by use of a computer program called LUNSORT. This listing, as well as a similar one from Apollo, would simplify the task of identifying images of a given lunar area. Instructions and a sample case are included.
ERIC Educational Resources Information Center
Kolodny, Oren; Lotem, Arnon; Edelman, Shimon
2015-01-01
We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given…
ERIC Educational Resources Information Center
Shukor, Nurbiha A.; Tasir, Zaidatun; Van der Meijden, Henny; Harun, Jamalludin
2014-01-01
Online collaborative learning allows discussion to occur at greater depth, where knowledge can be constructed remotely. However, students were found to construct knowledge at a low level, discussing by sharing and comparing opinions; this is inadequate for new knowledge creation. As such, this research attempted to investigate the students'…
Sequential Analysis of Mastery Behavior in 6- and 12-Month-Old Infants.
ERIC Educational Resources Information Center
MacTurk, Robert H.; And Others
1987-01-01
Sequences of mastery behavior were analyzed in a sample of 67 infants 6 to 12 months old. Authors computed (a) frequencies of six categories of mastery behavior, transitional probabilities, and z scores for each behavior change, and (b) transitions from a mastery behavior to positive affect. Changes in frequencies and similarity in organization…
FORTRAN IV Program to Determine the Proper Sequence of Records in a Datafile
ERIC Educational Resources Information Center
Jones, Michael P.; Yoshida, Roland K.
1975-01-01
This FORTRAN IV program executes an essential editing procedure which determines whether a datafile contains an equal number of records (cards) per case and whether those records are in the intended sequential order. The program, which requires very little background in computer programming, is designed primarily for the user of packaged statistical procedures.…
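The same editing check is easy to restate in modern terms; the Python sketch below verifies that each case has the expected number of records and that the record-sequence field counts up in order. The column positions for the case-ID and sequence fields are assumptions for illustration.

    # Check that a card-image datafile has a fixed number of records per
    # case, one case ID per case, and records numbered 1..n in order.
    def check_datafile(lines, records_per_case,
                       case_cols=slice(0, 5), rec_cols=slice(5, 7)):
        problems = []
        for i in range(0, len(lines), records_per_case):
            case = lines[i:i + records_per_case]
            if len(case) != records_per_case:
                problems.append((i, "incomplete final case"))
                break
            ids = {ln[case_cols] for ln in case}
            seqs = [int(ln[rec_cols]) for ln in case]
            if len(ids) != 1 or seqs != list(range(1, records_per_case + 1)):
                problems.append((i, "mixed case IDs or records out of order"))
        return problems

    cards = ["00001 1", "00001 2", "00002 1", "00002 2"]
    print(check_datafile(cards, 2))   # [] means the file is in proper sequence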
ERIC Educational Resources Information Center
Chan, Wai
2007-01-01
In social science research, an indirect effect occurs when the influence of an antecedent variable on the effect variable is mediated by an intervening variable. To compare indirect effects within a sample or across different samples, structural equation modeling (SEM) can be used if the computer program supports model fitting with nonlinear…
Boundary and object detection in real world images. [by means of algorithms
NASA Technical Reports Server (NTRS)
Yakimovsky, Y.
1974-01-01
A solution to the problem of automatic location of objects in digital pictures by computer is presented. A self-scaling local edge detector which can be applied in parallel on a picture is described. Clustering algorithms and boundary following algorithms which are sequential in nature process the edge data to locate images of objects.
EXSPRT: An Expert Systems Approach to Computer-Based Adaptive Testing.
ERIC Educational Resources Information Center
Frick, Theodore W.; And Others
Expert systems can be used to aid decision making. A computerized adaptive test (CAT) is one kind of expert system, although it is not commonly recognized as such. A new approach, termed EXSPRT, was devised that combines expert systems reasoning and sequential probability ratio test stopping rules. EXSPRT-R uses random selection of test items,…
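The sequential probability ratio test underlying those stopping rules is compact enough to sketch in Python: accumulate a log-likelihood ratio over item responses and stop once it crosses a decision bound. The mastery and non-mastery response probabilities and error rates below are illustrative values, not EXSPRT's.

    # Plain SPRT stopping rule: p1/p0 are the assumed probabilities of a
    # correct response for masters vs. non-masters.
    import math

    def sprt(responses, p0=0.5, p1=0.8, alpha=0.05, beta=0.05):
        upper = math.log((1 - beta) / alpha)      # decide "master"
        lower = math.log(beta / (1 - alpha))      # decide "non-master"
        llr = 0.0
        for n, correct in enumerate(responses, 1):
            num = p1 if correct else 1 - p1
            den = p0 if correct else 1 - p0
            llr += math.log(num / den)
            if llr >= upper:
                return "master", n
            if llr <= lower:
                return "non-master", n
        return "undecided", len(responses)

    print(sprt([1, 1, 1, 0, 1, 1, 1, 1, 1, 1]))   # ('master', 10)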
How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation
ERIC Educational Resources Information Center
Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard
2006-01-01
Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…
Librarian of the Year 2009: Team Cedar Rapids
ERIC Educational Resources Information Center
Berry, John N., III
2009-01-01
When the flood came to Cedar Rapids, IA, the Cedar Rapids Public Library (CRPL) lost 160,000 items, including large parts of its adult and youth collections, magazines, newspapers, reference materials, CDs, and DVDs. Most of its public access computers were destroyed, as were its computer lab and microfilm equipment. The automatic circulation and…
Potential for leaching of arsenic from excavated rock after different drying treatments.
Li, Jining; Kosugi, Tomoya; Riya, Shohei; Hashimoto, Yohey; Hou, Hong; Terada, Akihiko; Hosomi, Masaaki
2016-07-01
Leaching of arsenic (As) from excavated rock subjected to different drying methods is compared using sequential leaching tests and rapid small-scale column tests combined with a sequential extraction procedure. Although the total As content in the rock was low (8.81 mg kg(-1)), its resulting concentration in the leachate when leached at a liquid-to-solid ratio of 10 L kg(-1) exceeded the environmental standard (10 μg L(-1)). As existed mainly in dissolved forms in the leachates. All of the drying procedures applied in this study increased the leaching of As, with freeze-drying leading to the largest increase. Water extraction of As using the two tests showed different leaching behaviors as a function of the liquid-to-solid ratio, and achieved average extractions of up to 35.7% and 25.8% total As, respectively. Dissolution of As from the mineral surfaces and subsequent re-adsorption controlled the short-term release of As; dissolution of Fe, Al, and dissolved organic carbon played important roles in long-term As leaching. Results of the sequential extraction procedure showed that use of 0.05 M (NH4)2SO4 underestimates the readily soluble As. Long-term water extraction removed almost all of the non-specifically sorbed As and most of the specifically sorbed As. The concept of pollution potential indices, which are easily determined by the sequential leaching test, is proposed in this study and is considered for possible use in assessing efficacy of treatment of excavated rocks. Copyright © 2016 Elsevier Ltd. All rights reserved.
Sequential divergence and the multiplicative origin of community diversity
Hood, Glen R.; Forbes, Andrew A.; Powell, Thomas H. Q.; Egan, Scott P.; Hamerlinck, Gabriela; Smith, James J.; Feder, Jeffrey L.
2015-01-01
Phenotypic and genetic variation in one species can influence the composition of interacting organisms within communities and across ecosystems. As a result, the divergence of one species may not be an isolated process, as the origin of one taxon could create new niche opportunities for other species to exploit, leading to the genesis of many new taxa in a process termed “sequential divergence.” Here, we test for such a multiplicative effect of sequential divergence in a community of host-specific parasitoid wasps, Diachasma alloeum, Utetes canaliculatus, and Diachasmimorpha mellea (Hymenoptera: Braconidae), that attack Rhagoletis pomonella fruit flies (Diptera: Tephritidae). Flies in the R. pomonella species complex radiated by sympatrically shifting and ecologically adapting to new host plants, the most recent example being the apple-infesting host race of R. pomonella formed via a host plant shift from hawthorn-infesting flies within the last 160 y. Using population genetics, field-based behavioral observations, host fruit odor discrimination assays, and analyses of life history timing, we show that the same host-related ecological selection pressures that differentially adapt and reproductively isolate Rhagoletis to their respective host plants (host-associated differences in the timing of adult eclosion, host fruit odor preference and avoidance behaviors, and mating site fidelity) cascade through the ecosystem and induce host-associated genetic divergence for each of the three members of the parasitoid community. Thus, divergent selection at lower trophic levels can potentially multiplicatively and rapidly amplify biodiversity at higher levels on an ecological time scale, which may sequentially contribute to the rich diversity of life. PMID:26499247
Analysis, Mining and Visualization Service at NCSA
NASA Astrophysics Data System (ADS)
Wilhelmson, R.; Cox, D.; Welge, M.
2004-12-01
NCSA's goal is to create a balanced system that fully supports high-end computing as well as: 1) high-end data management and analysis; 2) visualization of massive, highly complex data collections; 3) large databases; 4) geographically distributed Grid computing; and 5) collaboratories, all based on a secure computational environment and driven with workflow-based services. To this end NCSA has defined a new technology path that includes the integration and provision of cyberservices in support of data analysis, mining, and visualization. NCSA has begun to develop and apply a data mining system, NCSA Data-to-Knowledge (D2K), in conjunction with both the application and research communities. NCSA D2K will enable the formation of model-based application workflows and visual programming interfaces for rapid data analysis. The Java-based D2K framework, which integrates analytical data mining methods with data management, data transformation, and information visualization tools, will be configurable from the cyberservices (web and grid services, tools, etc.) viewpoint to solve a wide range of important data mining problems. This effort will use modules, such as new classification methods for the detection of high-risk geoscience events, and existing D2K data management, machine learning, and information visualization modules. A D2K cyberservices interface will be developed to seamlessly connect client applications with remote back-end D2K servers, providing computational resources for data mining and integration with local or remote data stores. This work is being coordinated with SDSC's data and services efforts. The new NCSA Visualization embedded workflow environment (NVIEW) will be integrated with D2K functionality to tightly couple informatics and scientific visualization with the data analysis and management services. Visualization services will access and filter disparate data sources, simplifying tasks such as fusing related data from distinct sources into a coherent visual representation. This approach enables collaboration among geographically dispersed researchers via portals and front-end clients, and the coupling with data management services enables recording associations among datasets and building annotation systems into visualization tools and portals, giving scientists a persistent, shareable, virtual lab notebook. To facilitate provision of these cyberservices to the national community, NCSA will be providing a computational environment for large-scale data assimilation, analysis, mining, and visualization. This will be initially implemented on the new 512-processor shared-memory SGIs recently purchased by NCSA. In addition to standard batch capabilities, NCSA will provide on-demand capabilities for those projects requiring rapid response (e.g., development of severe weather, earthquake events) for decision makers. It will also be used for non-sequential interactive analysis of data sets where it is important to have access to large data volumes over space and time.
Durability of Adherence to Antiretroviral Therapy on Initial and Subsequent Regimens
GARDNER, EDWARD M.; BURMAN, WILLIAM J.; MARAVI, MOISES E.; DAVIDSON, ARTHUR J.
2007-01-01
There is uncertainty regarding the durability of adherence to antiretroviral therapy. This study is a retrospective review of previously antiretroviral naïve patients initiating therapy between 1997 and 2002. Antiretroviral adherence was calculated using prescription refill data and was analyzed over time on an initial regimen and on sequential antiretroviral regimens. Three hundred forty-four patients were included. The median lengths of the first, second, and third regimens were stable at 1.7 years, 1.2 years, and 1.5 years, respectively (p = 0.10). In multivariate analysis the factor most significantly associated with earlier initial regimen termination was poor adherence. On an initial regimen, adherence decreased over time and declined most rapidly in patients with the shortest regimens (4 to <16 months, −43% per year), followed by patients with intermediate regimen duration (16 to <28 months, −19% per year), and then patients with longer regimens (≥28 months, −5% per year). In patients progressing to a third regimen, there was a trend toward decreasing adherence over successive regimens. In conclusion, sequential antiretroviral regimens are of similar lengths, with adherence being highly associated with first regimen duration. Adherence decreases during an initial regimen and on sequential antiretroviral regimens. Effective and durable interventions to prevent declining adherence are needed. PMID:16987049
Simultaneous capture and sequential detection of two malarial biomarkers on magnetic microparticles.
Markwalter, Christine F; Ricks, Keersten M; Bitting, Anna L; Mudenda, Lwiindi; Wright, David W
2016-12-01
We have developed a rapid magnetic microparticle-based detection strategy for malarial biomarkers Plasmodium lactate dehydrogenase (pLDH) and Plasmodium falciparum histidine-rich protein II (PfHRPII). In this assay, magnetic particles functionalized with antibodies specific for pLDH and PfHRPII as well as detection antibodies with distinct enzymes for each biomarker are added to parasitized lysed blood samples. Sandwich complexes for pLDH and PfHRPII form on the surface of the magnetic beads, which are washed and sequentially re-suspended in detection enzyme substrate for each antigen. The developed simultaneous capture and sequential detection (SCSD) assay detects both biomarkers in samples as low as 2.0 parasites/µl, an order of magnitude below commercially available ELISA kits, has a total incubation time of 35 min, and was found to be reproducible between users over time. This assay provides a simple and efficient alternative to traditional 96-well plate ELISAs, which take 5-8 h to complete and are limited to one analyte. Further, the modularity of the magnetic bead-based SCSD ELISA format could serve as a platform for application to other diseases for which multi-biomarker detection is advantageous. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Coupling UV-H2O2 to accelerate dimethyl phthalate (DMP) biodegradation and oxidation.
Chen, Bin; Song, Jiaxiu; Yang, Lihui; Bai, Qi; Li, Rongjie; Zhang, Yongming; Rittmann, Bruce E
2015-11-01
Dimethyl phthalate (DMP), an important industrial raw material, is an endocrine disruptor of concern for human and environmental health. DMP exhibits slow biodegradation, and its coupled treatment by means of advanced oxidation may enhance its biotransformation and mineralization. We evaluated two ways of coupling UV-H2O2 advanced oxidation to biodegradation: sequential coupling and intimate coupling in an internal circulation baffled biofilm reactor (ICBBR). During sequential coupling, UV-H2O2 pretreatment generated carboxylic acids that depressed the pH, and subsequent biodegradation generated phthalic acid; both factors inhibited DMP biodegradation. During intimate coupling of UV-H2O2 with biodegradation, carboxylic acids and phthalic acid (PA) did not accumulate, and the biodegradation rate was 13 % faster than with biodegradation alone and 78 % faster than with biodegradation after UV-H2O2 pretreatment. Similarly, DMP oxidation with intimate coupling increased by 5 and 39 %, respectively, compared with biodegradation alone and sequential coupling. The enhancement effects during intimate coupling can be attributed to the rapid catabolism of carboxylic acids, which generated intracellular electron carriers that directly accelerated di-oxygenation of PA and relieved the inhibition effect of PA and low pH. Thus, intimate coupling optimized the impacts of energy input from UV irradiation used together with biodegradation.
Tang, Xiao; Sun, Bing; He, Hangyong; Li, Hui; Hu, Bin; Qiu, Zewu; Li, Jie; Zhang, Chunyan; Hou, Shengcai; Tong, Zhaohui; Dai, Huaping
2015-11-01
Paraquat is a widely used herbicide that can cause severe to fatal poisoning in humans. The irreversible and rapid progression of pulmonary fibrosis associated with respiratory failure is the main cause of death in the later stages of poisoning. There are infrequent reports of successful lung transplants for cases of severe paraquat poisoning. We expect that this successful case will provide a reference for other patients in similar circumstances. A 24-year-old female was sent to the hospital approximately 2 hours after ingesting 50 ml of paraquat. She experienced rapidly aggravated pulmonary fibrosis and severe respiratory failure. On the 34th day after ingestion, she underwent intubation and invasive mechanical ventilation. The patient was evaluated for lung transplantation, and veno-venous extracorporeal membrane oxygenation (ECMO) was established as a bridge to lung transplantation on the 44th day. On the 56th day, she successfully underwent a bilateral sequential lung transplantation. Through respiratory and physical rehabilitation and nutrition support, the patient was weaned from mechanical ventilation and extubated on the 66th day. On the 80th day, she was discharged. During the 1-year follow-up, the patient was found to be in good condition, and her pulmonary function improved gradually. We suggest that lung transplantation may be an effective treatment in the end stages of paraquat-induced pulmonary fibrosis and consequent respiratory failure. For patients experiencing a rapid progression to a critical condition in whom lung transplantation cannot be performed immediately (e.g., while awaiting a viable donor or toxicant clearance), ECMO should be a viable bridge to lung transplantation.
Tayade, Amol B; Dhar, Priyanka; Kumar, Jatinder; Sharma, Manu; Chaurasia, Om P; Srivastava, Ravi B
2013-07-30
A rapid method was developed to determine both water- and fat-soluble vitamins in Rhodiola imbricata root for the accurate quantification of free vitamin forms. Rapid resolution liquid chromatography/tandem mass spectrometry (RRLC-MS/MS) with an electrospray ionization (ESI) source operating in multiple reaction monitoring (MRM) mode was optimized for the sequential analysis of nine water-soluble vitamins (B1, B2, two B3 vitamins, B5, B6, B7, B9, and B12) and six fat-soluble vitamins (A, E, D2, D3, K1, and K2). Both types of vitamins were separated by ion-suppression reversed-phase liquid chromatography with gradient elution within 30 min and detected in positive ion mode. Intra- and inter-day precision deviations were always below 0.6% and 0.3% for recoveries and retention times, respectively. Intra- and inter-day relative standard deviation (RSD) values of retention time for water- and fat-soluble vitamins ranged between 0.02-0.20% and 0.01-0.15%, respectively. The mean recoveries ranged between 88.95% and 107.07%. The sensitivity and specificity of this method allowed limits of detection (LOD) and limits of quantitation (LOQ) of the analytes at ppb levels. Linear ranges were achieved for fat- and water-soluble vitamins at 100-1000 ppb and 10-100 ppb, respectively. Vitamin B-complex and vitamin E were detected as the principal vitamins in the root of this adaptogen, which would be of great interest to develop novel foods from the Indian trans-Himalaya. Copyright © 2013 Elsevier B.V. All rights reserved.
Kim, Y S; Kim, S J; Yoon, J H; Suk, K T; Kim, J B; Kim, D J; Kim, D Y; Min, H J; Park, S H; Shin, W G; Kim, K H; Kim, H Y; Baik, G H
2011-11-01
The eradication rates of Helicobacter pylori (H. pylori) using a proton pump inhibitor (PPI)-based triple therapy have declined due to antibiotic resistance worldwide. To compare the eradication rate of the 10-day sequential therapy for H. pylori infection with that of the 14-day standard PPI-based triple therapy. This was a prospective, randomised, controlled study. A total of 409 patients with H. pylori infection were randomly assigned to receive either the 10-day sequential therapy regimen, which consisted of pantoprazole (40 mg) plus amoxicillin (1000 mg) twice a day for 5 days, then pantoprazole (40 mg) with clarithromycin (500 mg) and metronidazole (500 mg) twice a day for another five consecutive days, or the 14-day PPI-based triple therapy regimen, which consisted of pantoprazole (40 mg) with amoxicillin (1000 mg) and clarithromycin (500 mg) twice a day for 14 days. The pre- and post-treatment H. pylori status was assessed by rapid urease test, urea breath test, or histology. Successful eradication was confirmed at least 4 weeks after finishing the treatment. In the intention-to-treat analysis, the eradication rates of the 10-day sequential therapy and of the 14-day PPI-based triple therapy were 85.9% (176/205) and 75.0% (153/205), respectively (P = 0.006). In the per-protocol analysis, the eradication rates were 92.6% (175/205) and 85% (153/204), respectively (P = 0.019). There was no statistically significant difference between the two investigated groups regarding the occurrence of adverse event rates (18.9% vs. 13.3%, P = 0.143). The 10-day sequential therapy achieved significantly higher eradication rates than the 14-day standard PPI-based triple therapy in Korea. © 2011 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Bosart, L. F.; Cordeira, J. M.; Archambault, H. M.; Moore, B. J.
2014-12-01
A case of four sequentially linked extreme weather events (EWEs) during 22 - 31 October 2007, which included wildfires in southern California, cold surges in northern and eastern Mexico, widespread heavy rain in the eastern United States, and heavy rains in southern Mexico, is presented. These EWEs were preceded by a rapid, dynamically driven amplification of the upper-level flow across the North Pacific and North America associated with the formation of a large-amplitude Rossby wave train (RWT) through downstream baroclinic development involving multiple tropical and polar disturbance interactions with the North Pacific jet stream. The primary contributors to the formation of the large-amplitude RWT were two sequential upper-level polar disturbances, a diabatic Rossby vortex, western North Pacific TC Kajiki, and migratory extratropical cyclones (ECs). Deep subtropical and tropical moisture plumes resembling "atmospheric rivers" drawn poleward along warm conveyor belts into the warm sectors of these ECs played a critical role in further amplifying the downstream upper-level ridges based on an Eulerian analysis of negative potential vorticity advection by the irrotational wind and a Lagrangian trajectory analysis of tropical and subtropical moisture sources. In particular, these atmospheric rivers extending poleward from TC Kajiki and from the subtropical eastern North Pacific into the warm sectors of polar disturbance-generated ECs over the western and eastern North Pacific, respectively, bolstered latent heat release and ridge building and contributed to additional upper-level flow amplification. The EWEs occurred subsequent to anticyclonic wave breaking over western North America and the concomitant downstream formation of a meridionally elongated potential vorticity streamer over the central United States. The resulting high-amplitude flow pattern over North America favored the formation of the aforementioned EWEs by promoting an extensive meridional exchange of air masses from high and low latitudes.
Yamada, Shigehito; Uwabe, Chigako; Nakatsu-Komatsu, Tomoko; Minekura, Yutaka; Iwakura, Masaji; Motoki, Tamaki; Nishimiya, Kazuhiko; Iiyama, Masaaki; Kakusho, Koh; Minoh, Michihiko; Mizuta, Shinobu; Matsuda, Tetsuya; Matsuda, Yoshimasa; Haishi, Tomoyuki; Kose, Katsumi; Fujii, Shingo; Shiota, Kohei
2006-02-01
Morphogenesis in the developing embryo takes place in three dimensions, and in addition, the dimension of time is another important factor in development. Therefore, the presentation of sequential morphological changes occurring in the embryo (4D visualization) is essential for understanding the complex morphogenetic events and the underlying mechanisms. Until recently, 3D visualization of embryonic structures was possible only by reconstruction from serial histological sections, which was tedious and time-consuming. During the past two decades, 3D imaging techniques have made significant advances thanks to the progress in imaging and computer technologies, computer graphics, and other related techniques. Such novel tools have enabled precise visualization of the 3D topology of embryonic structures and demonstration of the spatiotemporal 4D sequences of organogenesis. Here, we describe a project in which staged human embryos are imaged by the magnetic resonance (MR) microscope, and 3D images of embryos and their organs at each developmental stage were reconstructed based on the MR data, with the aid of computer graphics techniques. On the basis of the 3D models of staged human embryos, we constructed a data set of 3D images of human embryos and made movies to illustrate the sequential process of human morphogenesis. Furthermore, a computer-based self-learning program of human embryology is being developed for educational purposes, using the photographs, histological sections, MR images, and 3D models of staged human embryos. Copyright 2005 Wiley-Liss, Inc.
TU-E-BRB-08: Dual Gated Volumetric Modulated Arc Therapy.
Wu, J; Fahimian, B; Wu, H; Xing, L
2012-06-01
Gated Volumetric Modulated Arc Therapy (VMAT) is an emerging treatment modality for Stereotactic Body Radiotherapy (SBRT). However, gating significantly prolongs treatment time. In order to enhance treatment efficiency, a novel dual gated VMAT, in which dynamic arc deliveries are executed sequentially in alternating exhale and inhale phases, is proposed and evaluated experimentally. The essence of dual gated VMAT is to take advantage of the natural pauses that occur at inspiration and exhalation by alternately delivering the dose at the two phases, instead of the exhale window only. The arc deliveries at the two phases are realized by rotating the gantry forward at the exhale window and backward at the inhale window in an alternating fashion. Custom XML scripts were developed in Varian's TrueBeam STx Developer Mode to enable dual gated VMAT delivery. RapidArc plans for a lung case were generated for both inhale and exhale phases. The two plans were then combined into a dual gated arc by interleaving the arc treatment nodes of the two RapidArc plans. The dual gated plan was delivered in the development mode of a TrueBeam LINAC onto a motion phantom and the delivery was measured using a pinpoint chamber/film/diode array (delta 4). The measured dose distribution was compared with that computed using the Eclipse AAA algorithm. The treatment delivery time was recorded and compared with the corresponding single gated plans. Relative to the corresponding single gated delivery, it was found that treatment time efficiency was improved by 95.5% for the case studied here. Pinpoint chamber absolute dose measurement agreed with the calculation to within 0.7%. Diode array measurements revealed that 97.5% of measurement points of the dual gated RapidArc delivery passed the 3%/3 mm gamma-test criterion. A dual gated VMAT treatment has been developed and implemented successfully with nearly doubled treatment delivery efficiency. © 2012 American Association of Physicists in Medicine.
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
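The pre-sampling simulation idea above can be illustrated in a few lines. The sketch below is ours, not the authors' software: it resamples a hypothetical pre-sample of gall counts (drawn from a negative binomial to mimic clumping) to estimate how the error of a fixed-size density estimate shrinks with sample size n.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-sample: gall counts on 200 trees from one site
# (negative-binomial draws stand in for the clumped midge data).
pre_sample = rng.negative_binomial(n=0.8, p=0.1, size=200)

def mean_error_vs_n(counts, sizes, n_rep=2000, rng=rng):
    """Simulate random sampling plans of several sizes by resampling
    the pre-sample, returning the mean absolute error of the density
    estimate relative to the pre-sample mean."""
    true_mean = counts.mean()
    errors = []
    for n in sizes:
        draws = rng.choice(counts, size=(n_rep, n), replace=True)
        errors.append(np.abs(draws.mean(axis=1) - true_mean).mean())
    return np.array(errors)

sizes = [5, 10, 25, 40, 80]
for n, err in zip(sizes, mean_error_vs_n(pre_sample, sizes)):
    print(f"n={n:3d}  mean |error| = {err:.2f} galls/tree")
```

Belt-transect plans could be assessed the same way by resampling contiguous runs of trees instead of independent draws.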
Sequentially reweighted TV minimization for CT metal artifact reduction.
Zhang, Xiaomeng; Xing, Lei
2013-07-01
Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems where weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
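For orientation, a toy version of the sequential reweighting idea is sketched below on a 1D denoising problem rather than the authors' constrained CT model; the weighting function w = 1/(|gradient| + eps), the penalty weight lam, and the lagged-diffusivity inner solver are our assumptions, not the paper's exact algorithm.

```python
import numpy as np

def reweighted_tv_denoise(y, lam=2.0, n_outer=4, n_inner=10,
                          eps=0.05, delta=1e-3):
    """Toy sequentially reweighted TV denoiser (1D).

    Outer loop: recompute TV weights w = 1/(|D x| + eps) from the
    current solution (the reweighting step). Inner loop: lagged-
    diffusivity fixed point for the weighted-TV subproblem
    0.5*||x - y||^2 + lam * sum_i w_i * |(D x)_i|.
    """
    n = y.size
    D = np.diff(np.eye(n), axis=0)            # forward-difference matrix
    x = y.copy()
    for _ in range(n_outer):
        w = 1.0 / (np.abs(D @ x) + eps)       # weights from current solution
        for _ in range(n_inner):
            q = w / np.sqrt((D @ x) ** 2 + delta ** 2)
            A = np.eye(n) + lam * D.T @ (q[:, None] * D)
            x = np.linalg.solve(A, y)
    return x

# Piecewise-constant signal with noise: reweighting sharpens edges
# relative to a single unweighted TV pass.
rng = np.random.default_rng(1)
truth = np.repeat([0.0, 1.0, 0.3], 50)
noisy = truth + 0.1 * rng.standard_normal(truth.size)
print("MAE:", np.abs(reweighted_tv_denoise(noisy) - truth).mean().round(4))
```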
Mean-field crack networks on desiccated films and their applications: Girl with a Pearl Earring.
Flores, J C
2017-02-15
Usual requirements for bulk and fissure energies are considered in obtaining the interdependence among external stress, thickness and area of crack polygons in desiccated films. The average area of crack polygons increases with thickness as a power-law of 4/3. The sequential fragmentation process is characterized by a topological factor related to a scaling finite procedure. Non-sequential overly tensioned (prompt) fragmentation is briefly discussed. Vermeer's painting, Girl with a Pearl Earring, is considered explicitly by using computational image tools and simple experiments and applying the proposed theoretical analysis. In particular, concerning the source of lightened effects on the girl's face, the left/right thickness layer ratio (≈1.34) and the stress ratio (≈1.102) are evaluated. Other master paintings are briefly considered.
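In symbols, the reported thickness scaling of the mean crack-polygon area can be written as follows (the notation, ⟨A⟩ for mean polygon area and h for film thickness, is our shorthand, not the paper's):

```latex
\langle A \rangle \;\propto\; h^{4/3}
```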
Dancing Twins: Stellar Hierarchies That Formed Sequentially?
NASA Astrophysics Data System (ADS)
Tokovinin, Andrei
2018-04-01
This paper draws attention to the class of resolved triple stars with moderate ratios of inner and outer periods (possibly in a mean motion resonance) and nearly circular, mutually aligned orbits. Moreover, stars in the inner pair are twins with almost identical masses, while the mass sum of the inner pair is comparable to the mass of the outer component. Such systems could be formed either sequentially (inside-out) by disk fragmentation with subsequent accretion and migration, or by a cascade hierarchical fragmentation of a rotating cloud. Orbits of the outer and inner subsystems are computed or updated in four such hierarchies: LHS 1070 (GJ 2005, periods 77.6 and 17.25 years), HIP 9497 (80 and 14.4 years), HIP 25240 (1200 and 47.0 years), and HIP 78842 (131 and 10.5 years).
Hybrid and concatenated coding applications.
NASA Technical Reports Server (NTRS)
Hofman, L. B.; Odenwalder, J. P.
1972-01-01
Results are presented of a study to evaluate the performance and implementation complexity of a concatenated and a hybrid coding system for moderate-speed deep-space applications. It is shown that with a total complexity of less than three times that of the basic Viterbi decoder, concatenated coding improves a constraint length 8, rate 1/3 Viterbi decoding system by 1.1 and 2.6 dB at bit error probabilities of 10^-4 and 10^-8, respectively. With a somewhat greater total complexity, the hybrid coding system is shown to obtain a 0.9-dB computational performance improvement over the basic rate 1/3 sequential decoding system. Although substantial, these complexities are much less than those required to achieve the same performances with more complex Viterbi or sequential decoder systems.
NASA Technical Reports Server (NTRS)
Massey, J. L.
1976-01-01
Virtually all previously-suggested rate 1/2 binary convolutional codes with KE = 24 are compared. Their distance properties are given; and their performance, both in computation and in error probability, with sequential decoding on the deep-space channel is determined by simulation. Recommendations are made both for the choice of a specific KE = 24 code as well as for codes to be included in future coding standards for the deep-space channel. A new result given in this report is a method for determining the statistical significance of error probability data when the error probability is so small that it is not feasible to perform enough decoding simulations to obtain more than a very small number of decoding errors.
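For readers unfamiliar with the codes being compared, a rate 1/2 binary convolutional encoder fits in a few lines. The sketch below uses the classic constraint-length-7 generator pair (octal 133/171) purely as an illustration; it is not one of the KE = 24 codes evaluated in the report.

```python
# Rate 1/2 binary convolutional encoder sketch. The generator pair is
# the well-known constraint-length-7 code (octal 133/171), chosen only
# for illustration; the report's candidate codes have KE = 24.
G = (0o133, 0o171)
K = 7

def conv_encode(bits, gens=G, k=K):
    """Shift each input bit into a k-bit register and emit one parity
    bit per generator polynomial (so 2 output bits per input bit)."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)
        for g in gens:
            out.append(bin(state & g).count("1") % 2)
    return out

print(conv_encode([1, 0, 1, 1, 0, 0, 1]))   # 14 coded bits for 7 input bits
```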
Domain Decomposition with Local Mesh Refinement.
1989-08-01
smooth coefficients, or non-smooth solutions. We employ from 1 to 1024 tiles on problems containing up to 161K degrees of freedom. Though ... methodology survives such compromises and is even sequentially advantageous in many problems. The domain decomposition algorithms we employ (section 3) ... on the unit square ... the outward normal. The second example, from [1, 27], has a smooth solution, but rapidly...
Computer Model for Sizing Rapid Transit Tunnel Diameters
DOT National Transportation Integrated Search
1976-01-01
A computer program was developed to assist the determination of minimum tunnel diameters for electrified rapid transit systems. Inputs include vehicle shape, walkway location, clearances, and track geometrics. The program written in FORTRAN IV calcul...
Parallelization of a Fully-Distributed Hydrologic Model using Sub-basin Partitioning
NASA Astrophysics Data System (ADS)
Vivoni, E. R.; Mniszewski, S.; Fasel, P.; Springer, E.; Ivanov, V. Y.; Bras, R. L.
2005-12-01
A primary obstacle towards advances in watershed simulations has been the limited computational capacity available to most models. The growing trend of model complexity, data availability and physical representation has not been matched by adequate developments in computational efficiency. This situation has created a serious bottleneck which limits existing distributed hydrologic models to small domains and short simulations. In this study, we present novel developments in the parallelization of a fully-distributed hydrologic model. Our work is based on the TIN-based Real-time Integrated Basin Simulator (tRIBS), which provides continuous hydrologic simulation using a multiple resolution representation of complex terrain based on a triangulated irregular network (TIN). While the use of TINs reduces computational demand, the sequential version of the model is currently limited over large basins (>10,000 km2) and long simulation periods (>1 year). To address this, a parallel MPI-based version of the tRIBS model has been implemented and tested using high performance computing resources at Los Alamos National Laboratory. Our approach utilizes domain decomposition based on sub-basin partitioning of the watershed. A stream reach graph based on the channel network structure is used to guide the sub-basin partitioning. Individual sub-basins or sub-graphs of sub-basins are assigned to separate processors to carry out internal hydrologic computations (e.g. rainfall-runoff transformation). Routed streamflow from each sub-basin forms the major hydrologic data exchange along the stream reach graph. Individual sub-basins also share subsurface hydrologic fluxes across adjacent boundaries. We demonstrate how the sub-basin partitioning provides computational feasibility and efficiency for a set of test watersheds in northeastern Oklahoma. We compare the performance of the sequential and parallelized versions to highlight the efficiency gained as the number of processors increases. We also discuss how the coupled use of TINs and parallel processing can lead to feasible long-term simulations in regional watersheds while preserving basin properties at high-resolution.
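A minimal sketch of the sub-basin partitioning idea follows, under assumptions of ours: a toy reach graph given as a child-to-downstream map, sub-trees hanging off the outlet treated as units of work, and greedy load balancing by sub-basin count (the real tRIBS decomposition also balances hydrologic work and exchanges subsurface fluxes across adjacent boundaries).

```python
from collections import defaultdict

# Hypothetical reach graph: each sub-basin drains to one downstream
# neighbor; "OUT" is the basin outlet.
downstream = {"B1": "B3", "B2": "B3", "B3": "OUT",
              "B4": "B6", "B5": "B6", "B6": "OUT"}

def collect(root, children):
    """Gather a sub-basin and everything upstream of it."""
    stack, out = [root], []
    while stack:
        node = stack.pop()
        out.append(node)
        stack.extend(children[node])
    return out

def partition(downstream, n_proc):
    children = defaultdict(list)
    for child, parent in downstream.items():
        children[parent].append(child)
    # Each sub-tree hanging off the outlet is one unit of work, so the
    # main inter-processor exchange is routed streamflow at its outlet.
    units = [collect(root, children) for root in children["OUT"]]
    # Greedy load balancing: largest unit to the least-loaded processor.
    procs = [[] for _ in range(n_proc)]
    for unit in sorted(units, key=len, reverse=True):
        min(procs, key=lambda p: sum(len(u) for u in p)).append(unit)
    return procs

print(partition(downstream, 2))   # e.g. [[['B3', 'B2', 'B1']], [['B6', 'B5', 'B4']]]
```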
MAAMD: a workflow to standardize meta-analyses and comparison of affymetrix microarray data
2014-01-01
Background Mandatory deposit of raw microarray data files for public access, prior to study publication, provides significant opportunities to conduct new bioinformatics analyses within and across multiple datasets. Analysis of raw microarray data files (e.g. Affymetrix CEL files) can be time consuming, complex, and requires fundamental computational and bioinformatics skills. The development of analytical workflows to automate these tasks simplifies the processing of, improves the efficiency of, and serves to standardize multiple and sequential analyses. Once installed, workflows facilitate the tedious steps required to run rapid intra- and inter-dataset comparisons. Results We developed a workflow to facilitate and standardize Meta-Analysis of Affymetrix Microarray Data analysis (MAAMD) in Kepler. Two freely available stand-alone software tools, R and AltAnalyze, were embedded in MAAMD. The inputs of MAAMD are user-editable csv files, which contain sample information and parameters describing the locations of input files and required tools. MAAMD was tested by analyzing 4 different GEO datasets from mice and drosophila. MAAMD automates data downloading, data organization, data quality control assessment, differential gene expression analysis, clustering analysis, pathway visualization, gene-set enrichment analysis, and cross-species orthologous-gene comparisons. MAAMD was utilized to identify gene orthologues responding to hypoxia or hyperoxia in both mice and drosophila. The entire set of analyses for 4 datasets (34 total microarrays) finished in about one hour. Conclusions MAAMD saves time, minimizes the required computer skills, and offers a standardized procedure for users to analyze microarray datasets and make new intra- and inter-dataset comparisons. PMID:24621103
Rapid Prototyping of Computer-Based Presentations Using NEAT, Version 1.1.
ERIC Educational Resources Information Center
Muldner, Tomasz
NEAT (iNtegrated Environment for Authoring in ToolBook) provides templates and various facilities for the rapid prototyping of computer-based presentations, a capability that is lacking in current authoring systems. NEAT is a specialized authoring system that can be used by authors who have a limited knowledge of computer systems and no…
Philip A. Araman
1977-01-01
The design of a rough mill for the production of interior furniture parts is used to illustrate a simulation technique for analyzing and evaluating established and proposed sequential production systems. Distributions representing the real-world random characteristics of lumber, equipment feed speeds and delay times are programmed into the simulation. An example is...
Designing Robust and Resilient Tactical MANETs
2014-09-25
Reported publications include: "Bounds on the Throughput Efficiency of Greedy Maximal Scheduling in Wireless Networks," IEEE/ACM Transactions on Networking (06/2011); "... Wireless Sensor Networks and Effects of Long Range Dependent Data," Special IWSM Issue of Sequential Analysis (11/2012); A. D. Dominguez..., Bushnell, R. Poovendran, "A Convex Optimization Approach for Clone Detection in Wireless Sensor Networks," Pervasive and Mobile Computing (01/2012)...
Asymmetric Synthesis of Spiropyrazolones by Sequential Organo- and Silver Catalysis
Hack, Daniel; Dürr, Alexander B; Deckers, Kristina; Chauhan, Pankaj; Seling, Nico; Rübenach, Lukas; Mertens, Lucas; Raabe, Gerhard; Schoenebeck, Franziska; Enders, Dieter
2016-01-01
A stereoselective one-pot synthesis of spiropyrazolones through an organocatalytic asymmetric Michael addition and a formal Conia-ene reaction has been developed. Depending on the nitroalkene, the 5-exo-dig-cyclization could be achieved by silver-catalyzed alkyne activation or by oxidation of the intermediate enolate. The mechanistic pathways have been investigated using computational chemistry and mechanistic experiments. PMID:26676875
ERIC Educational Resources Information Center
Cheng, Kun-Hung; Tsai, Chin-Chung
2016-01-01
Following a previous study (Cheng & Tsai, 2014. "Computers & Education"), this study aimed to probe the interaction of child-parent shared reading with the augmented reality (AR) picture book in more depth. A series of sequential analyses were thus conducted to infer the behavioral transition diagrams and visualize the continuity…
ERIC Educational Resources Information Center
Morrison, James L.
A computerized delivery system in consumer economics developed at the University of Delaware uses the PLATO system to provide a basis for analyzing consumer behavior in the marketplace. The 16 sequential lessons, part of the Consumer in the Marketplace Series (CMS), demonstrate consumer economic theory in layman's terms and are structured to focus…
Academic Growth Expectations for Students with Emotional and Behavior Disorders
ERIC Educational Resources Information Center
Ysseldyke, Jim; Scerra, Carmine; Stickney, Eric; Beckler, Amanda; Dituri, Joan; Ellis, Karen
2017-01-01
Computer adaptive assessments were used to monitor the academic status and growth of students with emotional behavior disorders (EBD) in reading (N = 321) and math (N = 322) in a regional service center serving 56 school districts. A cohort sequential model was used to compare that performance to the status and growth of a national user base of…
ERIC Educational Resources Information Center
Martin, Nancy
Presented is a technical report concerning the use of a mathematical model describing certain aspects of the duplication and selection processes in natural genetic adaptation. This reproductive plan/model occurs in artificial genetics (the use of ideas from genetics to develop general problem solving techniques for computers). The reproductive…
ERIC Educational Resources Information Center
Aragón, Sonia; Lapresa, Daniel; Arana, Javier; Anguera, M. Teresa; Garzón, Belén
2017-01-01
Polar coordinate analysis is a powerful data reduction technique based on the Zsum statistic, which is calculated from adjusted residuals obtained by lag sequential analysis. Its use has been greatly simplified since the addition of a module in the free software program HOISAN for performing the necessary computations and producing…
Decoding-Accuracy-Based Sequential Dimensionality Reduction of Spatio-Temporal Neural Activities
NASA Astrophysics Data System (ADS)
Funamizu, Akihiro; Kanzaki, Ryohei; Takahashi, Hirokazu
Performance of a brain machine interface (BMI) critically depends on the selection of input data because information embedded in the neural activities is highly redundant. In addition, properly selected input data with a reduced dimension lead to improved decoding generalization ability and decreased computational effort, both of which are significant advantages for clinical applications. In the present paper, we propose an algorithm of sequential dimensionality reduction (SDR) that effectively extracts motor/sensory related spatio-temporal neural activities. The algorithm gradually reduces the input data dimension by dropping neural data spatio-temporally while preserving the decoding accuracy as far as possible. Support vector machine (SVM) was used as the decoder, and tone-induced neural activities in rat auditory cortices were decoded into the test tone frequencies. SDR reduced the input data dimension to a quarter and significantly improved the accuracy of decoding of novel data. Moreover, spatio-temporal neural activity patterns selected by SDR resulted in significantly higher accuracy than high spike rate patterns or conventionally used spatial patterns. These results suggest that the proposed algorithm can improve the generalization ability and decrease the computational effort of decoding.
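The general pattern, greedily dropping input features while guarding decoding accuracy, can be sketched as follows. This is our simplification, assuming scikit-learn's SVM and a synthetic feature matrix; the stopping rule and scoring are stand-ins for the published procedure.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for spatio-temporal neural features (channels x bins).
X, y = make_classification(n_samples=200, n_features=20, n_informative=8,
                           random_state=0)

def sequential_dimension_reduction(X, y, min_features=8):
    """Greedy backward elimination: repeatedly drop the feature whose
    removal hurts cross-validated decoding accuracy the least."""
    keep = list(range(X.shape[1]))
    while len(keep) > min_features:
        candidates = []
        for f in keep:
            trial = [k for k in keep if k != f]
            acc = cross_val_score(SVC(), X[:, trial], y, cv=3).mean()
            candidates.append((acc, f))
        best_acc, drop = max(candidates)   # removal preserving most accuracy
        keep.remove(drop)
    return keep

selected = sequential_dimension_reduction(X, y)
print(len(selected), "features kept:", selected)
```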
NASA Astrophysics Data System (ADS)
Kopka, P.; Wawrzynczak, A.; Borysiewicz, M.
2015-09-01
In many areas of application, a central problem is the solution of an inverse problem, especially estimation of unknown model parameters so that the underlying dynamics of a physical system can be modeled precisely. In this situation, Bayesian inference is a powerful tool for combining observed data with prior knowledge to obtain the probability distribution of the searched parameters. We have applied the modern methodology named Sequential Approximate Bayesian Computation (S-ABC) to the problem of tracing an atmospheric contaminant source. ABC is a technique commonly used in the Bayesian analysis of complex models and dynamic systems. Sequential methods can significantly increase the efficiency of the ABC. In the presented algorithm, the input data are the on-line arriving concentrations of the released substance registered by a distributed sensor network from the OVER-LAND ATMOSPHERIC DISPERSION (OLAD) experiment. The algorithm outputs are the probability distributions of the contamination source parameters, i.e., its location, release rate, speed and direction of movement, start time and duration. The stochastic approach presented in this paper is completely general and can be used in other fields where the parameters of a model best fitted to the observable data must be found.
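A minimal sketch of the tolerance-shrinking idea behind sequential ABC follows, with a made-up 1D dispersion forward model and uniform priors standing in for the OLAD setup; a full S-ABC scheme would also perturb and re-weight accepted particles between generations.

```python
import numpy as np

rng = np.random.default_rng(2)
sensors = np.linspace(0.0, 10.0, 8)

def simulate(theta):
    """Hypothetical 1D dispersion model: concentration at each sensor
    decays exponentially with distance from the source.
    theta[:, 0] = source location, theta[:, 1] = release rate."""
    return theta[:, 1:2] * np.exp(-np.abs(sensors - theta[:, 0:1]))

true_theta = np.array([[3.0, 5.0]])
observed = simulate(true_theta)[0] + 0.05 * rng.standard_normal(sensors.size)

def abc_round(eps, n_keep=500, batch=20000):
    """Rejection-ABC round: keep prior draws whose simulated sensor
    readings land within eps of the observations."""
    kept = []
    while sum(len(k) for k in kept) < n_keep:
        theta = rng.uniform([0.0, 0.0], [10.0, 10.0], size=(batch, 2))
        dist = np.linalg.norm(simulate(theta) - observed, axis=1)
        kept.append(theta[dist < eps])
    return np.concatenate(kept)[:n_keep]

# Sequential schedule: as the tolerance shrinks, the posterior tightens.
for eps in [5.0, 1.5, 0.5]:
    post = abc_round(eps)
    print(f"eps={eps:>4}: source x = {post[:, 0].mean():.2f}"
          f" +/- {post[:, 0].std():.2f}")
```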
Scalable Parallel Density-based Clustering and Applications
NASA Astrophysics Data System (ADS)
Patwary, Mostofa Ali
2014-04-01
Recently, density-based clustering algorithms (DBSCAN and OPTICS) have received significant attention from the scientific community due to their unique capability of discovering arbitrarily shaped clusters and eliminating noise data. These algorithms have several applications requiring high performance computing, including finding halos and subhalos (clusters) in massive cosmology data in astrophysics, analyzing satellite images, X-ray crystallography, and anomaly detection. However, parallelization of these algorithms is extremely challenging, as they exhibit an inherently sequential data access order and unbalanced workloads, resulting in low parallel efficiency. To break the data access sequentiality and to achieve high parallelism, we develop new parallel algorithms, both for DBSCAN and OPTICS, designed using graph algorithmic techniques. For example, our parallel DBSCAN algorithm exploits the similarities between DBSCAN and computing connected components. Using datasets containing up to a billion floating point numbers, we show that our parallel density-based clustering algorithms significantly outperform the existing algorithms, achieving speedups up to 27.5 on 40 cores on a shared memory architecture and speedups up to 5,765 using 8,192 cores on a distributed memory architecture. In our experiments, we found that while achieving this scalability, our algorithms produce clustering results of comparable quality to the classical algorithms.
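The analogy between DBSCAN and connected components can be made concrete in a few lines. This toy version (ours; O(n^2) and sequential) unions mutually reachable core points with a union-find structure, the same order-independent primitive a parallel formulation can exploit; border-point assignment is omitted for brevity.

```python
import numpy as np

def find(parent, i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path halving
        i = parent[i]
    return i

def dbscan_via_components(X, eps=0.5, min_pts=4):
    """Sketch of DBSCAN recast as connected components: core points
    within eps of each other are unioned; each resulting component
    is one cluster. Border points are left unlabeled (-1) here."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # O(n^2) toy
    neighbors = d < eps
    core = neighbors.sum(axis=1) >= min_pts
    parent = list(range(n))
    for i in range(n):
        if not core[i]:
            continue
        for j in np.nonzero(neighbors[i] & core)[0]:
            ri, rj = find(parent, i), find(parent, j)
            if ri != rj:
                parent[max(ri, rj)] = min(ri, rj)   # union
    return [find(parent, i) if core[i] else -1 for i in range(n)]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.2, (50, 2)), rng.normal(3, 0.2, (50, 2))])
labels = dbscan_via_components(X)
print("clusters:", sorted({l for l in labels if l >= 0}))
```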
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik
2014-05-16
Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. The computational cost of the simulations at high resolution is often prohibitive, making parametric studies at different input values impractical. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and the prior distributions facilitates the different Markov chain Monte Carlo (MCMC) movements. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and BTMGP to model the multiphase flow in a full scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.
Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity
Gordiz, Kiarash; Singh, David J.; Henry, Asegun
2015-01-29
In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first-principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.
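The contrast between the two sampling modes can be illustrated with a toy observable, here an Ornstein-Uhlenbeck process standing in for a real MD trajectory: the ensemble runs are mutually independent and could execute in parallel, while the long run must proceed step by step.

```python
import numpy as np

rng = np.random.default_rng(4)

def trajectory(n_steps, x0):
    """Toy Ornstein-Uhlenbeck process standing in for an MD observable."""
    x = np.empty(n_steps)
    x[0] = x0
    for t in range(1, n_steps):
        x[t] = 0.99 * x[t - 1] + 0.1 * rng.standard_normal()
    return x

# Time sampling: one long run, necessarily evaluated sequentially.
long_run = trajectory(100_000, 0.0)
print("time average    :", round(float(long_run.mean()), 4))

# Ensemble sampling: many short independent runs (embarrassingly
# parallel); discard an equilibration segment from each, then average.
ensemble = [trajectory(1_000, rng.standard_normal()) for _ in range(100)]
avg = float(np.mean([e[200:].mean() for e in ensemble]))
print("ensemble average:", round(avg, 4))
```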
Remembrance of inferences past: Amortization in human hypothesis generation.
Dasgupta, Ishita; Schulz, Eric; Goodman, Noah D; Gershman, Samuel J
2018-05-21
Bayesian models of cognition assume that people compute probability distributions over hypotheses. However, the required computations are frequently intractable or prohibitively expensive. Since people often encounter many closely related distributions, selective reuse of computations (amortized inference) is a computationally efficient use of the brain's limited resources. We present three experiments that provide evidence for amortization in human probabilistic reasoning. When sequentially answering two related queries about natural scenes, participants' responses to the second query systematically depend on the structure of the first query. This influence is sensitive to the content of the queries, only appearing when the queries are related. Using a cognitive load manipulation, we find evidence that people amortize summary statistics of previous inferences, rather than storing the entire distribution. These findings support the view that the brain trades off accuracy and computational cost, to make efficient use of its limited cognitive resources to approximate probabilistic inference. Copyright © 2018 Elsevier B.V. All rights reserved.
Tuning the Brake While Raising the Stake: Network Dynamics during Sequential Decision-Making.
Meder, David; Haagensen, Brian Numelin; Hulme, Oliver; Morville, Tobias; Gelskov, Sofie; Herz, Damian Marc; Diomsina, Beata; Christensen, Mark Schram; Madsen, Kristoffer Hougaard; Siebner, Hartwig Roman
2016-05-11
When gathering valued goods, risk and reward are often coupled and escalate over time, for instance, during foraging, trading, or gambling. This escalating frame requires agents to continuously balance expectations of reward against those of risk. To address how the human brain dynamically computes these tradeoffs, we performed whole-brain fMRI while healthy young individuals engaged in a sequential gambling task. Participants were repeatedly confronted with the option to continue with throwing a die to accumulate monetary reward under escalating risk, or the alternative option to stop to bank the current balance. Within each gambling round, the accumulation of gains gradually increased reaction times for "continue" choices, indicating growing uncertainty in the decision to continue. Neural activity evoked by "continue" choices was associated with growing activity and connectivity of a cortico-subcortical "braking" network that positively scaled with the accumulated gains, including pre-supplementary motor area (pre-SMA), inferior frontal gyrus, caudate, and subthalamic nucleus (STN). The influence of the STN on continue-evoked activity in the pre-SMA was predicted by interindividual differences in risk-aversion attitudes expressed during the gambling task. Furthermore, activity in dorsal anterior cingulate cortex (ACC) reflected individual choice tendencies by showing increased activation when subjects made nondefault "continue" choices despite an increasing tendency to stop, but ACC activity did not change in proportion with subjective choice uncertainty. Together, the results implicate a key role of dorsal ACC, pre-SMA, inferior frontal gyrus, and STN in computing the trade-off between escalating reward and risk in sequential decision-making. Using a paradigm where subjects experienced increasing potential rewards coupled with increasing risk, this study addressed two unresolved questions in the field of decision-making: First, we investigated an "inhibitory" network of regions that has so far been investigated with externally cued action inhibition. In this study, we show that the dynamics in this network under increasingly risky decisions are predictive of subjects' risk attitudes. Second, we contribute to a currently ongoing debate about the anterior cingulate cortex's role in sequential foraging decisions by showing that its activity is related to making nondefault choices rather than to choice uncertainty. Copyright © 2016 Meder, Haagensen, et al.
Clinical application of a light-pen computer system for quantitative angiography
NASA Technical Reports Server (NTRS)
Alderman, E. L.
1975-01-01
The paper describes an angiographic analysis system which uses a video disk for recording and playback, a light-pen for data input, minicomputer processing, and an electrostatic printer/plotter for hardcopy output. The method is applied to quantitative analysis of ventricular volumes, sequential ventriculography for assessment of physiologic and pharmacologic interventions, analysis of instantaneous time sequence of ventricular systolic and diastolic events, and quantitation of segmental abnormalities. The system is shown to provide the capability for computation of ventricular volumes and other measurements from operator-defined margins by greatly reducing the tedium and errors associated with manual planimetry.
Improved numerical methods for turbulent viscous flows aerothermal modeling program, phase 2
NASA Technical Reports Server (NTRS)
Karki, K. C.; Patankar, S. V.; Runchal, A. K.; Mongia, H. C.
1988-01-01
The details of a study to develop accurate and efficient numerical schemes to predict complex flows are described. In this program, several discretization schemes were evaluated using simple test cases. This assessment led to the selection of three schemes for an in-depth evaluation based on two-dimensional flows. The scheme with the superior overall performance was incorporated in a computer program for three-dimensional flows. To improve the computational efficiency, the selected discretization scheme was combined with a direct solution approach in which the fluid flow equations are solved simultaneously rather than sequentially.
NASA Astrophysics Data System (ADS)
Tan, Maxine; Leader, Joseph K.; Liu, Hong; Zheng, Bin
2015-03-01
We recently investigated a new mammographic image feature based risk factor to predict near-term breast cancer risk after a woman has a negative mammographic screening. We hypothesized that, unlike the conventional epidemiology-based long-term (or lifetime) risk factors, the mammographic image feature based risk factor value will increase as the time lag between the negative and positive mammography screenings decreases. The purpose of this study is to test this hypothesis. From a large and diverse full-field digital mammography (FFDM) image database with 1278 cases, we collected all available sequential FFDM examinations for each case, including the "current" and the 1 to 3 most recent "prior" examinations. All "prior" examinations were interpreted as negative, and "current" ones were either malignant or recalled negative/benign. We computed 92 global mammographic texture and density based features, and included three clinical risk factors (woman's age, family history, and subjective breast density BIRADS ratings). On this initial feature set, we applied a fast and accurate Sequential Forward Floating Selection (SFFS) feature selection algorithm to reduce feature dimensionality. The features computed on the two mammographic views were separately used to train two artificial neural network (ANN) classifiers. The classification scores of the two ANNs were then merged with a sequential ANN. The results show that the maximum adjusted odds ratios were 5.59, 7.98, and 15.77 when using the 3rd, 2nd, and 1st "prior" FFDM examinations, respectively, which demonstrates an association between mammographic image feature change and an increasing risk trend of developing breast cancer in the near term after a negative screening.
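SFFS itself is a standard wrapper method. A hedged sketch using the mlxtend implementation and a synthetic, scaled-down feature matrix (the actual study used 92 image features plus 3 clinical factors and a two-view ANN merge) might look like this:

```python
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the feature matrix (scaled down from the
# study's 92 texture/density features plus 3 clinical factors).
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           random_state=0)

# SFFS: forward selection with conditional backward "floating" removals,
# scored by cross-validated accuracy of a small neural network.
sffs = SFS(MLPClassifier(hidden_layer_sizes=(8,), max_iter=500,
                         random_state=0),
           k_features=6, forward=True, floating=True,
           scoring="accuracy", cv=3)
sffs = sffs.fit(X, y)
print("selected feature indices:", sffs.k_feature_idx_)
print("CV accuracy:", round(sffs.k_score_, 3))
```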
Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task.
Moënne-Loccoz, Cristóbal; Vergara, Rodrigo C; López, Vladimir; Mery, Domingo; Cosmelli, Diego
2017-01-01
Our daily interaction with the world is plagued with situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM machine, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants that master the task.
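The strategy-identification step can be sketched with a plain forward-algorithm likelihood: given a few stereotyped-strategy HMMs (the parameters below are invented, not the paper's), score an observed choice sequence under each and keep the best-scoring one, as a Mixture of Experts would.

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Scaled forward algorithm: log-likelihood of a discrete
    observation sequence under an HMM."""
    alpha = start * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        s = alpha.sum()
        loglik += np.log(s)
        alpha = alpha / s
    return loglik

# Invented stereotyped strategies over binary choices (0 = left branch,
# 1 = right branch): (start probs, transition matrix, emission matrix).
strategies = {
    "systematic scan": (np.array([0.9, 0.1]),
                        np.array([[0.2, 0.8], [0.8, 0.2]]),
                        np.array([[0.9, 0.1], [0.1, 0.9]])),
    "stay-left bias":  (np.array([0.5, 0.5]),
                        np.array([[0.9, 0.1], [0.1, 0.9]]),
                        np.array([[0.95, 0.05], [0.6, 0.4]])),
}

choices = [0, 1, 0, 1, 0, 1, 0, 1]          # one participant's sequence
for name, params in strategies.items():
    print(f"{name}: log-lik = {forward_loglik(choices, *params):.2f}")
```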
Reuse of imputed data in microarray analysis increases imputation efficiency
Kim, Ki-Yeol; Kim, Byoung-Jin; Yi, Gwan-Su
2004-01-01
Background The imputation of missing values is necessary for the efficient use of DNA microarray data, because many clustering algorithms and some statistical analyses require a complete data set. A few imputation methods for DNA microarray data have been introduced, but their efficiency was low and the validity of the imputed values had not been fully checked. Results We developed a new cluster-based imputation method called the sequential K-nearest neighbor (SKNN) method. This imputes the missing values sequentially, starting from the gene having the fewest missing values, and uses the imputed values for later imputations. Although it reuses imputed values, the new method is greatly improved in accuracy and computational complexity over the conventional KNN-based method and other methods based on maximum likelihood estimation. The performance of SKNN relative to other imputation methods was particularly high for data with high missing rates and large numbers of experiments. Application of Expectation Maximization (EM) to the SKNN method improved the accuracy, but increased computational time in proportion to the number of iterations. The Multiple Imputation (MI) method, which is well known but had not previously been applied to microarray data, showed similarly high accuracy to the SKNN method, with slightly higher dependency on the types of data sets. Conclusions Sequential reuse of imputed data in KNN-based imputation greatly increases the efficiency of imputation. The SKNN method should be practically useful for saving the data of microarray experiments that have high proportions of missing entries. The SKNN method generates reliable imputed values which can be used for further cluster-based analysis of microarray data. PMID:15504240
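The key idea, reusing already-imputed rows as references for later imputations, fits in a short sketch. The Euclidean distance, k, and column-wise mean fill below are our simplifications, not necessarily the published SKNN details.

```python
import numpy as np

def sknn_impute(X, k=5):
    """Sequential KNN imputation sketch: impute genes (rows) in order of
    increasing missingness, adding each completed gene to the reference
    pool so later imputations can reuse earlier imputed values."""
    X = X.copy()
    complete = ~np.isnan(X).any(axis=1)
    order = np.argsort(np.isnan(X).sum(axis=1))
    reference = list(np.nonzero(complete)[0])
    for g in order:
        miss = np.isnan(X[g])
        if not miss.any():
            continue
        obs = ~miss
        # Distance to reference genes over the observed columns only.
        d = np.sqrt(((X[reference][:, obs] - X[g, obs]) ** 2).mean(axis=1))
        nearest = np.array(reference)[np.argsort(d)[:k]]
        X[g, miss] = X[nearest][:, miss].mean(axis=0)
        reference.append(g)        # reuse of imputed data is the key idea
    return X

rng = np.random.default_rng(5)
data = rng.normal(size=(100, 12))
mask = rng.random(data.shape) < 0.1
imputed = sknn_impute(np.where(mask, np.nan, data))
print("RMSE:", np.sqrt(np.mean((imputed - data)[mask] ** 2)).round(3))
```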
Rapid determination of actinides in asphalt samples
Maxwell, Sherrod L.; Culligan, Brian K.; Hutchison, Jay B.
2014-01-12
A new rapid method for the determination of actinides in asphalt samples has been developed that can be used in emergency response situations or for routine analysis. If a radiological dispersive device (RDD), Improvised Nuclear Device (IND) or a nuclear accident such as the accident at the Fukushima Nuclear Power Plant in March 2011 occurs, there will be an urgent need for rapid analyses of many different environmental matrices, including asphalt materials, to support dose mitigation and environmental cleanup. The new method for the determination of actinides in asphalt utilizes a rapid furnace step to destroy bitumen and organics present in the asphalt and sodium hydroxide fusion to digest the remaining sample. Sample preconcentration steps are used to collect the actinides, and a new stacked TRU Resin + DGA Resin column method is employed to separate the actinide isotopes in the asphalt samples. The TRU Resin plus DGA Resin separation approach, which allows sequential separation of plutonium, uranium, americium and curium isotopes in asphalt samples, can be applied to soil samples as well.
2013-11-27
CUBRC has developed an in-line, multi-analyte isolation technology that utilizes solid phase extraction chemistries to purify... goals. Specifically, CUBRC will design and manufacture a prototype cartridge(s) and test the prototype cartridge for its ability to isolate each...
Natural migration rates of trees: Global terrestrial carbon cycle implications. Book chapter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solomon, A.M.
The paper discusses the forest-ecological processes which constrain the rate of response by forests to rapid future environmental change. It establishes a minimum response time by natural tree populations which invade alien landscapes and reach the status of a mature, closed canopy forest when maximum carbon storage is realized. It considers rare long-distance and frequent short-distance seed transport, seedling and tree establishment, sequential tree and stand maturation, and spread between newly established colonies.
Oxonitriles: a Grignard addition-acylation route to enamides.
Fleming, Fraser F; Wei, Guoqing; Zhang, Zhiyu; Steward, Omar W
2006-10-12
Sequential addition of three different Grignard reagents and pivaloyl chloride to 3-oxo-1-cyclohexene-1-carbonitrile installs four new bonds to generate a diverse array of cyclic enamides. Remarkably, formation of the C-magnesiated nitrile intermediate is followed by preferential acylation by pivaloyl chloride rather than consumption by an in situ Grignard reagent. Rapid N-acylation of the C-magnesiated nitrile generates an acyl ketenimine that reacts readily with Grignard reagents or a trialkylzincate, effectively assembling highly substituted, cyclic enamides.
Remediation of DNAPL through Sequential In Situ Chemical Oxidation and Bioaugmentation
2009-04-01
... oxidized by MnO2 at a significant rate; however, MnO2 reacted rapidly with oxalic acid; complete dechlorination occurred only in microcosms...
A Rapid, One-Pot Synthesis of β-Siloxy-α-Haloaldehydes
Saadi, Jakub; Akakura, Matsujiro
2011-01-01
The Mukaiyama cross aldol reaction of α-fluoro-, α-chloro-, and α-bromoacetaldehyde-derived (Z)-tris(trimethylsilyl)silyl enol ethers furnishing anti-β-siloxy-α-haloaldehydes is described. A highly diastereoselective, one-pot, sequential double aldol process, affording novel β,δ-bissiloxy-α,γ-bishaloaldehydes is developed. Reactions are catalyzed by C6F5CHTf2 and C6F5CTf2AlMe2 (0.5–1.5 mol%) and provide access to halogenated polyketide fragments. PMID:21815682
1994-03-18
(Paillard, 1960). The benefit of movement chunks would lie in the associated reduction of storage and retrieval capacity (see e.g. Gallistel, 1980; Jones, 1983; Fromkin, 1981; Zimmer & Korndle, 1988), but interval durations should be affected as well. The notion of sequence-specific... point where they can be made more rapidly and accurately with little variation. Then they become welded together into 'chunks'" (Gallistel, 1980, p. 367).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickens, J.K.
1991-04-01
The organic scintillation detector response code SCINFUL has been used to compute secondary-particle energy spectra, dσ/dE, following nonelastic neutron interactions with ¹²C for incident neutron energies between 15 and 60 MeV. The resulting spectra are compared with published similar spectra computed by Brenner and Prael, who used an intranuclear cascade code including alpha clustering, a particle pickup mechanism, and a theoretical approach to sequential decay via intermediate particle-unstable states. The similarities of and the differences between the results of the two approaches are discussed. 16 refs., 44 figs., 2 tabs.
NASA Astrophysics Data System (ADS)
Liu, GaiYun; Chao, Daniel Yuh
2015-08-01
To date, research on supervisor design for flexible manufacturing systems has focused on speeding up the computation of optimal (maximally permissive) liveness-enforcing controllers. Recent deadlock prevention policies for systems of simple sequential processes with resources (S3PR) reduce the computational burden by considering only the minimal portion of all first-met bad markings (FBMs). Maximal permissiveness is ensured by not forbidding any live state. This paper proposes a method to further reduce the size of the minimal set of FBMs using a vector-covering approach, so that the resulting integer linear programming problems can be solved efficiently while maintaining maximal permissiveness. The method improves on previous work and achieves the simplest supervisor structure, with the minimal number of monitors.
NASA Astrophysics Data System (ADS)
Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K. T.
2012-12-01
Accurate reservoir inflow forecasts are instrumental for maximizing the value of water resources and significantly influence the operation of hydropower reservoirs. We consider improving hourly reservoir inflow forecasts over a 24-hour lead time, with the day-ahead (Elspot) market of the Nordic power exchange in perspective. The procedure presented comprises an error model added on top of an unalterable constant-parameter conceptual model, and a sequential data assimilation routine. The structure of the error model was investigated using freely available software for detecting mathematical relationships in a given dataset (EUREQA) and kept to minimum complexity for computational reasons. As new streamflow data become available, the extra information manifested in the discrepancies between measurements and conceptual model outputs is extracted and assimilated into the forecasting system recursively using a sequential Monte Carlo technique. Besides improving forecast skill significantly, the probabilistic inflow forecasts provided by the present approach carry information suitable for reducing uncertainty in decision-making processes related to hydropower systems operation. The potential of the procedure for improving the accuracy of inflow forecasts at lead times up to 24 hours, and its reliability in different seasons of the year, will be illustrated and discussed thoroughly.
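To make the assimilation step concrete, here is a minimal sketch of the kind of sequential Monte Carlo (particle filter) update described above, assuming a hypothetical AR(1) model for the forecast error; all names and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def smc_update(particles, weights, discrepancy, phi=0.8, sigma_w=0.1, sigma_v=0.2):
    """One assimilation step: propagate AR(1) error particles, reweight against
    the newly observed discrepancy (observed flow minus conceptual-model output),
    and resample."""
    n = len(particles)
    # Propagate each particle through the assumed AR(1) error dynamics.
    particles = phi * particles + np.random.normal(0.0, sigma_w, n)
    # Reweight by the Gaussian likelihood of the observed discrepancy.
    weights = weights * np.exp(-0.5 * ((discrepancy - particles) / sigma_v) ** 2)
    weights /= weights.sum()
    # Systematic resampling to avoid weight degeneracy.
    idx = np.searchsorted(np.cumsum(weights), (np.arange(n) + np.random.rand()) / n)
    return particles[idx], np.full(n, 1.0 / n)

# Usage: at each hour, feed in the latest discrepancy; the particle mean is then
# added to the conceptual-model forecast over the 24 h lead time.
```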
Reliability-based trajectory optimization using nonintrusive polynomial chaos for Mars entry mission
NASA Astrophysics Data System (ADS)
Huang, Yuechen; Li, Haiyang
2018-06-01
This paper presents a reliability-based sequential optimization (RBSO) method for the trajectory optimization problem with parametric uncertainties in entry dynamics for a Mars entry mission. First, the deterministic entry trajectory optimization model is reviewed, and then the reliability-based optimization model is formulated. In addition, a modified sequential optimization method is proposed, in which the nonintrusive polynomial chaos expansion (PCE) method and the most probable point (MPP) searching method are employed to solve the reliability-based optimization problem efficiently. The nonintrusive PCE method supports the transformation between the stochastic optimization (SO) and the deterministic optimization (DO) and efficiently approximates the trajectory solution. The MPP method, which assesses the reliability of constraint satisfaction only up to the necessary level, is employed to further improve computational efficiency. The cycle of SO, reliability assessment, and constraint update is repeated in the RBSO until the reliability requirements on constraint satisfaction are met. Finally, the RBSO is compared with traditional DO and with traditional sequential optimization based on Monte Carlo (MC) simulation in a specific Mars entry mission to demonstrate the effectiveness and efficiency of the proposed method.
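The RBSO cycle can be illustrated on a toy problem. In the sketch below, plain Monte Carlo stands in for the paper's PCE/MPP reliability machinery, and the objective, constraint, and uncertainty model are made-up stand-ins, not the entry-dynamics model.

```python
import numpy as np
from scipy.optimize import minimize

def constraint(x, xi):            # require g >= 0; xi is the uncertain parameter
    return 1.0 - x[0] ** 2 - xi

def reliability(x, n=10_000):     # Monte Carlo stand-in for the PCE + MPP assessment
    xi = np.random.normal(0.0, 0.1, n)
    return np.mean(constraint(x, xi) >= 0.0)

margin, target = 0.0, 0.95
for cycle in range(20):
    # Deterministic optimization with the current safety margin on g.
    res = minimize(lambda x: (x[0] - 2.0) ** 2, x0=[0.0],
                   constraints={"type": "ineq",
                                "fun": lambda x: constraint(x, 0.0) - margin})
    if reliability(res.x) >= target:
        break                     # reliability requirement satisfied
    margin += 0.05                # tighten the constraint and re-optimize

print(cycle, res.x, reliability(res.x))
```

The loop mirrors the cycle described in the abstract: optimize deterministically, assess the probability of constraint satisfaction under uncertainty, and tighten the constraint until the target reliability holds.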
Sequential Auctions with Partially Substitutable Goods
NASA Astrophysics Data System (ADS)
Vetsikas, Ioannis A.; Jennings, Nicholas R.
In this paper, we examine a setting in which a number of partially substitutable goods are sold in sequential single-unit auctions. Each bidder needs to buy exactly one of these goods. In previous work, this setting has been simplified by assuming that bidders do not know their valuations for all items a priori, but rather are informed of their true valuation for each item right before the corresponding auction takes place. This assumption simplifies the strategies of bidders, as the expected revenue from future auctions is the same for all bidders due to the complete lack of private information. In our analysis we do not make this assumption, which complicates the computation of the equilibrium strategies significantly. We examine this setting for both first- and second-price auction variants, initially when the closing prices are not announced, for which we prove that sequential first- and second-price auctions are revenue equivalent. Then we assume that the prices are announced; because of the asymmetry in the announced prices between the two auction variants, revenue equivalence does not hold in this case. We finish the paper by giving some initial results for the case where free disposal is allowed, and a bidder can therefore purchase more than one item.
Blunt pancreatic trauma: A persistent diagnostic conundrum?
Kumar, Atin; Panda, Ananya; Gamanagatti, Shivanand
2016-01-01
Blunt pancreatic trauma is an uncommon injury but has high morbidity and mortality. In the modern era of trauma care, pancreatic trauma remains a persistent challenge to radiologists and surgeons alike. Early detection of pancreatic trauma is essential to prevent subsequent complications. However, early pancreatic injury is often subtle on computed tomography (CT) and can be missed unless specifically looked for. Signs of pancreatic injury on CT include laceration, transection, bulky pancreas, heterogeneous enhancement, peripancreatic fluid, and signs of pancreatitis. Pancreatic ductal injury is a vital decision-making parameter, as ductal injury is an indication for laparotomy. While lacerations involving more than half of the pancreatic parenchyma are suggestive of ductal injury on CT, ductal injuries can be directly assessed on magnetic resonance imaging (MRI) or endoscopic retrograde cholangiopancreatography. Pancreatic trauma also shows temporal evolution, with the extent of injury increasing with time. Hence, early CT scans may underestimate the extent of injuries, and sequential imaging with CT or MRI is important in pancreatic trauma. Sequential imaging is also needed for successful non-operative management of pancreatic injury. Accurate early detection on initial CT and adopting a multimodality, sequential imaging strategy can improve outcomes in pancreatic trauma. PMID:26981225
Algorithms and Application of Sparse Matrix Assembly and Equation Solvers for Aeroacoustics
NASA Technical Reports Server (NTRS)
Watson, W. R.; Nguyen, D. T.; Reddy, C. J.; Vatsa, V. N.; Tang, W. H.
2001-01-01
An algorithm for symmetric sparse equation solutions on an unstructured grid is described. Efficient, sequential sparse algorithms for degree-of-freedom reordering, supernodes, symbolic/numerical factorization, and forward-backward solution phases are reviewed. Three sparse algorithms for the generation and assembly of symmetric systems of matrix equations are presented. The accuracy and numerical performance of the sequential version of the sparse algorithms are evaluated over the frequency range of interest in a three-dimensional aeroacoustics application. Results show that the solver solutions are accurate using a discretization of 12 points per wavelength. Results also show that the first assembly algorithm is impractical for high-frequency noise calculations. The second and third assembly algorithms have nearly equal performance at low source frequencies, but at higher source frequencies the third algorithm saves CPU time and RAM. The CPU time and RAM required by the second and third assembly algorithms are two orders of magnitude smaller than those required by the sparse equation solver. A sequential version of these sparse algorithms can, therefore, be conveniently incorporated into a substructuring formulation for domain decomposition to achieve parallel computation, where different substructures are handled by different parallel processors.
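As a generic illustration of the assembly step discussed above (not one of the paper's three algorithms), element matrices can be accumulated in coordinate form and summed into a compressed sparse row structure; the element data here are made up.

```python
import numpy as np
from scipy.sparse import coo_matrix

# Two illustrative 2x2 element stiffness matrices and their global DOF indices.
elements = [((0, 1), np.array([[2.0, -1.0], [-1.0, 2.0]])),
            ((1, 2), np.array([[2.0, -1.0], [-1.0, 2.0]]))]

rows, cols, vals = [], [], []
for dofs, ke in elements:
    for i, gi in enumerate(dofs):
        for j, gj in enumerate(dofs):
            rows.append(gi); cols.append(gj); vals.append(ke[i, j])

# Duplicate (row, col) entries are summed on conversion -- the assembly step.
A = coo_matrix((vals, (rows, cols)), shape=(3, 3)).tocsr()
print(A.toarray())
```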
Parallel heuristics for scalable community detection
Lu, Hao; Halappanavar, Mahantesh; Kalyanaraman, Ananth
2015-08-14
Community detection has become a fundamental operation in numerous graph-theoretic applications. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed in 2008, the method has become increasingly popular owing to its ability to detect high-modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose heuristics that are designed to break the sequential barrier. For evaluation purposes, we implemented our heuristics using OpenMP multithreading, and tested them over real-world graphs derived from multiple application domains. Compared to the serial Louvain implementation, our parallel implementation is able to produce community outputs with a higher modularity for most of the inputs tested, in a comparable number of or fewer iterations, while providing real speedups of up to 16x using 32 threads.
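For orientation, the serial kernel these heuristics parallelize is the Louvain local-move sweep. The sketch below (Python with networkx, purely illustrative rather than the paper's OpenMP code) shows the modularity-gain comparison each vertex performs; the parallel heuristics relax the strictly sequential visit order of this loop.

```python
import networkx as nx
from collections import defaultdict

def local_move_pass(G, community, m):
    """One Louvain sweep: move each vertex to the neighboring community with
    the largest modularity gain. m is the total edge weight."""
    moved = False
    degree = dict(G.degree(weight="weight"))
    comm_degree = defaultdict(float)            # sum of degrees per community
    for v, c in community.items():
        comm_degree[c] += degree[v]
    for v in G.nodes():
        links = defaultdict(float)              # weight from v into each community
        for u in G.neighbors(v):
            links[community[u]] += G[v][u].get("weight", 1.0)
        comm_degree[community[v]] -= degree[v]  # take v out of its community
        best_c, best_gain = community[v], 0.0
        for c, k_in in links.items():
            # Standard Louvain gain comparison (up to a constant factor 1/m).
            gain = k_in - comm_degree[c] * degree[v] / (2.0 * m)
            if gain > best_gain:
                best_c, best_gain = c, gain
        comm_degree[best_c] += degree[v]
        if best_c != community[v]:
            community[v], moved = best_c, True
    return moved

G = nx.karate_club_graph()
community = {v: v for v in G}                   # singleton initialization
while local_move_pass(G, community, m=G.size(weight="weight")):
    pass
```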
Gönner, Lorenz; Vitay, Julien; Hamker, Fred H.
2017-01-01
Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions. PMID:29075187
Seghouane, Abd-Krim; Iqbal, Asif
2017-09-01
Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a constraint of smoothness on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms when applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis that account for this prior information. These algorithms differ from existing ones in their dictionary update stage, whose steps are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left-regularized rank-one matrix approximation problem in which temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications to synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.
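A regularized rank-one update in this spirit can be sketched as a power-method-style alternation in which smoothness is imposed on the atom at each step. The quadratic second-difference penalty below is a stand-in for the paper's basis-expansion regularization, and all parameters are illustrative.

```python
import numpy as np

def smooth_rank_one(E, lam=1.0, n_iter=20):
    """Approximate the residual E ~ d s^T with a temporally smooth atom d
    (rows of E index time points, columns index voxels)."""
    T, N = E.shape
    # Second-difference operator; (I + lam * D^T D) penalizes rough atoms.
    D = np.diff(np.eye(T), n=2, axis=0)
    A = np.eye(T) + lam * (D.T @ D)
    s = np.random.randn(N)
    for _ in range(n_iter):
        d = np.linalg.solve(A, E @ s)   # smoothed left direction (atom update)
        d /= np.linalg.norm(d) + 1e-12
        s = E.T @ d                     # codes; sparsity could be added by thresholding
    return d, s

# Example on a synthetic rank-one-plus-noise residual.
E = np.outer(np.sin(np.linspace(0, 3, 100)), np.random.randn(40))
d, s = smooth_rank_one(E)
```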
Mauz, Elvira; von der Lippe, Elena; Allen, Jennifer; Schilling, Ralph; Müters, Stephan; Hoebel, Jens; Schmich, Patrick; Wetzstein, Matthias; Kamtsiuris, Panagiotis; Lange, Cornelia
2018-01-01
Population-based surveys currently face the problem of decreasing response rates. Mixed-mode designs are now being implemented more often to account for this, to improve sample composition and to reduce overall costs. This study examines whether a concurrent or sequential mixed-mode design achieves better results on a number of indicators of survey quality. Data were obtained from a population-based health interview survey of adults in Germany that was conducted as a methodological pilot study as part of the German Health Update (GEDA). Participants were randomly allocated to one of two surveys; each of the surveys had a different design. In the concurrent mixed-mode design (n = 617) two types of self-administered questionnaires (SAQ-Web and SAQ-Paper) and computer-assisted telephone interviewing were offered simultaneously to the respondents along with the invitation to participate. In the sequential mixed-mode design (n = 561), SAQ-Web was initially provided, followed by SAQ-Paper, with an option for a telephone interview being sent out together with the reminders at a later date. Finally, this study compared the response rates, sample composition, health indicators, item non-response, the scope of fieldwork and the costs of both designs. No systematic differences were identified between the two mixed-mode designs in terms of response rates, the socio-demographic characteristics of the achieved samples, or the prevalence rates of the health indicators under study. The sequential design gained a higher rate of online respondents. Very few telephone interviews were conducted for either design. With regard to data quality, the sequential design (which had more online respondents) showed less item non-response. There were minor differences between the designs in terms of their costs. Postage and printing costs were lower in the concurrent design, but labour costs were lower in the sequential design. No differences in health indicators were found between the two designs. Modelling these results for higher response rates and larger net sample sizes indicated that the sequential design was more cost and time-effective. This study contributes to the research available on implementing mixed-mode designs as part of public health surveys. Our findings show that SAQ-Paper and SAQ-Web questionnaires can be combined effectively. Sequential mixed-mode designs with higher rates of online respondents may be of greater benefit to studies with larger net sample sizes than concurrent mixed-mode designs.
Optimal decision making on the basis of evidence represented in spike trains.
Zhang, Jiaxiang; Bogacz, Rafal
2010-05-01
Experimental data indicate that perceptual decision making involves the integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values from Gaussian distributions with the same variance across alternatives. In this article, we make the more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, neural circuits involving cortical integrators and basal ganglia can approximate the optimal decision procedures for two- and multiple-alternative choice tasks.
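A minimal sequential probability ratio test with Poisson-distributed evidence, mirroring the setup above, is easy to state; the two rates, thresholds, and bin size below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def sprt_poisson(counts, r0=5.0, r1=10.0, dt=0.01, a=-3.0, b=3.0):
    """Accumulate log-likelihood ratios over spike-count bins until one of
    the decision thresholds (a, b) is crossed."""
    llr = 0.0
    for t, k in enumerate(counts):
        # Poisson LLR increment: k*log(r1/r0) - (r1 - r0)*dt
        llr += k * np.log(r1 / r0) - (r1 - r0) * dt
        if llr >= b:
            return 1, t + 1      # choose alternative 1
        if llr <= a:
            return 0, t + 1      # choose alternative 0
    return None, len(counts)     # undecided within the observation window

# Example: spike counts generated under the r1 hypothesis.
rng = np.random.default_rng(0)
counts = rng.poisson(10.0 * 0.01, size=1000)
print(sprt_poisson(counts))
```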
The impact of uncertainty on optimal emission policies
NASA Astrophysics Data System (ADS)
Botta, Nicola; Jansson, Patrik; Ionescu, Cezar
2018-05-01
We apply a computational framework for specifying and solving sequential decision problems to study the impact of three kinds of uncertainty on optimal emission policies in a stylized sequential emission problem. We find that uncertainties about the implementability of decisions on emission reductions (or increases) have a greater impact on optimal policies than uncertainties about the availability of effective emission reduction technologies and uncertainties about the implications of trespassing critical cumulated emission thresholds. The results show that uncertainties about the implementability of decisions on emission reductions (or increases) call for more precautionary policies. In other words, delaying emission reductions to the point in time when effective technologies become available is suboptimal when these uncertainties are accounted for rigorously. By contrast, uncertainties about the implications of exceeding critical cumulated emission thresholds tend to make early emission reductions less rewarding.
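Sequential decision problems of this kind are typically solved by backward induction. The toy sketch below is illustrative only: the states, rewards, and the probability that a decided reduction is actually implemented are made-up stand-ins, not the paper's model.

```python
def backward_induction(horizon, states, actions, transition, reward):
    """Optimal policy for a finite-horizon problem by dynamic programming.
    transition(s, a) yields (next_state, probability) pairs, which is where
    implementability uncertainty enters."""
    value = {s: 0.0 for s in states}
    policy = []
    for t in reversed(range(horizon)):
        q = {s: {a: reward(t, s, a) + sum(p * value[s2]
                                          for s2, p in transition(s, a))
                 for a in actions} for s in states}
        policy.append({s: max(q[s], key=q[s].get) for s in states})
        value = {s: q[s][policy[-1][s]] for s in states}
    policy.reverse()
    return policy, value

# Tiny illustration: cumulative-emissions state; "reduce" only takes effect
# with probability 0.8 (implementability uncertainty).
states = range(6)
actions = ["reduce", "emit"]

def transition(s, a):
    if a == "emit":
        return [(min(s + 1, 5), 1.0)]
    return [(s, 0.8), (min(s + 1, 5), 0.2)]   # reduction may fail to happen

def reward(t, s, a):
    return (1.0 if a == "emit" else 0.6) - (5.0 if s >= 4 else 0.0)

policy, value = backward_induction(4, states, actions, transition, reward)
```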
Parallel solution of closely coupled systems
NASA Technical Reports Server (NTRS)
Utku, S.; Salama, M.
1986-01-01
The odd-even permutation and associated unitary transformations for reordering the matrix coefficient A are employed as a means of breaking the strong seriality characteristic of closely coupled systems. The nested dissection technique is also reviewed, and the equivalence between reordering A and dissecting its network is established. The effect of transforming A with the odd-even permutation on its topology and on the topology of its Cholesky factors is discussed. This leads to the construction of directed graphs showing the computational steps required for factoring A, their precedence relationships, and their sequential and concurrent assignment to the available processors. Expressions for the speed-up and efficiency of using N processors in parallel, relative to the sequential use of a single processor, are derived from the directed graph. Similar expressions are also derived for the case where the number of available processors is fewer than required.
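For reference, the speed-up and efficiency quantities mentioned here are conventionally defined as follows (standard definitions in our notation, not the paper's derived expressions):

```latex
S_N = \frac{T_1}{T_N}, \qquad E_N = \frac{S_N}{N}
```

where $T_1$ is the runtime of the sequential algorithm on a single processor and $T_N$ the runtime of the parallel schedule on $N$ processors; in a precedence-graph model of the factorization, $T_N$ is bounded below by the length of the graph's critical path.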
Sequential monitoring of beach litter using webcams.
Kako, Shin'ichiro; Isobe, Atsuhiko; Magome, Shinya
2010-05-01
This study attempts to establish a system for the sequential monitoring of beach litter using webcams placed at Ookushi beach, Goto Islands, Japan, recording the temporal variability in the quantities of beach litter every 90 min over a one-and-a-half-year period. The time series of the quantities of beach litter, computed by counting pixels with a lightness greater than a threshold value in the photographs, shows that litter does not increase monotonically on the beach, but fluctuates mainly on a monthly time scale or less. To investigate what factors influence this variability, the time derivative of the quantity of beach litter is compared with satellite-derived wind speeds. It is found that beach litter quantities vary largely with winds, but there may be other influencing factors. (c) 2010 Elsevier Ltd. All rights reserved.
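The pixel-counting quantification described above amounts to thresholding the lightness channel inside a beach-region mask; the threshold value and synthetic frame below are illustrative, not the study's calibration.

```python
import numpy as np

def litter_pixels(frame, mask, threshold=0.75):
    """Count bright pixels inside the beach mask of one webcam frame.
    frame: 2-D array of lightness values in [0, 1]; mask: boolean array
    selecting the beach region."""
    return int(np.count_nonzero((frame > threshold) & mask))

# Synthetic stand-in for one frame: mostly dark sand, one bright litter item.
rng = np.random.default_rng(1)
frame = rng.uniform(0.2, 0.5, (480, 640))
frame[100:104, 200:204] = 0.9
mask = np.ones_like(frame, dtype=bool)
print(litter_pixels(frame, mask))            # -> 16

# Applied to frames taken every 90 min, the resulting series can be
# differenced and compared with wind records, as in the study.
```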
Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions
Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.
2017-01-09
We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations w.r.t. probability laws associated with a discretization; for instance, in the context of inverse problems, one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than for independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61–123). The assumptions are verified for an example.
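The multilevel mechanism at work here can be stated compactly. Writing $\mathbb{E}_l$ for expectation under the level-$l$ discretization, the standard multilevel decomposition (our notation) is

```latex
\mathbb{E}_L[f] \;=\; \mathbb{E}_0[f] \;+\; \sum_{l=1}^{L} \bigl(\mathbb{E}_l[f] - \mathbb{E}_{l-1}[f]\bigr)
```

Each increment is estimated by a sequential Monte Carlo approximation; because the increments shrink as the discretization is refined, most samples can be placed on the cheap coarse levels, which is why the effort to reach a given error can beat independent sampling.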
A Brief Analysis of Development Situations and Trend of Cloud Computing
NASA Astrophysics Data System (ADS)
Yang, Wenyan
2017-12-01
In recent years, the rapid development of Internet technology has radically changed people's work, learning, and lifestyles. More and more activities are completed by means of computers and networks. The amount of information and data generated grows day by day, and people rely increasingly on computers, whose computing power often fails to meet users' demands for accuracy and speed. Cloud computing has developed rapidly and is widely applied in the computer industry owing to its advantages of high precision, fast computation, and ease of use; it has become a focus of current information research. In this paper, the development situation and trends of cloud computing are analyzed and discussed.
Aguirre-Valencia, David; Posso-Osorio, Iván; Bravo, Juan-Carlos; Bonilla-Abadía, Fabio; Tobón, Gabriel J; Cañas, Carlos A
2017-09-01
Eosinophilic granulomatosis with polyangiitis (EGPA), formerly known as Churg-Strauss syndrome (CSS), is a small vessel vasculitis associated with eosinophilia and asthma. Clinical manifestations commonly seen in patients presenting with EGPA range from upper airway and lung involvement to neurological, cardiac, cutaneous, and renal manifestations. Treatment for severe presentations includes steroids, cyclophosphamide, plasmapheresis, and recently, rituximab. Rituximab is associated with a good response in the treatment of vasculitis, but a variable response for the control of allergic symptoms. Here, we report a 16-year-old female patient with severe EGPA (gastrointestinal and cutaneous vasculitis, rhinitis and asthma) refractory to conventional treatment. She was treated with rituximab, which enabled rapid control of the vasculitis component of the disease, but there was no response to rhinitis and asthma. Additionally, she developed severe bronchospasm during rituximab infusion. Sequential rituximab and omalizumab were initiated, leading to remission of all manifestations of vasculitis, rhinitis, and asthma, in addition to bronchospasm related to rituximab infusion.
Sputter deposition for multi-component thin films
Krauss, A.R.; Auciello, O.
1990-05-08
Ion beam sputter-induced deposition using a single ion beam and a multicomponent target is capable of reproducibly producing thin films of arbitrary composition, including those which are close to stoichiometry. Using a quartz crystal deposition monitor and a computer controlled, well-focused ion beam, this sputter-deposition approach is capable of producing metal oxide superconductors and semiconductors of the superlattice type such as GaAs-AlGaAs as well as layered metal/oxide/semiconductor/superconductor structures. By programming the dwell time for each target according to the known sputtering yield and desired layer thickness for each material, it is possible to deposit composite films from a well-controlled sub-monolayer up to thicknesses determined only by the available deposition time. In one embodiment, an ion beam is sequentially directed via a set of X-Y electrostatic deflection plates onto three or more different element or compound targets which are constituents of the desired film. In another embodiment, the ion beam is directed through an aperture in the deposition plate and is displaced under computer control to provide a high degree of control over the deposited layer. In yet another embodiment, a single fixed ion beam is directed onto a plurality of sputter targets in a sequential manner where the targets are each moved in alignment with the beam under computer control in forming a multilayer thin film. This controlled sputter-deposition approach may also be used with laser and electron beams. 10 figs.
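The dwell-time programming described above reduces to simple per-target bookkeeping. The sketch below uses made-up numbers; in practice the per-target deposition rate would be derived from the known sputtering yield and beam current, as the abstract notes.

```python
# material: (deposition rate in nm/s at the chosen beam current, desired nm)
targets = {
    "Ga": (0.20, 0.6),
    "As": (0.15, 0.6),
    "Al": (0.25, 0.3),
}

# Dwell time per target = desired layer thickness / deposition rate.
schedule = [(m, thickness / rate) for m, (rate, thickness) in targets.items()]
for material, dwell in schedule:
    print(f"deflect beam to {material}: dwell {dwell:.1f} s")
```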
A Feature Selection Algorithm to Compute Gene Centric Methylation from Probe Level Methylation Data.
Baur, Brittany; Bozdag, Serdar
2016-01-01
DNA methylation is an important epigenetic event that affects gene expression during development and in various diseases such as cancer. Understanding its mechanism of action is important for downstream analysis. In the Illumina Infinium HumanMethylation450K array, there are tens of probes associated with each gene. Given the methylation intensities of all these probes, it is necessary to compute which of them are most representative of the gene-centric methylation level. In this study, we developed a feature selection algorithm based on sequential forward selection that utilized different classification methods to compute gene-centric DNA methylation from probe-level DNA methylation data. We compared our algorithm to other feature selection algorithms such as support vector machines with recursive feature elimination, genetic algorithms, and ReliefF. We evaluated all methods based on the predictive power of the selected probes for the corresponding mRNA expression levels and found that K-Nearest Neighbors classification using the sequential forward selection algorithm performed better than the other algorithms on all metrics. We also observed that the transcriptional activities of certain genes were more sensitive to DNA methylation changes than those of others. Our algorithm was able to predict the expression of those genes with high accuracy using only DNA methylation data. Our results also showed that those DNA methylation-sensitive genes were enriched in Gene Ontology terms related to the regulation of various biological processes.
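The probe-selection idea can be illustrated with scikit-learn's generic sequential forward selection wrapped around a KNN classifier; the synthetic data and parameter choices below are illustrative, not the authors' pipeline.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 30))                    # 200 samples x 30 probes for one gene
y = (X[:, 3] + X[:, 17] > 1.0).astype(int)   # expression status driven by 2 probes

# Forward selection: greedily add the probe that most improves KNN accuracy.
sfs = SequentialFeatureSelector(KNeighborsClassifier(n_neighbors=5),
                                n_features_to_select=2, direction="forward")
sfs.fit(X, y)
print("selected probes:", np.flatnonzero(sfs.get_support()))
```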
CT analysis of the effect of pirfenidone in patients with idiopathic pulmonary fibrosis.
Iwasawa, Tae; Ogura, Takashi; Sakai, Fumikazu; Kanauchi, Tetsu; Komagata, Takanobu; Baba, Tomohisa; Gotoh, Toshiyuki; Morita, Satoshi; Yazawa, Takuya; Inoue, Tomio
2014-01-01
Pirfenidone is a new anti-fibrotic drug used for the treatment of idiopathic pulmonary fibrosis (IPF). The aim of this study was to evaluate the utility of computed tomography (CT) in the imaging assessment of the response to pirfenidone therapy. Subjects were 78 patients with IPF who underwent CT on two occasions at a one-year interval (38 consecutive patients treated with pirfenidone and 40 age-matched controls). Changes in the fibrous lesions on sequential CTs were assessed by visual score by two radiologists. We measured the volume and change per year of the fibrous pattern (F-pattern) quantitatively using a computer-aided system on sequential CTs. The baseline vital capacity (%pred VC) was 74.0 ± 14.0% in the pirfenidone group and 74.6 ± 16.6% in controls (p=NS). Deterioration of respiratory status was defined as a 10% or greater decline in %pred VC value after 12 months of treatment. A significantly larger proportion of pirfenidone-treated patients showed stable respiratory status (21 of 38, 65.6%) than controls (15 of 40, 37.5%). The change in fibrous lesions was significantly smaller in the pirfenidone group than in controls by both visual score (p=0.006) and computer analysis (p<0.001). The decline in VC correlated significantly with the increase in fibrotic lesions (p<0.001). CT can be used to assess pirfenidone-induced slowing of the progression of pulmonary fibrosis. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
OpenCL-based vicinity computation for 3D multiresolution mesh compression
NASA Astrophysics Data System (ADS)
Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri
2017-03-01
3D multiresolution mesh compression systems are still widely addressed in many domains, and increasingly require volumetric data to be processed in real time. Performance is therefore becoming constrained by material resource usage and the need for an overall reduction in computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this latter algorithm is that it computes the WT with minimum memory usage by processing data as they are acquired. With large data, however, this technique is considered poor in terms of computational complexity. For that reason, this work exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method achieves a speedup factor of 5 compared to the sequential CPU implementation.
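A CPU reference for the triangle-vicinity computation that the paper offloads to the GPU: two triangles are neighbors when they share an edge. The mesh layout (vertex-index triples) is illustrative, not the paper's data structure.

```python
from collections import defaultdict

def triangle_neighbors(triangles):
    """Map each triangle index to the set of triangles sharing an edge with it."""
    edge_to_tris = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_to_tris[tuple(sorted(e))].append(t)   # canonical edge key
    neighbors = defaultdict(set)
    for tris in edge_to_tris.values():
        for t in tris:
            neighbors[t].update(u for u in tris if u != t)
    return neighbors

print(triangle_neighbors([(0, 1, 2), (1, 2, 3), (2, 3, 4)]))
```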
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Mehee; Thoma, Miranda; Tolekidis, George
Ondine's curse is a rare, potentially life-threatening disorder characterized by loss of automatic breathing during sleep and preserved voluntary breathing. It is seldom encountered in the radiotherapy clinic but can pose significant technical challenges and safety concerns in the delivery of a prescribed radiation course. We report a unique case of successful delivery of radiotherapy for ependymoma in a patient with Ondine's curse. A 53-year-old gentleman presented with vertigo when lying down. Brain magnetic resonance imaging revealed an enhancing mass in the floor of the fourth ventricle. He underwent maximal safe resection. Pathology revealed ependymoma. The patient was referred for radiotherapy. Computed tomography simulation was performed in the supine position with 3-point thermoplastic mask immobilization. Sequential TomoTherapy plans were developed. At the first scheduled treatment, shortly after mask placement, his arms went limp and he was unresponsive. Vitals showed oxygen saturation 83%, pulse 127, and blood pressure 172/97 mm Hg. He was diagnosed with Ondine's curse, thought secondary to previous brainstem damage; the combination of lying flat and pressure from the mask was causing him to go into respiratory arrest. As supine positioning did not seem clinically advisable, he was simulated in the prone position. A RapidArc plan and a back-up conformal plan were developed. Prescriptions were modified to meet conservative organs-at-risk constraints. Several strategies were used to minimize uncertainties in set-up reproducibility associated with prone positioning. He tolerated the prone RapidArc treatments well. This report highlights the importance of applying practical patient-safety and treatment planning/delivery strategies in the management of this challenging case.
Development of an Ultra-Violet Digital Camera for Volcanic Sulfur Dioxide Imaging
NASA Astrophysics Data System (ADS)
Bluth, G. J.; Shannon, J. M.; Watson, I. M.; Prata, F. J.; Realmuto, V. J.
2006-12-01
In an effort to improve the monitoring of passive volcano degassing, we have constructed and tested a digital camera for quantifying the sulfur dioxide (SO2) content of volcanic plumes. The camera utilizes a bandpass filter to collect photons in the ultraviolet (UV) region, where SO2 selectively absorbs UV light. SO2 is quantified by imaging calibration cells of known SO2 concentrations. Images of volcanic SO2 plumes were collected at four active volcanoes with persistent passive degassing: Villarrica, located in Chile, and Santiaguito, Fuego, and Pacaya, located in Guatemala. Images were collected from distances between 4 and 28 km, with crisp detection up to approximately 16 km. Camera set-up time in the field ranges from 5 to 10 minutes, and images can be recorded at intervals as short as 10 seconds. Variable in-plume concentrations can be observed, and accurate plume speeds (or rise rates) can readily be determined by tracing individual portions of the plume within sequential images. Initial fluxes computed from camera images require a correction for the effects of environmental light scattered into the field of view. At Fuego volcano, simultaneous measurements of corrected SO2 fluxes with the camera and a Correlation Spectrometer (COSPEC) agreed within 25 percent. Experiments at the other sites were equally encouraging and demonstrated the camera's ability to detect SO2 under demanding meteorological conditions. This early work has shown great success in imaging SO2 plumes and offers promise for volcano monitoring owing to its rapid deployment and data-processing capabilities, relatively low cost, and improved interpretation afforded by synoptic plume coverage from a range of distances.
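Tracing plume features between sequential frames to obtain a plume speed can be illustrated with a one-dimensional cross-correlation along the plume axis; the profiles, pixel scale, and frame interval below are synthetic stand-ins, not the instrument's processing chain.

```python
import numpy as np

def plume_speed(profile1, profile2, dt, m_per_pixel):
    """Speed from the pixel shift that maximizes the cross-correlation of
    along-plume brightness profiles taken dt seconds apart."""
    n = len(profile1)
    shifts = np.arange(-n + 1, n)
    corr = np.correlate(profile2 - profile2.mean(),
                        profile1 - profile1.mean(), mode="full")
    return shifts[np.argmax(corr)] * m_per_pixel / dt

x = np.linspace(0, 50, 500)
p1 = np.exp(-(x - 20.0) ** 2)     # plume feature in the first frame
p2 = np.exp(-(x - 22.0) ** 2)     # same feature advected 20 pixels later
print(plume_speed(p1, p2, dt=10.0, m_per_pixel=1.0), "m/s")
```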
Observation Uncertainty in Gaussian Sensor Networks
2006-01-23
Ziv, J., and Lempel, A. A universal algorithm for sequential data compression. IEEE Transactions on Information Theory 23, 3 (1977), 337–343. …using the Lempel-Ziv algorithm [42], context-tree weighting [41], or the Burrows-Wheeler Transform [4], [15], for example. These source codes will…and Computation (Monticello, IL, September 2004). [4] Burrows, M., and Wheeler, D. A block sorting lossless data compression algorithm. Tech.