Increasing efficiency of preclinical research by group sequential designs
Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich
2017-01-01
Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulation of data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to savings of resources of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness of this research domain. PMID:28282371
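A minimal sketch of the resource argument above, assuming a two-look design with a Pocock-type critical value; the boundary constant and the single interim look at half the sample are illustrative assumptions, not the authors' exact simulation:

    import numpy as np

    rng = np.random.default_rng(0)
    d, n, c = 1.0, 18, 2.18          # effect size, per-group n, assumed two-look Pocock boundary
    units_used = []
    for _ in range(10_000):
        x = rng.normal(d, 1, n)      # treated group
        y = rng.normal(0, 1, n)      # control group
        h = n // 2                   # interim analysis after half the animals per group
        se = np.sqrt(x[:h].var(ddof=1) / h + y[:h].var(ddof=1) / h)
        if abs(x[:h].mean() - y[:h].mean()) / se > c:
            units_used.append(2 * h)     # stop early for efficacy
        else:
            units_used.append(2 * n)     # continue to the planned full sample
    print("long-run fraction of planned units consumed:",
          round(np.mean(units_used) / (2 * n), 2))

With d = 1 the interim test frequently crosses the boundary, so the average consumption falls well below the planned number of units, in the spirit of the ~80% figure reported here.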
NASA Astrophysics Data System (ADS)
Thamvichai, Ratchaneekorn; Huang, Liang-Chih; Ashok, Amit; Gong, Qian; Coccarelli, David; Greenberg, Joel A.; Gehm, Michael E.; Neifeld, Mark A.
2017-05-01
We employ an adaptive measurement system, based on a sequential hypothesis testing (SHT) framework, for detecting material-based threats using experimental data acquired on an X-ray experimental testbed system. This testbed employs 45-degree fan-beam geometry and 15 views over a 180-degree span to generate energy-sensitive X-ray projection data. Using this testbed system, we acquire multiple-view projection data for 200 bags. We consider an adaptive measurement design where the X-ray projection measurements are acquired in a sequential manner and the adaptation occurs through the choice of the optimal "next" source/view system parameter. Our analysis of such an adaptive measurement design using the experimental data demonstrates a 3x-7x reduction in the probability of error relative to a static measurement design. Here the static measurement design refers to the operational system baseline that corresponds to a sequential measurement using all the available sources/views. We also show that by using adaptive measurements it is possible to reduce the number of sources/views by nearly 50% compared to a system that relies on static measurements.
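The record's adaptive rule is specific to the testbed, but the SHT core can be illustrated with Wald's sequential probability ratio test; the Gaussian measurement model, signal means, and error targets below are assumptions for illustration only:

    import numpy as np

    rng = np.random.default_rng(1)
    mu0, mu1, sigma = 0.0, 1.0, 1.0      # benign vs. threat mean signal (assumed)
    A, B = np.log(99), np.log(1 / 99)    # stopping thresholds for ~1% error rates

    def sprt(true_mu, max_views=15):
        # Acquire one view at a time; stop as soon as the evidence is decisive.
        llr = 0.0
        for k in range(1, max_views + 1):
            x = rng.normal(true_mu, sigma)
            llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
            if llr >= A:
                return "threat", k       # k views sufficed; the rest are skipped
            if llr <= B:
                return "benign", k
        return "undecided", max_views

    print(sprt(1.0), sprt(0.0))

On average the test terminates after far fewer than the 15 available views, which is the mechanism behind the source/view savings reported above.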
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
A sequential adaptive experimental design procedure for a related problem is studied. It is assumed that a finite set of potential linear models relating certain controlled variables to an observed variable is postulated, and that exactly one of these models is correct. The problem is to sequentially design most informative experiments so that the correct model equation can be determined with as little experimentation as possible. Discussion includes: structure of the linear models; prerequisite distribution theory; entropy functions and the Kullback-Leibler information function; the sequential decision procedure; and computer simulation results. An example of application is given.
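A toy version of the model-discrimination loop, assuming two candidate models and using the point of maximal predicted disagreement as a crude stand-in for the Kullback-Leibler information criterion developed in the report:

    import numpy as np

    rng = np.random.default_rng(2)
    models = [lambda x: 1 + 2 * x,                 # candidate model 1 (assumed)
              lambda x: 1 + 2 * x + 0.8 * x**2]    # candidate model 2 (assumed true here)
    prob = np.array([0.5, 0.5])                    # prior model probabilities
    sigma = 0.5
    grid = np.linspace(-2, 2, 41)
    for _ in range(6):
        # design the next experiment where the models disagree most
        x = grid[np.argmax(np.abs(models[0](grid) - models[1](grid)))]
        y = models[1](x) + rng.normal(0, sigma)    # run the experiment
        lik = np.array([np.exp(-0.5 * ((y - m(x)) / sigma) ** 2) for m in models])
        prob = prob * lik / (prob * lik).sum()     # sequential Bayesian update
    print("posterior model probabilities:", prob.round(3))

After a handful of sequentially designed runs the posterior concentrates on the correct model, mirroring the goal of identifying the correct equation with as little experimentation as possible.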
Sequential experimental design based generalised ANOVA
NASA Astrophysics Data System (ADS)
Chakraborty, Souvik; Chowdhury, Rajib
2016-07-01
Over the last decade, surrogate modelling techniques have gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimates of the failure probability.
A multi-stage drop-the-losers design for multi-arm clinical trials.
Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher
2017-02-01
Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
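A stripped-down illustration of the interim selection step in a drop-the-losers design; the arm effects, stage sizes, and normal outcome model are all assumed for the example:

    import numpy as np

    rng = np.random.default_rng(3)
    K, n1, n2 = 4, 20, 40                      # experimental arms, stage-1 and stage-2 per-arm sizes (assumed)
    effects = np.array([0.0, 0.1, 0.2, 0.5])   # true arm effects; control mean is 0
    correct = 0
    for _ in range(5_000):
        interim = rng.normal(effects, 1 / np.sqrt(n1))   # stage-1 sample means per arm
        correct += int(np.argmax(interim) == K - 1)      # drop the losers, keep the best
    print("P(best arm survives the interim):", correct / 5_000)
    # Unlike a group-sequential MAMS trial, the total sample size is fixed in advance:
    print("fixed total n:", (K + 1) * n1 + 2 * n2)       # all arms + control at stage 1, winner + control at stage 2

The key property motivating the design is visible in the last line: whatever the interim data look like, the budget is known when the trial is funded.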
Van Derlinden, E; Bernaerts, K; Van Impe, J F
2010-05-21
Optimal experiment design for parameter estimation (OED/PE) has become a popular tool for efficient and accurate estimation of kinetic model parameters. When the kinetic model under study includes multiple parameters, different optimization strategies can be constructed. The most straightforward approach is to estimate all parameters simultaneously from one optimal experiment (single OED/PE strategy). However, due to the complexity of the optimization problem or the stringent limitations on the system's dynamics, the experimental information can be limited and parameter estimation convergence problems can arise. As an alternative, we propose to reduce the optimization problem to a series of two-parameter estimation problems, i.e., an optimal experiment is designed for a combination of two parameters while presuming the other parameters known. Two different approaches can be followed: (i) all two-parameter optimal experiments are designed based on identical initial parameter estimates and parameters are estimated simultaneously from all resulting experimental data (global OED/PE strategy), and (ii) optimal experiments are calculated and implemented sequentially whereby the parameter values are updated intermediately (sequential OED/PE strategy). This work exploits OED/PE for the identification of the Cardinal Temperature Model with Inflection (CTMI) (Rosso et al., 1993). This kinetic model describes the effect of temperature on the microbial growth rate and includes four parameters. The three OED/PE strategies are considered and the impact of the OED/PE design strategy on the accuracy of the CTMI parameter estimation is evaluated. Based on a simulation study, it is observed that the parameter values derived from the sequential approach deviate more from the true parameters than the single and global strategy estimates. The single and global OED/PE strategies are further compared based on experimental data obtained from design implementation in a bioreactor. Comparable estimates are obtained, but global OED/PE estimates are, in general, more accurate and reliable. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2012 CFR
2012-01-01
... sequential performances of a material control test which is designed to detect anomalies potentially... capability required by § 74.53. Material control test means a comparison of a pre-established alarm threshold... into practical application for experimental and demonstration purposes, including the experimental...
Robust parameter design for automatically controlled systems and nanostructure synthesis
NASA Astrophysics Data System (ADS)
Dasgupta, Tirthankar
2007-12-01
This research focuses on developing comprehensive frameworks for robust parameter design methodology, both for dynamic systems with automatic control and for the synthesis of nanostructures. In many automatically controlled dynamic processes, the optimal feedback control law depends on the parameter design solution and vice versa and therefore an integrated approach is necessary. A parameter design methodology in the presence of feedback control is developed for processes of long duration under the assumption that experimental noise factors are uncorrelated over time. Systems that follow a pure-gain dynamic model are considered and the best proportional-integral and minimum mean squared error control strategies are developed by using robust parameter design. The proposed method is illustrated using a simulated example and a case study in a urea packing plant. This idea is also extended to cases with on-line noise factors. The possibility of integrating feedforward control with a minimum mean squared error feedback control scheme is explored. To meet the needs of large scale synthesis of nanostructures, it is critical to systematically find experimental conditions under which the desired nanostructures are synthesized reproducibly, at large quantity and with controlled morphology. The first part of the research in this area focuses on modeling and optimization of existing experimental data. Through a rigorous statistical analysis of experimental data, models linking the probabilities of obtaining specific morphologies to the process variables are developed. A new iterative algorithm for fitting a Multinomial GLM is proposed and used. The optimum process conditions, which maximize the above probabilities and make the synthesis process less sensitive to variations of process variables around set values, are derived from the fitted models using Monte-Carlo simulations. The second part of the research deals with development of an experimental design methodology, tailor-made to address the unique phenomena associated with nanostructure synthesis. A sequential space-filling design, called Sequential Minimum Energy Design (SMED), is proposed for exploring the best process conditions for the synthesis of nanowires. The SMED is a novel approach to generate sequential designs that are model independent, can quickly "carve out" regions with no observable nanostructure morphology, and allow for the exploration of complex response surfaces.
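A minimal sketch of the space-filling idea behind SMED, with "charges" standing in for observed responses; the charge values, the random candidate search, and the inverse-distance potential are simplifying assumptions rather than the dissertation's exact criterion:

    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 1, (5, 2))        # process conditions explored so far (2 scaled variables)
    q = rng.uniform(0.5, 1.5, 5)         # "charges": high where no desired morphology appeared (assumed)

    def next_run(X, q, n_cand=2000):
        # Pick the candidate with minimum total potential energy w.r.t. existing points.
        cand = rng.uniform(0, 1, (n_cand, 2))
        d = np.linalg.norm(cand[:, None, :] - X[None, :, :], axis=2)
        energy = (q[None, :] / np.maximum(d, 1e-9)).sum(axis=1)
        return cand[np.argmin(energy)]

    print("next process condition to try:", next_run(X, q).round(3))

Because high-charge (unpromising) regions repel new candidates, the sequence quickly carves out areas with no observable nanostructure morphology, as described above.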
Sequential Design of Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson-Cook, Christine Michaela
2017-06-30
A sequential design of experiments strategy is being developed and implemented that allows for adaptive learning based on incoming results as the experiment is being run. The plan is to incorporate these strategies for the NCCC and TCM experimental campaigns to be run in the coming months. This strategy for experimentation has the advantages of allowing new data collected during the experiment to inform future experimental runs based on their projected utility for a particular goal. For example, the current effort for the MEA capture system at NCCC plans to focus on maximally improving the quality of prediction of CO2 capture efficiency as measured by the width of the confidence interval for the underlying response surface that is modeled as a function of 1) Flue Gas Flowrate [1000-3000] kg/hr; 2) CO2 weight fraction [0.125-0.175]; 3) Lean solvent loading [0.1-0.3]; and 4) Lean solvent flowrate [3000-12000] kg/hr.
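One way to implement the "projected utility" step for a confidence-interval goal, assuming a quadratic response-surface model in the four scaled factors; the basis and candidate search are illustrative, not the project's actual code:

    import numpy as np

    rng = np.random.default_rng(5)

    def basis(x):                         # assumed quadratic response-surface terms
        x1, x2, x3, x4 = x
        return np.array([1, x1, x2, x3, x4, x1*x2, x1**2, x2**2, x3**2, x4**2])

    X = np.array([basis(rng.uniform(0, 1, 4)) for _ in range(15)])   # runs completed so far
    XtXi = np.linalg.inv(X.T @ X + 1e-6 * np.eye(X.shape[1]))
    cands = rng.uniform(0, 1, (500, 4))   # feasible factor settings, scaled to [0, 1]
    # prediction variance ~ b(x)' (X'X)^-1 b(x); the next run goes where it is largest
    scores = [basis(c) @ XtXi @ basis(c) for c in cands]
    print("next run (scaled factors):", cands[int(np.argmax(scores))].round(3))

Choosing the run where the current confidence band is widest is the classic variance-reduction heuristic for this kind of sequential campaign.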
Genetic Parallel Programming: design and implementation.
Cheang, Sin Man; Leung, Kwong Sak; Lee, Kin Hong
2006-01-01
This paper presents a novel Genetic Parallel Programming (GPP) paradigm for evolving parallel programs running on a Multi-Arithmetic-Logic-Unit (Multi-ALU) Processor (MAP). The MAP is a Multiple Instruction-streams, Multiple Data-streams (MIMD), general-purpose register machine that can be implemented on modern Very Large-Scale Integrated Circuits (VLSIs) in order to evaluate genetic programs at high speed. For human programmers, writing parallel programs is more difficult than writing sequential programs. However, experimental results show that GPP evolves parallel programs with less computational effort than that of their sequential counterparts. It creates a new approach to evolving a feasible problem solution in parallel program form and then serializes it into a sequential program if required. The effectiveness and efficiency of GPP are investigated using a suite of 14 well-studied benchmark problems. Experimental results show that GPP speeds up evolution substantially.
Observation of non-classical correlations in sequential measurements of photon polarization
NASA Astrophysics Data System (ADS)
Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.
2016-10-01
A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.
Sequential causal inference: Application to randomized trials of adaptive treatment strategies
Dawson, Ree; Lavori, Philip W.
2009-01-01
Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the ‘standard’ approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal ‘one-step’ estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies. PMID:17914714
ERIC Educational Resources Information Center
Phan, Huy P.; Ngu, Bing H.
2017-01-01
In social sciences, the use of stringent methodological approaches is gaining increasing emphasis. Researchers have recognized the limitations of cross-sectional, non-manipulative data in the study of causality. True experimental designs, in contrast, are preferred as they represent rigorous standards for achieving causal flows between variables.…
Introducing Science Experiments to Rote-Learning Classes in Pakistani Middle Schools
ERIC Educational Resources Information Center
Pell, Anthony William; Iqbal, Hafiz Muhammad; Sohail, Shahida
2010-01-01
A mixed-methods sequential research design has been used to test the effect of introducing teacher science demonstrations to a traditional book-learning sample of 384 Grade 7 boys and girls from five schools in Lahore, Pakistan. In the quasi-experimental quantitative study, the eight classes of comparable ability were designated either…
Problems of the Randomization Test for AB Designs
ERIC Educational Resources Information Center
Manolov, Rumen; Solanas, Antonio
2009-01-01
N = 1 designs imply repeated registrations of the behaviour of the same experimental unit and the measurements obtained are often few due to time limitations, while they are also likely to be sequentially dependent. The analytical techniques needed to enhance statistical and clinical decision making have to deal with these problems. Different…
NASA Astrophysics Data System (ADS)
Dassekpo, Jean-Baptiste Mawulé; Zha, Xiaoxiong; Zhan, Jiapeng; Ning, Jiaqian
Geopolymer is an energy-efficient and sustainable material that is currently used in the construction industry as an alternative to Portland cement. As a new material, a specific mix design method is essential, and efforts have been made to develop a mix design procedure with the main focus on achieving better compressive strength and economy. In this paper, a sequential addition of synthesis parameters such as fly ash-sand, alkaline liquids, plasticizer and additional water at well-defined time intervals was investigated. A total of 4 mix procedures were used to study the compressive performance of fly ash-based geopolymer mortar and the results of each method were analyzed and discussed. Experimental results show that the sequential addition of sodium hydroxide (NaOH), sodium silicate (Na2SiO3), plasticizer (PL), followed by adding water (WA) increases considerably the compressive strengths of the geopolymer-based mortar. These results clearly demonstrate the highly significant influence of the sequential addition of synthesis parameters on the compressive properties of geopolymer materials, and also provide a new mixing method for the preparation of geopolymer paste, mortar and concrete.
GOST: A generic ordinal sequential trial design for a treatment trial in an emerging pandemic.
Whitehead, John; Horby, Peter
2017-03-01
Conducting clinical trials to assess experimental treatments for potentially pandemic infectious diseases is challenging. Since many outbreaks of infectious diseases last only six to eight weeks, there is a need for trial designs that can be implemented rapidly in the face of uncertainty. Outbreaks are sudden and unpredictable and so it is essential that as much planning as possible takes place in advance. Statistical aspects of such trial designs should be evaluated and discussed in readiness for implementation. This paper proposes a generic ordinal sequential trial design (GOST) for a randomised clinical trial comparing an experimental treatment for an emerging infectious disease with standard care. The design is intended as an off-the-shelf, ready-to-use robust and flexible option. The primary endpoint is a categorisation of patient outcome according to an ordinal scale. A sequential approach is adopted, stopping as soon as it is clear that the experimental treatment has an advantage or that sufficient advantage is unlikely to be detected. The properties of the design are evaluated using large-sample theory and verified for moderate sized samples using simulation. The trial is powered to detect a generic clinically relevant difference: namely an odds ratio of 2 for better rather than worse outcomes. Total sample sizes (across both treatments) of between 150 and 300 patients prove to be adequate in many cases, but the precise value depends on both the magnitude of the treatment advantage and the nature of the ordinal scale. An advantage of the approach is that any erroneous assumptions made at the design stage about the proportion of patients falling into each outcome category have little effect on the error probabilities of the study, although they can lead to inaccurate forecasts of sample size. It is important and feasible to pre-determine many of the statistical aspects of an efficient trial design in advance of a disease outbreak. The design can then be tailored to the specific disease under study once its nature is better understood.
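The triangular-test machinery in GOST is involved, but the flavour of a group-sequential ordinal comparison can be sketched as follows; the control category probabilities, look schedule, and fixed stopping boundary are assumptions for illustration, whereas the real design derives boundaries from large-sample theory:

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(6)
    p_ctrl = np.array([0.2, 0.3, 0.3, 0.2])   # standard-care outcome categories, worst to best (assumed)

    def shift(p, oratio):
        # Apply a proportional odds ratio to the cumulative category probabilities.
        cum = np.cumsum(p)[:-1]
        cum = cum / (cum + (1 - cum) * oratio)
        return np.diff(np.concatenate([[0.0], cum, [1.0]]))

    p_trt = shift(p_ctrl, 2.0)                 # the generic target: OR = 2 for better outcomes
    ctrl = rng.choice(4, 150, p=p_ctrl)        # enrol up to 150 per arm
    trt = rng.choice(4, 150, p=p_trt)
    for n in (50, 100, 150):                   # interim looks on the accumulating data
        p = mannwhitneyu(trt[:n], ctrl[:n], alternative="greater").pvalue
        print(f"look at n={n}/arm: p={p:.4f}", "-> stop" if p < 0.01 else "-> continue")

Stopping as soon as the ordinal comparison is decisive is what keeps the expected sample size within the 150-300 total quoted above.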
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heaney, Mike
Statistically designed experiments can save researchers time and money by reducing the number of necessary experimental trials, while resulting in more conclusive experimental results. Surprisingly, many researchers are still not aware of this efficient and effective experimental methodology. As reported in a 2013 article from Chemical & Engineering News, there has been a resurgence of this methodology in recent years (http://cen.acs.org/articles/91/i13/Design-Experiments-Makes-Comeback.html?h=2027056365). This presentation will provide a brief introduction to statistically designed experiments. The main advantages will be reviewed along with some basic concepts such as factorial and fractional factorial designs. The recommended sequential approach to experiments will be introduced and finally a case study will be presented to demonstrate this methodology.
Optimization and Development of a Human Scent Collection Method
2007-06-04
NASA Astrophysics Data System (ADS)
Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing
2018-05-01
The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Subsequently, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.
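The Sobol' half of the supporting-point scheme is straightforward to reproduce with scipy's quasi-Monte Carlo module; the dimension, bounds, and point count are placeholders, and Bucher's design (a centre point plus axial points around the mean) is omitted here:

    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.Sobol(d=3, scramble=True, seed=7)
    unit_pts = sampler.random_base2(m=5)          # 2^5 = 32 low-discrepancy points in [0,1]^3
    lo = np.array([0.0, 0.0, 0.0])                # assumed design-variable bounds
    hi = np.array([1.0, 2.0, 5.0])
    support = qmc.scale(unit_pts, lo, hi)         # supporting points for the GSS run
    print(support[:4].round(3))

Low-discrepancy supporting points let a single GSS run estimate the failure probability at many candidate designs without wasteful clustering.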
ERIC Educational Resources Information Center
Matlen, Bryan J.; Klahr, David
2013-01-01
We report the effect of different sequences of high vs low levels of instructional guidance on children's immediate learning and long-term transfer of simple experimental design procedures and concepts, often called "CVS" (Control of Variables Strategy). Third-grade children (N = 57) received instruction in CVS via one of four possible orderings…
ERIC Educational Resources Information Center
Cook, Ryan; Hannon, Drew; Southard, Jonathan N.; Majumdar, Sudipta
2018-01-01
A one semester undergraduate biochemistry laboratory experience is described for an understanding of recombinant technology from gene cloning to protein characterization. An integrated experimental design includes three sequential modules: molecular cloning, protein expression and purification, and protein analysis and characterization. Students…
NASA Astrophysics Data System (ADS)
Hosking, Michael Robert
This dissertation improves an analyst's use of simulation through better utilization of kriging metamodels. There are three main contributions. First an analysis is performed of what comprises good experimental designs for practical (non-toy) problems when using a kriging metamodel. Second is an explanation and demonstration of how reduced rank decompositions can improve the performance of kriging, now referred to as reduced rank kriging. Third is the development of an extension of reduced rank kriging which solves an open question regarding the usage of reduced rank kriging in practice. This extension is called omni-rank kriging. Finally these results are demonstrated on two case studies. The first contribution focuses on experimental design. Sequential designs are generally known to be more efficient than "one shot" designs. However, sequential designs require some sort of pilot design on which the sequential stage can be based. We seek to find good initial designs for these pilot studies, as well as designs which will be effective if there is no following sequential stage. We test a wide variety of designs over a small set of test-bed problems. Our findings indicate that analysts should take advantage of any prior information they have about their problem's shape and/or their goals in metamodeling. In the event of a total lack of information, we find that Latin hypercube designs are robust default choices. Our work is most distinguished by its attention to the higher levels of dimensionality. The second contribution introduces and explains an alternative method for kriging when there is noise in the data, which we call reduced rank kriging. Reduced rank kriging is based on using a reduced rank decomposition which artificially smoothes the kriging weights similar to a nugget effect. Our primary focus will be showing how the reduced rank decomposition propagates through kriging empirically. In addition, we show further evidence for our explanation through tests of reduced rank kriging's performance over different situations. In total, reduced rank kriging is a useful tool for simulation metamodeling. For the third contribution we will answer the question of how to find the best rank for reduced rank kriging. We do this by creating an alternative method which does not need to search for a particular rank. Instead it uses all potential ranks; we call this approach omni-rank kriging. This modification realizes the potential gains from reduced rank kriging and provides a workable methodology for simulation metamodeling. Finally, we will demonstrate the use and value of these developments on two case studies, a clinic operation problem and a location problem. These cases will validate the value of this research. Simulation metamodeling always attempts to extract maximum information from limited data. Each one of these contributions will allow analysts to make better use of their constrained computational budgets.
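A compact sketch of the reduced-rank idea: truncate the eigendecomposition of the correlation matrix so the kriging weights are smoothed, much like a nugget effect. The kernel, length-scale, and retained rank are arbitrary choices for the demonstration, not the dissertation's settings:

    import numpy as np

    rng = np.random.default_rng(8)
    X = rng.uniform(0, 10, (40, 1))
    y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 40)       # noisy simulation output

    def corr(a, b, ell=1.5):                           # Gaussian correlation function
        return np.exp(-(a[:, None, 0] - b[None, :, 0]) ** 2 / (2 * ell**2))

    K = corr(X, X)
    w, V = np.linalg.eigh(K)                           # eigenvalues in ascending order
    r = 10                                             # retained rank (the open tuning question)
    K_r = (V[:, -r:] * w[-r:]) @ V[:, -r:].T           # rank-r approximation of K
    x_new = np.array([[5.0]])
    weights = np.linalg.pinv(K_r) @ corr(X, x_new)     # smoothed kriging weights
    print("prediction at x=5:", round(float(weights[:, 0] @ y), 3),
          "| truth:", round(float(np.sin(5.0)), 3))

Discarding the small eigenvalues filters the noise-dominated directions out of the weights; omni-rank kriging, as described above, avoids having to pick r at all.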
NASA Astrophysics Data System (ADS)
Maximov, Ivan I.; Vinding, Mads S.; Tse, Desmond H. Y.; Nielsen, Niels Chr.; Shah, N. Jon
2015-05-01
There is an increasing need for development of advanced radio-frequency (RF) pulse techniques in modern magnetic resonance imaging (MRI) systems driven by recent advancements in ultra-high magnetic field systems, new parallel transmit/receive coil designs, and accessible powerful computational facilities. 2D spatially selective RF pulses are an example of advanced pulses that have many applications of clinical relevance, e.g., reduced field of view imaging, and MR spectroscopy. The 2D spatially selective RF pulses are mostly generated and optimised with numerical methods that can handle vast controls and multiple constraints. With this study, we aim to demonstrate that numerical, optimal control (OC) algorithms are efficient for the design of 2D spatially selective MRI experiments, when robustness towards e.g. field inhomogeneity is in focus. We have chosen three popular OC algorithms; two which are gradient-based, concurrent methods using first- and second-order derivatives, respectively; and a third that belongs to the sequential, monotonically convergent family. We used two experimental models: a water phantom, and an in vivo human head. Taking into consideration the challenging experimental setup, our analysis suggests the use of the sequential, monotonic approach and the second-order gradient-based approach as computational speed, experimental robustness, and image quality are key. All algorithms used in this work were implemented in the MATLAB environment and are freely available to the MRI community.
Sequential processing deficits in schizophrenia: relationship to neuropsychology and genetics.
Hill, S Kristian; Bjorkquist, Olivia; Carrathers, Tarra; Roseberry, Jarett E; Hochberger, William C; Bishop, Jeffrey R
2013-12-01
Utilizing a combination of neuropsychological and cognitive neuroscience approaches may be essential for characterizing cognitive deficits in schizophrenia and eventually assessing cognitive outcomes. This study was designed to compare the stability of select exemplars for these approaches and their correlations in schizophrenia patients with stable treatment and clinical profiles. Reliability estimates for serial order processing were comparable to neuropsychological measures and indicate that experimental serial order processing measures may be less susceptible to practice effects than traditional neuropsychological measures. Correlations were moderate and consistent with a global cognitive factor. Exploratory analyses indicated a potentially critical role of the Met allele of the Catechol-O-methyltransferase (COMT) Val158Met polymorphism in externally paced sequential recall. Experimental measures of serial order processing may reflect frontostriatal dysfunction and be a useful supplement to large neuropsychological batteries. © 2013.
Barlow, D H; Hayes, S C
1979-01-01
A little used and often confused design, capable of comparing two treatments within a single subject, has been termed, variously, a multielement baseline design, a multiple schedule design, and a randomization design. The background of these terms is reviewed, and a new, more descriptive term, Alternating Treatments Design, is proposed. Critical differences between this design and a Simultaneous Treatment Design are outlined, and experimental questions answerable by each design are noted. Potential problems with multiple treatment interference in this procedure are divided into sequential confounding, carryover effects, and alternation effects and the importance of these issues vis-a-vis other single-case experimental designs is considered. Methods of minimizing multiple treatment interference as well as methods of studying these effects are outlined. Finally, appropriate uses of Alternating Treatments Designs are described and discussed in the context of recent examples. PMID:489478
ERIC Educational Resources Information Center
Soltero-González, Lucinda; Sparrow, Wendy; Butvilofsky, Sandra; Escamilla, Kathy; Hopewell, Susan
2016-01-01
This longitudinal study examined whether the implementation of a Spanish-English paired literacy approach provides an academic advantage to emerging bilingual students over a sequential literacy approach. The study employed a quasi-experimental design. It compared the biliteracy outcomes of third-grade emerging bilingual learners participating in…
ERIC Educational Resources Information Center
Chapman, Dane M.; And Others
Three critical procedural skills in emergency medicine were evaluated using three assessment modalities--written, computer, and animal model. The effects of computer practice and previous procedure experience on skill competence were also examined in an experimental sequential assessment design. Subjects were six medical students, six residents,…
ERIC Educational Resources Information Center
Datchuk, Shawn M.; Kubina, Richard M., Jr.
2017-01-01
The present study used a multiple-baseline, single-case experimental design to investigate the effects of a multicomponent intervention on construction of simple sentences and word sequences. The intervention entailed sequential delivery of sentence instruction and frequency building to a performance criterion and paragraph instruction.…
Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo
2011-03-04
Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, like soils, sediments, sludge, and airborne particulate matter. In the first part of this review we reported an overview on these procedures and described the applications of chemometric uni- and bivariate techniques and of multivariate pattern recognition techniques based on variable reduction to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; other multivariate techniques less commonly applied. Copyright © 2010 Elsevier B.V. All rights reserved.
Constrained optimization of sequentially generated entangled multiqubit states
NASA Astrophysics Data System (ADS)
Saberi, Hamed; Weichselbaum, Andreas; Lamata, Lucas; Pérez-García, David; von Delft, Jan; Solano, Enrique
2009-08-01
We demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give paradigmatic examples that may be of interest for theoretical and experimental developments.
On the Construction of Latin Squares Counterbalanced for Immediate Sequential Effects.
ERIC Educational Resources Information Center
HOUSTON, TOM R., JR.
This report is one of a series describing new developments in the area of research methodology. It deals with Latin squares as a control for progressive and adjacency effects in experimental designs. The history of Latin squares is also reviewed, and several algorithms for the construction of Latin and Greco-Latin squares are proposed. The report…
Devaluation and sequential decisions: linking goal-directed and model-based behavior
Friedel, Eva; Koch, Stefan P.; Wendt, Jean; Heinz, Andreas; Deserno, Lorenz; Schlagenhauf, Florian
2014-01-01
In experimental psychology, different experiments have been developed to assess goal-directed as compared to habitual control over instrumental decisions. Similar to animal studies, selective devaluation procedures have been used. More recently, sequential decision-making tasks have been designed to assess the degree of goal-directed vs. habitual choice behavior in terms of an influential computational theory of model-based compared to model-free behavioral control. As recently suggested, different measurements are thought to reflect the same construct. Yet, there has been no attempt to directly assess the construct validity of these different measurements. In the present study, we used a devaluation paradigm and a sequential decision-making task to address this question of construct validity in a sample of 18 healthy male human participants. Correlational analysis revealed a positive association between model-based choices during sequential decisions and goal-directed behavior after devaluation, suggesting a single framework underlying both operationalizations and speaking in favor of construct validity of both measurement approaches. Up to now, this has been merely assumed but never directly tested in humans. PMID:25136310
A practical limit to trials needed in one-person randomized controlled experiments.
Alemi, Roshan; Alemi, Farrokh
2007-01-01
Recently in this journal, J. Olsson and colleagues suggested the use of factorial experimental designs to guide a patient's efforts to choose among multiple interventions. These authors argue that factorial design, where every possible combination of the interventions is tried, is superior to sequential trial and error. Factorial design is efficient in identifying the effectiveness of interventions (factor effect). Most patients care only about feeling better and not why their conditions are improving. If the goal of the patient is to get better and not to estimate the factor effect, then no control groups are needed. In this article, we show a modification in the factorial design of experiments proposed by Olsson and colleagues where a full-factorial design is planned, but experimentation is stopped when the patient's condition improves. With this modification, the number of trials is radically fewer than those needed by factorial design. For example, a patient trying out 4 different interventions with a median probability of success of .50 is expected to need 2 trials before stopping the experimentation in comparison with 32 in a full-factorial design.
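A minimal simulation of the stop-on-improvement rule, assuming each tried combination independently produces improvement with the median probability of .50 quoted above; the enumeration order and independence are simplifying assumptions:

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(9)
    p_success = 0.5                           # median probability that a combination helps
    trials_needed = []
    for _ in range(10_000):
        for t, combo in enumerate(product([0, 1], repeat=4), start=1):
            if rng.random() < p_success:      # the patient improves -> stop experimenting
                break
        trials_needed.append(t)
    print("average trials before stopping:", round(np.mean(trials_needed), 2))

With p = .5 the expected stopping time is about 2 trials, versus running every combination in a full-factorial plan regardless of how the patient feels.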
A technique for sequential segmental neuromuscular stimulation with closed loop feedback control.
Zonnevijlle, Erik D H; Abadia, Gustavo Perez; Somia, Naveen N; Kon, Moshe; Barker, John H; Koenig, Steven; Ewert, D L; Stremel, Richard W
2002-01-01
In dynamic myoplasty, dysfunctional muscle is assisted or replaced with skeletal muscle from a donor site. Electrical stimulation is commonly used to train and animate the skeletal muscle to perform its new task. Due to simultaneous tetanic contractions of the entire myoplasty, muscles are deprived of perfusion and fatigue rapidly, causing long-term problems such as excessive scarring and muscle ischemia. Sequential stimulation contracts part of the muscle while other parts rest, thus significantly improving blood perfusion. However, the muscle still fatigues. In this article, we report a test of the feasibility of using closed-loop control to economize the contractions of the sequentially stimulated myoplasty. A simple stimulation algorithm was developed and tested on a sequentially stimulated neo-sphincter designed from a canine gracilis muscle. Pressure generated in the lumen of the myoplasty neo-sphincter was used as feedback to regulate the stimulation signal via three control parameters, thereby optimizing the performance of the myoplasty. Additionally, we investigated and compared the efficiency of amplitude and frequency modulation techniques. Closed-loop feedback enabled us to maintain target pressures within 10% deviation using amplitude modulation and optimized control parameters (correction frequency = 4 Hz, correction threshold = 4%, and transition time = 0.3 s). The large-scale stimulation/feedback setup was unfit for chronic experimentation, but can be used as a blueprint for a small-scale version to unveil the theoretical benefits of closed-loop control in chronic experimentation.
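The control loop described here maps onto a few lines of Python; the toy pressure response, gain, and target are invented stand-ins, while the 4 Hz correction frequency, the 4% threshold, and the amplitude-modulation choice follow the abstract:

    import numpy as np

    target = 60.0                        # target lumen pressure (arbitrary units, assumed)
    amp, gain = 0.5, 0.02                # stimulation amplitude and proportional gain (assumed)
    threshold = 0.04                     # correction threshold: 4% deviation dead band

    def pressure(a):                     # toy saturating muscle response to amplitude
        return 100 * a / (a + 0.3)

    for step in range(40):               # 10 s of control at the 4 Hz correction frequency
        err = (target - pressure(amp)) / target
        if abs(err) > threshold:         # only adjust outside the dead band
            amp = float(np.clip(amp + gain * np.sign(err), 0.0, 1.0))
    print(f"final pressure {pressure(amp):.1f} vs target {target} (held within the dead band)")

Modulating amplitude against a measured-pressure error is what lets the sequentially stimulated segments hold a target pressure while unstimulated segments rest and reperfuse.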
Thomas P. Holmes; Kevin J. Boyle
2005-01-01
A hybrid stated-preference model is presented that combines the referendum contingent valuation response format with an experimentally designed set of attributes. A sequence of valuation questions is asked to a random sample in a mailout mail-back format. Econometric analysis shows greater discrimination between alternatives in the final choice in the sequence, and the...
ERIC Educational Resources Information Center
Baeten, Marlies; Simons, Mathea
2016-01-01
This study focuses on student teachers' team teaching. Two team teaching models (sequential and parallel teaching) were applied by 14 student teachers in a quasi-experimental design. When implementing new teaching models, it is important to take into account the perspectives of all actors involved. Although learners are key actors in the teaching…
A Dual-Beam Irradiation Facility for a Novel Hybrid Cancer Therapy
NASA Astrophysics Data System (ADS)
Sabchevski, Svilen Petrov; Idehara, Toshitaka; Ishiyama, Shintaro; Miyoshi, Norio; Tatsukawa, Toshiaki
2013-01-01
In this paper we present the main ideas and discuss both the feasibility and the conceptual design of a novel hybrid technique and equipment for an experimental cancer therapy based on the simultaneous and/or sequential application of two beams, namely a beam of neutrons and a CW (continuous wave) or intermittent sub-terahertz wave beam produced by a gyrotron for treatment of cancerous tumors. The main simulation tools for the development of the computer aided design (CAD) of the prospective experimental facility for clinical trials and study of such new medical technology are briefly reviewed. Some tasks for a further continuation of this feasibility analysis are formulated as well.
NASA Technical Reports Server (NTRS)
Layland, J. W.
1974-01-01
An approximate analysis of the effect of a noisy carrier reference on the performance of sequential decoding is presented. The analysis uses previously developed techniques for evaluating noisy reference performance for medium-rate uncoded communications adapted to sequential decoding for data rates of 8 to 2048 bits/s. In estimating the 10^-4 deletion probability thresholds for Helios, the model agrees with experimental data to within the experimental tolerances. The computational problem involved in sequential decoding, carrier loop effects, the main characteristics of the medium-rate model, modeled decoding performance, and perspectives on future work are discussed.
Time-resolved non-sequential ray-tracing modelling of non-line-of-sight picosecond pulse LIDAR
NASA Astrophysics Data System (ADS)
Sroka, Adam; Chan, Susan; Warburton, Ryan; Gariepy, Genevieve; Henderson, Robert; Leach, Jonathan; Faccio, Daniele; Lee, Stephen T.
2016-05-01
The ability to detect motion and to track a moving object that is hidden around a corner or behind a wall provides a crucial advantage when physically going around the obstacle is impossible or dangerous. One recently demonstrated approach to achieving this goal makes use of non-line-of-sight picosecond pulse laser ranging. This approach has recently become interesting due to the availability of single-photon avalanche diode (SPAD) receivers with picosecond time resolution. We present a time-resolved non-sequential ray-tracing model and its application to indirect line-of-sight detection of moving targets. The model makes use of the Zemax optical design programme's capabilities in stray light analysis where it traces large numbers of rays through multiple random scattering events in a 3D non-sequential environment. Our model then reconstructs the generated multi-segment ray paths and adds temporal analysis. Validation of this model against experimental results is shown. We then exercise the model to explore the limits placed on system design by available laser sources and detectors. In particular we detail the requirements on the laser's pulse energy, duration and repetition rate, and on the receiver's temporal response and sensitivity. These are discussed in terms of the resulting implications for achievable range, resolution and measurement time while retaining eye-safety with this technique. Finally, the model is used to examine potential extensions to the experimental system that may allow for increased localisation of the position of the detected moving object, such as the inclusion of multiple detectors and/or multiple emitters.
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects from different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design which sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is the least likely to be rejected at an early stage of the trial. And finally, we show that adding a step of stopping for futility in the Bayesian sequential design can reduce the overall type I error and reduce the actual sample sizes.
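The O'Brien-Fleming spending function referenced here is easy to tabulate; the look schedule is an assumption, and the final line is a naive boundary conversion that ignores the correlation between looks (exact boundaries need the recursive numerical integration used in Lan-DeMets implementations):

    import numpy as np
    from scipy.stats import norm

    alpha = 0.025                                  # one-sided overall type I error (assumed)
    t = np.array([0.25, 0.5, 0.75, 1.0])           # information fractions at the looks (assumed)
    # O'Brien-Fleming spending: alpha(t) = 2 - 2*Phi(z_{alpha/2} / sqrt(t))
    spent = 2 - 2 * norm.cdf(norm.ppf(1 - alpha / 2) / np.sqrt(t))
    print("cumulative alpha spent:", spent.round(5))
    increments = np.diff(np.concatenate([[0.0], spent]))
    print("naive per-look z-boundaries:", norm.ppf(1 - increments).round(2))

The tiny alpha spent at early looks is exactly the conservatism the abstract attributes to O'Brien-Fleming: early stopping demands overwhelming evidence, preserving most of the error budget for the final analysis.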
Hybrid Computerized Adaptive Testing: From Group Sequential Design to Fully Sequential Design
ERIC Educational Resources Information Center
Wang, Shiyu; Lin, Haiyan; Chang, Hua-Hua; Douglas, Jeff
2016-01-01
Computerized adaptive testing (CAT) and multistage testing (MST) have become two of the most popular modes in large-scale computer-based sequential testing. Though most designs of CAT and MST exhibit strength and weakness in recent large-scale implementations, there is no simple answer to the question of which design is better because different…
NASA Astrophysics Data System (ADS)
Liu, Wei; Yao, Kainan; Chen, Lu; Huang, Danian; Cao, Jingtai; Gu, Haijun
2018-03-01
Based on the previous study of the theory of the sequential pyramid wavefront sensor (SPWFS), in this paper the SPWFS is first applied to coherent free space optical communications (FSOC), with more flexible spatial resolution and higher sensitivity than the Shack-Hartmann wavefront sensor, and with higher uniformity of intensity distribution and much greater simplicity than the pyramid wavefront sensor. Then, the mixing efficiency (ME) and the bit error rate (BER) of the coherent FSOC are analyzed during aberration correction through numerical simulation with binary phase shift keying (BPSK) modulation. Finally, an experimental AO system based on the SPWFS is set up, and the experimental data is used to analyze the ME and BER of homodyne detection with BPSK modulation. The results show that the AO system based on the SPWFS can increase ME and decrease BER effectively. The conclusions of this paper provide a new method of wavefront sensing for designing the AO system for a coherent FSOC system.
Development of New Lipid-Based Paclitaxel Nanoparticles Using Sequential Simplex Optimization
Dong, Xiaowei; Mattingly, Cynthia A.; Tseng, Michael; Cho, Moo; Adams, Val R.; Mumper, Russell J.
2008-01-01
The objective of these studies was to develop Cremophor-free lipid-based paclitaxel (PX) nanoparticle formulations prepared from warm microemulsion precursors. To identify and optimize new nanoparticles, experimental design was performed combining Taguchi array and sequential simplex optimization. The combination of Taguchi array and sequential simplex optimization efficiently directed the design of paclitaxel nanoparticles. Two optimized paclitaxel nanoparticles (NPs) were obtained: G78 NPs composed of glyceryl tridodecanoate (GT) and polyoxyethylene 20-stearyl ether (Brij 78), and BTM NPs composed of Miglyol 812, Brij 78 and D-alpha-tocopheryl polyethylene glycol 1000 succinate (TPGS). Both nanoparticles successfully entrapped paclitaxel at a final concentration of 150 μg/ml (over 6% drug loading) with particle sizes less than 200 nm and over 85% of entrapment efficiency. These novel paclitaxel nanoparticles were stable at 4°C over three months and in PBS at 37°C over 102 hours as measured by physical stability. Release of paclitaxel was slow and sustained without initial burst release. Cytotoxicity studies in MDA-MB-231 cancer cells showed that both nanoparticles have similar anticancer activities compared to Taxol®. Interestingly, PX BTM nanocapsules could be lyophilized without cryoprotectants. The lyophilized powder comprised only of PX BTM NPs in water could be rapidly rehydrated with complete retention of original physicochemical properties, in-vitro release properties, and cytotoxicity profile. Sequential Simplex Optimization has been utilized to identify promising new lipid-based paclitaxel nanoparticles having useful attributes. PMID:19111929
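The sequential simplex step can be reproduced with a Nelder-Mead search; the two-variable cost surface below is a hypothetical stand-in for the measured size and entrapment responses (the real study optimized Taguchi-screened excipient levels, and the Brij 78/GT axes here are just illustrative labels):

    import numpy as np
    from scipy.optimize import minimize

    def cost(x):
        # Hypothetical formulation penalty: size > 200 nm or entrapment < 85% is unacceptable.
        brij, gt = x                                                 # scaled excipient levels (assumed)
        size = 150 + 80 * (brij - 0.6) ** 2 + 60 * (gt - 0.4) ** 2   # invented size model (nm)
        entrap = 0.9 - 0.3 * abs(brij - 0.55)                        # invented entrapment model
        return max(size - 200, 0) + 100 * max(0.85 - entrap, 0) + size / 10

    res = minimize(cost, x0=[0.3, 0.7], method="Nelder-Mead")        # sequential simplex search
    print("optimised scaled levels:", res.x.round(3), "| cost:", round(res.fun, 2))

Each simplex iteration corresponds to preparing and characterizing one new formulation, which is why the simplex approach suits experiments where every run is expensive.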
Optical flip-flops and sequential logic circuits using a liquid crystal light valve
NASA Technical Reports Server (NTRS)
Fatehi, M. T.; Collins, S. A., Jr.; Wasmundt, K. C.
1984-01-01
This paper is concerned with the application of optics to digital computing. A Hughes liquid crystal light valve is used as an active optical element in which a weak light beam can control a strong light beam with either a positive or negative gain characteristic. With this device as the central element, the ability to produce bistable states, from which different types of flip-flop can be implemented, is demonstrated. In this paper, some general comments are first presented on digital computing as applied to optics. This is followed by a discussion of the optical implementation of various types of flip-flop. These flip-flops are then used in the design of optical equivalents of a few simple sequential circuits such as shift registers and accumulators. As a typical sequential machine, a schematic layout for an optical binary temporal integrator is presented. Finally, a suggested experimental configuration for an optical master-slave flip-flop array is given.
Mketo, Nomvano; Nomngongo, Philiswa N; Ngila, J Catherine
2018-05-15
A rapid three-step sequential extraction method was developed under microwave radiation, followed by inductively coupled plasma-optical emission spectroscopic (ICP-OES) and ion-chromatographic (IC) analysis, for the determination of sulphur forms in coal samples. The experimental conditions of the proposed microwave-assisted sequential extraction (MW-ASE) procedure were optimized using multivariate mathematical tools. Pareto charts generated from a 2³ full factorial design showed that extraction time has an insignificant effect on the extraction of sulphur species; therefore, all the sequential extraction steps were performed for 5 min. The optimum values according to the central composite designs and contour plots of the response surface methodology were 200 °C (microwave temperature) and 0.1 g (coal amount) for all the investigated extracting reagents (H2O, HCl and HNO3). When the optimum conditions of the proposed MW-ASE procedure were applied to coal CRMs, SARM 18 showed more organic sulphur (72%) and the other two coal CRMs (SARMs 19 and 20) were dominated by sulphide sulphur species (52-58%). The sum of the sulphur forms from the sequential extraction steps showed consistent agreement (95-96%) with the certified total sulphur values on the coal CRM certificates. This correlation, in addition to the good precision (1.7%) achieved by the proposed procedure, suggests that the sequential extraction method is reliable, accurate and reproducible. To safeguard the pyritic and organic sulphur forms from destruction in extraction step 1, water was used instead of HCl. Additionally, the notoriously acidic mixture (HCl/HNO3/HF) was replaced by a greener reagent (H2O2) in the last extraction step. Therefore, the proposed MW-ASE method can be applied in routine laboratories for the determination of sulphur forms in coal and coal-related matrices. Copyright © 2018 Elsevier B.V. All rights reserved.
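For readers unfamiliar with two-level factorial screening, here is a minimal sketch of how main effects are read off a 2³ full factorial design of the kind used above. The factor names follow the abstract, but the response values are invented for illustration (chosen so that, as in the study, extraction time contributes little); nothing here is the paper's data.

```python
# Minimal 2^3 full factorial main-effect estimation (illustrative data).
import itertools
import numpy as np

factors = ["temperature", "time", "coal_amount"]
# All 8 combinations of low (-1) / high (+1) levels, one row per run
design = np.array(list(itertools.product([-1, 1], repeat=3)))

# Hypothetical extraction efficiencies (%) for the 8 runs
response = np.array([54.8, 49.8, 55.2, 50.2, 69.8, 64.8, 70.2, 65.2])

for j, name in enumerate(factors):
    # Main effect = mean response at the high level minus mean at the low level
    effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
    print(f"{name:12s} main effect: {effect:+.1f}")
# -> temperature dominates (+15.0), time is negligible (+0.4)
```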
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
A Parametric Geometry Computational Fluid Dynamics (CFD) Study Utilizing Design of Experiments (DOE)
NASA Technical Reports Server (NTRS)
Rhew, Ray D.; Parker, Peter A.
2007-01-01
Design of Experiments (DOE) was applied to the LAS geometric parameter study to efficiently identify and rank the primary contributors to integrated drag over the vehicle's ascent trajectory using an order of magnitude fewer CFD configurations, thereby reducing computational resources and solution time. SMEs were able to gain a better understanding of the underlying flow physics of different geometric parameter configurations through the identification of interaction effects. An interaction effect, which describes how the effect of one factor changes with respect to the levels of other factors, is often the key to product optimization. A DOE approach emphasizes a sequential approach to learning through successive experimentation to continuously build on previous knowledge. These studies represent a starting point for expanded experimental activities that will eventually cover the entire design space of the vehicle and flight trajectory.
Politis, Stavros N; Rekkas, Dimitrios M
2017-04-01
A novel hot melt direct pelletization method was developed, characterized and optimized, using statistical thinking and experimental design tools. Mixtures of carnauba wax (CW) and HPMC K100M were spheronized using melted gelucire 50-13 as a binding material (BM). Experimentation was performed sequentially; a fractional factorial design was set up initially to screen the factors affecting the process, namely spray rate, quantity of BM, rotor speed, type of rotor disk, lubricant-glidant presence, additional spheronization time, powder feeding rate and quantity. From the eight factors assessed, three were further studied during process optimization (spray rate, quantity of BM and powder feeding rate), at different ratios of the solid mixture of CW and HPMC K100M. The study demonstrated that the novel hot melt process is fast, efficient, reproducible and predictable. Therefore, it can be adopted in a lean and agile manufacturing setting for the production of flexible pellet dosage forms with various release rates easily customized between immediate and modified delivery.
ERIC Educational Resources Information Center
Ebadi, Saman; Rahimi, Masoud
2017-01-01
This article reports the results of a sequential explanatory mixed-methods approach to explore the impact of online peer-editing using Google Docs and peer-editing in a face-to-face classroom on EFL learners' academic writing skills. As the study adopted a quasi-experimental design, two intact classes, each with ten EFL learners, attending an…
Introducing a Model for Optimal Design of Sequential Objective Structured Clinical Examinations
ERIC Educational Resources Information Center
Mortaz Hejri, Sara; Yazdani, Kamran; Labaf, Ali; Norcini, John J.; Jalili, Mohammad
2016-01-01
In a sequential OSCE, which has been suggested as a way to reduce testing costs, candidates take a short screening test, and those who fail the test are asked to take the full OSCE. In order to introduce an effective and accurate sequential design, we developed a model for designing and evaluating screening OSCEs. Based on two datasets from a 10-station…
Evaluation of concurrent priority queue algorithms. Technical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Q.
1991-02-01
The priority queue is a fundamental data structure that is used in a large variety of parallel algorithms, such as multiprocessor scheduling and parallel best-first search of state-space graphs. This thesis addresses the design and experimental evaluation of two novel concurrent priority queues: a parallel Fibonacci heap and a concurrent priority pool, and compares them with the concurrent binary heap. The parallel Fibonacci heap is based on the sequential Fibonacci heap, which is theoretically the most efficient data structure for sequential priority queues. This scheme not only preserves the efficient operation time bounds of its sequential counterpart, but also has very low contention by distributing locks over the entire data structure. The experimental results show its linearly scalable throughput and speedup up to as many processors as tested (currently 18). A concurrent access scheme for a doubly linked list is described as part of the implementation of the parallel Fibonacci heap. The concurrent priority pool is based on the concurrent B-tree and the concurrent pool. The concurrent priority pool has the highest throughput among the priority queues studied. Like the parallel Fibonacci heap, the concurrent priority pool scales linearly up to as many processors as tested. The priority queues are evaluated in terms of throughput and speedup. Some applications of concurrent priority queues such as the vertex cover problem and the single source shortest path problem are tested.
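As a point of reference for the comparisons above, the sketch below shows the coarse-grained baseline that fine-grained structures such as the parallel Fibonacci heap improve on by distributing locks over the data structure: a priority queue serialized behind a single global lock. This is an illustrative Python sketch, not the thesis's implementation (and Python's GIL makes it a conceptual demo only).

```python
# A global-lock concurrent priority queue: correct but high-contention,
# since every insert/delete-min serializes on one lock.
import heapq
import threading

class LockedPriorityQueue:
    def __init__(self):
        self._heap = []
        self._lock = threading.Lock()

    def insert(self, priority, item):
        with self._lock:  # single lock guards the whole structure
            heapq.heappush(self._heap, (priority, item))

    def delete_min(self):
        with self._lock:
            return heapq.heappop(self._heap) if self._heap else None

q = LockedPriorityQueue()
threads = [threading.Thread(target=q.insert, args=(p, f"task{p}"))
           for p in (3, 1, 2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(q.delete_min())  # -> (1, 'task1')
```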
Sequential biases in accumulating evidence
Huggins, Richard; Dogo, Samson Henry
2015-01-01
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
2017-01-01
Objective: Anticipation of opponent actions, through the use of advanced (i.e., pre-event) kinematic information, can be trained using video-based temporal occlusion. Typically, this involves isolated opponent skills/shots presented as trials in a random order. However, two different areas of research, concerning representative task design and contextual (non-kinematic) information, suggest this structure of practice restricts expert performance. The aim of this study was to examine the effect of a sequential structure of practice during video-based training of anticipatory behavior in tennis, as well as the transfer of these skills to the performance environment. Methods: In a pre-practice-retention-transfer design, participants viewed life-sized video of tennis rallies across practice in either a sequential order (sequential group), in which participants were exposed to opponent skills/shots in the order they occur in the sport, or a non-sequential (non-sequential group) random order. Results: In the video-based retention test, the sequential group was significantly more accurate in their anticipatory judgments when the retention condition replicated the sequential structure compared to the non-sequential group. In the non-sequential retention condition, the non-sequential group was more accurate than the sequential group. In the field-based transfer test, overall decision time was significantly faster in the sequential group compared to the non-sequential group. Conclusion: Findings highlight the benefits of a sequential structure of practice for the transfer of anticipatory behavior in tennis. We discuss the role of contextual information, and the importance of representative task design, for the testing and training of perceptual-cognitive skills in sport. PMID:28355263
Automated ILA design for synchronous sequential circuits
NASA Technical Reports Server (NTRS)
Liu, M. N.; Liu, K. Z.; Maki, G. K.; Whitaker, S. R.
1991-01-01
An iterative logic array (ILA) architecture for synchronous sequential circuits is presented. This technique utilizes linear algebra to produce the design equations. The ILA realization of synchronous sequential logic can be fully automated with a computer program. A programmable design procedure is proposed to fulfill the design task and layout generation. A software algorithm in the C language has been developed and tested to generate 1 micron CMOS layouts using the Hewlett-Packard FUNGEN module generator shell.
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently allocate newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can achieve greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
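One classical way to realize a variance-minimizing randomization rate of the kind described above is Neyman allocation, which assigns each arm a share proportional to its outcome standard deviation. The sketch below illustrates that idea with assumed arm SDs; it is a simplification, not the paper's Bayesian algorithm for the optimal rate, critical values, and power.

```python
# Neyman allocation: minimize Var(mean1 - mean2) subject to n1 + n2 = n.
import numpy as np

def optimal_rate(sd1, sd2):
    """Fraction of patients assigned to arm 1 that minimizes the variance
    of the estimated treatment difference (assumed-known SDs)."""
    return sd1 / (sd1 + sd2)

sd1, sd2, n = 1.0, 2.0, 300          # hypothetical arm SDs and total size
r = optimal_rate(sd1, sd2)           # 1/3 of patients to the less noisy arm
n1 = round(r * n); n2 = n - n1
var_opt = sd1**2 / n1 + sd2**2 / n2
var_equal = sd1**2 / (n // 2) + sd2**2 / (n // 2)
print(f"rate to arm 1: {r:.3f}")
print(f"variance of effect estimate: optimal {var_opt:.4f} vs equal {var_equal:.4f}")
```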
Statechart-based design controllers for FPGA partial reconfiguration
NASA Astrophysics Data System (ADS)
Łabiak, Grzegorz; Wegrzyn, Marek; Rosado Muñoz, Alfredo
2015-09-01
Statechart diagrams and the UML technique can be a vital part of early conceptual modeling. At present there is not much support in hardware design methodologies for the reconfiguration features of reprogrammable devices. The authors try to bridge the gap between the imprecise UML model and a formal HDL description. The key concept in the authors' proposal is to describe the behavior of the digital controller by statechart diagrams and to map some parts of the behavior into reprogrammable logic by means of groups of states which form sequential automata. The whole process is illustrated by an example with experimental results.
Sequential quantum cloning under real-life conditions
NASA Astrophysics Data System (ADS)
Saberi, Hamed; Mardoukhi, Yousof
2012-05-01
We consider a sequential implementation of the optimal quantum cloning machine of Gisin and Massar and propose optimization protocols for experimental realization of such a quantum cloner subject to real-life restrictions. We demonstrate how exploiting the matrix-product state (MPS) formalism and the ensuing variational optimization techniques reveals the intriguing algebraic structure of the Gisin-Massar output of the cloning procedure and brings about significant improvements to the optimality of the sequential cloning prescription of Delgado [Phys. Rev. Lett. 98, 150502 (2007)]. Our numerical results show that the orthodox paradigm of optimal quantum cloning can in practice be realized in a much more economical manner by utilizing a considerably smaller amount of informational and numerical resources than hitherto estimated. Instead of the previously predicted linear scaling of the required ancilla dimension D with the number of qubits n, our recipe allows a realization of such a sequential cloning setup with an experimentally manageable ancilla of dimension at most D=3 up to n=15 qubits. We also address satisfactorily the possibility of providing an optimal range of sequential ancilla-qubit interactions for optimal cloning of arbitrary states under realistic experimental circumstances when only a restricted class of such bipartite interactions can be engineered in practice.
Kuşçu, Özlem Selçuk; Sponza, Delia Teresa
2011-03-15
A sequential aerobic completely stirred tank reactor (CSTR) following the anaerobic migrating blanket reactor (AMBR) was used to treat a synthetic wastewater containing 2,4-dinitrotoluene (2,4-DNT). A Box-Wilson statistical experiment design was used to determine the effects of 2,4-DNT and the hydraulic retention times (HRTs) on 2,4-DNT and COD removal efficiencies in the AMBR reactor. The 2,4-DNT concentrations in the feed (0-280 mg/L) and the HRT (0.5-10 days) were considered as the independent variables while the 2,4-DNT and chemical oxygen demand (COD) removal efficiencies, total and methane gas productions, methane gas percentage, pH, total volatile fatty acid (TVFA) and total volatile fatty acid/bicarbonate alkalinity (TVFA/Bic.Alk.) ratio were considered as the objective functions in the Box-Wilson statistical experiment design in the AMBR. The predicted data for the parameters given above were determined from the response functions by regression analysis of the experimental data and exhibited excellent agreement with the experimental results. The optimum HRT which gave the maximum COD (97.00%) and 2,4-DNT removal (99.90%) efficiencies was between 5 and 10 days at influent 2,4-DNT concentrations of 1-280 mg/L in the AMBR. The aerobic CSTR was used for removal of the residual COD remaining from the AMBR and of the metabolites of 2,4-DNT. The maximum COD removal efficiency was 99% at an HRT of 1.89 days and a 2,4-DNT concentration of 239 mg/L in the aerobic CSTR. It was found that 280 mg/L 2,4-DNT was transformed to 2,4-diaminotoluene (2,4-DAT) via 2-amino-4-nitrotoluene (2-A-4-NT) and 4-amino-2-nitrotoluene (4-A-2-NT) in the AMBR. The maximum 2,4-DAT removal was 82% at an HRT of 8.61 days in the aerobic CSTR. The maximum total COD and 2,4-DNT removal efficiencies were 99.00% and 99.99%, respectively, at an influent 2,4-DNT concentration of 239 mg/L and at 1.89 days of HRT in the sequential AMBR/CSTR. Copyright © 2011 Elsevier B.V. All rights reserved.
Mauz, Elvira; von der Lippe, Elena; Allen, Jennifer; Schilling, Ralph; Müters, Stephan; Hoebel, Jens; Schmich, Patrick; Wetzstein, Matthias; Kamtsiuris, Panagiotis; Lange, Cornelia
2018-01-01
Population-based surveys currently face the problem of decreasing response rates. Mixed-mode designs are now being implemented more often to account for this, to improve sample composition and to reduce overall costs. This study examines whether a concurrent or sequential mixed-mode design achieves better results on a number of indicators of survey quality. Data were obtained from a population-based health interview survey of adults in Germany that was conducted as a methodological pilot study as part of the German Health Update (GEDA). Participants were randomly allocated to one of two surveys; each of the surveys had a different design. In the concurrent mixed-mode design (n = 617), two types of self-administered questionnaires (SAQ-Web and SAQ-Paper) and computer-assisted telephone interviewing were offered simultaneously to the respondents along with the invitation to participate. In the sequential mixed-mode design (n = 561), SAQ-Web was initially provided, followed by SAQ-Paper, with an option for a telephone interview being sent out together with the reminders at a later date. Finally, this study compared the response rates, sample composition, health indicators, item non-response, the scope of fieldwork and the costs of both designs. No systematic differences were identified between the two mixed-mode designs in terms of response rates, the socio-demographic characteristics of the achieved samples, or the prevalence rates of the health indicators under study. The sequential design gained a higher rate of online respondents. Very few telephone interviews were conducted for either design. With regard to data quality, the sequential design (which had more online respondents) showed less item non-response. There were minor differences between the designs in terms of their costs. Postage and printing costs were lower in the concurrent design, but labour costs were lower in the sequential design. No differences in health indicators were found between the two designs. Modelling these results for higher response rates and larger net sample sizes indicated that the sequential design was more cost- and time-effective. This study contributes to the research available on implementing mixed-mode designs as part of public health surveys. Our findings show that SAQ-Paper and SAQ-Web questionnaires can be combined effectively. Sequential mixed-mode designs with higher rates of online respondents may be of greater benefit to studies with larger net sample sizes than concurrent mixed-mode designs.
Wagner, James; Schroeder, Heather M.; Piskorowski, Andrew; Ursano, Robert J.; Stein, Murray B.; Heeringa, Steven G.; Colpe, Lisa J.
2017-01-01
Mixed-mode surveys need to determine a number of design parameters that may have a strong influence on costs and errors. In a sequential mixed-mode design with web followed by telephone, one of these decisions is when to switch modes. The web mode is relatively inexpensive but produces lower response rates. The telephone mode complements the web mode in that it is relatively expensive but produces higher response rates. Among the potential negative consequences, delaying the switch from web to telephone may lead to lower response rates if the effectiveness of the prenotification contact materials is reduced by longer time lags, or if the additional e-mail reminders to complete the web survey annoy the sampled person. On the positive side, delaying the switch may decrease the costs of the survey. We evaluate these costs and errors by experimentally testing four different timings (1, 2, 3, or 4 weeks) for the mode switch in a web–telephone survey. This experiment was conducted on the fourth wave of a longitudinal study of the mental health of soldiers in the U.S. Army. We find that the different timings of the switch in the range of 1–4 weeks do not produce differences in final response rates or key estimates but longer delays before switching do lead to lower costs. PMID:28943717
Geng, Zongyu; Yang, Feng; Chen, Xi; Wu, Nianqiang
2016-01-01
It remains a challenge to accurately calibrate a sensor subject to environmental drift. The calibration task for such a sensor is to quantify the relationship between the sensor’s response and its exposure condition, which is specified by not only the analyte concentration but also the environmental factors such as temperature and humidity. This work developed a Gaussian Process (GP)-based procedure for the efficient calibration of sensors in drifting environments. Adopted as the calibration model, GP is not only able to capture the possibly nonlinear relationship between the sensor responses and the various exposure-condition factors, but also able to provide valid statistical inference for uncertainty quantification of the target estimates (e.g., the estimated analyte concentration of an unknown environment). Built on GP’s inference ability, an experimental design method was developed to achieve efficient sampling of calibration data in a batch sequential manner. The resulting calibration procedure, which integrates the GP-based modeling and experimental design, was applied on a simulated chemiresistor sensor to demonstrate its effectiveness and its efficiency over the traditional method. PMID:26924894
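A rough sketch of the two ingredients described above (GP calibration over exposure conditions, then reverse prediction of the analyte concentration) is given below using scikit-learn. The synthetic response surface, kernel choices, and grid-search inversion are all assumptions for illustration, not the authors' code.

```python
# GP calibration of a drifting sensor: response ~ f(concentration, T, RH),
# then reverse prediction of concentration from an observed response.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# Calibration exposures: (analyte concentration, temperature, humidity)
X = rng.uniform([0, 15, 20], [10, 35, 80], size=(60, 3))
# Synthetic sensor response: nonlinear in concentration, drifting with T/RH
y = np.log1p(X[:, 0]) + 0.02 * X[:, 1] - 0.005 * X[:, 2] + rng.normal(0, 0.02, 60)

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[2.0, 5.0, 10.0]) + WhiteKernel(noise_level=1e-3),
    normalize_y=True,
).fit(X, y)

# Reverse prediction: at known T/RH, find the concentration whose
# predicted response best matches the observed one.
resp, T, RH = 1.9, 25.0, 50.0
grid = np.linspace(0, 10, 401)
cand = np.column_stack([grid, np.full_like(grid, T), np.full_like(grid, RH)])
pred = gp.predict(cand)
print("estimated concentration:", grid[np.argmin(np.abs(pred - resp))])
```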
Experimental Design and Interpretation of Functional Neuroimaging Studies of Cognitive Processes
Caplan, David
2008-01-01
This article discusses how the relation between experimental and baseline conditions in functional neuroimaging studies affects the conclusions that can be drawn from a study about the neural correlates of components of the cognitive system and about the nature and organization of those components. I argue that certain designs in common use—in particular the contrast of qualitatively different representations that are processed at parallel stages of a functional architecture—can never identify the neural basis of a cognitive operation and have limited use in providing information about the nature of cognitive systems. Other types of designs—such as ones that contrast representations that are computed in immediately sequential processing steps and ones that contrast qualitatively similar representations that are parametrically related within a single processing stage—are more easily interpreted. PMID:17979122
Koopmeiners, Joseph S.; Feng, Ziding
2013-01-01
The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves. PMID:24039313
Experimental Design and Primary Data Analysis Methods for Comparing Adaptive Interventions
Nahum-Shani, Inbal; Qian, Min; Almirall, Daniel; Pelham, William E.; Gnagy, Beth; Fabiano, Greg; Waxmonsky, Jim; Yu, Jihnhee; Murphy, Susan
2013-01-01
In recent years, research in the area of intervention development is shifting from the traditional fixed-intervention approach to adaptive interventions, which allow greater individualization and adaptation of intervention options (i.e., intervention type and/or dosage) over time. Adaptive interventions are operationalized via a sequence of decision rules that specify how intervention options should be adapted to an individual’s characteristics and changing needs, with the general aim to optimize the long-term effectiveness of the intervention. Here, we review adaptive interventions, discussing the potential contribution of this concept to research in the behavioral and social sciences. We then propose the sequential multiple assignment randomized trial (SMART), an experimental design useful for addressing research questions that inform the construction of high-quality adaptive interventions. To clarify the SMART approach and its advantages, we compare SMART with other experimental approaches. We also provide methods for analyzing data from SMART to address primary research questions that inform the construction of a high-quality adaptive intervention. PMID:23025433
Multi-Level Sequential Pattern Mining Based on Prime Encoding
NASA Astrophysics Data System (ADS)
Lianglei, Sun; Yun, Li; Jiang, Yin
In mining multi-level sequential patterns, the encoding must not only express the hierarchical relationship but also make relationships between different levels easy to identify, which directly affects the efficiency of the algorithm. In this paper, we prove that a single division operation can decide the parent-child relationship between different levels under prime encoding, and we present the PMSM and CROSS-PMSM algorithms, based on prime encoding, for mining multi-level and cross-level sequential patterns, respectively. Experimental results show that the algorithms can effectively extract multi-level and cross-level sequential patterns from a sequence database.
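The core trick can be sketched in a few lines: give each hierarchy node the product of the primes along its path from the root, so that the ancestor-descendant test reduces to a single divisibility (division) check. The toy taxonomy and helper names below are invented for illustration.

```python
# Prime encoding of a concept hierarchy: ancestor test = one modulo.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

codes = {}
def encode(name, parent=None):
    """Code = parent's code times a fresh prime (root parent = 1)."""
    p = PRIMES[len(codes)]
    codes[name] = p * (codes[parent] if parent else 1)

encode("food")                 # 2
encode("milk", "food")         # 2 * 3 = 6
encode("skim_milk", "milk")    # 6 * 5 = 30
encode("bread", "food")        # 2 * 7 = 14

def is_ancestor(a, b):
    # One division decides the parent-child (ancestor) relationship
    return codes[b] % codes[a] == 0

print(is_ancestor("food", "skim_milk"))  # True
print(is_ancestor("milk", "bread"))      # False
```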
NASA Technical Reports Server (NTRS)
1974-01-01
The design and rationale of an advanced labeled release experiment based on a single addition of soil and multiple sequential additions of media to each of four test chambers are outlined. The feasibility of multiple-addition tests was established and various details of the methodology were studied. The four-chamber battery of tests includes: (1) determination of the effect of various atmospheric gases and selection of the gas that produces an optimum response; (2) determination of the effect of incubation temperature and selection of the optimum temperature for performing Martian biochemical tests; (3) dosing of sterile soil with a battery of C-14 labeled substrates subjected to the experimental temperature range; and (4) determination of the possible inhibitory effects of water on Martian organisms, performed initially by dosing with 0.01 ml and 0.5 ml of medium, respectively. A series of specifically labeled substrates are then added to obtain patterns in metabolic 14CO2 evolution.
Rigorously testing multialternative decision field theory against random utility models.
Berkowitsch, Nicolas A J; Scheibehenne, Benjamin; Rieskamp, Jörg
2014-06-01
Cognitive models of decision making aim to explain the process underlying observed choices. Here, we test a sequential sampling model of decision making, multialternative decision field theory (MDFT; Roe, Busemeyer, & Townsend, 2001), on empirical grounds and compare it against 2 established random utility models of choice: the probit and the logit model. Using a within-subject experimental design, participants in 2 studies repeatedly chose among sets of options (consumer products) described on several attributes. The results of Study 1 showed that all models predicted participants' choices equally well. In Study 2, in which the choice sets were explicitly designed to distinguish the models, MDFT had an advantage in predicting the observed choices. Study 2 further revealed the occurrence of multiple context effects within single participants, indicating an interdependent evaluation of choice options and correlations between different context effects. In sum, the results indicate that sequential sampling models can provide relevant insights into the cognitive process underlying preferential choices and thus can lead to better choice predictions. PsycINFO Database Record (c) 2014 APA, all rights reserved.
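To make the idea of a sequential sampling model concrete, the sketch below simulates a stripped-down accumulator in the spirit of MDFT: attention switches stochastically between attributes while noisy, contrast-coded preferences accumulate until an external stopping time. It omits MDFT's feedback (lateral inhibition) matrix, and the option set and parameter values are illustrative assumptions, not fitted quantities.

```python
# Simplified MDFT-style accumulator over three options, two attributes.
import numpy as np

rng = np.random.default_rng(2)
M = np.array([[0.9, 0.1],    # option x attribute subjective values
              [0.5, 0.5],
              [0.1, 0.9]])

def simulate_choice(steps=200, noise=0.1):
    P = np.zeros(3)                      # preference states
    for _ in range(steps):
        w = rng.choice(2)                # attended attribute this moment
        v = M[:, w]
        P += (v - v.mean()) + rng.normal(0, noise, 3)  # contrast + noise
    return int(np.argmax(P))             # externally stopped: pick the leader

choices = [simulate_choice() for _ in range(1000)]
print("choice shares:", np.bincount(choices, minlength=3) / 1000)
```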
Peterson, Kathryn M; Piazza, Cathleen C; Volkert, Valerie M
2016-09-01
Treatments of pediatric feeding disorders based on applied behavior analysis (ABA) have the most empirical support in the research literature (Volkert & Piazza, 2012); however, professionals often recommend, and caregivers often use, treatments that have limited empirical support. In the current investigation, we compared a modified sequential oral sensory approach (M-SOS; Benson, Parke, Gannon, & Muñoz, 2013) to an ABA approach for the treatment of the food selectivity of 6 children with autism. We randomly assigned 3 children to ABA and 3 children to M-SOS and compared the effects of treatment in a multiple baseline design across novel, healthy target foods. We used a multielement design to assess treatment generalization. Consumption of target foods increased for children who received ABA, but not for children who received M-SOS. We subsequently implemented ABA with the children for whom M-SOS was not effective and observed a potential treatment generalization effect during ABA when M-SOS preceded ABA. © 2016 Society for the Experimental Analysis of Behavior.
Rispin, Amy; Farrar, David; Margosches, Elizabeth; Gupta, Kailash; Stitzel, Katherine; Carr, Gregory; Greene, Michael; Meyer, William; McCall, Deborah
2002-01-01
The authors have developed an improved version of the up-and-down procedure (UDP) as one of the replacements for the traditional acute oral toxicity test formerly used by the Organisation for Economic Co-operation and Development member nations to characterize industrial chemicals, pesticides, and their mixtures. This method improves the performance of acute testing for applications that use the median lethal dose (classic LD50) test while achieving significant reductions in animal use. It uses sequential dosing, together with sophisticated computer-assisted computational methods during the execution and calculation phases of the test. Staircase design, a form of sequential test design, can be applied to acute toxicity testing with its binary experimental endpoints (yes/no outcomes). The improved UDP provides a point estimate of the LD50 and approximate confidence intervals in addition to observed toxic signs for the substance tested. It does not provide information about the dose-response curve. Computer simulation was used to test performance of the UDP without the need for additional laboratory validation.
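The staircase logic at the heart of the up-and-down procedure can be sketched as follows: animals are dosed one at a time, with the dose stepped down after a death and up after survival. The dose-response curve, step factor, fixed number of animals, and the crude reversal-based point estimate below are illustrative simplifications; the actual guideline uses formal stopping rules and computer-assisted maximum-likelihood estimation.

```python
# Up-and-down (staircase) dosing simulation with a toy mortality model.
import numpy as np

rng = np.random.default_rng(7)
true_ld50, slope = 200.0, 4.0                 # hypothetical substance

def dies(dose):
    p = 1.0 / (1.0 + (true_ld50 / dose) ** slope)  # log-logistic mortality
    return rng.random() < p

dose, factor = 175.0, 10 ** 0.5               # start dose, half-log step
doses, outcomes = [], []
for _ in range(9):                            # fixed-length illustration
    outcome = dies(dose)
    doses.append(dose)
    outcomes.append(outcome)
    dose = dose / factor if outcome else dose * factor  # down on death, up on survival

# Crude point estimate: geometric mean of doses from the first reversal on
first_rev = next((i for i in range(1, 9) if outcomes[i] != outcomes[i - 1]), 1)
est = float(np.exp(np.mean(np.log(doses[first_rev:]))))
print(f"estimated LD50 ~ {est:.0f} (true {true_ld50:.0f})")
```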
A field trial of ethyl hexanediol against Aedes dorsalis in Sonoma County, California.
Rutledge, L C; Hooper, R L; Wirtz, R A; Gupta, R K
1989-09-01
The repellent ethyl hexanediol (2-ethyl-1,3-hexanediol) was tested against the mosquito Aedes dorsalis in a coastal salt marsh in California. The experimental design incorporated a linear regression model, sequential treatments and a proportional end point (95%) for protection time. The protection time of 0.10 mg/cm2 ethyl hexanediol was estimated at 0.8 h. This time is shorter than that obtained previously for deet (N,N-diethyl-3-methylbenzamide) against Ae. dorsalis (4.4 h).
Architecture for one-shot compressive imaging using computer-generated holograms.
Macfaden, Alexander J; Kindness, Stephen J; Wilkinson, Timothy D
2016-09-10
We propose a synchronous implementation of compressive imaging. This method is mathematically equivalent to prevailing sequential methods, but uses a static holographic optical element to create a spatially distributed spot array from which the image can be reconstructed with an instantaneous measurement. We present the holographic design requirements and demonstrate experimentally that the linear algebra of compressed imaging can be implemented with this technique. We believe this technique can be integrated with optical metasurfaces, which will allow the development of new compressive sensing methods.
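In miniature, the measurement model above is y = Ax with far fewer measurements than pixels, followed by sparse recovery. The sketch below stands a random Gaussian matrix in for the holographic spot-array optics and reconstructs with orthogonal matching pursuit; the sizes and the choice of recovery algorithm are illustrative assumptions, not the authors' design.

```python
# One-shot compressive measurement and OMP reconstruction (toy sizes).
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 64, 24, 3                      # scene pixels, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m) # stand-in for the static hologram
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)  # sparse scene
y = A @ x                                # single simultaneous measurement

support, resid = [], y.copy()
for _ in range(k):                       # OMP: greedily grow the support
    support.append(int(np.argmax(np.abs(A.T @ resid))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    resid = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("max reconstruction error:", np.abs(x_hat - x).max())
```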
A novel visual hardware behavioral language
NASA Technical Reports Server (NTRS)
Li, Xueqin; Cheng, H. D.
1992-01-01
Most hardware behavioral languages use only text to describe the behavior of the desired hardware design. This is inconvenient for VLSI designers who enjoy using the schematic approach. The proposed visual hardware behavioral language has the ability to graphically express design information using visual parallel models (blocks), visual sequential models (processes) and visual data flow graphs (which consist of primitive operational icons, control icons, and Data and Synchro links). Thus, the proposed visual hardware behavioral language can not only specify hardware concurrent and sequential functionality, but can also visually expose parallelism, sequentiality, and disjointness (mutually exclusive operations) for hardware designers. This makes it easier for hardware designers to capture design ideas explicitly using this visual hardware behavioral language.
ERIC Educational Resources Information Center
Yuvaci, Ibrahim; Demir, Selçuk Besir
2016-01-01
This paper aims to determine the relation between reading comprehension skill and TEOG success. In this research, a mixed research method, the sequential explanatory mixed design, is utilized to examine the relation between the reading comprehension skills and TEOG success of 8th grade students thoroughly. In the explanatory sequential mixed design…
Nonvolatile reconfigurable sequential logic in a HfO2 resistive random access memory array.
Zhou, Ya-Xiong; Li, Yi; Su, Yu-Ting; Wang, Zhuo-Rui; Shih, Ling-Yi; Chang, Ting-Chang; Chang, Kuan-Chang; Long, Shi-Bing; Sze, Simon M; Miao, Xiang-Shui
2017-05-25
Resistive random access memory (RRAM) based reconfigurable logic provides a temporal programmable dimension to realize Boolean logic functions and is regarded as a promising route to build non-von Neumann computing architectures. In this work, a reconfigurable operation method is proposed to perform nonvolatile sequential logic in a HfO2-based RRAM array. Eight kinds of Boolean logic functions can be implemented within the same hardware fabric. During the logic computing processes, the RRAM devices in an array are flexibly configured in a bipolar or complementary structure. The validity was demonstrated by experimentally implemented NAND and XOR logic functions and a theoretically designed 1-bit full adder. With the trade-off between temporal and spatial computing complexity, our method makes better use of limited computing resources and thus provides an attractive scheme for the construction of logic-in-memory systems.
Comparative evaluation of three shaft seals proposed for high performance turbomachinery
NASA Technical Reports Server (NTRS)
Hendricks, R. C.
1982-01-01
Experimental pressure profiles and leak rate characteristics of three prototype shaft seal configurations proposed for the space shuttle turbopump were assessed in the concentric and fully eccentric (to the point of rub) positions, without the effects of rotation. The parallel-cylindrical configuration has moderate to good stiffness with a higher leak rate. It represents a simple concept, but for practical reasons and possible increases in stability, all such seals should be conical-convergent. The three-stepdown-sequential, parallel-cylindrical seal is converging and offers good to possibly high stiffness when fluid separation occurs, with a significant decrease in leak rate. Such seals can be very effective. The three-stepdown-sequential labyrinth seal of 33 teeth (i.e., 12-11-10 teeth from inlet to exit) provides excellent leak control but usually has very poor stiffness, depending on cavity design. The seal is complex and not recommended for dynamic control.
Data Assimilation - Advances and Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Brian J.
2014-07-30
This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
Exploiting Complexity Information for Brain Activation Detection
Zhang, Yan; Liang, Jiali; Lin, Qiang; Hu, Zhenghui
2016-01-01
We present a complexity-based approach for the analysis of fMRI time series, in which sample entropy (SampEn) is introduced as a quantification of voxel complexity. Under this hypothesis the voxel complexity could be modulated in pertinent cognitive tasks, and it changes across experimental paradigms. We calculate the complexity of sequential fMRI data for each voxel in two distinct experimental paradigms and use a nonparametric statistical strategy, the Wilcoxon signed-rank test, to evaluate the difference in complexity between them. The results are compared with the well-known general linear model based Statistical Parametric Mapping package (SPM12), and a marked difference is observed: the data-driven SampEn method detects brain complexity changes between the two experimental conditions by evaluating just the complexity of the sequential fMRI data itself. Moreover, larger and smaller SampEn values carry different meanings, and the neutral-blank design produces higher predictability than threat-neutral. Complexity information can be considered a complement to existing fMRI analysis strategies, and it may help improve the understanding of human brain functions from a different perspective. PMID:27045838
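A minimal SampEn implementation, paired with the Wilcoxon signed-rank comparison used above, is sketched below on synthetic series. The embedding dimension m = 2 and tolerance r = 0.2 x SD are common defaults assumed here, not necessarily the paper's settings, and the toy "conditions" are regular versus irregular signals rather than fMRI data.

```python
# Sample entropy (SampEn) and a Wilcoxon signed-rank comparison.
import numpy as np
from scipy.stats import wilcoxon

def sampen(x, m=2, r_frac=0.2):
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def matches(mm):
        # Count template pairs within Chebyshev distance r (self excluded)
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(emb[:, None] - emb[None, :]), axis=-1)
        return (np.sum(d <= r) - len(emb)) / 2
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(4)
t = np.arange(120)
cond_a = [sampen(np.sin(0.3 * t) + rng.normal(0, s, 120))    # regular-ish
          for s in np.linspace(0.1, 0.5, 20)]
cond_b = [sampen(rng.normal(0, 1, 120)) for _ in range(20)]  # irregular
print(wilcoxon(cond_a, cond_b))   # paired nonparametric comparison
```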
Group-sequential three-arm noninferiority clinical trial designs
Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko
2016-01-01
We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481
A Robust Adaptive Autonomous Approach to Optimal Experimental Design
NASA Astrophysics Data System (ADS)
Gu, Hairong
Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly in quest of prediction accuracy, encounter difficulties in conducting experiments with existing experimental procedures, for the following two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for the experimental scenarios of concern, a sound model is often unavailable before the experiment. Second, such scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Unfortunately, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize the experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges in the experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus exempting the requirement of a parametric model at the beginning of an experiment; design optimization is performed to select experimental designs on the fly, based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection by a Bayesian spike-and-slab prior, reverse prediction by grid search and design optimization by the concepts of active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without the assumption of a parametric model serving as the proxy of the latent data structure, whereas the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by taking fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
An adaptive two-stage sequential design for sampling rare and clustered populations
Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.
2008-01-01
How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
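The flavor of the allocation idea can be sketched as follows: a small first wave of plots is surveyed in every primary unit, and the remaining budget is then directed to units in proportion to their first-wave counts. The clustered population model, budget, and simple proportional rule are invented for illustration and are cruder than the estimator-driven design in the paper.

```python
# Adaptive two-stage allocation toward high-count primary units (toy model).
import numpy as np

rng = np.random.default_rng(5)
# Rare, clustered population: most primary units sparse, a few dense
density = np.where(rng.random(30) < 0.2, 5.0, 0.05)
units = [rng.poisson(d, size=50) for d in density]       # 50 plots per unit

budget, first_wave = 600, 10
counts = np.array([u[:first_wave].sum() for u in units]) # first-wave counts

# Second stage: spend the remaining plots in proportion to observed counts
remaining = budget - len(units) * first_wave
extra = np.floor(remaining * counts / max(counts.sum(), 1)).astype(int)
extra = np.minimum(extra, 50 - first_wave)               # cap at plots available
totals = [u[:first_wave + e].sum() for u, e in zip(units, extra)]
print("plots used:", len(units) * first_wave + int(extra.sum()))
print("individuals detected:", int(sum(totals)))
```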
An Intelligent Automation Platform for Rapid Bioprocess Design.
Wu, Tianyi; Zhou, Yuhong
2014-08-01
Bioprocess development is very labor intensive, requiring many experiments to characterize each unit operation in the process sequence to achieve product safety and process efficiency. Recent advances in microscale biochemical engineering have led to automated experimentation. A process design workflow is implemented sequentially in which (1) a liquid-handling system performs high-throughput wet lab experiments, (2) standalone analysis devices detect the data, and (3) specific software is used for data analysis and experiment design given the user's inputs. We report an intelligent automation platform that integrates these three activities to enhance the efficiency of such a workflow. A multiagent intelligent architecture has been developed incorporating agent communication to perform the tasks automatically. The key contribution of this work is the automation of data analysis and experiment design and also the ability to generate scripts to run the experiments automatically, allowing the elimination of human involvement. A first-generation prototype has been established and demonstrated through lysozyme precipitation process design. All procedures in the case study have been fully automated through an intelligent automation platform. The realization of automated data analysis and experiment design, and automated script programming for experimental procedures has the potential to increase lab productivity. © 2013 Society for Laboratory Automation and Screening.
Faye, Sherry A.; Richards, Jason M.; Gallardo, Athena M.; ...
2017-02-07
Sequential extraction is a useful technique for assessing the potential to leach actinides from soils; however, current literature lacks uniformity in experimental details, making direct comparison of results impossible. This work continued development toward a standardized five-step sequential extraction protocol by analyzing extraction behaviors of 232Th, 238U, 239,240Pu and 241Am from lake and ocean sediment reference materials. Results produced a standardized procedure after creating more defined reaction conditions to improve method repeatability. A NaOH fusion procedure is recommended following sequential leaching for the complete dissolution of insoluble species.
Sequential analysis in neonatal research-systematic review.
Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne
2018-05-01
As more new drugs are discovered, traditional designs reach their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review of sequential trials involving newborns in the US National Library of Medicine and Excerpta Medica databases. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), and the median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In the 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674) with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 (57%) patients (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in neonatology. They might potentially reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs reach their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns were infrequently used and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674).
Passage of American shad: paradigms and realities
Haro, Alex; Castro-Santos, Theodore
2012-01-01
Despite more than 250 years of development, the passage of American shad Alosa sapidissima at dams and other barriers frequently remains problematic. Few improvements in design based on knowledge of the swimming, schooling, and migratory behaviors of American shad have been incorporated into passage structures. Large-scale technical fishways designed for the passage of adult salmonids on the Columbia River have been presumed to have good performance for American shad but have never been rigorously evaluated for this species. Similar but smaller fishway designs on the East Coast frequently have poor performance. Provision of effective downstream passage for both juvenile and postspawning adult American shad has been given little consideration in most passage projects. Ways to attract and guide American shad to both fishway entrances and downstream bypasses remain marginally understood. The historical development of passage structures for American shad has resulted in assumptions and paradigms about American shad behavior and passage that are frequently unsubstantiated by supporting data or appropriate experimentation. We propose that many of these assumptions and paradigms are either unfounded or invalid and that significant improvements to American shad upstream and downstream passage can be made via a sequential program of behavioral experimentation, application of experimental results to the physical and hydraulic design of new structures, and controlled tests of large-scale prototype structures in the laboratory and field.
Physical versus Virtual Manipulative Experimentation in Physics Learning
ERIC Educational Resources Information Center
Zacharia, Zacharias C.; Olympiou, Georgios
2011-01-01
The aim of this study was to investigate whether physical or virtual manipulative experimentation can differentiate physics learning. There were four experimental conditions, namely Physical Manipulative Experimentation (PME), Virtual Manipulative Experimentation (VME), and two sequential combinations of PME and VME, as well as a control condition…
Optimal sequential measurements for bipartite state discrimination
NASA Astrophysics Data System (ADS)
Croke, Sarah; Barnett, Stephen M.; Weir, Graeme
2017-05-01
State discrimination is a useful test problem with which to clarify the power and limitations of different classes of measurement. We consider the problem of discriminating between given states of a bipartite quantum system via sequential measurement of the subsystems, with classical feed-forward of measurement results. Our aim is to understand when sequential measurements, which are relatively easy to implement experimentally, perform as well, or almost as well, as optimal joint measurements, which are in general more technologically challenging. We construct conditions that the optimal sequential measurement must satisfy, analogous to the well-known Helstrom conditions for minimum error discrimination in the unrestricted case. We give several examples and compare the optimal probability of correctly identifying the state via global versus sequential measurement strategies.
Simultaneous sequential monitoring of efficacy and safety led to masking of effects.
van Eekelen, Rik; de Hoop, Esther; van der Tweel, Ingeborg
2016-08-01
Sequential designs for clinical trials are usually applied to the primary (efficacy) outcome. In practice, other outcomes (e.g., safety) will also be monitored and will influence the decision whether to stop a trial early. The implications of simultaneous monitoring for trial decision making are as yet unclear. This study examines what happens to the type I error, power, and required sample sizes when one efficacy outcome and one correlated safety outcome are monitored simultaneously using sequential designs. We conducted a simulation study in the framework of a two-arm parallel clinical trial. Interim analyses on two outcomes were performed independently and simultaneously on the same data sets using four sequential monitoring designs, including O'Brien-Fleming and Triangular Test boundaries. Simulations differed in values for correlations and true effect sizes. When an effect was present in both outcomes, competition was introduced, which decreased power (e.g., from 80% to 60%). Futility boundaries for the efficacy outcome reduced overall type I errors as well as power for the safety outcome. Monitoring two correlated outcomes, given that both are essential for early trial termination, leads to masking of true effects. Possible scenarios must be carefully considered when designing sequential trials, and simulation results can help guide trial design.
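A minimal simulation sketch of the competition effect described above (not the authors' code): two correlated outcomes are monitored at four looks with O'Brien-Fleming-type boundaries, and the trial stops when either z-statistic crosses. The number of looks, the correlation, and the boundary constant are illustrative assumptions.

```python
# Simulate a two-arm trial in which an efficacy and a correlated safety outcome
# are monitored at K interim looks with O'Brien-Fleming-type boundaries
# u_k = c * sqrt(K / k); the trial stops when either outcome crosses.
import numpy as np

rng = np.random.default_rng(0)
K, n_per_look, rho = 4, 25, 0.5       # looks, patients/arm/look, outcome correlation
c = 2.024                             # approx. OBF constant for K = 4, two-sided alpha = 0.05
cov = [[1.0, rho], [rho, 1.0]]

def run_trial(delta_eff=0.0, delta_saf=0.0):
    """Return the look at which either z-statistic crosses, or None."""
    treat = rng.multivariate_normal([delta_eff, delta_saf], cov, size=(K, n_per_look))
    ctrl  = rng.multivariate_normal([0.0, 0.0], cov, size=(K, n_per_look))
    for k in range(1, K + 1):
        n = k * n_per_look
        diff = treat[:k].reshape(-1, 2).mean(0) - ctrl[:k].reshape(-1, 2).mean(0)
        z = diff / np.sqrt(2.0 / n)   # difference of means, known unit variance
        if np.any(np.abs(z) > c * np.sqrt(K / k)):
            return k
    return None

stops = sum(run_trial() is not None for _ in range(20000))
print(f"overall stopping rate under H0 with two monitored outcomes: {stops / 20000:.3f}")
```

Under H0 for both outcomes the printed rate exceeds the nominal 5% level, which is one way to see why simultaneous monitoring must be accounted for at the design stage.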
NASA Astrophysics Data System (ADS)
Essen, Jonathan; Ruiz-Garcia, Miguel; Jenkins, Ian; Carretero, Manuel; Bonilla, Luis L.; Birnir, Björn
2018-04-01
We explore the design parameter space of short (5-25 period), n-doped, Ga/(Al,Ga)As semiconductor superlattices (SSLs) in the sequential resonant tunneling regime. We consider SSLs at cool (77 K) and warm (295 K) temperatures, simulating the electronic response to variations in (a) the number of SSL periods, (b) the contact conductivity, and (c) the strength of disorder (aperiodicities). Our analysis shows that the chaotic dynamical phases exist on a number of sub-manifolds of codimension zero within the design parameter space. This result provides an encouraging guide towards the experimental observation of high-frequency intrinsic dynamical chaos in shorter SSLs.
The use of clinical trials in comparative effectiveness research on mental health
Blanco, Carlos; Rafful, Claudia; Olfson, Mark
2013-01-01
Objectives: A large body of research on comparative effectiveness research (CER) focuses on the use of observational and quasi-experimental approaches. We sought to examine the use of clinical trials as a tool for CER, particularly in mental health. Study Design and Setting: Examination of three ongoing randomized clinical trials in psychiatry that address issues which would pose difficulties for non-experimental CER methods. Results: Existing statistical approaches to non-experimental data appear insufficient to compensate for biases that may arise when the pattern of missing data cannot be properly modeled, such as when there are no standards for treatment, when affected populations have limited access to treatment, or when there are high rates of treatment dropout. Conclusions: Clinical trials should retain an important role in CER, particularly in cases of high disorder prevalence, large expected effect sizes, or difficult-to-reach populations, or when examining sequential treatments or stepped-care algorithms. Progress in CER in mental health will require careful consideration of the appropriate selection between clinical trials and non-experimental designs and of the allocation of research resources to optimally inform key treatment decisions for each individual patient. PMID:23849150
Delay test generation for synchronous sequential circuits
NASA Astrophysics Data System (ADS)
Devadas, Srinivas
1989-05-01
We address the problem of generating tests for delay faults in non-scan synchronous sequential circuits. Delay test generation for sequential circuits is a considerably more difficult problem than delay testing of combinational circuits and has received much less attention. In this paper, we present a method for generating test sequences to detect delay faults in sequential circuits using the stuck-at fault sequential test generator STALLION. The method is complete in that it will generate a delay test sequence for a targeted fault, given sufficient CPU time, if such a sequence exists. We term faults for which no delay test sequence exists, under our test methodology, sequentially delay redundant. We describe means of eliminating sequential delay redundancies in logic circuits. We present a partial-scan methodology for enhancing the testability of difficult-to-test or untestable sequential circuits, wherein a small number of flip-flops are selected and made controllable/observable. The selection process guarantees the elimination of all sequential delay redundancies. We show that an intimate relationship exists between state assignment and delay testability of a sequential machine. We describe a state assignment algorithm for the synthesis of sequential machines with maximal delay fault testability. Preliminary experimental results using the test generation, partial-scan, and synthesis algorithms are presented.
Workshop on the Thermophysical Properties of Molten Materials
NASA Technical Reports Server (NTRS)
1993-01-01
The role of accurate thermophysical property data in the process design and modeling of solidification processes was the subject of a workshop held on 22-23 Oct. 1992 in Cleveland, Ohio. The workshop was divided into three sequential sessions dealing with (1) industrial needs and priorities for thermophysical data, (2) experimental capabilities for measuring the necessary data, and (3) theoretical capabilities for predicting the necessary data. In addition, a 2-hour panel discussion of the salient issues was featured as well as a 2-hour caucus that assessed priorities and identified action plans.
Ion beam-based studies for tribological phenomena
NASA Astrophysics Data System (ADS)
Racolta, P. M.; Popa-Simil, L.; Alexandreanu, B.
1996-06-01
Custom-designed experiments based on the Thin Layer Activation technique (TLA) were completed, providing information on the wear level of some engine components with additional data on transfer and adhesion of material between metallic friction couples using the RBS method. RBS experimental results concerning material transfer for a steel-brass friction couple are presented and discussed in the paper. Also, the types and concentrations of the wear products in used lubricant oils were determined by in-air PIXE. A sequential lubricant filtering-based procedure for determining the dimension distribution of the resulting radioactive wear particles by low level γ-spectrometry is presented. Experimental XRF spectra showing the non-homogeneous distribution of the retained waste particles on the filtering paper are shown.
Correlated sequential tunneling through a double barrier for interacting one-dimensional electrons
NASA Astrophysics Data System (ADS)
Thorwart, M.; Egger, R.; Grifoni, M.
2005-07-01
The problem of resonant tunneling through a quantum dot weakly coupled to spinless Tomonaga-Luttinger liquids has been studied. We compute the linear conductance due to sequential tunneling processes upon employing a master equation approach. Besides the previously used lowest-order golden rule rates describing uncorrelated sequential tunneling processes, we systematically include higher-order correlated sequential tunneling (CST) diagrams within the standard Weisskopf-Wigner approximation. We provide estimates for the parameter regions where CST effects can be important. Focusing mainly on the temperature dependence of the peak conductance, we discuss the relation of these findings to previous theoretical and experimental results.
Correlated sequential tunneling in Tomonaga-Luttinger liquid quantum dots
NASA Astrophysics Data System (ADS)
Thorwart, M.; Egger, R.; Grifoni, M.
2005-02-01
We investigate tunneling through a quantum dot formed by two strong impurities in a spinless Tomonaga-Luttinger liquid. Upon employing a Markovian master equation approach, we compute the linear conductance due to sequential tunneling processes. Besides the previously used lowest-order golden rule rates describing uncorrelated sequential tunneling (UST) processes, we systematically include higher-order correlated sequential tunneling (CST) diagrams within the standard Weisskopf-Wigner approximation. We provide estimates for the parameter regions where CST effects are shown to dominate over UST. Focusing mainly on the temperature dependence of the conductance maximum, we discuss the relation of our results to previous theoretical and experimental results.
Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products depends on designs that incorporate high levels of reliability while meeting predetermined levels of system cost, along with explicit and implicit performance requirements. Existing reliability and cost prediction methods provide no direct linkage between the variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
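The sequential linear approximation idea can be illustrated with a generic sequential linear programming loop: linearize the cost and probability-of-failure responses about the current design, solve the resulting LP within move limits, and iterate. The cost and failure-probability models below are invented surrogates, not the paper's; only the structure of the loop is the point.

```python
# Generic sequential linear approximation (SLP) sketch: trade cost against a
# reliability (probability-of-failure) constraint via repeated linearization.
import numpy as np
from scipy.optimize import linprog

def cost(x):  return 4.0 * x[0] + 9.0 * x[1] ** 2        # hypothetical cost model
def pof(x):   return np.exp(-(x[0] + 2.0 * x[1]))        # hypothetical failure probability

def grad(f, x, h=1e-6):
    # central finite-difference gradient
    return np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(len(x))])

x, move = np.array([1.0, 1.0]), 0.5
for it in range(30):
    gc, gp = grad(cost, x), grad(pof, x)
    # LP in the step s: minimize gc.s  s.t.  pof(x) + gp.s <= 1e-3, |s_i| <= move
    res = linprog(gc, A_ub=[gp], b_ub=[1e-3 - pof(x)],
                  bounds=[(-move, move)] * len(x))
    if res.status != 0:          # infeasible within move limits: enlarge and retry
        move *= 1.5
        continue
    x = x + res.x
    move *= 0.9                  # shrink move limits as the design settles
print("design:", x, "cost:", cost(x), "probability of failure:", pof(x))
```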
Experimental designs for detecting synergy and antagonism between two drugs in a pre-clinical study.
Sperrin, Matthew; Thygesen, Helene; Su, Ting-Li; Harbron, Chris; Whitehead, Anne
2015-01-01
The identification of synergistic interactions between combinations of drugs is an important area within drug discovery and development. Pre-clinically, large numbers of screening studies to identify synergistic pairs of compounds can often be run, necessitating efficient and robust experimental designs. We consider experimental designs for detecting interaction between two drugs in a pre-clinical in vitro assay in the presence of uncertainty about the monotherapy response. The monotherapies are assumed to follow the Hill equation with common lower and upper asymptotes and a common variance. The optimality criterion used is the variance of the interaction parameter. We focus on ray designs and investigate two algorithms for selecting the optimum set of dose combinations. The first is a forward algorithm in which design points are added sequentially. This is found to give useful solutions in simple cases but can lack robustness when knowledge about the monotherapy parameters is insufficient. The second algorithm is a more pragmatic approach where the design points are constrained to be distributed log-normally along the rays and monotherapy doses. We find that the pragmatic algorithm is more stable than the forward algorithm, and even when the forward algorithm has converged, the pragmatic algorithm can still out-perform it. Practically, we find that good designs for detecting an interaction have equal numbers of points on monotherapies and combination therapies, with those points typically placed in positions where a 50% response is expected. More uncertainty in the monotherapy parameters leads to an optimal design with design points that are more spread out.
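A toy version of the forward algorithm is easy to state: starting from a seed design, repeatedly add the candidate dose pair that most reduces the variance of the interaction coefficient. The sketch below uses a simple linear interaction model rather than the paper's Hill-equation monotherapies, so it only illustrates the greedy criterion.

```python
# Greedy forward design: add the candidate point that minimizes the variance
# of the interaction coefficient b12 in y = b0 + b1*x1 + b2*x2 + b12*x1*x2.
import itertools
import numpy as np

def design_matrix(points):
    x1, x2 = np.asarray(points).T
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])

def var_interaction(points):
    X = design_matrix(points)
    XtX = X.T @ X
    if np.linalg.matrix_rank(XtX) < 4:
        return np.inf
    return np.linalg.inv(XtX)[3, 3]        # variance (up to sigma^2) of b12

grid = list(itertools.product(np.linspace(0, 1, 6), repeat=2))  # candidate dose pairs
design = [(0, 0), (1, 0), (0, 1), (1, 1)]                       # seed design
for _ in range(8):                                              # add 8 points greedily
    best = min(grid, key=lambda p: var_interaction(design + [p]))
    design.append(best)
print(sorted(set(design)), var_interaction(design))
```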
Ultra-Wide Band Non-reciprocity through Sequentially-Switched Delay Lines.
Biedka, Mathew M; Zhu, Rui; Xu, Qiang Mark; Wang, Yuanxun Ethan
2017-01-06
Achieving non-reciprocity through unconventional methods without the use of magnetic material has recently become a subject of great interest. Towards this goal a time switching strategy known as the Sequentially-Switched Delay Line (SSDL) is proposed. The essential SSDL configuration consists of six transmission lines of equal length, along with five switches. Each switch is turned on and off sequentially to distribute and route the propagating electromagnetic wave, allowing for simultaneous transmission and receiving of signals through the device. Preliminary experimental results with commercial off-the-shelf parts are presented which demonstrated non-reciprocal behavior with greater than 40 dB isolation from 200 kHz to 200 MHz. The theory and experimental results demonstrated that the SSDL concept may lead to future on-chip circulators over multi-octaves of frequency.
Structural Optimization of a Force Balance Using a Computational Experiment Design
NASA Technical Reports Server (NTRS)
Parker, P. A.; DeLoach, R.
2002-01-01
This paper proposes a new approach to force balance structural optimization featuring a computational experiment design. Currently, this multi-dimensional design process requires the designer to simplify the problem by executing parameter studies on a small subset of design variables. This one-factor-at-a-time approach varies a single variable while holding all others at a constant level; consequently, subtle interactions among the design variables, which can be exploited to achieve the design objectives, go undetected. The proposed method combines Modern Design of Experiments techniques, to direct the exploration of the multi-dimensional design space, with a finite element analysis code to generate the experimental data. To efficiently search for an optimum combination of design variables and minimize the computational resources, a sequential design strategy was employed. Experimental results from the optimization of a non-traditional force balance measurement section are presented. An approach to overcome the unique problems associated with the simultaneous optimization of multiple response criteria is described. A quantitative single-point design procedure that reflects the designer's subjective impression of the relative importance of various design objectives, and a graphical multi-response optimization procedure that provides further insights into available tradeoffs among competing design objectives, are illustrated. The proposed method enhances the intuition and experience of the designer by providing new perspectives on the relationships between the design variables and the competing design objectives, and provides a systematic foundation for advancements in structural design.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzouk, Youssef
Predictive simulation of complex physical systems increasingly rests on the interplay of experimental observations with computational models. Key inputs, parameters, or structural aspects of models may be incomplete or unknown, and must be developed from indirect and limited observations. At the same time, quantified uncertainties are needed to qualify computational predictions in the support of design and decision-making. In this context, Bayesian statistics provides a foundation for inference from noisy and limited data, but at prohibitive computational expense. This project intends to make rigorous predictive modeling *feasible* in complex physical systems, via accelerated and scalable tools for uncertainty quantification, Bayesian inference, and experimental design. Specific objectives are as follows: 1. Develop adaptive posterior approximations and dimensionality reduction approaches for Bayesian inference in high-dimensional nonlinear systems. 2. Extend accelerated Bayesian methodologies to large-scale sequential data assimilation, fully treating nonlinear models and non-Gaussian state and parameter distributions. 3. Devise efficient surrogate-based methods for Bayesian model selection and the learning of model structure. 4. Develop scalable simulation/optimization approaches to nonlinear Bayesian experimental design, for both parameter inference and model selection. 5. Demonstrate these inferential tools on chemical kinetic models in reacting flow, constructing and refining thermochemical and electrochemical models from limited data. Demonstrate Bayesian filtering on canonical stochastic PDEs and in the dynamic estimation of inhomogeneous subsurface properties and flow fields.
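For objective 4, the core computation in simulation-based Bayesian experimental design is the expected information gain (EIG) of a candidate design. A minimal nested Monte Carlo sketch for a toy exponential-decay model, where the prior, noise level, and design grid are all invented for illustration:

```python
# Nested Monte Carlo estimate of the expected information gain
# EIG(d) = E_{theta, y|d}[ log p(y|theta,d) - log p(y|d) ]
# for the toy model y = exp(-theta * d) + Gaussian noise.
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.05

def loglik(y, theta, d):
    return -0.5 * ((y - np.exp(-theta * d)) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def eig(d, N=2000, M=2000):
    thetas = rng.gamma(2.0, 0.5, size=N)                 # outer prior draws
    ys = np.exp(-thetas * d) + sigma * rng.standard_normal(N)
    inner = rng.gamma(2.0, 0.5, size=M)                  # inner prior draws
    # log marginal likelihood log p(y|d), averaged over inner samples
    log_marg = [np.log(np.mean(np.exp(loglik(y, inner, d)))) for y in ys]
    return np.mean(loglik(ys, thetas, d) - np.array(log_marg))

for d in [0.1, 0.5, 1.0, 2.0, 5.0]:
    print(f"design d = {d:4.1f}   estimated EIG = {eig(d):.3f}")
```

Very small and very large d yield nearly constant responses and hence low EIG; the estimator picks out an intermediate measurement time, which is the qualitative behavior such design tools exploit.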
Trial Sequential Methods for Meta-Analysis
ERIC Educational Resources Information Center
Kulinskaya, Elena; Wood, John
2014-01-01
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
A Validation of Object-Oriented Design Metrics
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel; Melo, Walcelio L.
1995-01-01
This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber and Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li and Henry, 1993], where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method, and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed and suggestions for improvement are provided. Several of Chidamber and Kemerer's OO metrics appear to be adequate to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development process.
NASA Technical Reports Server (NTRS)
Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.
2012-01-01
This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components, one attributable to ordinary random error and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.
Implementation of Temperature Sequential Controller on Variable Speed Drive
NASA Astrophysics Data System (ADS)
Cheong, Z. X.; Barsoum, N. N.
2008-10-01
There are many pump and motor installations with quite extensive speed variation, such as sago conveyors, heating, ventilation and air conditioning (HVAC), and water pumping systems. A common solution for these applications is to run several fixed speed motors in parallel, with flow control accomplished by turning the motors on and off. This type of control method causes high in-rush currents and adds a risk of damage caused by pressure transients. This paper explains the design and implementation of a temperature speed control system for use in the industrial and commercial sectors. Advanced temperature speed control can be achieved by using the ABB ACS800 variable speed drive's direct torque sequential control macro, a programmable logic controller, and a temperature transmitter. The principle of the direct torque sequential control macro (DTC-SC) is based on the control of torque and flux utilizing the stator flux field orientation over seven preset constant speeds. As a result of continuous comparison of the ambient temperature to the reference temperatures, the electromagnetic torque response is particularly fast and the drive is able to maintain constant speeds. Experimental tests have been carried out using an ABB ACS800-U1-0003-2 to validate the effectiveness and dynamic response of the ABB ACS800 against temperature variations, loads, and mechanical shocks.
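The sequential speed-selection logic lends itself to a compact sketch. The thresholds, presets, and hysteresis band below are invented for illustration; the actual DTC-SC macro parameters live in the drive configuration.

```python
# Toy temperature-driven sequential controller: step a drive through seven
# preset constant speeds, with a small hysteresis band to avoid hunting.
REF_TEMPS = [22, 24, 26, 28, 30, 32, 34]             # degC thresholds (hypothetical)
SPEEDS    = [300, 600, 900, 1200, 1500, 1800, 2100]  # rpm presets (hypothetical)
HYST = 0.5                                           # degC hysteresis

class SequentialSpeedController:
    def __init__(self):
        self.level = 0                               # current preset index

    def update(self, ambient_temp):
        # step up when the next threshold is exceeded, step down with hysteresis
        while self.level < 6 and ambient_temp > REF_TEMPS[self.level] + HYST:
            self.level += 1
        while self.level > 0 and ambient_temp < REF_TEMPS[self.level - 1] - HYST:
            self.level -= 1
        return SPEEDS[self.level]

ctrl = SequentialSpeedController()
for t in [21.0, 25.0, 29.3, 33.8, 27.0, 21.5]:
    print(f"{t:5.1f} degC -> {ctrl.update(t)} rpm")
```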
…in Sequential Design Optimization with Concurrent Calibration-Based Model Validation
Drignei, Dorin; Mourelatos, Zissimos; Pandey, Vijitashwa
2013-08-01
Wells, Gary L; Steblay, Nancy K; Dysart, Jennifer E
2015-02-01
Eyewitnesses (494) to actual crimes in 4 police jurisdictions were randomly assigned to view simultaneous or sequential photo lineups using laptop computers and double-blind administration. The sequential procedure used in the field experiment mimicked how it is conducted in actual practice (e.g., using a continuation rule, witness does not know how many photos are to be viewed, witnesses resolve any multiple identifications), which is not how most lab experiments have tested the sequential lineup. No significant differences emerged in rates of identifying lineup suspects (25% overall) but the sequential procedure produced a significantly lower rate (11%) of identifying known-innocent lineup fillers than did the simultaneous procedure (18%). The simultaneous/sequential pattern did not significantly interact with estimator variables and no lineup-position effects were observed for either the simultaneous or sequential procedures. Rates of nonidentification were not significantly different for simultaneous and sequential but nonidentifiers from the sequential procedure were more likely to use the "not sure" response option than were nonidentifiers from the simultaneous procedure. Among witnesses who made an identification, 36% (41% of simultaneous and 32% of sequential) identified a known-innocent filler rather than a suspect, indicating that eyewitness performance overall was very poor. The results suggest that the sequential procedure that is used in the field reduces the identification of known-innocent fillers, but the differences are relatively small.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics, including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE), are used to design the optimal sampling strategy. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on the various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and the convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problems.
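A toy sketch of the SEOD loop on a linear-Gaussian problem (not the authors' code): each candidate measurement is scored by the Shannon entropy reduction its EnKF update would produce, estimated from the ensemble covariance, and the best candidate is assimilated. Using a single synthetic observation per candidate is a simplification; a fuller treatment would average over hypothetical data.

```python
# Sequential ensemble-based optimal design on a 3-parameter toy problem:
# candidate designs observe one parameter component each.
import numpy as np

rng = np.random.default_rng(2)
n_param, n_ens, obs_err = 3, 200, 0.1
truth = np.array([1.0, -0.5, 0.3])
H_candidates = np.eye(n_param)                 # each row: observe one component

def enkf_update(ens, H, y):
    Hens = ens @ H
    P_hy = np.cov(ens.T) @ H                   # cross-covariance, shape (n_param,)
    K = P_hy / (np.var(Hens) + obs_err**2)     # Kalman gain for a scalar observation
    perturbed = y + obs_err * rng.standard_normal(len(ens))
    return ens + np.outer(perturbed - Hens, K)

def entropy(ens):
    _, logdet = np.linalg.slogdet(np.cov(ens.T))
    return 0.5 * logdet                        # Gaussian entropy up to a constant

ens = rng.standard_normal((n_ens, n_param))    # prior ensemble
for step in range(3):
    scores = []
    for H in H_candidates:                     # score each candidate design
        y_hyp = float(H @ truth) + obs_err * rng.standard_normal()
        scores.append(entropy(enkf_update(ens, H, y_hyp)))
    best = int(np.argmin(scores))
    y = float(H_candidates[best] @ truth) + obs_err * rng.standard_normal()
    ens = enkf_update(ens, H_candidates[best], y)
    print(f"step {step}: observe component {best}, posterior entropy {entropy(ens):.2f}")
```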
Yamany, Abeer; Hamdy, Bassant
2016-01-01
[Purpose] The aim of this study was to investigate the effects of sequential pneumatic compression therapy on venous blood flow, refilling time, pain level, and quality of life in women with varicose veins. [Subjects and Methods] Twenty-eight females with varicose veins were selected and randomly allocated to a control group and an experimental group. Maximum and mean venous blood velocities, refilling time, pain on a visual analog scale, and quality of life by the Aberdeen Varicose Veins Questionnaire were measured in all patients before and after six weeks of treatment. Both groups received lower extremity exercises; in addition, patients in the experimental group received sequential pneumatic compression therapy for 30 minutes daily, five days a week for six weeks. [Results] All measured parameters improved significantly in both groups; comparison of post-treatment measurements between groups showed that the maximum and mean blood flow velocities, the pain level, and quality of life were significantly improved in the experimental group compared with the control group. On the other hand, there was no significant difference between groups for refilling time. [Conclusion] Sequential pneumatic compression therapy with the applied parameters was an effective modality for increasing venous blood flow, reducing pain, and improving quality of life in women with varicose veins. PMID:27512247
Experimental and theoretical investigation of relative optical band gaps in graphene generations
NASA Astrophysics Data System (ADS)
Bhatnagar, Deepika; Singh, Sukhbir; Yadav, Sriniwas; Kumar, Ashok; Kaur, Inderpreet
2017-01-01
Size- and chemical-functionalization-dependent optical band gaps in graphene family nanomaterials were investigated by experimental and theoretical study using Tauc plots and density functional theory (DFT). We synthesized graphene oxide through a modified Hummers' method using graphene nanoplatelets, and subsequently graphene quantum dots through hydrothermal reduction. The experimental results indicate that the optical band gap in the graphene generations was altered by reducing the size of the graphene sheets, and that the attachment of chemical functionalities such as epoxy, hydroxyl and carboxyl groups plays a crucial role in varying the optical band gaps. This is further confirmed by DFT calculations: the projected density of states shows that the π orbitals participate dominantly in the transitions, and the molecular energy spectrum reflects the effect of the attached functional groups along with the discreteness of the energy levels. Theoretical results were found to be in good agreement with experimental results. All of the above variants of graphene can be used, in native or modified form, for sensor design and optoelectronic applications.
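Band-gap extraction from a Tauc plot, as used in studies of this kind, reduces to a straight-line extrapolation. A sketch with synthetic data for a direct allowed transition, where (αhν)² is linear in hν above the gap and the x-intercept of the fitted line estimates Eg:

```python
# Tauc-plot band-gap extraction on synthetic absorption data.
import numpy as np

E = np.linspace(1.5, 4.0, 200)                     # photon energy (eV)
Eg_true, B = 2.4, 1.0
alpha_hv_sq = B * np.clip(E - Eg_true, 0, None)    # (alpha*h*nu)^2 ~ B*(E - Eg)
alpha_hv_sq += 0.01 * np.random.default_rng(3).standard_normal(E.size)

# fit the linear region above the absorption edge and extrapolate to zero
mask = alpha_hv_sq > 0.3 * alpha_hv_sq.max()
slope, intercept = np.polyfit(E[mask], alpha_hv_sq[mask], 1)
print(f"estimated optical band gap: {-intercept / slope:.2f} eV (true {Eg_true} eV)")
```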
Song, Zhen-Tao; Zhu, Ming-Jun
2017-03-01
Fermentation of the herb Polygonum hydropiper L. (PHL) and cassava pulp (CP) for feed additive production with simultaneous flavonoid dissolution was investigated, and a two-stage response surface methodology (RSM) based on a Plackett-Burman factorial design (PB design) was used to optimize the flavonoid dissolution and protein content. Using the screening function of the PB design, four significant factors were identified for the two response variables: factors A (CP) and B (PHL) for flavonoid dissolution, versus factors G (inoculum size) and H (fermentation time) for protein content. Two RSMs were then used sequentially to improve the values of the two response variables separately. The mutual corroboration of the experimental results in the present study confirmed the validity of the associated experimental design. The validation experiment showed a flavonoid dissolution rate of 94.00% and a protein content of 18.20%, an increase of 21.20% and 199.10% over the control, respectively. The present study confirms the feasibility of feed additive production by Saccharomyces cerevisiae from CP and PHL and the simultaneous optimization of flavonoid dissolution and protein content using a two-stage RSM.
Karst, Daniel J; Scibona, Ernesto; Serra, Elisa; Bielser, Jean-Marc; Souquet, Jonathan; Stettler, Matthieu; Broly, Hervé; Soos, Miroslav; Morbidelli, Massimo; Villiger, Thomas K
2017-09-01
Mammalian cell perfusion cultures are gaining renewed interest as an alternative to traditional fed-batch processes for the production of therapeutic proteins, such as monoclonal antibodies (mAb). The steady state operation at high viable cell density allows the continuous delivery of antibody product with increased space-time yield and reduced in-process variability of critical product quality attributes (CQA). In particular, the production of a confined mAb N-linked glycosylation pattern has the potential to increase therapeutic efficacy and bioactivity. In this study, we show that accurate control of flow rates, media composition and cell density of a Chinese hamster ovary (CHO) cell perfusion bioreactor allowed the production of a constant glycosylation profile for over 20 days. Steady state was reached after an initial transition phase of 6 days required for the stabilization of extra- and intracellular processes. The possibility to modulate the glycosylation profile was further investigated in a Design of Experiment (DoE), at different viable cell densities and media supplement concentrations. This strategy was implemented in a sequential screening approach, where various steady states were achieved sequentially during one culture. It was found that, whereas the high ammonia levels reached at high viable cell density (VCD) values inhibited the processing to complex glycan structures, the supplementation of either galactose, or manganese, as well as their synergy, significantly increased the proportion of complex forms. The obtained experimental data set was used to compare the reliability of a statistical response surface model (RSM) to a mechanistic model of N-linked glycosylation. The latter outperformed the response surface predictions with respect to its capability and reliability in predicting the system behavior (i.e., glycosylation pattern) outside the experimental space covered by the DoE design used for the model parameter estimation. Therefore, we can conclude that the modulation of glycosylation in a sequential steady state approach in combination with a mechanistic model represents an efficient and rational strategy to develop continuous processes with desired N-linked glycosylation patterns.
Sequential color video to parallel color video converter
NASA Technical Reports Server (NTRS)
1975-01-01
The engineering design, development, breadboard fabrication, test, and delivery of a breadboard field sequential color video to parallel color video converter is described. The converter was designed for use onboard a manned space vehicle to eliminate a flickering TV display picture and to reduce the weight and bulk of previous ground conversion systems.
He, Longwei; Yang, Xueling; Xu, Kaixin; Kong, Xiuqi
2017-01-01
Biothiols, which have a close network of generation and metabolic pathways among them, are essential reactive sulfur species (RSS) in the cells and play vital roles in human physiology. However, biothiols possess highly similar chemical structures and properties, making it an enormous challenge to discriminate them from each other simultaneously. Herein, we develop a unique fluorescent probe (HMN) for not only simultaneously distinguishing Cys/Hcy, GSH, and H2S from each other, but also sequentially sensing Cys/Hcy/GSH and H2S using a multi-channel fluorescence mode for the first time. When responding to the respective biothiols, the robust probe exhibits multiple sets of fluorescence signals at three distinct emission bands (blue-green-red). The new probe can also sense H2S at different concentration levels with changes of fluorescence at the blue and red emission bands. In addition, the novel probe HMN is able to discriminate and sequentially sense biothiols in biological environments via three-color fluorescence imaging. We expect that the development of the robust probe HMN will provide a powerful strategy to design fluorescent probes for the discrimination and sequential detection of biothiols, and offer a promising tool for exploring the interrelated roles of biothiols in various physiological and pathological conditions. PMID:28989659
ERIC Educational Resources Information Center
Boekkooi-Timminga, Ellen
Nine methods for automated test construction are described. All are based on the concepts of information from item response theory. Two general kinds of methods for the construction of parallel tests are presented: (1) sequential test design; and (2) simultaneous test design. Sequential design implies that the tests are constructed one after the…
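The common building block of such methods is the item information function. A minimal sketch of sequential test design from an item bank, greedily picking the 2PL item with maximum Fisher information at a target ability level (item parameters are randomly generated for illustration); for parallel forms, the same greedy step can alternate assignments between two tests:

```python
# Greedy sequential test assembly by maximum Fisher information (2PL model).
import numpy as np

rng = np.random.default_rng(4)
a = rng.uniform(0.5, 2.0, 500)          # discrimination parameters
b = rng.normal(0.0, 1.0, 500)           # difficulty parameters

def info_2pl(theta, a, b):
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)         # Fisher information of a 2PL item

theta0, test = 0.5, []
available = np.ones(a.size, dtype=bool)
for _ in range(20):                     # build a 20-item test sequentially
    infos = np.where(available, info_2pl(theta0, a, b), -np.inf)
    best = int(np.argmax(infos))
    test.append(best)
    available[best] = False
print("test information at theta0:", info_2pl(theta0, a[test], b[test]).sum())
```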
Topics in the Sequential Design of Experiments
1992-03-01
[Report documentation page; distribution approved for public release. Subject terms: Design of Experiments, Renewal Theory, Sequential Testing, Limit Theory. References include "…distributions for one parameter exponential families," by Michael Woodroofe, Statistica Sinica, 2 (1991), 91-112, and "A non linear renewal theory for a functional of…"]
MSFC Skylab airlock module, volume 1. [systems design and performance
NASA Technical Reports Server (NTRS)
1974-01-01
The history and development of the Skylab Airlock Module and Payload Shroud is presented from initial concept through final design. A summary is given of the Airlock features and systems. System design and performance are presented for the Spent Stage Experiment Support Module, structure and mechanical systems, mass properties, thermal and environmental control systems, EVA/IVA suit system, electrical power system, sequential system, and instrumentation system.
Filleron, Thomas; Gal, Jocelyn; Kramar, Andrew
2012-10-01
A major and difficult task is the design of clinical trials with a time-to-event endpoint. It is necessary to compute the number of events and, in a second step, the required number of patients. Several commercial software packages are available for computing sample size in clinical trials with sequential designs and time-to-event endpoints, but few R functions have been implemented. The purpose of this paper is to describe the features and use of the R function plansurvct.func, an add-on function to the package gsDesign which permits, in one run of the program, calculation of the number of events and required sample size, as well as the boundaries and corresponding p-values for a group sequential design. The use of the function plansurvct.func is illustrated by several examples and validated using East software.
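plansurvct.func itself is an R add-on to gsDesign; for orientation, here is a rough Python sketch of the core calculation such a function performs, using Schoenfeld's formula for the number of events plus an approximate inflation factor for a four-look O'Brien-Fleming design. All trial parameters below are illustrative.

```python
# Events and sample size for a 1:1 randomized log-rank comparison.
import math
from scipy.stats import norm

alpha, power, hr = 0.05, 0.90, 0.70          # two-sided alpha, power, hazard ratio
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)

# Schoenfeld: required number of events for the fixed design
d_fixed = 4 * (z_a + z_b) ** 2 / math.log(hr) ** 2
d_gs = d_fixed * 1.02        # approximate O'Brien-Fleming inflation for 4 looks

p_event = 0.6                # anticipated overall event probability by final analysis
print(f"events (fixed design): {math.ceil(d_fixed)}")
print(f"events (4-look OBF design, approx.): {math.ceil(d_gs)}")
print(f"patients required: {math.ceil(d_gs / p_event)}")
```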
An in vivo model for evaluating the response of pulp to various biomaterials.
McClugage, S G; Holmstedt, J O; Malloy, R B
1980-09-01
An in vivo model has been designed to study the acute response of exposed or unexposed dental pulp to the topical application of various biomaterials. This model permits sequential microscopic observations of the microvascular system of dental pulp before and after application of pulp capping agents, cementing agents, or cavity liners. The use of this experimental model provides useful information related to the tolerance of dental pulp to various biomaterials used in dentistry. Furthermore, this model serves as a useful supplement to more traditional long-term methods for evaluating the biocompatibility of dental materials.
Numerical Modeling of Electrode Degradation During Resistance Spot Welding Using CuCrZr Electrodes
NASA Astrophysics Data System (ADS)
Gauthier, Elise; Carron, Denis; Rogeon, Philippe; Pilvin, Philippe; Pouvreau, Cédric; Lety, Thomas; Primaux, François
2014-05-01
Resistance spot welding is a technique widely used by the automotive industry to assemble thin steel sheets. The cyclic thermo-mechanical loading associated with the accumulation of weld spots progressively deteriorates the electrodes. This study addresses the development of a comprehensive multi-physical model that describes the sequential deterioration. Welding tests achieved on uncoated and Zn-coated steel sheets are analyzed. Finite element analysis is performed using an electrical-thermal-metallurgical model. A numerical experimental design is carried out to highlight the main process parameters and boundary conditions which affect electrode degradation.
Analysis of SET pulses propagation probabilities in sequential circuits
NASA Astrophysics Data System (ADS)
Cai, Shuo; Yu, Fei; Yang, Yiqun
2018-05-01
As the feature size of CMOS transistors scales down, single event transients (SETs) have become an important consideration in designing logic circuits. Much research has been done on analyzing the impact of SETs; however, it is difficult to account for the numerous contributing factors. We present a new approach for analyzing SET pulse propagation probabilities (SPPs). It considers all masking effects and uses SET pulse propagation probability matrices (SPPMs) to represent the SPPs in the current cycle. Based on matrix union operations, the SPPs in consecutive cycles can be calculated. Experimental results show that our approach is practicable and efficient.
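The paper's exact "matrix union" formalism is not reproduced here, but the flavor of cycle-by-cycle SPP propagation can be sketched with a union-style combination that assumes independence across paths; the matrix entries below are invented.

```python
# Toy cycle-by-cycle propagation of SET pulse probabilities between flip-flops.
# M[i][j]: probability that a pulse latched in flip-flop i survives the
# combinational logic (all masking effects lumped together) and is latched in
# flip-flop j on the next cycle.
import numpy as np

M = np.array([[0.10, 0.30],
              [0.05, 0.20]])
p = np.array([0.8, 0.0])         # initial SET latching probabilities

for cycle in range(1, 4):
    # union over source flip-flops, assuming independent propagation paths
    p = 1.0 - np.prod(1.0 - p[:, None] * M, axis=0)
    print(f"cycle {cycle}: SPPs = {np.round(p, 4)}")
```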
Lineup Composition, Suspect Position, and the Sequential Lineup Advantage
ERIC Educational Resources Information Center
Carlson, Curt A.; Gronlund, Scott D.; Clark, Steven E.
2008-01-01
N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate…
de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M
2018-04-01
Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resources. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20,000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce the computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
Evaluating Bias of Sequential Mixed-Mode Designs against Benchmark Surveys
ERIC Educational Resources Information Center
Klausch, Thomas; Schouten, Barry; Hox, Joop J.
2017-01-01
This study evaluated three types of bias--total, measurement, and selection bias (SB)--in three sequential mixed-mode designs of the Dutch Crime Victimization Survey: telephone, mail, and web, where nonrespondents were followed up face-to-face (F2F). In the absence of true scores, all biases were estimated as mode effects against two different…
C-quence: a tool for analyzing qualitative sequential data.
Duncan, Starkey; Collier, Nicholson T
2002-02-01
C-quence is a software application that matches sequential patterns of qualitative data specified by the user and calculates the rate of occurrence of these patterns in a data set. Although it was designed to facilitate analyses of face-to-face interaction, it is applicable to any data set involving categorical data and sequential information. C-quence queries are constructed using a graphical user interface. The program does not limit the complexity of the sequential patterns specified by the user.
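A minimal sketch of what such a query does (illustrative, not the original tool): count occurrences of a user-specified sequential pattern of categorical codes in an event stream and report its rate per event.

```python
# Match a sequential pattern of categorical codes and compute its rate.
def pattern_rate(events, pattern):
    """Count non-overlapping occurrences of `pattern` in `events`."""
    count, i = 0, 0
    while i <= len(events) - len(pattern):
        if events[i:i + len(pattern)] == pattern:
            count += 1
            i += len(pattern)
        else:
            i += 1
    return count, count / max(len(events), 1)

stream = ["gaze", "speak", "pause", "gaze", "speak", "smile", "gaze", "speak", "pause"]
n, rate = pattern_rate(stream, ["gaze", "speak", "pause"])
print(f"{n} matches, rate {rate:.2f} per event")
```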
Nicoletti, Giovanni; Cornaglia, Antonia Icaro; Faga, Angela; Scevola, Silvia
2014-10-01
An experimental study was conducted to assess the effectiveness and safety of an innovative quadripolar variable electrode configuration radiofrequency device with objective measurements in an ex vivo and in vivo human experimental model. Nonablative radiofrequency applications are well-established anti-ageing procedures for cosmetic skin tightening. The study was performed in two steps: ex vivo and in vivo assessments. In the ex vivo assessments the radiofrequency applications were performed on human full-thickness skin and subcutaneous tissue specimens harvested during surgery for body contouring. In the in vivo assessments the applications were performed on two volunteer patients scheduled for body contouring surgery at the end of the study. The assessment methods were: clinical examination and medical photography, temperature measurement with thermal imaging scan, and light microscopy histological examination. The ex vivo assessments allowed for identification of the effective safety range for human application. The in vivo assessments allowed for demonstration of the biological effects of sequential radiofrequency applications. After a course of radiofrequency applications, the collagen fibers underwent an immediate heat-induced rearrangement and were partially denaturated and progressively metabolized by the macrophages. An overall thickening and spatial rearrangement was appreciated both in the collagen and elastic fibers, the latter displaying a juvenile reticular pattern. A late onset in the macrophage activation after sequential radiofrequency applications was appreciated. Our data confirm the effectiveness of sequential radiofrequency applications in obtaining attenuation of the skin wrinkles by an overall skin tightening.
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
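The weighting idea can be sketched for a prototypical SMART in which only non-responders are re-randomized: units consistent with an embedded regimen are weighted by the inverse probability of following it (2 for responders, who face one 1:1 randomization, and 4 for re-randomized non-responders). This toy computes a weighted mean only; the paper develops a full weighted least squares regression with baseline covariates, cluster-level randomization, and patient-level outcomes.

```python
# Inverse-probability-weighted mean outcome under one embedded regimen
# (a1 = 1; if no response, a2 = 1) in a simulated prototypical SMART.
import numpy as np

rng = np.random.default_rng(5)
n = 400
a1 = rng.choice([1, -1], n)                      # first-stage assignment
resp = rng.random(n) < 0.4                       # responder status
a2 = np.where(resp, 0, rng.choice([1, -1], n))   # second stage for non-responders only
y = 1.0 + 0.5 * a1 + 0.3 * (~resp) * a2 + rng.standard_normal(n)

consistent = (a1 == 1) & (resp | (a2 == 1))      # followed the regimen (1, 1)?
w = np.where(resp, 2.0, 4.0)                     # inverse randomization probabilities
mu_hat = np.sum(w * y * consistent) / np.sum(w * consistent)
print(f"weighted mean under regimen (a1=1, a2=1 if non-response): {mu_hat:.3f}")
```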
Liu, Sonia Y; Chrystal, Peter V; Cowieson, Aaron J; Truong, Ha H; Moss, Amy F; Selle, Peter H
2017-01-01
A total of 360 male Ross 308 broiler chickens were used in a feeding study to assess the influence of macronutrients and energy density on feed intakes from 10 to 31 days post-hatch. The study comprised ten dietary treatments from five dietary combinations and two feeding approaches: sequential and choice feeding. The study included eight experimental diets, and each dietary combination was made from three experimental diets. Choice fed birds selected between three diets in separate feed trays at the same time, whereas the three diets were offered to sequentially fed birds on an alternating basis during the experimental period. There were no differences between starch and protein intakes between choice and sequentially fed birds (P > 0.05) when broiler chickens selected between diets with different starch, protein and lipid concentrations. When broiler chickens selected between diets with different starch and protein but similar lipid concentrations, both sequentially and choice fed birds selected similar ratios of starch and protein intake (P > 0.05). However, when broiler chickens selected from diets with different protein and lipid but similar starch concentrations, choice fed birds had higher lipid intake (129 versus 118 g/bird, P = 0.027) and selected diets with lower protein concentrations (258 versus 281 g/kg, P = 0.042) than birds offered sequential diet options. Choice fed birds had greater intakes of the high energy diet (1471 g/bird, P < 0.0001) than of the low energy (197 g/bird) or medium energy diets (663 g/bird) when broiler chickens were offered diets with different energy densities but high crude protein (300 g/kg) or digestible lysine (17.5 g/kg) concentrations. Choice fed birds had lower FCR (1.217 versus 1.327 g/g, P < 0.0001) and higher carcass yield (88.1 versus 87.3%, P = 0.012) than sequentially fed birds. This suggests that the dietary balance between protein and energy is essential for optimal feed conversion efficiency. The intake paths of macronutrients from 10 to 31 days in the choice and sequential feeding groups were plotted and compared with the null path that would result if broiler chickens selected equal amounts of the three diets in the combination. Regardless of feeding regimen, the intake paths of starch and protein are very close to the null path; however, the lipid and protein intake paths of choice fed birds are farther from the null path than those of sequentially fed birds.
Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie
2016-01-01
The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.
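The ASAP logic can be caricatured with two candidate models of a binary response: maintain a posterior over the models and present, on each trial, the stimulus where their predictions disagree most. Everything below (the models, stimulus grid, and trial count) is invented for illustration; ASAP itself performs Bayesian model comparison on neuroimaging data in real time.

```python
# Online model comparison with adaptively chosen stimuli.
import numpy as np

rng = np.random.default_rng(6)
stimuli = np.linspace(-3, 3, 61)

def p_resp(model, x):
    # model 0: steep sigmoid; model 1: shallower sigmoid with a 5% lapse rate
    return 1 / (1 + np.exp(-4 * x)) if model == 0 else 0.05 + 0.9 / (1 + np.exp(-x))

pred = np.array([[p_resp(m, x) for x in stimuli] for m in (0, 1)])
x_star = stimuli[np.argmax(np.abs(pred[0] - pred[1]))]   # most model-discriminating stimulus

log_post = np.log(np.array([0.5, 0.5]))  # prior over the two candidate models
true_model = 1
for trial in range(40):
    y = rng.random() < p_resp(true_model, x_star)        # simulated subject response
    for m in (0, 1):
        p = p_resp(m, x_star)
        log_post[m] += np.log(p if y else 1 - p)         # Bayesian model update

post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior over models after 40 adaptive trials:", np.round(post, 3))
```

With these two fixed models the optimal stimulus is constant; with parameter uncertainty inside each model, the chosen stimulus would adapt trial by trial, which is the regime ASAP targets.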
Sequential, progressive, equal-power, reflective beam-splitter arrays
NASA Astrophysics Data System (ADS)
Manhart, Paul K.
2017-11-01
The equations to calculate equal-power reflectivity of a sequential series of beam splitters are presented. Non-sequential optical design examples are offered for uniform illumination using diode lasers. Objects created using Boolean operators and swept surfaces can reflect light into predefined elevation and azimuth angles. Analysis of the illumination patterns for the array is also presented.
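For a lossless chain, the equal-power condition has a simple closed form: the k-th of N splitters must reflect 1/(N−k+1) of its incident power. A sketch of this recursion follows; the paper's equations may additionally account for losses and coatings.

```python
# Equal-power recursion for a sequential splitter chain (lossless assumption).
def equal_power_reflectivities(n: int) -> list[float]:
    # After k-1 taps, a fraction (n-k+1)/n of the input power remains, so the
    # k-th splitter must reflect 1/(n-k+1) of it to tap off exactly 1/n.
    return [1.0 / (n - k + 1) for k in range(1, n + 1)]

n = 5
refl = equal_power_reflectivities(n)
power, remaining = [], 1.0
for r in refl:
    power.append(remaining * r)   # power tapped off by this splitter
    remaining *= (1.0 - r)        # power transmitted onward
print([f"{r:.3f}" for r in refl])   # 0.200, 0.250, 0.333, 0.500, 1.000
print([f"{p:.3f}" for p in power])  # all 0.200 -> uniform illumination
```

Note that the last splitter is a full mirror (reflectivity 1), which is exactly what the recursion demands to exhaust the remaining power.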
ERIC Educational Resources Information Center
Peters, Richard
A model for Continuous-Integrated-Sequential (C/I/S) curricula for social studies education is presented. The design advocated involves ensuring continuity of instruction from grades K-12, an integration of social studies disciplines, and a sequential process of refining and reinforcing concepts and skills from grade to grade along the K-12…
Lin, Kunning; Ma, Jian; Yuan, Dongxing; Feng, Sichao; Su, Haitao; Huang, Yongming; Shangguan, Qipei
2017-05-15
An integrated system was developed for automatic and sequential determination of NO₂⁻, NO₃⁻, PO₄³⁻, Fe²⁺, Fe³⁺ and Mn²⁺ in natural waters based on reverse flow injection analysis combined with spectrophotometric detection. The system operation was controlled by a single-chip microcomputer and laboratory-programmed software written in LabVIEW. The experimental parameters for each nutrient element analysis were optimized based on a univariate experimental design, and interferences from common ions were evaluated. The upper limits of the linear range (along with detection limits) of the proposed method were 20 (0.03), 200 (0.7), 12 (0.3), 5 (0.03), 5 (0.03), and 9 (0.2) µmol L⁻¹ for NO₂⁻, NO₃⁻, PO₄³⁻, Fe²⁺, Fe³⁺ and Mn²⁺, respectively. The relative standard deviations were below 5% (n=9-13) and the recoveries varied from 88.0±1.0% to 104.5±1.0% for spiked water samples. The sample throughput was about 20 h⁻¹. This system has been successfully applied for the determination of multi-nutrient elements in different kinds of water samples and showed good agreement with reference methods (slope 1.0260±0.0043, R²=0.9991, n=50). Copyright © 2017 Elsevier B.V. All rights reserved.
Deformation behavior and mechanical analysis of vertically aligned carbon nanotube (VACNT) bundles
NASA Astrophysics Data System (ADS)
Hutchens, Shelby B.
Vertically aligned carbon nanotubes (VACNTs) serve as integral components in a variety of applications including MEMS devices, energy absorbing materials, dry adhesives, light absorbing coatings, and electron emitters, all of which require structural robustness. It is only through an understanding of VACNT's structural mechanical response and local constitutive stress-strain relationship that future advancements through rational design may take place. Even for applications in which the structural response is not central to device performance, VACNTs must be sufficiently robust and therefore knowledge of their microstructure-property relationship is essential. This thesis first describes the results of in situ uniaxial compression experiments of 50 micron diameter cylindrical bundles of these complex, hierarchical materials as they undergo unusual deformation behavior. Most notably they deform via a series of localized folding events, originating near the bundle base, which propagate laterally and collapse sequentially from bottom to top. This deformation mechanism accompanies an overall foam-like stress-strain response having elastic, plateau, and densification regimes with the addition of undulations in the stress throughout the plateau regime that correspond to the sequential folding events. Microstructural observations indicate the presence of a strength gradient, due to a gradient in both tube density and alignment along the bundle height, which is found to play a key role in both the sequential deformation process and the overall stress-strain response. Using the complicated structural response as both motivation and confirmation, a finite element model based on a viscoplastic solid is proposed. This model is characterized by a flow stress relation that contains an initial peak followed by strong softening and successive hardening. Analysis of this constitutive relation results in capture of the sequential buckling phenomenon and a strength gradient effect. This combination of experimental and modeling approaches motivates discussion of the particular microstructural mechanisms and local material behavior that govern the non-trivial energy absorption via sequential, localized buckle formation in the VACNT bundles.
ChIP-re-ChIP: Co-occupancy Analysis by Sequential Chromatin Immunoprecipitation.
Beischlag, Timothy V; Prefontaine, Gratien G; Hankinson, Oliver
2018-01-01
Chromatin immunoprecipitation (ChIP) exploits the specific interactions between DNA and DNA-associated proteins. It can be used to examine a wide range of experimental parameters. Identifying a number of proteins bound at the same genomic location can reveal a multi-protein chromatin complex in which several proteins work together to regulate gene transcription or chromatin configuration. In many instances, this can be achieved using sequential ChIP, or simply ChIP-re-ChIP. Whether it is for the examination of specific transcriptional or epigenetic regulators, or for the identification of cistromes, the ability to perform a sequential ChIP adds a higher level of power and definition to these analyses. In this chapter, we describe a simple and reliable method for the sequential ChIP assay.
Experimental Test of Heisenberg's Measurement Uncertainty Relation Based on Statistical Distances
NASA Astrophysics Data System (ADS)
Ma, Wenchao; Ma, Zhihao; Wang, Hengyan; Chen, Zhihua; Liu, Ying; Kong, Fei; Li, Zhaokai; Peng, Xinhua; Shi, Mingjun; Shi, Fazhan; Fei, Shao-Ming; Du, Jiangfeng
2016-04-01
Incompatible observables can be approximated by compatible observables in joint measurement or measured sequentially, with constrained accuracy as implied by Heisenberg's original formulation of the uncertainty principle. Recently, Busch, Lahti, and Werner proposed inaccuracy trade-off relations based on statistical distances between probability distributions of measurement outcomes [P. Busch et al., Phys. Rev. Lett. 111, 160405 (2013); P. Busch et al., Phys. Rev. A 89, 012129 (2014)]. Here we reformulate their theoretical framework, derive an improved relation for qubit measurement, and perform an experimental test on a spin system. The relation reveals that the worst-case inaccuracy is tightly bounded from below by the incompatibility of target observables, and is verified by the experiment employing joint measurement in which two compatible observables designed to approximate two incompatible observables on one qubit are measured simultaneously.
Multi-sensor image registration based on algebraic projective invariants.
Li, Bin; Wang, Wei; Ye, Hao
2013-04-22
A new automatic feature-based registration algorithm is presented for multi-sensor images with projective deformation. Contours are first extracted from both reference and sensed images as basic features in the proposed method. Since it is difficult to design a projective-invariant descriptor from the contour information directly, a new feature named Five Sequential Corners (FSC) is constructed based on the corners detected from the extracted contours. By introducing algebraic projective invariants, we design a descriptor for each FSC that is ensured to be robust against projective deformation. Further, no gray-scale-related information is required in calculating the descriptor, so it is also robust against the gray scale discrepancy between multi-sensor image pairs. Experimental results utilizing real image pairs are presented to show the merits of the proposed registration method.
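One classical family of algebraic projective invariants for five coplanar points uses ratios of 3×3 determinants in which each point index appears equally often in numerator and denominator, so per-point homogeneous scales and the transform determinant cancel. The sketch below illustrates an invariant of this kind; the FSC descriptor presumably builds on such invariants, though its exact form is not given in the abstract.

```python
# Sketch of a standard projective invariant of five coplanar points.
import numpy as np

def det3(p, i, j, k):
    return np.linalg.det(np.column_stack([p[i], p[j], p[k]]))

def invariant(pts):
    # Each point index appears equally often above and below the fraction,
    # so homogeneous scales and det(H) cancel under x -> Hx.
    p = [np.append(q, 1.0) for q in pts]  # homogeneous coordinates
    return (det3(p, 0, 1, 4) * det3(p, 0, 2, 3)) / \
           (det3(p, 0, 1, 3) * det3(p, 0, 2, 4))

rng = np.random.default_rng(2)
pts = rng.random((5, 2)) * 100            # five corners in general position
H = rng.random((3, 3)) + np.eye(3)        # a random projective transform
warped = []
for q in pts:
    v = H @ np.append(q, 1.0)
    warped.append(v[:2] / v[2])

print(f"{invariant(pts):.6f}  {invariant(warped):.6f}")  # equal up to rounding
```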
Flexible sequential designs for multi-arm clinical trials.
Magirr, D; Stallard, N; Jaki, T
2014-08-30
Adaptive designs that are based on group-sequential approaches have the benefit of being efficient, as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so-called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs', on the other hand, can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive designs' can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well-chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.
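The operating characteristics that make group-sequential designs attractive are easy to check by simulation. Below is a sketch for a simple two-stage design with a Pocock-type constant boundary; the boundary value c ≈ 2.178 (two equally spaced looks, one-sided α = 0.025) is quoted from standard tables, not from this paper.

```python
# Monte Carlo sketch of a two-stage group-sequential design's operating
# characteristics with a Pocock-type constant boundary.
import numpy as np

rng = np.random.default_rng(3)
n, c, sims = 50, 2.178, 200_000  # n subjects per arm per stage

def operating_characteristics(delta):
    mu = delta * np.sqrt(n / 2)                 # drift of each stage increment
    x1 = rng.normal(mu, 1.0, sims)              # stage-1 z increment
    x2 = rng.normal(mu, 1.0, sims)              # stage-2 z increment
    stop1 = x1 > c                              # early efficacy stop
    z2 = (x1 + x2) / np.sqrt(2.0)               # cumulative z at look 2
    reject = stop1 | (z2 > c)
    avg_n = 2 * n * (2 - stop1.mean())          # expected total sample size
    return reject.mean(), avg_n

for d in (0.0, 0.5):
    power, asn = operating_characteristics(d)
    print(f"delta={d}: reject rate={power:.4f}, expected N={asn:.1f}")
```

Under the null (delta = 0) the rejection rate stays near 0.025, while under the alternative the expected sample size drops below the fixed design's 200 because many trials stop at the first look.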
Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi
2016-01-01
A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
Salter-Venzon, Dawna; Kazlova, Valentina; Izzy Ford, Samantha; Intra, Janjira; Klosner, Allison E; Gellenbeck, Kevin W
2017-05-01
Despite the notable benefits of carotenoids for human health, the majority of human diets worldwide are repeatedly shown to be inadequate in intake of carotenoid-rich fruits and vegetables, according to current health recommendations. To address this deficit, strategies designed to increase dietary intakes and subsequent plasma levels of carotenoids are warranted. When mixed carotenoids are delivered into the intestinal tract simultaneously, competition occurs for micelle formation and absorption, affecting carotenoid bioavailability. Previously, we tested the in vitro viability of a carotenoid mix designed to deliver individual carotenoids sequentially spaced from one another over the 6 hr transit time of the human upper gastrointestinal system. We hypothesized that temporally and spatially separating the individual carotenoids would reduce competition for micelle formation, improve uptake, and maximize efficacy. Here, we test this hypothesis in a double-blind, repeated-measures, crossover human study with 12 subjects by comparing the change of plasma carotenoid levels for 8 hr after oral doses of a sequentially spaced carotenoid mix to those after a matched mix without sequential spacing. We find that the carotenoid change from baseline, measured as area under the curve, is increased following consumption of the sequentially spaced mix compared to concomitant carotenoid delivery. These results demonstrate reduced interaction and regulation between the sequentially spaced carotenoids, suggesting improved bioavailability from a novel sequentially spaced carotenoid mix.
Chuprom, Julalak; Bovornreungroj, Preeyanuch; Ahmad, Mehraj; Kantachote, Duangporn; Dueramae, Sawitree
2016-06-01
A new potent halophilic protease producer, Halobacterium sp. strain LBU50301, was isolated from salt-fermented fish samples (budu) and identified by phenotypic analysis and 16S rDNA gene sequencing. Thereafter, a sequential statistical strategy was used to optimize halophilic protease production from Halobacterium sp. strain LBU50301 by shake-flask fermentation. The classical one-factor-at-a-time (OFAT) approach determined that gelatin was the best nitrogen source. Based on a Plackett–Burman (PB) experimental design, gelatin, MgSO₄·7H₂O, NaCl and pH significantly influenced the halophilic protease production. A central composite design (CCD) determined the optimum level of medium components. Subsequently, an 8.78-fold increase in the corresponding halophilic protease yield (156.22 U/mL) was obtained, compared with that produced in the original medium (17.80 U/mL). Validation experiments proved the adequacy and accuracy of the model, and the results showed that the predicted values agreed well with the experimental values. An overall 13-fold increase in halophilic protease yield was achieved using a 3 L laboratory fermenter and optimized medium (231.33 U/mL).
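Plackett–Burman screening designs of the kind used here are generated by cyclic shifts of a tabulated generator row. The sketch below builds the classical 12-run design; which medium component maps to which column is an arbitrary illustrative choice, not the study's assignment.

```python
# 12-run Plackett-Burman screening design from the standard cyclic generator.
import numpy as np

gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
rows = [np.roll(gen, i) for i in range(11)]   # cyclic shifts give runs 1-11
rows.append(-np.ones(11, dtype=int))          # run 12: all factors at low level
design = np.array(rows)

factors = ["gelatin", "MgSO4.7H2O", "NaCl", "pH"]  # example factor assignment
print("run  " + "  ".join(f"{f:>10s}" for f in factors))
for i, row in enumerate(design, 1):
    levels = ["high" if v > 0 else "low" for v in row[: len(factors)]]
    print(f"{i:3d}  " + "  ".join(f"{s:>10s}" for s in levels))

# Orthogonality check: every pair of columns is balanced (dot product 0)
print((design.T @ design == 12 * np.eye(11)).all())
```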
Constant speed control of four-stroke micro internal combustion swing engine
NASA Astrophysics Data System (ADS)
Gao, Dedong; Lei, Yong; Zhu, Honghai; Ni, Jun
2015-09-01
The increasing demands on safety, emission and fuel consumption require more accurate control models of micro internal combustion swing engine (MICSE). The objective of this paper is to investigate the constant speed control models of four-stroke MICSE. The operation principle of the four-stroke MICSE is presented based on the description of MICSE prototype. A two-level Petri net based hybrid model is proposed to model the four-stroke MICSE engine cycle. The Petri net subsystem at the upper level controls and synchronizes the four Petri net subsystems at the lower level. The continuous sub-models, including breathing dynamics of intake manifold, thermodynamics of the chamber and dynamics of the torque generation, are investigated and integrated with the discrete model in MATLAB Simulink. Through the comparison of experimental data and simulated DC voltage output, it is demonstrated that the hybrid model is valid for the four-stroke MICSE system. A nonlinear model is obtained from the cycle average data via the regression method, and it is linearized around a given nominal equilibrium point for the controller design. The feedback controller of the spark timing and valve duration timing is designed with a sequential loop closing design approach. The simulation of the sequential loop closure control design applied to the hybrid model is implemented in MATLAB. The simulation results show that the system is able to reach its desired operating point within 0.2 s, and the designed controller shows good MICSE engine performance with a constant speed. This paper presents the constant speed control models of four-stroke MICSE and carries out simulation tests; the models and simulation results can be used for further study of the precision control of the four-stroke MICSE.
Li, Bingcan; Mao, Xinrui; Wang, Yujuan; Guo, Chunyan
2017-01-01
It is generally accepted that associative recognition memory is supported by recollection. In addition, recent research indicates that familiarity can support associative memory, especially when two items are unitized into a single item. Both perceptual and conceptual manipulations can be used to unitize items, but few studies have compared these two methods of unitization directly. In the present study, we investigated the effects of familiarity and recollection on successful retrieval of items that were unitized perceptually or conceptually. Participants were instructed to remember either a Chinese two-character compound or unrelated word-pairs, which were presented simultaneously or sequentially. Participants were then asked to recognize whether word-pairs were intact or rearranged. Event-related potential (ERP) recordings were performed during the recognition phase of the study. Two-character compounds were better discriminated than unrelated word-pairs and simultaneous presentation was found to elicit better discrimination than sequential presentation for unrelated word-pairs only. ERP recordings indicated that the early intact/rearranged effects (FN400), typically associated with familiarity, were elicited in compound word-pairs with both simultaneous and sequential presentation, and in simultaneously presented unrelated word-pairs, but not in sequentially presented unrelated word-pairs. In contrast, the late positive complex (LPC) effects associated with recollection were elicited in all four conditions. Together, these results indicate that while the engagement of familiarity in associative recognition is affected by both perceptual and conceptual unitization, conceptual unitization promotes a higher level of unitization (LOU). In addition, the engagement of recollection was not affected by unitized manipulations. It should be noted, however, that due to experimental design, the effects presented here may be due to semantic rather than episodic memory and future studies should take this into consideration when manipulating rearranged pairs. PMID:28400723
ERIC Educational Resources Information Center
Ivankova, Nataliya V.
2014-01-01
In spite of recent methodological developments related to quality assurance in mixed methods research, practical examples of how to implement quality criteria in designing and conducting sequential QUAN [right arrow] QUAL mixed methods studies to ensure the process is systematic and rigorous remain scarce. This article discusses a three-step…
Improved minimum cost and maximum power two stage genome-wide association study designs.
Stanhope, Stephen A; Skol, Andrew D
2012-01-01
In a two stage genome-wide association study (2S-GWAS), a sample of cases and controls is allocated into two groups, and genetic markers are analyzed sequentially with respect to these groups. For such studies, experimental design considerations have primarily focused on minimizing study cost as a function of the allocation of cases and controls to stages, subject to a constraint on the power to detect an associated marker. However, most treatments of this problem implicitly restrict the set of feasible designs to only those that allocate the same proportions of cases and controls to each stage. In this paper, we demonstrate that removing this restriction can improve the cost advantages demonstrated by previous 2S-GWAS designs by up to 40%. Additionally, we consider designs that maximize study power with respect to a cost constraint, and show that recalculated power maximizing designs can recover a substantial amount of the planned study power that might otherwise be lost if study funding is reduced. We provide open source software for calculating cost minimizing or power maximizing 2S-GWAS designs.
Optimization of Melt Treatment for Austenitic Steel Grain Refinement
NASA Astrophysics Data System (ADS)
Lekakh, Simon N.; Ge, Jun; Richards, Von; O'Malley, Ron; TerBush, Jessica R.
2017-02-01
Refinement of the as-cast grain structure of austenitic steels requires the presence of active solid nuclei during solidification. These nuclei can be formed in situ in the liquid alloy by promoting reactions between transition metals (Ti, Zr, Nb, and Hf) and metalloid elements (C, S, O, and N) dissolved in the melt. Using thermodynamic simulations, experiments were designed to evaluate the effectiveness of a predicted sequence of reactions targeted to form precipitates that could act as active nuclei for grain refinement in austenitic steel castings. Melt additions performed to promote the sequential precipitation of titanium nitride (TiN) onto previously formed spinel (Al₂MgO₄) inclusions in the melt resulted in a significant refinement of the as-cast grain structure in heavy section Cr-Ni-Mo stainless steel castings. A refined as-cast structure consisting of an inner fine-equiaxed grain structure and an outer columnar dendrite zone of limited length was achieved in experimental castings. The sequential precipitation of TiN onto Al₂MgO₄ was confirmed using automated SEM/EDX and TEM analyses.
Experimental Array for Generating Dual Circularly-Polarized Dual-Mode OAM Radio Beams.
Bai, Xu-Dong; Liang, Xian-Ling; Sun, Yun-Tao; Hu, Peng-Cheng; Yao, Yu; Wang, Kun; Geng, Jun-Ping; Jin, Rong-Hong
2017-01-10
Recently, vortex beams carrying orbital angular momentum (OAM) for radio communications have attracted much attention for their potential to transmit multiple signals simultaneously at the same frequency, which can be used to increase channel capacity. However, most methods for generating multi-mode OAM radio beams involve complicated structures and very high cost. This paper provides an effective solution for generating dual circularly-polarized (CP) dual-mode OAM beams. The antenna consists of four dual-CP elements which are sequentially rotated 90 degrees in the clockwise direction. Different from all previously published research on OAM generation by phased arrays, the four elements are fed with the same phase for both left-hand circular polarization (LHCP) and right-hand circular polarization (RHCP). The dual-mode operation for OAM is achieved through the opposite phase differences generated for LHCP and RHCP when the dual-CP elements are sequentially rotated in the clockwise direction. The measured results coincide well with the simulated ones, verifying the effectiveness of the proposed design.
Two-way sequential time synchronization: Preliminary results from the SIRIO-1 experiment
NASA Technical Reports Server (NTRS)
Detoma, E.; Leschiutta, S.
1981-01-01
A two-way time synchronization experiment performed in the spring of 1979 and 1980 via the Italian SIRIO-1 experimental telecommunications satellite is described. The experiment was designed and implemented to precisely monitor the satellite motion and to evaluate the possibility of performing a high precision, two-way time synchronization using a single communication channel, time-shared between the participating sites. Results show that the precision of the time synchronization is between 1 and 5 ns, while the evaluation and correction of the satellite motion effect was performed with an accuracy of a few nanoseconds or better over a time interval from 1 up to 20 seconds.
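The arithmetic at the heart of two-way synchronization is that a symmetric exchange cancels the path delay to first order, leaving only the clock offset. The toy sketch below shows the standard four-timestamp computation; this is a simplification, since the SIRIO-1 experiment additionally measured and corrected the satellite-motion effect.

```python
# Basic two-way time-transfer arithmetic (simplified from the SIRIO-1 scheme).
def two_way_offset(t1, t2, t3, t4):
    """t1: A sends, t2: B receives, t3: B sends, t4: A receives.
    Times are read on each station's own clock. Returns the B - A offset."""
    return ((t2 - t1) - (t4 - t3)) / 2.0

# Example: true offset +40 ns, one-way delay 120 ms, symmetric paths
delay, offset = 0.120, 40e-9
t1 = 0.0
t2 = t1 + delay + offset      # B timestamps with its (offset) clock
t3 = t2 + 0.010               # B replies 10 ms later
t4 = (t3 - offset) + delay    # reception timestamped on A's clock
print(f"estimated offset: {two_way_offset(t1, t2, t3, t4) * 1e9:.1f} ns")
```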
Modulated Acquisition of Spatial Distortion Maps
Volkov, Alexey; Gros, Jerneja Žganec; Žganec, Mario; Javornik, Tomaž; Švigelj, Aleš
2013-01-01
This work discusses a novel approach to image acquisition which improves the robustness of captured data required for 3D range measurements. By applying a pseudo-random code modulation to sequential acquisition of projected patterns the impact of environmental factors such as ambient light and mutual interference is significantly reduced. The proposed concept has been proven with an experimental range sensor based on the laser triangulation principle. The proposed design can potentially enhance the use of this principle to a variety of outdoor applications, such as autonomous vehicles, pedestrians' safety, collision avoidance, and many other tasks, where robust real-time distance detection in real world environment is crucial. PMID:23966196
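The robustness mechanism can be illustrated in a few lines: key the projector with a ±1 pseudo-random code and correlate the captured frame stack against the zero-mean code, which rejects constant ambient light and averages out uncorrelated interference. The 1-D "image" and all numbers below are illustrative, not the sensor's actual parameters.

```python
# Toy sketch of pseudo-random code modulation for robust pattern acquisition.
import numpy as np

rng = np.random.default_rng(4)
n_frames, n_pix = 64, 200
code = rng.choice([-1.0, 1.0], n_frames)        # pseudo-random modulation code

signal = np.zeros(n_pix); signal[80:120] = 1.0  # projected pattern (unknown)
ambient = 5.0                                   # strong constant ambient light
frames = np.empty((n_frames, n_pix))
for k in range(n_frames):
    projected = signal * (code[k] > 0)          # pattern on only when code=+1
    noise = rng.normal(0, 0.5, n_pix)           # interference, uncorrelated
    frames[k] = ambient + projected + noise

# Demodulation: correlate the frame stack with the zero-mean code
demod = (code - code.mean()) @ frames / (code > 0).sum()
print(f"recovered contrast in pattern region: {demod[80:120].mean():.2f}")
print(f"residual outside pattern region:      {demod[:80].mean():.2f}")
```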
Training and generalization of laundry skills: a multiple probe evaluation with handicapped persons.
Thompson, T J; Braam, S J; Fugua, R W
1982-01-01
An instructional procedure composed of a graded sequence of prompts and token reinforcement was used to train a complex chain of behaviors which included sorting, washing, and drying clothes. A multiple probe design with sequential instruction across seven major components of the laundering routine was used to demonstrate experimental control. Students were taught to launder clothing using machines located in their school and generalization was assessed later on machines located in the public laundromat. A comparison of students' laundry skills with those of normal peers indicated similar levels of proficiency. Follow-up probes demonstrated maintenance of laundry skills over a 10-month period. PMID:7096228
Lineup composition, suspect position, and the sequential lineup advantage.
Carlson, Curt A; Gronlund, Scott D; Clark, Steven E
2008-06-01
N. M. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001) argued that sequential lineups reduce the likelihood of mistaken eyewitness identification. Experiment 1 replicated the design of R. C. L. Lindsay and G. L. Wells (1985), the first study to show the sequential lineup advantage. However, the innocent suspect was chosen at a lower rate in the simultaneous lineup, and no sequential lineup advantage was found. This led the authors to hypothesize that protection from a sequential lineup might emerge only when an innocent suspect stands out from the other lineup members. In Experiment 2, participants viewed a simultaneous or sequential lineup with either the guilty suspect or 1 of 3 innocent suspects. Lineup fairness was varied to influence the degree to which a suspect stood out. A sequential lineup advantage was found only for the unfair lineups. Additional analyses of suspect position in the sequential lineups showed an increase in the diagnosticity of suspect identifications as the suspect was placed later in the sequential lineup. These results suggest that the sequential lineup advantage is dependent on lineup composition and suspect position. (c) 2008 APA, all rights reserved
Fan, Tingbo; Liu, Zhenbo; Zhang, Dong; Tang, Mengxing
2013-03-01
Lesion formation and temperature distribution induced by high-intensity focused ultrasound (HIFU) were investigated both numerically and experimentally for two energy-delivering strategies, i.e., sequential discrete and continuous scanning modes. Simulations were performed based on the combination of the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and the bioheat equation. Measurements were performed on tissue-mimicking phantoms sonicated by a 1.12-MHz single-element focused transducer working at an acoustic power of 75 W. Both the simulated and experimental results show that, in the sequential discrete mode, obvious saw-tooth-like contours could be observed for the peak temperature distribution and the lesion boundaries as the interval between two adjacent exposure points increased. In the continuous scanning mode, more uniform peak temperature distributions and lesion boundaries were produced, and the peak temperature values decreased significantly with increasing scanning speed. In addition, compared to the sequential discrete mode, the continuous scanning mode could achieve higher treatment efficiency (lesion area generated per second) with a lower peak temperature. The present studies suggest that the peak temperature and tissue lesion resulting from HIFU exposure could be controlled by adjusting the transducer scanning speed, which is important for improving HIFU treatment efficiency.
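The thermal side of such simulations couples a focal heat source to the Pennes bioheat equation. Below is a 1-D explicit finite-difference sketch contrasting a stationary exposure with a continuous scan; all tissue and source parameters are rough illustrative values, and the acoustic field is reduced to a Gaussian heat source rather than a KZK solution.

```python
# 1-D explicit finite-difference sketch of the Pennes bioheat equation.
import numpy as np

k, rho_c, w_cb, Ta = 0.5, 3.6e6, 2000.0, 37.0   # W/m/K, J/m^3/K, W/m^3/K, degC
L, nx = 0.04, 201                                # 4 cm of tissue
dx = L / (nx - 1)
dt = 0.02                                        # s, within the stability limit
x = np.linspace(0, L, nx)

def simulate(scan_speed, t_end=8.0, q0=2.0e7, width=1.0e-3):
    T = np.full(nx, Ta)
    for step in range(int(t_end / dt)):
        xc = 0.01 + scan_speed * step * dt         # focus position (m)
        Q = q0 * np.exp(-((x - xc) / width) ** 2)  # focal heating (W/m^3)
        lap = np.zeros(nx)
        lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
        T += dt * (k * lap - w_cb * (T - Ta) + Q) / rho_c
        T[0] = T[-1] = Ta                          # boundaries at body temp
    return T

for v in (0.0, 2e-3):                              # stationary vs 2 mm/s scan
    T = simulate(v)
    print(f"scan speed {v*1e3:.0f} mm/s -> peak temperature {T.max():.1f} degC")
```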
Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew
2014-03-01
Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
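The model class being tested can be illustrated with a random walk to threshold, in which response bias enters as the evidence start point (the Criterion) and speed-accuracy emphasis as the boundary separation (the Threshold). The parameter names below follow the abstract loosely; the authors' model is richer than this sketch.

```python
# Random-walk sequential sampling sketch: Criterion = start point,
# Threshold = boundary separation.
import numpy as np

def simulate(drift, threshold, criterion, n=2_000, noise=1.0, seed=5):
    rng = np.random.default_rng(seed)
    rt = np.empty(n); resp = np.empty(n, dtype=bool)
    for i in range(n):
        x, t = criterion, 0            # biased starting point of evidence
        while abs(x) < threshold:
            x += drift + noise * rng.standard_normal()
            t += 1
        rt[i], resp[i] = t, x > 0      # upper boundary = "conflict" response
    return rt.mean(), resp.mean()

for label, thr, crit in [("baseline      ", 8.0, 0.0),
                         ("speed emphasis", 4.0, 0.0),   # lower Threshold
                         ("liberal bias  ", 8.0, 2.0)]:  # shifted Criterion
    mrt, p = simulate(drift=0.05, threshold=thr, criterion=crit)
    print(f"{label} mean RT = {mrt:6.1f} steps, P('conflict') = {p:.3f}")
```

Lowering the threshold shortens response times at the cost of accuracy, while shifting the start point changes response proportions with little effect on speed, mirroring the dissociation the study reports.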
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty about the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Gallagher, C. B.; Ferraro, A.
2018-05-01
A possible alternative to the standard model of measurement-based quantum computation (MBQC) is offered by the sequential model of MBQC—a particular class of quantum computation via ancillae. Although these two models are equivalent under ideal conditions, their relative resilience to noise in practical conditions is not yet known. We analyze this relationship for various noise models in the ancilla preparation and in the entangling-gate implementation. The comparison of the two models is performed utilizing both the gate infidelity and the diamond distance as figures of merit. Our results show that in the majority of instances the sequential model outperforms the standard one in regard to a universal set of operations for quantum computation. Further investigation is made into the performance of sequential MBQC in experimental scenarios, thus setting benchmarks for possible cavity-QED implementations.
Impact of Temporal Masking of Flip-Flop Upsets on Soft Error Rates of Sequential Circuits
NASA Astrophysics Data System (ADS)
Chen, R. M.; Mahatme, N. N.; Diggins, Z. J.; Wang, L.; Zhang, E. X.; Chen, Y. P.; Liu, Y. N.; Narasimham, B.; Witulski, A. F.; Bhuva, B. L.; Fleetwood, D. M.
2017-08-01
Reductions in single-event (SE) upset (SEU) rates for sequential circuits due to temporal masking effects are evaluated. The impacts of supply voltage, combinational-logic delay, flip-flop (FF) SEU performance, and particle linear energy transfer (LET) values are analyzed for SE cross sections of sequential circuits. Alpha particles and heavy ions with different LET values are used to characterize the circuits fabricated at the 40-nm bulk CMOS technology node. Experimental results show that increasing the delay of the logic circuit present between FFs and decreasing the supply voltage are two effective ways of reducing SE error rates for sequential circuits for particles with low LET values due to temporal masking. SEU-hardened FFs benefit less from temporal masking than conventional FFs. Circuit hardening implications for SEU-hardened and unhardened FFs are discussed.
An information theoretic approach to use high-fidelity codes to calibrate low-fidelity codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, Allison, E-mail: lewis.allison10@gmail.com; Smith, Ralph; Williams, Brian
For many simulation models, it can be prohibitively expensive or physically infeasible to obtain a complete set of experimental data to calibrate model parameters. In such cases, one can alternatively employ validated higher-fidelity codes to generate simulated data, which can be used to calibrate the lower-fidelity code. In this paper, we employ an information-theoretic framework to determine the reduction in parameter uncertainty that is obtained by evaluating the high-fidelity code at a specific set of design conditions. These conditions are chosen sequentially, based on the amount of information that they contribute to the low-fidelity model parameters. The goal is to employ Bayesian experimental design techniques to minimize the number of high-fidelity code evaluations required to accurately calibrate the low-fidelity model. We illustrate the performance of this framework using heat and diffusion examples, a 1-D kinetic neutron diffusion equation, and a particle transport model, and include initial results from the integration of the high-fidelity thermal-hydraulics code Hydra-TH with a low-fidelity exponential model for the friction correlation factor.
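A simplified version of this sequential design loop is easy to state when the low-fidelity model is linear in its parameters, since the posterior covariance then updates in closed form. The sketch below uses a greedy D-optimal surrogate in place of the paper's information-theoretic criterion; the model form, prior, noise level, and the stand-in "high-fidelity code" are all assumptions.

```python
# Sequential Bayesian design sketch: pick the next high-fidelity run where it
# most reduces uncertainty in the low-fidelity parameters.
import numpy as np

def features(x):                 # assumed low-fidelity model: y = a + b*x
    return np.array([1.0, x])

candidates = np.linspace(0.0, 1.0, 41)
Sigma = np.eye(2) * 10.0         # prior covariance of (a, b)
sigma2 = 0.05**2                 # high-fidelity "measurement" variance
rng = np.random.default_rng(6)
high_fidelity = lambda x: 0.3 + 1.7 * x + rng.normal(0, 0.05)  # stand-in code

chosen = []
for _ in range(5):
    # Greedy D-optimal choice: maximize the prior predictive variance f'Σf
    gains = [f @ Sigma @ f for f in map(features, candidates)]
    x = candidates[int(np.argmax(gains))]
    chosen.append(x)
    f = features(x)
    y = high_fidelity(x)                       # run the expensive code once
    S_f = Sigma @ f
    Sigma = Sigma - np.outer(S_f, S_f) / (f @ S_f + sigma2)  # rank-1 update

print("design points chosen:", [f"{x:.2f}" for x in chosen])
print("posterior std of (a, b):", np.sqrt(np.diag(Sigma)).round(4))
```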
Investigation of microgravity effects on solidification phenomena of selected materials
NASA Technical Reports Server (NTRS)
Maag, Carl R.; Hansen, Patricia A.
1992-01-01
A Get Away Special (GAS) experiment payload to investigate microgravity effects on solidification phenomena of selected experimental samples has been designed for flight. It is intended that the first flight of the assembly will (1) study the p-n junction characteristics for advancing semiconductor device applications, (2) study the effects of gravity-driven convection on the growth of HgCd crystals, (3) compare the textures of the sample which crystallizes in microgravity with those found in chondrite meteorites, and (4) modify glass optical characteristics through divalent oxygen exchange. The space flight experiment consists of many small furnaces. While the experiment payload is in the low gravity environment of orbital flight, the payload controller will sequentially activate the furnaces to heat samples to their melt state and then allow cooling to resolidification in a controlled fashion. The materials processed in the microgravity environment of space will be compared to the same materials processed on earth in a one-gravity environment. This paper discusses the design of all subassemblies (furnace, electronics, and power systems) in the experiment. A complete description of the experimental materials is also presented.
Barone, Vincenzo; Bellina, Fabio; Biczysko, Malgorzata; Bloino, Julien; Fornaro, Teresa; Latouche, Camille; Lessi, Marco; Marianetti, Giulia; Minei, Pierpaolo; Panattoni, Alessandro; Pucci, Andrea
2015-10-28
The possibilities offered by organic fluorophores in the preparation of advanced plastic materials have been increased by designing novel alkynylimidazole dyes, featuring different push and pull groups. This new family of fluorescent dyes was synthesized by means of a one-pot sequential bromination-alkynylation of the heteroaromatic core, and their optical properties were investigated in tetrahydrofuran and in poly(methyl methacrylate). An efficient in silico pre-screening scheme was devised, consisting of a step-by-step procedure that simulates electronic spectra using both simple vertical-energy and more sophisticated vibronic approaches. Such an approach was also extended to efficiently simulate one-photon absorption and emission spectra of the dyes in the polymer environment for their potential application in luminescent solar concentrators. Besides the specific applications of this novel material, the integration of computational and experimental techniques reported here provides an efficient protocol that can be applied to make a selection among similar dye candidates, which constitute the essential responsive part of those fluorescent plastic materials.
Sequential circuit design for radiation hardened multiple voltage integrated circuits
Clark, Lawrence T [Phoenix, AZ; McIver, III, John K.
2009-11-24
The present invention includes a radiation hardened sequential circuit, such as a bistable circuit, flip-flop, or other suitable design, that presents substantial immunity to ionizing radiation while simultaneously maintaining a low operating voltage. In one embodiment, the circuit includes a plurality of logic elements that operate on a relatively low voltage, and master and slave latches each having storage elements that operate on a relatively high voltage.
Mining of high utility-probability sequential patterns from uncertain databases
Zhang, Binbin; Fournier-Viger, Philippe; Li, Ting
2017-01-01
High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations, such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor, as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments both on real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds. PMID:28742847
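The candidate test at the core of such mining can be sketched with simplified definitions: take a pattern's utility as the sum of its item utilities over supporting sequences, and its probability as the expected support computed from per-sequence existence probabilities. Both definitions are simplifying assumptions for illustration and ignore the paper's itemset structure, projections, and pruning strategies.

```python
# Toy sketch of a HUPSPM-style candidate check (definitions simplified).
def is_subsequence(pattern, seq):
    it = iter(seq)
    return all(any(item == ev for ev in it) for item in pattern)

# Uncertain database: (sequence of items, {item: utility}, existence prob.)
db = [
    (["a", "b", "c"], {"a": 4, "b": 3, "c": 5}, 0.9),
    (["a", "c", "b"], {"a": 2, "b": 6, "c": 1}, 0.6),
    (["b", "c"],      {"b": 5, "c": 2},         0.8),
]

def evaluate(pattern, min_util=10, min_prob=1.0):
    support = [u for seq, u, _ in db if is_subsequence(pattern, seq)]
    utility = sum(sum(u[i] for i in pattern) for u in support)
    prob = sum(p for seq, _, p in db if is_subsequence(pattern, seq))
    return utility, prob, (utility >= min_util and prob >= min_prob)

for pat in (["a", "b"], ["b", "c"], ["a", "c"]):
    print(pat, evaluate(pat))
```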
Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.
Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty
2011-10-01
The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
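Event-based lag-1 analysis reduces to counting behavior transitions and asking which occur above chance. The sketch below computes adjusted residuals (Allison–Liker-style z scores) on a stand-in coded gaze stream; the state names and data are invented for illustration.

```python
# Lag-1 sequential analysis sketch: transition counts and adjusted residuals.
import numpy as np

states = ["clin_gaze_patient", "clin_gaze_chart", "pat_gaze_clin", "pat_gaze_away"]
rng = np.random.default_rng(7)
stream = list(rng.choice(len(states), 500))      # stand-in coded gaze stream

k = len(states)
counts = np.zeros((k, k))
for a, b in zip(stream, stream[1:]):             # lag-1 transitions
    counts[a, b] += 1

n = counts.sum()
row = counts.sum(axis=1, keepdims=True)
col = counts.sum(axis=0, keepdims=True)
expected = row * col / n
z = (counts - expected) / np.sqrt(
    expected * (1 - row / n) * (1 - col / n))    # adjusted residuals

i, j = np.unravel_index(np.argmax(np.abs(z)), z.shape)
print(f"most extreme transition: {states[i]} -> {states[j]}, z = {z[i, j]:+.2f}")
print("significant at |z| > 1.96:", abs(z[i, j]) > 1.96)
```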
Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damiani, Rick; Wendt, Fabian; Musial, Walter
The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.
Accurately controlled sequential self-folding structures by polystyrene film
NASA Astrophysics Data System (ADS)
Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse
2017-08-01
Four-dimensional (4D) printing overcomes traditional fabrication limitations by designing heterogeneous materials that enable the printed structures to evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing of self-folding structures that can be sequentially and accurately folded. When heated above their glass transition temperature, pre-strained polystyrene films shrink along the XY plane. In our process, silver ink traces printed on the film are used to provide heat stimuli by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and an angle-lock design. Theoretical analyses are done to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are achieved to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under controlled stimuli (electric current) and have potential applications in the fields of electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way, using silver ink printed on polystyrene films, to 4D print self-folding structures for electrically induced sequential folding with angular control.
Paturzo, Marco; Colaceci, Sofia; Clari, Marco; Mottola, Antonella; Alvaro, Rosaria; Vellone, Ercole
2016-01-01
Mixed methods designs: an innovative methodological approach for nursing research. Mixed methods (MM) research designs combine qualitative and quantitative approaches in the research process, in a single study or series of studies. Their use can provide a wider understanding of multifaceted phenomena. This article presents a general overview of the structure and design of MM to spread this approach in the Italian nursing research community. The MM designs most commonly used in the nursing field are the convergent parallel design, the sequential explanatory design, the exploratory sequential design and the embedded design. For each method a research example is presented. The use of MM can be an added value to improve clinical practices as, through the integration of qualitative and quantitative methods, researchers can better assess complex phenomena typical of nursing.
Structural Optimization for Reliability Using Nonlinear Goal Programming
NASA Technical Reports Server (NTRS)
El-Sayed, Mohamed E.
1999-01-01
This report details the development of a reliability-based multi-objective design tool for solving structural optimization problems. Based on two different optimization techniques, namely sequential unconstrained minimization and nonlinear goal programming, the developed design method has the capability to take into account the effects of variability on the proposed design through a user-specified reliability design criterion. In its sequential unconstrained minimization mode, the developed design tool uses a composite objective function, in conjunction with weight-ordered design objectives, in order to take into account conflicting and multiple design criteria. Design criteria of interest include structural weight, load-induced stress and deflection, and mechanical reliability. The nonlinear goal programming mode, on the other hand, provides a design method that eliminates the difficulty of having to define an objective function and constraints, while at the same time having the capability of handling rank-ordered design objectives or goals. For simulation purposes, the design of a pressure vessel cover plate was undertaken as a test bed for the newly developed design tool. The formulation of this structural optimization problem into sequential unconstrained minimization and goal programming form is presented. The resulting optimization problem was solved using: (i) the linear extended interior penalty function method algorithm; and (ii) Powell's conjugate directions method. Both single and multi-objective numerical test cases are included, demonstrating the design tool's capabilities as they apply to this design problem.
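Sequential unconstrained minimization (SUMT) converts a constrained problem into a sequence of unconstrained solves with a penalty that is tightened at each pass. Below is a toy two-variable weight-minimization sketch with an exterior quadratic penalty; it illustrates the technique only and is not the report's cover-plate formulation (which used a linear extended interior penalty function).

```python
# SUMT sketch: exterior quadratic penalty, tightened over successive solves.
import numpy as np
from scipy.optimize import minimize

def weight(x):                       # objective: proportional to material used
    t, r = x                         # wall thickness, plate radius (made up)
    return t * r

def g(x):                            # constraints written as g_i(x) <= 0
    t, r = x
    return np.array([2.0 - r,        # minimum radius requirement (capacity)
                     0.25 * r - t])  # stress-like thickness demand: t >= r/4

def penalized(x, rho):
    return weight(x) + rho * np.sum(np.maximum(0.0, g(x)) ** 2)

x = np.array([1.0, 3.0])
for rho in [1.0, 10.0, 100.0, 1000.0]:   # tighten the penalty sequentially
    res = minimize(lambda x: penalized(x, rho), x, method="Nelder-Mead")
    x = res.x
    print(f"rho={rho:7.1f}  x={np.round(x, 4)}  weight={weight(x):.4f}  "
          f"max violation={max(0.0, g(x).max()):.2e}")
```

The iterates approach the constrained optimum (t = 0.5, r = 2, weight = 1) from the infeasible side as rho grows, which is the characteristic behavior of exterior penalty SUMT.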
Chatterji, Madhabi
2016-12-01
This paper explores avenues for navigating evaluation design challenges posed by complex social programs (CSPs) and their environments when conducting studies that call for generalizable, causal inferences on an intervention's effectiveness. A definition of a CSP is provided, drawing on examples from different fields, and an evaluation case is analyzed in depth to derive seven (7) major sources of complexity that typify CSPs, threatening assumptions of textbook-recommended experimental designs for performing impact evaluations. Theoretically supported, alternative methodological strategies are discussed to navigate assumptions and counter the design challenges posed by the complex configurations and ecology of CSPs. Specific recommendations include: sequential refinement of the evaluation design through systems thinking; systems-informed logic modeling; and use of extended-term, mixed methods (ETMM) approaches with exploratory and confirmatory phases of the evaluation. In the proposed approach, logic models are refined through direct induction and interactions with stakeholders. To better guide assumption evaluation, question framing, and the selection of appropriate methodological strategies, a multiphase evaluation design is recommended. Copyright © 2016 Elsevier Ltd. All rights reserved.
Microcomputer Applications in Interaction Analysis.
ERIC Educational Resources Information Center
Wadham, Rex A.
The Timed Interval Categorical Observation Recorder (TICOR), a portable, battery powered microcomputer designed to automate the collection of sequential and simultaneous behavioral observations and their associated durations, was developed to overcome problems in gathering subtle interaction analysis data characterized by sequential flow of…
Claycamp, H Gregg; Kona, Ravikanth; Fahmy, Raafat; Hoag, Stephen W
2016-04-01
Qualitative risk assessment methods are often used as the first step to determining design space boundaries; however, quantitative assessments of risk with respect to the design space, i.e., calculating the probability of failure for a given severity, are needed to fully characterize design space boundaries. Quantitative risk assessment methods in design and operational spaces are a significant aid to evaluating proposed design space boundaries. The goal of this paper is to demonstrate a relatively simple strategy for design space definition using a simplified Bayesian Monte Carlo simulation. This paper builds on a previous paper that used failure mode and effects analysis (FMEA) qualitative risk assessment and Plackett-Burman design of experiments to identify the critical quality attributes. The results show that the sequential use of qualitative and quantitative risk assessments can focus the design of experiments on a reduced set of critical material and process parameters that determine a robust design space under conditions of limited laboratory experimentation. This approach provides a strategy by which the degree of risk associated with each known parameter can be calculated and allocates resources in a manner that manages risk to an acceptable level.
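The quantitative step, computing a probability of failure at each candidate operating point, can be sketched as a Monte Carlo sweep that propagates parameter uncertainty through an assumed process model. The model, specification limit, and all numbers below are illustrative assumptions, not the paper's system.

```python
# Monte Carlo design-space map: P(CQA fails spec) over a grid of settings.
import numpy as np

rng = np.random.default_rng(8)
n_mc = 5_000
# Uncertain model parameters (e.g., draws from a prior or posterior)
k1 = rng.normal(1.0, 0.08, n_mc)
k2 = rng.normal(0.5, 0.05, n_mc)

def cqa(speed, load):                      # assumed process model for the CQA
    return 100.0 - k1 * (speed - 30) ** 2 / 20 - k2 * (load - 5) ** 2

speeds = np.linspace(20, 40, 9)
loads = np.linspace(2, 8, 7)
spec_limit = 95.0                          # CQA must stay above this value

print("P(fail) map (rows: load, cols: speed)")
for load in loads:
    row = [np.mean(cqa(s, load) < spec_limit) for s in speeds]
    print("  ".join(f"{p:.2f}" for p in row))
# A design space candidate is the region where P(fail) stays below, say, 0.01.
```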
1984-06-01
[OCR residue from a 1984 conference program on sequential testing; recoverable items include the session title "A Truncated Sequential Probability Ratio Test" and keywords such as operational testing, reliability, bootstrap methods, missing data, and sequential testing.]
Online Graph Completion: Multivariate Signal Recovery in Computer Vision.
Kim, Won Hwa; Jalal, Mona; Hwang, Seongjae; Johnson, Sterling C; Singh, Vikas
2017-07-01
The adoption of "human-in-the-loop" paradigms in computer vision and machine learning is leading to various applications where the actual data acquisition (e.g., human supervision) and the underlying inference algorithms are closely intertwined. While classical work in active learning provides effective solutions when the learning module involves classification and regression tasks, many practical issues such as partially observed measurements, financial constraints, and even additional distributional or structural aspects of the data typically fall outside the scope of this treatment. For instance, with sequential acquisition of partial measurements of data that manifest as a matrix (or tensor), novel strategies for completion (or collaborative filtering) of the remaining entries have only been studied recently. Motivated by vision problems where we seek to annotate a large dataset of images via a crowdsourced platform or, alternatively, complement results from a state-of-the-art object detector using human feedback, we study the "completion" problem defined on graphs, where requests for additional measurements must be made sequentially. We design the optimization model in the Fourier domain of the graph and describe how ideas based on adaptive submodularity provide algorithms that work well in practice. On a large set of images collected from Imgur, we see promising results on images that are otherwise difficult to categorize. We also show applications to an experimental design problem in neuroimaging.
Júnez-Ferreira, H E; Herrera, G S
2013-04-01
This paper presents a new methodology for the optimal design of space-time hydraulic head monitoring networks and its application to the Valle de Querétaro aquifer in Mexico. The selection of the space-time monitoring points is done using a static Kalman filter combined with a sequential optimization method. The Kalman filter requires as input a space-time covariance matrix, which is derived from a geostatistical analysis. In each step, the sequential optimization method selects the space-time point that minimizes a function of the variance. We demonstrate the methodology by applying it to the redesign of the hydraulic head monitoring network of the Valle de Querétaro aquifer with the objective of selecting, from a set of monitoring positions and times, those that minimize spatiotemporal redundancy. The database for the geostatistical space-time analysis corresponds to information from 273 wells located within the aquifer for the period 1970-2007. A total of 1,435 hydraulic head data were used to construct the experimental space-time variogram. The results show that of the existing monitoring program's 418 space-time monitoring points, only 178 are not redundant. The implied reduction of monitoring costs was possible because the proposed method is successful in propagating information in space and time.
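The selection step follows directly from the covariance algebra: measuring candidate point i with error variance σ² shrinks the covariance by the rank-1 Kalman update, reducing the total variance by ||P e_i||²/(P_ii + σ²). A greedy pass repeatedly picks the point with the largest gain. The separable space-time covariance model below is an assumption standing in for the paper's fitted variogram.

```python
# Greedy space-time monitoring-point selection with a static Kalman filter.
import numpy as np

rng = np.random.default_rng(9)
pts = rng.uniform(0, 10, size=(60, 3))        # candidate (x, y, t) points
# Separable exponential space-time covariance (illustrative stand-in)
d_s = np.linalg.norm(pts[:, None, :2] - pts[None, :, :2], axis=-1)
d_t = np.abs(pts[:, None, 2] - pts[None, :, 2])
P = np.exp(-d_s / 3.0) * np.exp(-d_t / 2.0)   # prior covariance of heads
sigma2 = 0.1                                  # measurement error variance

selected = []
for _ in range(10):
    # Variance reduction from measuring point i: ||P e_i||^2 / (P_ii + sigma2)
    gains = np.einsum("ij,ij->j", P, P) / (np.diag(P) + sigma2)
    gains[selected] = -np.inf                 # don't pick a point twice
    i = int(np.argmax(gains))
    selected.append(i)
    Pi = P[:, i].copy()
    P -= np.outer(Pi, Pi) / (Pi[i] + sigma2)  # static Kalman (rank-1) update
    print(f"picked point {i:2d}, remaining total variance {np.trace(P):.2f}")
```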
Quasi-experimental study designs series-paper 1: introduction: two historical lineages.
Bärnighausen, Till; Røttingen, John-Arne; Rockers, Peter; Shemilt, Ian; Tugwell, Peter
2017-09-01
The objective of this study was to contrast the historical development of experiments and quasi-experiments and provide the motivation for a journal series on quasi-experimental designs in health research. A short historical narrative, with concrete examples, and arguments based on an understanding of the practice of health research and evidence synthesis. Health research has played a key role in developing today's gold standard for causal inference: the randomized, controlled, multiply blinded trial. Historically, allocation approaches developed from convenience and purposive allocation to alternate and, finally, to random allocation. This development was motivated both by concerns about manipulation in allocation and by statistical and theoretical developments demonstrating the power of randomization in creating counterfactuals for causal inference. In contrast to the sequential development of experiments, quasi-experiments originated at very different points in time, from very different scientific perspectives, and with frequent and long interruptions in their methodological development. Health researchers have only recently started to recognize the value of quasi-experiments for generating novel insights on causal relationships. While quasi-experiments are unlikely to replace experiments in generating the efficacy and safety evidence required for clinical guidelines and regulatory approval of medical technologies, quasi-experiments can play an important role in establishing the effectiveness of health care practice, programs, and policies. The papers in this series describe and discuss a range of important issues in utilizing quasi-experimental designs for primary research and quasi-experimental results for evidence synthesis.
Dose finding with the sequential parallel comparison design.
Wang, Jessie J; Ivanova, Anastasia
2014-01-01
The sequential parallel comparison design (SPCD) is a two-stage design recommended for trials with possibly high placebo response. A drug-placebo comparison in the first stage is followed in the second stage by placebo nonresponders being re-randomized between drug and placebo. We describe how SPCD can be used in trials where multiple doses of a drug or multiple treatments are compared with placebo and present two adaptive approaches. We detail how to analyze data in such trials and give recommendations about the allocation proportion to placebo in the two stages of SPCD.
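One commonly described efficacy analysis for SPCD with a binary endpoint combines the stage-1 (all comers) and stage-2 (re-randomized placebo nonresponders) comparisons through a weighted sum of z-statistics. The sketch below assumes that form, with a hypothetical stage-1 weight and made-up counts; it is not the specific dose-finding procedure of this paper.

```python
import numpy as np
from scipy.stats import norm

def two_prop_z(x_t, n_t, x_c, n_c):
    """Z-statistic comparing two response proportions (treatment vs control)."""
    p = (x_t + x_c) / (n_t + n_c)
    se = np.sqrt(p * (1 - p) * (1 / n_t + 1 / n_c))
    return (x_t / n_t - x_c / n_c) / se

def spcd_test(stage1, stage2, w=0.6):
    """Weighted-z SPCD analysis: weight w on stage 1 (all comers), the
    rest on stage 2 (re-randomized placebo nonresponders)."""
    z = w * two_prop_z(*stage1) + np.sqrt(1 - w ** 2) * two_prop_z(*stage2)
    return z, 1 - norm.cdf(z)   # one-sided p-value

# Hypothetical counts: (responders_drug, n_drug, responders_placebo, n_placebo).
z, p = spcd_test(stage1=(30, 100, 40, 200), stage2=(25, 75, 15, 75))
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```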
van Maanen, Leendert; van Rijn, Hedderik; Taatgen, Niels
2012-01-01
This article discusses how sequential sampling models can be integrated in a cognitive architecture. The new theory Retrieval by Accumulating Evidence in an Architecture (RACE/A) combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use RACE/A to model data from two variants of a picture-word interference task in a psychological refractory period design. These models will demonstrate how RACE/A enables interactions between sequential sampling and long-term declarative learning, and between sequential sampling and task control. In a traditional sequential sampling model, the onset of the process within the task is unclear, as is the number of sampling processes. RACE/A provides a theoretical basis for estimating the onset of sequential sampling processes during task execution and allows for easy modeling of multiple sequential sampling processes within a task.
A cycloidal wobble motor driven by shape memory alloy wires
NASA Astrophysics Data System (ADS)
Hwang, Donghyun; Higuchi, Toshiro
2014-05-01
A cycloidal wobble motor driven by shape memory alloy (SMA) wires is proposed. To realize the motor driving mechanism, well known as a type of reduction system, a cycloidal gear mechanism is utilized. It facilitates bidirectional continuous rotation with high-torque capability, based on its high efficiency and high reduction ratio. The driving mechanism consists of a pin/roller-based annular gear as a wobbler, a cycloidal disc as a rotor, and crankshafts to guide the eccentric wobbling motion. The wobbling motion of the annular gear is generated by sequential activation of radially phase-symmetrically placed SMA wires. Consequently, the cycloidal disc is rotated by rolling-contact-based cycloidal gearing between the wobbler and the rotor. In designing the proposed motor, thermomechanical characterization of an SMA wire biased by extension springs is performed experimentally. Then, a simplified geometric model of the motor is devised to assess theoretically the effects of design parameters on structural features and working performance. With consideration of the results from the parametric analysis, a functional prototype three-phase motor is fabricated for experimental verification of working performance. The observed experimental results, including output torque, rotational speed, and bidirectional positioning characteristics, clearly demonstrate the practical applicability and potential of the wobble motor.
NASA Astrophysics Data System (ADS)
Song, Xingliang; Sha, Pengfei; Fan, Yuanyuan; Jiang, R.; Zhao, Jiangshan; Zhou, Yi; Yang, Junhong; Xiong, Guangliang; Wang, Yu
2018-02-01
Due to the complex kinetics of formation and loss mechanisms, such as ion-ion recombination, neutral-species harpoon reactions, excited-state quenching and photon absorption, as well as their interactions, the performance of an excimer laser varies greatly with the laser gas medium parameters. Therefore, the effects of gas composition and total gas pressure on excimer laser performance continue to attract research attention. In this work, orthogonal experimental design (OED) is used to investigate quantitative and qualitative correlations between output laser energy characteristics and gas medium parameters for an ArF excimer laser operating with a plano-plano optical resonator. Optimized output laser energy with good pulse-to-pulse stability can be obtained by proper selection of the gas medium parameters, which makes the most of the ArF excimer laser device. A simple and efficient method for gas medium optimization is proposed and demonstrated experimentally, providing a global and systematic solution. Detailed statistical analysis yields the significance ranking of the relevant parameter factors and the optimized gas medium composition. Compared with the conventional route of varying a single gas parameter at a time, this paper presents a more comprehensive approach that considers multiple variables simultaneously, which seems promising for striking an appropriate balance among various interacting parameters in power-scaling studies of excimer lasers.
Evaluation Using Sequential Trials Methods.
ERIC Educational Resources Information Center
Cohen, Mark E.; Ralls, Stephen A.
1986-01-01
Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.
2017-04-12
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. Numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
Probing finite coarse-grained virtual Feynman histories with sequential weak values
NASA Astrophysics Data System (ADS)
Georgiev, Danko; Cohen, Eliahu
2018-05-01
Feynman's sum-over-histories formulation of quantum mechanics has been considered a useful calculational tool in which virtual Feynman histories entering into a coherent quantum superposition cannot be individually measured. Here we show that sequential weak values, inferred by consecutive weak measurements of projectors, allow direct experimental probing of individual virtual Feynman histories, thereby revealing the exact nature of quantum interference of coherently superposed histories. Because the total sum of sequential weak values of multitime projection operators for a complete set of orthogonal quantum histories is unity, complete sets of weak values could be interpreted in agreement with the standard quantum mechanical picture. We also elucidate the relationship between sequential weak values of quantum histories with different coarse graining in time and establish the incompatibility of weak values for nonorthogonal quantum histories in history Hilbert space. Bridging theory and experiment, the presented results may enhance our understanding of both weak values and quantum histories.
Zeelenberg, René; Pecher, Diane
2015-03-01
Counterbalanced designs are frequently used in the behavioral sciences. Studies often counterbalance either the order in which conditions are presented in the experiment or the assignment of stimulus materials to conditions. Occasionally, researchers need to simultaneously counterbalance both condition order and stimulus assignment to conditions. Lewis (1989; Behavior Research Methods, Instruments, & Computers 25:414-415, 1993) presented a method for constructing Latin squares that fulfill these requirements. The resulting Latin squares counterbalance immediate sequential effects, but not remote sequential effects. Here, we present a new method for generating Latin squares that simultaneously counterbalance both immediate and remote sequential effects and assignment of stimuli to conditions. An Appendix is provided to facilitate implementation of these Latin square designs.
Anomalous weak values and the violation of a multiple-measurement Leggett-Garg inequality
NASA Astrophysics Data System (ADS)
Avella, Alessio; Piacentini, Fabrizio; Borsarelli, Michelangelo; Barbieri, Marco; Gramegna, Marco; Lussana, Rudi; Villa, Federica; Tosi, Alberto; Degiovanni, Ivo Pietro; Genovese, Marco
2017-11-01
Quantum mechanics presents peculiar properties that, on the one hand, have been the subject of several theoretical and experimental studies about its very foundations and, on the other hand, provide tools for developing new technologies, the so-called quantum technologies. The nonclassicality pointed out by Leggett-Garg inequalities has represented, together with Bell inequalities, one of the most investigated subjects. In this article we study the connection between Leggett-Garg inequalities and weak values, an emerging field of quantum measurement, in the case of a series of sequential measurements on a single object. In detail, we perform an experimental study of the four-time-correlator Leggett-Garg test by exploiting single and sequential weak measurements performed on heralded single photons.
Palmer, Matthew A; Brewer, Neil
2012-06-01
When compared with simultaneous lineup presentation, sequential presentation has been shown to reduce false identifications to a greater extent than it reduces correct identifications. However, there has been much debate about whether this difference in identification performance represents improved discriminability or more conservative responding. In this research, data from 22 experiments that compared sequential and simultaneous lineups were analyzed using a compound signal-detection model, which is specifically designed to describe decision-making performance on tasks such as eyewitness identification tests. Sequential (cf. simultaneous) presentation did not influence discriminability, but produced a conservative shift in response bias that resulted in less-biased choosing for sequential than simultaneous lineups. These results inform understanding of the effects of lineup presentation mode on eyewitness identification decisions.
Numerical study on the sequential Bayesian approach for radioactive materials detection
NASA Astrophysics Data System (ADS)
Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng
2013-01-01
A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research on radioactive material detection. Compared with commonly adopted detection methods based on classical statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times when analyzing spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation experiment platform implementing the methodology of the sequential Bayesian approach was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were obtained from an event-sequence generator using Monte Carlo sampling theory to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
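A much-reduced sketch of the sequential Bayesian idea, assuming binned Poisson counts rather than Candy's event-mode (photon-by-photon) formulation: the log-likelihood ratio of "source present" versus "background only" is accumulated bin by bin and compared against posterior-odds thresholds. The rates and thresholds are illustrative.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)

B, S = 5.0, 2.0   # assumed background and source count rates (counts/s)
dt = 1.0          # time-bin width (s)
log_lr = 0.0      # running log-likelihood ratio: source+background vs background
upper, lower = np.log(99.0), np.log(1.0 / 99.0)   # ~99% posterior-odds thresholds

# Simulated measurement stream with a source actually present.
for t in range(1, 61):
    k = rng.poisson((B + S) * dt)   # observed counts in this bin
    log_lr += poisson.logpmf(k, (B + S) * dt) - poisson.logpmf(k, B * dt)
    if log_lr >= upper:
        print(f"source declared present after {t} s")
        break
    if log_lr <= lower:
        print(f"source declared absent after {t} s")
        break
else:
    print("no decision within 60 s")
```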
Boudreaux, Edwin D; Miller, Ivan; Goldstein, Amy B; Sullivan, Ashley F; Allen, Michael H; Manton, Anne P; Arias, Sarah A; Camargo, Carlos A
2013-09-01
Due to the concentration of individuals at risk for suicide, an emergency department visit represents an opportune time for suicide risk screening and intervention. The Emergency Department Safety Assessment and Follow-up Evaluation (ED-SAFE) uses a quasi-experimental, interrupted time series design to evaluate whether (1) a practical approach to universally screening ED patients for suicide risk leads to improved detection of suicide risk and (2) a multi-component intervention delivered during and after the ED visit improves suicide-related outcomes. This paper summarizes the ED-SAFE study design and methods within the context of considerations relevant to effectiveness research in suicide prevention and pertinent human participants concerns. A total of 1440 suicidal individuals from 8 general EDs nationally will be enrolled during three sequential phases of data collection (480 individuals/phase): (1) Treatment as Usual; (2) Universal Screening; and (3) Intervention. Data from the three phases will inform two separate evaluations: Screening Outcome (Phases 1 and 2) and Intervention (Phases 2 and 3). Individuals will be followed for 12 months. The primary study outcome is a composite reflecting completed suicide, attempted suicide, aborted or interrupted attempts, and implementation of rescue procedures during an outcome assessment. While 'classic' randomized controlled trials (RCTs) are typically selected over quasi-experimental designs, ethical and methodological issues may make an RCT a poor fit for complex interventions in an applied setting, such as the ED. ED-SAFE represents an innovative approach to examining the complex public health issue of suicide prevention through a multi-phase, quasi-experimental design embedded in 'real world' clinical settings.
Online optimal experimental re-design in robotic parallel fed-batch cultivation facilities.
Cruz Bournazou, M N; Barz, T; Nickel, D B; Lopez Cárdenas, D C; Glauche, F; Knepper, A; Neubauer, P
2017-03-01
We present an integrated framework for the online optimal experimental re-design applied to parallel nonlinear dynamic processes that aims to precisely estimate the parameter set of macro kinetic growth models with minimal experimental effort. This provides a systematic solution for rapid validation of a specific model to new strains, mutants, or products. In biosciences, this is especially important as model identification is a long and laborious process, which continues to limit the use of mathematical modeling in this field. The strength of this approach is demonstrated by fitting a macro-kinetic differential equation model for Escherichia coli fed-batch processes after 6 h of cultivation. The system includes two fully-automated liquid handling robots; one containing eight mini-bioreactors and another used for automated at-line analyses, which allows for the immediate use of the available data in the modeling environment. As a result, the experiment can be continually re-designed while the cultivations are running, using the information generated by periodical parameter estimations. The advantages of an online re-computation of the optimal experiment are proven by a 50-fold lower average coefficient of variation on the parameter estimates compared to the sequential method (4.83% instead of 235.86%). The success obtained in such a complex system is a further step towards more efficient computer-aided bioprocess development.
SEQUENTIAL EXTRACTIONS FOR PARTITIONING OF ARSENIC ON HYDROUS IRON OXIDES AND IRON SULFIDES
The objective of this study was to use model solids to test solutions designed to extract arsenic from relatively labile solid phase fractions. The use of sequential extractions provides analytical constraints on the identification of mineral phases that control arsenic mobility...
THRESHOLD ELEMENTS AND THE DESIGN OF SEQUENTIAL SWITCHING NETWORKS.
The report covers research performed from March 1966 to March 1967. The major topics treated are: (1) methods for finding weight-threshold vectors...that realize a given switching function in multithreshold linear logic; (2) synthesis of sequential machines by means of shift registers and simple
NASA Technical Reports Server (NTRS)
Sellen, J. M., Jr.; Kemp, R. F.; Hall, D. F.
1973-01-01
Doubly to singly charged mercury ion ratios in electron bombardment ion thruster exhaust beams have been determined as functions of bombardment discharge potential, thrust beam current, thrust beam radial position, acceleration-deceleration voltage ratio, and propellant utilization fraction. A mathematical model for two-step ionization processes has been derived, and calculated ion ratios are compared to observed ratios. Production of Hg(++) appears to result primarily from sequential ionization of Hg(+) in the discharge. Experimental and analytical results are presented, and design, construction, and operation features of an electrostatic deflection ion time-of-flight analyzer for the determination of the above-mentioned ratios are reviewed.
Color Breakup In Sequentially-Scanned LC Displays
NASA Technical Reports Server (NTRS)
Arend, L.; Lubin, J.; Gille, J.; Larimer, J.; Statler, Irving C. (Technical Monitor)
1994-01-01
In sequentially-scanned liquid-crystal displays the chromatic components of color pixels are distributed in time. For such displays eye, head, display, and image-object movements can cause the individual color elements to be visible. We analyze conditions (scan designs, types of eye movement) likely to produce color breakup.
Sequential Requests and the Problem of Message Sampling.
ERIC Educational Resources Information Center
Cantrill, James Gerard
S. Jackson and S. Jacobs's criticism of "single message" designs in communication research served as a framework for a study that examined the differences between various sequential request paradigms. The study sought to answer the following questions: (1) What were the most naturalistic request sequences assured to replicate…
The Motivating Language of Principals: A Sequential Transformative Strategy
ERIC Educational Resources Information Center
Holmes, William Tobias
2012-01-01
This study implemented a Sequential Transformative Mixed Methods design with teachers (as recipients) and principals (to give voice) in the examination of principal talk in two different school accountability contexts (Continuously Improving and Continuously Zigzag) using the conceptual framework of Motivating Language Theory. In phase one,…
Sequential and simultaneous choices: testing the diet selection and sequential choice models.
Freidin, Esteban; Aw, Justine; Kacelnik, Alex
2009-03-01
We investigate simultaneous and sequential choices in starlings, using Charnov's Diet Choice Model (DCM) and Shapiro, Siller and Kacelnik's Sequential Choice Model (SCM) to integrate function and mechanism. During a training phase, starlings encountered one food-related option per trial (A, B or R) in random sequence and with equal probability. A and B delivered food rewards after programmed delays (shorter for A), while R ('rejection') moved directly to the next trial without reward. In this phase we measured latencies to respond. In a later, choice, phase, birds encountered the pairs A-B, A-R and B-R, the first implementing a simultaneous choice and the second and third sequential choices. The DCM predicts when R should be chosen to maximize intake rate, and SCM uses latencies of the training phase to predict choices between any pair of options in the choice phase. The predictions of both models coincided, and both successfully predicted the birds' preferences. The DCM does not deal with partial preferences, while the SCM does, and experimental results were strongly correlated to this model's predictions. We believe that the SCM may expose a very general mechanism of animal choice, and that its wider domain of success reflects the greater ecological significance of sequential over simultaneous choices.
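A sketch of the SCM prediction step described above, under the assumption that choice follows a race between latency distributions: the predicted preference for A over B is the probability that a latency drawn from A's single-option (training-phase) distribution is shorter than one drawn from B's. The latency distributions here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def scm_preference(lat_a, lat_b, n=100_000):
    """Predicted P(choose A over B): probability that a latency resampled
    from A's training-phase distribution beats one resampled from B's."""
    a = rng.choice(lat_a, size=n)
    b = rng.choice(lat_b, size=n)
    return np.mean(a < b) + 0.5 * np.mean(a == b)   # ties split evenly

# Synthetic training-phase latencies (s); A had the shorter food delay.
lat_A = rng.lognormal(mean=0.0, sigma=0.4, size=200)
lat_B = rng.lognormal(mean=0.5, sigma=0.4, size=200)
print(f"predicted P(choose A over B) = {scm_preference(lat_A, lat_B):.2f}")
```

Because the prediction is a probability rather than a binary winner, this formulation naturally accommodates the partial preferences that the DCM cannot express.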
Awad, Ghada E A; Amer, Hassan; El-Gammal, Eman W; Helmy, Wafaa A; Esawy, Mona A; Elnashar, Magdy M M
2013-04-02
A sequential optimization strategy, based on statistical experimental designs, was employed to enhance the production of invertase by Lactobacillus brevis Mm-6 isolated from breast milk. First, a 2-level Plackett-Burman design was applied to screen the bioprocess parameters that significantly influence invertase production. The second optimization step used a fractional factorial design to optimize the amounts of the variables with the highest positive significant effects on invertase production. A maximal enzyme activity of 1399 U/ml was more than five-fold the activity obtained using the basal medium. Invertase was immobilized onto grafted alginate beads to improve the enzyme's stability. Immobilization increased the operational temperature from 30 to 60°C compared to the free enzyme. The reusability test proved the durability of the grafted alginate beads for 15 cycles with retention of 100% of the immobilized enzyme activity, making the preparation more convenient for industrial use.
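For reference, a 2-level Plackett-Burman screening design of the kind used in the first step can be constructed by cyclic shifts of a standard generator row. The sketch below builds the common 12-run design for up to 11 factors and checks the column orthogonality that makes main-effect screening possible.

```python
import numpy as np

def plackett_burman_12():
    """12-run Plackett-Burman screening design for up to 11 two-level
    factors, built by cyclically shifting the standard generator row
    and appending a row of low levels."""
    gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
    rows = [np.roll(gen, i) for i in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows, dtype=int)

D = plackett_burman_12()
# Columns of a PB design are mutually orthogonal: D.T @ D = 12 * I.
assert np.array_equal(D.T @ D, 12 * np.eye(11, dtype=int))
print(D)
```

Each row is one fermentation run; each column assigns one medium or process factor to its high (+1) or low (-1) level, so main effects can be estimated independently of one another.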
ERIC Educational Resources Information Center
Karademir, Yavuz; Demir, Selcuk Besir
2015-01-01
The aim of this study is to ascertain the problems social studies teachers face in the teaching of topics covered in the 8th grade TRHRK Course. The study was conducted in line with the explanatory sequential mixed-methods design, one of the mixed research methods. The study involves three phases. In the first step, exploratory process…
A high level language for a high performance computer
NASA Technical Reports Server (NTRS)
Perrott, R. H.
1978-01-01
The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.
Hierarchical Bayes Models for Response Time Data
ERIC Educational Resources Information Center
Craigmile, Peter F.; Peruggia, Mario; Van Zandt, Trisha
2010-01-01
Human response time (RT) data are widely used in experimental psychology to evaluate theories of mental processing. Typically, the data constitute the times taken by a subject to react to a succession of stimuli under varying experimental conditions. Because of the sequential nature of the experiments there are trends (due to learning, fatigue,…
NASA Astrophysics Data System (ADS)
Hayat, A. Z.; Wahyu, W.; Kurnia
2018-05-01
This study aims to find out the improvement of students' cognitive ability upon the implementation of a cooperative peer-tutoring learning model using a problem-solving approach. The research method used is a mixed-methods sequential explanatory strategy with a pretest-posttest non-equivalent control group design. The participants were 68 grade-10 students of a vocational high school in Bandung: 34 in the experimental class and 34 in the control class. The instruments used include a written test and questionnaires. The improvement of students' cognitive ability was calculated using the N-gain formula. Differences between the two average scores were tested using a t-test at a significance level of α = 0.05. The results show that the improvement of cognitive ability in the experimental class was significantly different from that in the control class at the significance level of α = 0.05; the improvement in the experimental class was higher than in the control class.
Group sequential designs for stepped-wedge cluster randomised trials
Grayling, Michael J; Wason, James MS; Mander, Adrian P
2017-01-01
Background/Aims: The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Methods: Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. Results: We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial’s type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. Conclusion: The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial. PMID:28653550
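A minimal sketch of the error-spending calculation referred to above, for a one-sided two-look design under the usual normal approximation, with a power-family spending function f(t) = αt². The stepped-wedge specifics (the linear mixed model, REML estimation, and quantile substitution) are omitted, and the spending function is an assumed example.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

alpha = 0.025       # one-sided type-I error rate
t1 = 0.5            # information fraction at the interim analysis
rho = np.sqrt(t1)   # Corr(Z1, Z2) for group sequential z-statistics

def spend(t):
    return alpha * t ** 2   # assumed power-family spending function

a1 = spend(t1)
c1 = norm.ppf(1 - a1)       # interim efficacy boundary

def excess(c2):
    # P(Z1 < c1, Z2 > c2) should equal the remaining error alpha - a1.
    joint = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
    return (norm.cdf(c1) - joint.cdf([c1, c2])) - (alpha - a1)

c2 = brentq(excess, 0.5, 4.0)   # final-analysis boundary
print(f"efficacy boundaries: c1 = {c1:.3f}, c2 = {c2:.3f}")
```

Crossing c1 at the interim look stops the trial early for efficacy; otherwise the final test is performed against c2, with the overall type-I error held at α.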
The human as a detector of changes in variance and bandwidth
NASA Technical Reports Server (NTRS)
Curry, R. E.; Govindaraj, T.
1977-01-01
The detection of changes in random process variance and bandwidth was studied. Psychophysical thresholds for these two parameters were determined using an adaptive staircase technique for second-order random processes at two nominal periods (1 and 3 seconds) and damping ratios (0.2 and 0.707). Thresholds for bandwidth changes were approximately 9% of nominal, except for the (3 sec, 0.2) process, which yielded thresholds of 12%. Variance thresholds averaged 17% of nominal, except for the (3 sec, 0.2) process, in which they were 32%. Detection times for suprathreshold changes in the parameters may be roughly described by the changes in RMS velocity of the process. A more complex model is presented which consists of a Kalman filter designed for the nominal process, using velocity as the input, and a modified Wald sequential test for changes in the variance of the residual. The model predictions agree moderately well with the experimental data. Models using heuristics, e.g., level-crossing counters, were also examined; they are descriptive but do not afford the unification of the Kalman filter/sequential test model used for changes in mean.
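A sketch of the sequential-test component of the model, assuming the Kalman filter has already whitened the process so that its residuals are nominally N(0, σ₀²): a Wald SPRT accumulates the log-likelihood ratio for an increased residual variance σ₁² against the nominal one. The variances and error rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def variance_sprt(resid, s0=1.0, s1=1.5, alpha=0.05, beta=0.05):
    """Wald SPRT on filter residuals: H0 sd = s0 versus H1 sd = s1.
    Returns the decision and the number of samples consumed."""
    A, B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(resid, start=1):
        # Log-likelihood ratio increment for one N(0, s^2) observation.
        llr += np.log(s0 / s1) + x ** 2 * (1 / (2 * s0 ** 2) - 1 / (2 * s1 ** 2))
        if llr >= A:
            return "variance increased", n
        if llr <= B:
            return "variance nominal", n
    return "no decision", len(resid)

print(variance_sprt(rng.normal(0.0, 1.0, 400)))  # nominal residuals
print(variance_sprt(rng.normal(0.0, 1.5, 400)))  # post-change residuals
```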
Jimenez, Julie; Gonidec, Estelle; Cacho Rivero, Jesús Andrés; Latrille, Eric; Vedrenne, Fabien; Steyer, Jean-Philippe
2014-03-01
Advanced dynamic anaerobic digestion models, such as ADM1, require both detailed organic matter characterisation and intimate knowledge of the involved metabolic pathways. In the current study, a methodology for municipal sludge characterisation is investigated to describe two key parameters: biodegradability and bioaccessibility of organic matter. The methodology is based on coupling sequential chemical extractions with 3D fluorescence spectroscopy. The use of increasingly strong solvents reveals different levels of organic matter accessibility, and the spectroscopy measurement leads to a detailed characterisation of the organic matter. The results obtained from testing 52 municipal sludge samples (primary, secondary, digested and thermally treated) correlated successfully with sludge biodegradability and bioaccessibility. The two parameters, traditionally obtained through biochemical methane potential (BMP) lab tests, are now obtained in only 5 days, compared to the 30-60 days usually required. Experimental data, obtained from two different laboratory-scale reactors, were used to validate the ADM1 model. The proposed approach showed a strong application potential for reactor design and advanced control of anaerobic digestion processes.
Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens
2009-11-01
In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one of them, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that, despite the very similar solutions obtained with both strategies, the problems found with the strategy using the deterministic model, such as lack of convergence and high computational time, make the optimization strategy with the statistical model, which proved robust and fast, more suitable for the flash fermentation process and recommended for real-time applications coupling optimization and control.
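The nonlinear program can be sketched with SciPy's SLSQP implementation of sequential quadratic programming: maximize a productivity model subject to a minimum substrate conversion. The quadratic response-surface models and their coefficients below are hypothetical stand-ins for the paper's deterministic or statistical models.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical response-surface models in x = (dilution rate, recycle ratio).
def productivity(x):
    d, r = x
    return 3.0 * d + 1.5 * r - 2.0 * d ** 2 - 0.8 * r ** 2 + 0.5 * d * r

def conversion(x):
    d, r = x
    return 0.98 - 0.35 * d + 0.10 * r

TARGET = 0.92   # desired substrate conversion (assumed)

res = minimize(
    lambda x: -productivity(x),           # SLSQP minimizes, so negate
    x0=np.array([0.3, 0.3]),
    method="SLSQP",
    bounds=[(0.05, 1.0), (0.0, 1.0)],
    constraints=[{"type": "ineq", "fun": lambda x: conversion(x) - TARGET}],
)
print("optimum:", res.x, "productivity:", -res.fun,
      "conversion:", conversion(res.x))
```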
NASA Astrophysics Data System (ADS)
Shi, Yunzhou; Zhang, Delong; Huff, Terry B.; Wang, Xiaofei; Shi, Riyi; Xu, Xiao-Ming; Cheng, Ji-Xin
2011-10-01
In vivo imaging of white matter is important for the mechanistic understanding of demyelination and evaluation of remyelination therapies. Although white matter can be visualized by a strong coherent anti-Stokes Raman scattering (CARS) signal from axonal myelin, in vivo repetitive CARS imaging of the spinal cord remains a challenge due to complexities induced by the laminectomy surgery. We present a careful experimental design that enabled longitudinal CARS imaging of de- and remyelination at the single-axon level in live rats. Secretory phospholipase A2-induced myelin vesiculation, macrophage uptake of myelin debris, and spontaneous remyelination by Schwann cells were sequentially monitored by in vivo CARS imaging over a 3-week period. Longitudinal visualization of de- and remyelination at a single-axon level provides a novel platform for rational design of therapies aimed at promoting myelin plasticity and repair.
A “SMART” Design for Building Individualized Treatment Sequences
Lei, H.; Nahum-Shani, I.; Lynch, K.; Oslin, D.; Murphy, S.A.
2013-01-01
Interventions often involve a sequence of decisions. For example, clinicians frequently adapt the intervention to an individual’s outcomes. Altering the intensity and type of intervention over time is crucial for many reasons, such as to obtain improvement if the individual is not responding or to reduce costs and burden when intensive treatment is no longer necessary. Adaptive interventions utilize individual variables (severity, preferences) to adapt the intervention and then dynamically utilize individual outcomes (response to treatment, adherence) to readapt the intervention. The Sequential Multiple Assignment Randomized Trial (SMART) provides high-quality data that can be used to construct adaptive interventions. We review the SMART and highlight its advantages in constructing and revising adaptive interventions as compared to alternative experimental designs. Selected examples of SMART studies are described and compared. A data analysis method is provided and illustrated using data from the Extending Treatment Effectiveness of Naltrexone SMART study. PMID:22224838
Integrated trimodal SSEP experimental setup for visual, auditory and tactile stimulation
NASA Astrophysics Data System (ADS)
Kuś, Rafał; Spustek, Tomasz; Zieleniewska, Magdalena; Duszyk, Anna; Rogowski, Piotr; Suffczyński, Piotr
2017-12-01
Objective. Steady-state evoked potentials (SSEPs), the brain responses to repetitive stimulation, are commonly used in both clinical practice and scientific research. Particular brain mechanisms underlying SSEPs in different modalities (i.e. visual, auditory and tactile) are very complex and still not completely understood. Each response has distinct resonant frequencies and exhibits a particular brain topography. Moreover, the topography can be frequency-dependent, as in the case of auditory potentials. However, to study each modality separately and also to investigate multisensory interactions through multimodal experiments, a proper experimental setup appears to be of critical importance. The aim of this study was to design and evaluate a novel SSEP experimental setup providing repetitive stimulation in three different modalities (visual, tactile and auditory) with precise control of stimulus parameters. Results from a pilot study with stimulation in a particular modality and in two modalities simultaneously prove the feasibility of the device for studying the SSEP phenomenon. Approach. We developed a setup of three separate stimulators that allows for precise generation of repetitive stimuli. Besides sequential stimulation in a particular modality, parallel stimulation in up to three different modalities can be delivered. The stimulus in each modality is characterized by a stimulation frequency and a waveform (sine or square wave). We also present a novel methodology for the analysis of SSEPs. Main results. Apart from constructing the experimental setup, we conducted a pilot study with both sequential and simultaneous stimulation paradigms. EEG signals recorded during this study were analyzed with advanced methodology based on spatial filtering and adaptive approximation, followed by statistical evaluation. Significance. We developed a novel experimental setup for performing SSEP experiments. In this sense our study continues the ongoing research in this field. On the other hand, the described setup, along with the presented methodology, is a considerable improvement and an extension of methods constituting the state-of-the-art in the related field. The flexibility of the device, together with the developed analysis methodology, can lead to further development of diagnostic methods and provide deeper insight into information processing in the human brain.
Ivanova, Anastasia; Zhang, Zhiwei; Thompson, Laura; Yang, Ying; Kotz, Richard M; Fang, Xin
2016-01-01
Sequential parallel comparison design (SPCD) was proposed for trials with high placebo response. In the first stage of SPCD, subjects are randomized between placebo and active treatment. In the second stage, placebo nonresponders are re-randomized between placebo and active treatment. Data from the population of "all comers" and the subpopulations of placebo nonresponders are then combined to yield a single p-value for the treatment comparison. The two-way enriched design (TED) is an extension of SPCD in which active treatment responders are also re-randomized between placebo and active treatment in Stage 2. This article investigates the potential uses of SPCD and TED in medical device trials.
Sequential and prosodic design of English and Greek non-valenced news receipts.
Kaimaki, Marianna
2012-03-01
Results arising from a prosodic and interactional study of the organization of everyday talk in English suggest that news receipts can be grouped into two categories: valenced (e.g., oh good) and non-valenced (e.g., oh really). In-depth investigation of both valenced and non-valenced news receipts shows that differences in their prosodic design do not seem to affect the sequential structure of the news informing sequence. News receipts with falling and rising pitch may have the same uptake and are treated in the same way by co-participants. A preliminary study of a Greek telephone corpus yielded the following receipts of news announcements: a malista, a(h) orea, a ne, a, oh. These are news markers composed of a standalone particle or a particle followed by an adverb or a response token (ne). Analysis of the sequential and prosodic design of Greek news announcement sequences is made to determine any interactional patterns and/or prosodic constraints. By examining the way in which co-participants display their interpretation of these turns I show that the phonological systems of contrast are different depending on the sequential environment, in much the same way that consonantal systems of contrast are not the same syllable initially and finally.
Prototype color field sequential television lens assembly
NASA Technical Reports Server (NTRS)
1974-01-01
The design, development, and evaluation of a prototype modular lens assembly with a self-contained field-sequential color wheel are presented. The design of a color wheel of maximum efficiency, the selection of spectral filters, and the design of a quiet, efficient wheel-drive system are included. Design tradeoffs considered for each aspect of the modular assembly are discussed. Emphasis is placed on achieving a design which can be attached directly to an unmodified camera, thus permitting use of the assembly in evaluating various candidate camera and sensor designs. A technique is described which permits maintaining high optical efficiency with an unmodified camera. A motor synchronization system is developed which requires only the vertical synchronization signal as a reference frequency input. Equations and tradeoff curves are developed to permit optimizing the filter wheel aperture shapes for a variety of different design conditions.
An inquiry approach to science and language teaching
NASA Astrophysics Data System (ADS)
Rodriguez, Imelda; Bethel, Lowell J.
The purpose of this study was to determine the effectiveness of an inquiry approach to science and language teaching in further developing the classification and oral communication skills of bilingual Mexican American third graders. A random sample of 64 subjects was selected for experimental and control groups from a population of 120 bilingual Mexican American third graders. The Solomon Four-Group experimental design was employed. Pre- and posttesting were performed using the Goldstein-Sheerer Object Sorting Test (GSOST) and the Test of Oral Communication Skills (TOCS). The experimental group participated in a sequential series of science lessons which required manipulation of objects, exploration, peer interaction, and teacher-pupil interaction. The children made observations and comparisons of familiar objects and then grouped them on the basis of perceived and inferred attributes. Children worked individually and in small groups. Analysis of variance procedures were applied to the posttest scores to determine whether there was a significant improvement in classification and oral communication skills in the experimental group. The posttest scores indicated a significant improvement at the 0.01 level for the experimental group in both classification and oral communication skills. It was concluded that participation in the science inquiry lessons facilitated the development of classification and oral communication skills of bilingual children.
Synthesizing a novel genetic sequential logic circuit: a push-on push-off switch
Lou, Chunbo; Liu, Xili; Ni, Ming; Huang, Yiqi; Huang, Qiushi; Huang, Longwen; Jiang, Lingli; Lu, Dan; Wang, Mingcong; Liu, Chang; Chen, Daizhuo; Chen, Chongyi; Chen, Xiaoyue; Yang, Le; Ma, Haisu; Chen, Jianguo; Ouyang, Qi
2010-01-01
Design and synthesis of basic functional circuits are the fundamental tasks of synthetic biologists. Before it is possible to engineer higher-order genetic networks that can perform complex functions, a toolkit of basic devices must be developed. Among those devices, sequential logic circuits are expected to be the foundation of the genetic information-processing systems. In this study, we report the design and construction of a genetic sequential logic circuit in Escherichia coli. It can generate different outputs in response to the same input signal on the basis of its internal state, and ‘memorize' the output. The circuit is composed of two parts: (1) a bistable switch memory module and (2) a double-repressed promoter NOR gate module. The two modules were individually rationally designed, and they were coupled together by fine-tuning the interconnecting parts through directed evolution. After fine-tuning, the circuit could be repeatedly, alternatively triggered by the same input signal; it functions as a push-on push-off switch. PMID:20212522
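At the Boolean level, the push-on push-off behaviour amounts to a memory state feeding a NOR gate whose output rewrites the state, so the same input pulse alternately switches the output on and off. The sketch below is an abstract logic-level toy of that behaviour, not a model of the molecular implementation (promoters, repressors, and fine-tuned interconnections).

```python
# Minimal Boolean sketch of the push-on push-off behaviour: a bistable
# memory (state) feeds a NOR gate whose output sets the next state.

def nor(a, b):
    return not (a or b)

state = False  # bistable memory module: which stable state is occupied
for pulse in range(4):          # four identical input pulses
    output = nor(state, False)  # NOR of memory state and (absent) second input
    state = output              # the output flips the memory on each pulse
    print(f"pulse {pulse + 1}: output {'ON' if output else 'OFF'}")
# -> ON, OFF, ON, OFF: the same input alternately switches the output
```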
All optical experimental design for neuron excitation, inhibition, and action potential detection
NASA Astrophysics Data System (ADS)
Walsh, Alex J.; Tolstykh, Gleb; Martens, Stacey; Sedelnikova, Anna; Ibey, Bennett L.; Beier, Hope T.
2016-03-01
Recently, infrared light has been shown to both stimulate and inhibit excitatory cells. However, studies of infrared light for excitatory cell inhibition have been constrained by the use of invasive and cumbersome electrodes for cell excitation and action potential recording. Here, we present an all-optical experimental design for neuronal excitation, inhibition, and action potential detection. Primary rat neurons were transfected with plasmids containing the light-sensitive ion channel CheRiff. CheRiff has a peak excitation around 450 nm, allowing excitation of transfected neurons with pulsed blue light. Additionally, primary neurons were transfected with QuasAr2, a fast and sensitive fluorescent voltage indicator. QuasAr2 is excited with yellow or red light and therefore does not spectrally overlap with CheRiff, enabling imaging and action potential activation simultaneously. Using an optic fiber, neurons were exposed to blue light sequentially to generate controlled action potentials. A second optic fiber delivered a single pulse of 1869 nm light to the neuron, inhibiting the action potentials evoked by the blue light. When used in concert, these optical techniques enable electrode-free neuron excitation, inhibition, and action potential recording, allowing research into neuronal behaviors with high spatial fidelity.
NASA Astrophysics Data System (ADS)
Koziel, Slawomir; Bekasiewicz, Adrian
2018-02-01
In this article, a simple yet efficient and reliable technique for fully automated multi-objective design optimization of antenna structures using sequential domain patching (SDP) is discussed. The optimization procedure according to SDP is a two-step process: (i) obtaining the initial set of Pareto-optimal designs representing the best possible trade-offs between considered conflicting objectives, and (ii) Pareto set refinement for yielding the optimal designs at the high-fidelity electromagnetic (EM) simulation model level. For the sake of computational efficiency, the first step is realized at the level of a low-fidelity (coarse-discretization) EM model by sequential construction and relocation of small design space segments (patches) in order to create a path connecting the extreme Pareto front designs obtained beforehand. The second stage involves response correction techniques and local response surface approximation models constructed by reusing EM simulation data acquired in the first step. A major contribution of this work is an automated procedure for determining the patch dimensions. It allows for appropriate selection of the number of patches for each geometry variable so as to ensure reliability of the optimization process while maintaining its low cost. The importance of this procedure is demonstrated by comparing it with uniform patch dimensions.
Sequential programmable self-assembly: Role of cooperative interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonathan D. Halverson; Tkachenko, Alexei V.
2016-03-04
Here, we propose a general strategy of “sequential programmable self-assembly” that enables a bottom-up design of arbitrary multi-particle architectures on nano- and microscales. We show that a naive realization of this scheme, based on the pairwise additive interactions between particles, has fundamental limitations that lead to a relatively high error rate. This can be overcome by using cooperative interparticle binding. The cooperativity is a well known feature of many biochemical processes, responsible, e.g., for signaling and regulations in living systems. Here we propose to utilize a similar strategy for high precision self-assembly, and show that DNA-mediated interactions provide a convenient platform for its implementation. In particular, we outline a specific design of a DNA-based complex which we call “DNA spider,” that acts as a smart interparticle linker and provides a built-in cooperativity of binding. We demonstrate versatility of the sequential self-assembly based on spider-functionalized particles by designing several mesostructures of increasing complexity and simulating their assembly process. This includes a number of finite and repeating structures, in particular, the so-called tetrahelix and its several derivatives. Due to its generality, this approach allows one to design and successfully self-assemble virtually any structure made of a “GEOMAG” magnetic construction toy, out of nanoparticles. According to our results, once the binding cooperativity is strong enough, the sequential self-assembly becomes essentially error-free.
A Node Linkage Approach for Sequential Pattern Mining
Navarro, Osvaldo; Cumplido, René; Villaseñor-Pineda, Luis; Feregrino-Uribe, Claudia; Carrasco-Ochoa, Jesús Ariel
2014-01-01
Sequential Pattern Mining is a widely addressed problem in data mining, with applications such as analyzing Web usage, examining purchase behavior, and text mining, among others. Nevertheless, with the dramatic increase in data volume, the current approaches prove inefficient when dealing with large input datasets, a large number of different symbols, and low minimum supports. In this paper, we propose a new sequential pattern mining algorithm, which follows a pattern-growth scheme to discover sequential patterns. Unlike most pattern-growth algorithms, our approach does not build a data structure to represent the input dataset, but instead accesses the required sequences through pseudo-projection databases, achieving better runtime and reducing memory requirements. Our algorithm traverses the search space in a depth-first fashion and only preserves in memory a pattern node linkage and the pseudo-projections required for the branch being explored at the time. Experimental results show that our new approach, the Node Linkage Depth-First Traversal algorithm (NLDFT), has better performance and scalability in comparison with state-of-the-art algorithms. PMID:24933123
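The pseudo-projection idea can be sketched with a PrefixSpan-style depth-first pattern growth over sequences of single items: each projected database is stored as (sequence id, suffix offset) pairs instead of copied data. This is a generic illustration of the shared mechanism, not the NLDFT algorithm itself.

```python
from collections import defaultdict

def prefixspan(db, min_support):
    """Depth-first pattern growth with pseudo-projections: a projected
    database is a list of (sequence_id, start_offset) pairs rather than
    a copy of the suffixes themselves."""
    results = []

    def grow(prefix, projection):
        # Count, per item, the sequences whose projected suffix contains it.
        support = defaultdict(set)
        for sid, start in projection:
            for item in set(db[sid][start:]):
                support[item].add(sid)
        for item, sids in support.items():
            if len(sids) < min_support:
                continue
            pattern = prefix + [item]
            results.append((pattern, len(sids)))
            # Pseudo-projection for the extended pattern: advance each
            # sequence past the first occurrence of the new item.
            new_proj = []
            for sid, start in projection:
                seq = db[sid]
                for pos in range(start, len(seq)):
                    if seq[pos] == item:
                        new_proj.append((sid, pos + 1))
                        break
            grow(pattern, new_proj)

    grow([], [(sid, 0) for sid in range(len(db))])
    return results

db = [list("abcb"), list("abbca"), list("bca")]
for pat, sup in prefixspan(db, min_support=2):
    print("".join(pat), sup)
```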
Field-Sequential Color Converter
NASA Technical Reports Server (NTRS)
Studer, Victor J.
1989-01-01
Electronic conversion circuit enables display of signals from field-sequential color-television camera on standard color video monitor. Designed for incorporation into color-television monitor on Space Shuttle, circuit weighs less, takes up less space, and consumes less power than previous conversion equipment. Incorporates state-of-the-art memory devices, also used in terrestrial stationary or portable closed-circuit television systems.
Apollo experience report: Command and service module sequential events control subsystem
NASA Technical Reports Server (NTRS)
Johnson, G. W.
1975-01-01
The Apollo command and service module sequential events control subsystem is described, with particular emphasis on the major systems and component problems and solutions. The subsystem requirements, design, and development and the test and flight history of the hardware are discussed. Recommendations to avoid similar problems on future programs are outlined.
An Undergraduate Survey Course on Asynchronous Sequential Logic, Ladder Logic, and Fuzzy Logic
ERIC Educational Resources Information Center
Foster, D. L.
2012-01-01
For a basic foundation in computer engineering, universities traditionally teach synchronous sequential circuit design, using discrete gates or field programmable gate arrays, and a microcomputers course that includes basic I/O processing. These courses, though critical, expose students to only a small subset of tools. At co-op schools like…
Terminating Sequential Delphi Survey Data Collection
ERIC Educational Resources Information Center
Kalaian, Sema A.; Kasim, Rafa M.
2012-01-01
The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through well-designed and systematic multiple sequential rounds of survey administration. Each of the multiple rounds of the Delphi survey…
Silverman, Rachel K; Ivanova, Anastasia
2017-01-01
Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in a randomized trial with a placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and then placebo non-responders are re-randomized in stage 2. Efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility of re-estimating the sample size and adjusting the design parameters (the allocation proportion to placebo in stage 1 of SPCD and the weight of stage 1 data in the overall efficacy test statistic) during an interim analysis.
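As a rough illustration of how SPCD pools the two stages, the sketch below combines stage-wise z-statistics with a prespecified weight. It is a deliberate simplification: the published SPCD test statistic accounts for the correlation between stage 1 and stage 2 data, which this toy version ignores, and the weight w and the example numbers are illustrative.

```python
# Simplified sketch of a weighted two-stage efficacy test in the spirit of
# SPCD. Illustrative only: the real SPCD analysis accounts for the correlation
# between stages, ignored here, and w is a design parameter fixed in advance.
from math import sqrt
from scipy.stats import norm

def spcd_z(z_stage1, z_stage2, w=0.6):
    """Combine stage-wise z-statistics with weight w on stage 1 data."""
    return (w * z_stage1 + (1.0 - w) * z_stage2) / sqrt(w**2 + (1.0 - w)**2)

# e.g., drug-vs-placebo z of 1.8 in stage 1 and 1.2 among the re-randomized
# placebo non-responders in stage 2:
z = spcd_z(1.8, 1.2, w=0.6)
print(z, 1.0 - norm.cdf(z))   # combined statistic and one-sided p-value
```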
Classification and assessment tools for structural motif discovery algorithms.
Badr, Ghada; Al-Turaiki, Isra; Mathkour, Hassan
2013-01-01
Motif discovery is the problem of finding recurring patterns in biological data. Patterns can be sequential, mainly when discovered in DNA sequences. They can also be structural (e.g. when discovering RNA motifs). Finding common structural patterns helps to gain a better understanding of the mechanism of action (e.g. post-transcriptional regulation). Unlike DNA motifs, which are sequentially conserved, RNA motifs exhibit conservation in structure, which may be common even if the sequences are different. Over the past few years, hundreds of algorithms have been developed to solve the sequential motif discovery problem, while less work has been done for the structural case. In this paper, we survey, classify, and compare different algorithms that solve the structural motif discovery problem, where the underlying sequences may be different. We highlight their strengths and weaknesses. We start by proposing a benchmark dataset and a measurement tool that can be used to evaluate different motif discovery approaches. Then, we proceed by proposing our experimental setup. Finally, results are obtained using the proposed benchmark to compare available tools. To the best of our knowledge, this is the first attempt to compare tools solely designed for structural motif discovery. Results show that the accuracy of discovered motifs is relatively low. The results also suggest a complementary behavior among tools where some tools perform well on simple structures, while other tools are better for complex structures. We have classified and evaluated the performance of available structural motif discovery tools. In addition, we have proposed a benchmark dataset with tools that can be used to evaluate newly developed tools.
Learning to Monitor Machine Health with Convolutional Bi-Directional LSTM Networks
Zhao, Rui; Yan, Ruqiang; Wang, Jinjiang; Mao, Kezhi
2017-01-01
In modern manufacturing systems and industries, more and more research efforts have been made in developing effective machine health monitoring systems. Among various machine health monitoring approaches, data-driven methods are gaining in popularity due to the development of advanced sensing and data analytic techniques. However, considering the noise, varying length, and irregular sampling behind sensory data, this kind of sequential data cannot be fed into classification and regression models directly. Therefore, previous work focuses on feature extraction/fusion methods requiring expensive human labor and high-quality expert knowledge. With the development of deep learning methods in the last few years, which redefine representation learning from raw data, a deep neural network structure named Convolutional Bi-directional Long Short-Term Memory networks (CBLSTM) has been designed here to address raw sensory data. CBLSTM first uses a CNN to extract local features that are robust and informative from the sequential input. Then, a bi-directional LSTM is introduced to encode temporal information. Long Short-Term Memory networks (LSTMs) are able to capture long-term dependencies and model sequential data, and the bi-directional structure enables the capture of past and future contexts. Stacked, fully-connected layers and a linear regression layer are built on top of the bi-directional LSTMs to predict the target value. Here, a real-life tool wear test is introduced, and our proposed CBLSTM is able to predict the actual tool wear based on raw sensory data. The experimental results have shown that our model is able to outperform several state-of-the-art baseline methods. PMID:28146106
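The architecture described maps naturally onto a few standard layers. The following PyTorch sketch reproduces the CNN → bidirectional LSTM → stacked fully connected → linear regression pipeline in miniature; layer sizes, kernel widths, and class names are illustrative guesses, not the paper's configuration.

```python
# Rough PyTorch sketch of a CNN + bidirectional LSTM regressor in the spirit
# of CBLSTM (sizes and names are illustrative, not the paper's architecture).
import torch
import torch.nn as nn

class CBLSTMSketch(nn.Module):
    def __init__(self, n_channels, hidden=64):
        super().__init__()
        # Local feature extraction over the time axis.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bidirectional LSTM encodes both past and future context.
        self.lstm = nn.LSTM(32, hidden, batch_first=True, bidirectional=True)
        # Stacked fully connected layers + linear regression output.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x):                    # x: (batch, channels, time)
        h = self.conv(x)                     # (batch, 32, time/2)
        h, _ = self.lstm(h.transpose(1, 2))  # (batch, time/2, 2*hidden)
        return self.head(h[:, -1])           # predict from the last step

model = CBLSTMSketch(n_channels=3)
wear = model(torch.randn(8, 3, 100))   # e.g., 8 windows of raw sensor data
print(wear.shape)                      # torch.Size([8, 1])
```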
NASA Astrophysics Data System (ADS)
Nystrøm, G. M.; Ottosen, L. M.; Villumsen, A.
2003-05-01
In this work sequential extraction is performed with harbour sediment in order to evaluate the electrodialytic remediation potential for harbour sediments. Sequential extraction was performed on a sample of Norwegian harbour sediment, both on the original sediment and after the sediment was treated with acid. The results from the sequential extraction show that 75% Zn and Pb and about 50% Cu are found in the most mobile phases in the original sediment, and more than 90% Zn and Pb and 75% Cu are found in the most mobile phases in the sediment treated with acid. Electrodialytic remediation experiments were performed. The method uses a low direct current as cleaning agent, removing the heavy metals towards the anode and cathode according to the charge of the heavy metals in the electric field. The electrodialytic experiments show that up to 50% Cu, 85% Zn and 60% Pb can be removed after 20 days. Thus, there is still a potential for a higher removal, with some changes in the experimental set-up and a longer remediation time. The experiments show that sequential extraction can be used to predict the electrodialytic remediation potential for harbour sediments.
Reactivation, Replay, and Preplay: How It Might All Fit Together
Buhry, Laure; Azizi, Amir H.; Cheng, Sen
2011-01-01
Sequential activation of neurons that occurs during “offline” states, such as sleep or awake rest, is correlated with neural sequences recorded during preceding exploration phases. This so-called reactivation, or replay, has been observed in a number of different brain regions such as the striatum, prefrontal cortex, primary visual cortex and, most prominently, the hippocampus. Reactivation largely co-occurs with hippocampal sharp waves/ripples, brief high-frequency bursts in the local field potential. Here, we first review the mounting evidence for the hypothesis that reactivation is the neural mechanism for memory consolidation during sleep. We then discuss recent results that suggest that offline sequential activity in the waking state might not be simple repetition of previously experienced sequences. Some offline sequential activity occurs before animals are exposed to a novel environment for the first time, and some sequences activated offline correspond to trajectories never experienced by the animal. We propose a conceptual framework for the dynamics of offline sequential activity that can parsimoniously describe a broad spectrum of experimental results. These results point to a potentially broader role of offline sequential activity in cognitive functions such as maintenance of spatial representation, learning, or planning. PMID:21918724
Decay modes of the Hoyle state in 12C
NASA Astrophysics Data System (ADS)
Zheng, H.; Bonasera, A.; Huang, M.; Zhang, S.
2018-04-01
Recent experimental results give an upper limit of less than 0.043% (95% C.L.) on the direct decay of the Hoyle state into 3α with respect to the sequential decay into 8Be + α. We performed one- and two-dimensional tunneling calculations to estimate such a ratio and found it to be more than one order of magnitude smaller than the experimental limit, depending on the range of the nuclear force. This is within high-statistics experimental capabilities. Our results can also be tested by measuring the decay modes of high-excitation-energy states of 12C, where the ratio of direct to sequential decay might reach 10% at E*(12C) = 10.3 MeV. The link between a Bose-Einstein Condensate (BEC) and the direct decay of the Hoyle state is also addressed. We discuss a hypothetical 'Efimov state' at E*(12C) = 7.458 MeV, which would decay mainly sequentially with 3α of equal energies: a counterintuitive result of tunneling. Such a state, if it were to exist, is at least 8 orders of magnitude less probable than the Hoyle state, thus below the sensitivity of recent and past experiments.
Mammographic x-ray unit kilovoltage test tool based on k-edge absorption effect.
Napolitano, Mary E; Trueblood, Jon H; Hertel, Nolan E; David, George
2002-09-01
A simple tool to determine the peak kilovoltage (kVp) of a mammographic x-ray unit has been designed. Tool design is based on comparing the effect of k-edge discontinuity of the attenuation coefficient for a series of element filters. Compatibility with the mammography accreditation phantom (MAP) to obtain a single quality control film is a second design objective. When the attenuation of a series of sequential elements is studied simultaneously, differences in the absorption characteristics due to the k-edge discontinuities are more evident. Specifically, when the incident photon energy is higher than the k-edge energy of a number of the elements and lower than the remainder, an inflection may be seen in the resulting attenuation data. The maximum energy of the incident photon spectra may be determined based on this inflection point for a series of element filters. Monte Carlo photon transport analysis was used to estimate the photon transmission probabilities for each of the sequential k-edge filter elements. The photon transmission corresponds directly to optical density recorded on mammographic x-ray film. To observe the inflection, the element filters chosen must have k-edge energies that span a range greater than the expected range of the end point energies to be determined. For the design, incident x-ray spectra ranging from 25 to 40 kVp were assumed to be from a molybdenum target. Over this range, the k-edge energy changes by approximately 1.5 keV between sequential elements. For this design 21 elements spanning an energy range from 20 to 50 keV were chosen. Optimum filter element thicknesses were calculated to maximize attenuation differences at the k-edge while maintaining optical densities between 0.10 and 3.00. Calculated relative transmission data show that the kVp could be determined to within +/-1 kV. To obtain experimental data, a phantom was constructed containing 21 different elements placed in an acrylic holder. MAP images were used to determine appropriate exposure techniques for a series of end point energies from 25 to 35 kVp. The average difference between the kVp determination and the calibrated dial setting was 0.8 and 1.0 kV for a Senographe 600 T and a Senographe DMR, respectively. Since the k-edge absorption energies of the filter materials are well known, independent calibration or a series of calibration curves is not required.
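The inflection logic can be mimicked numerically. The toy model below uses an invented attenuation law and a flat spectrum (a real design would use tabulated mass attenuation coefficients and a molybdenum-anode spectrum): once an element's k-edge exceeds the spectrum's end-point energy, transmission stops changing from element to element, and that flattening point estimates the kVp.

```python
# Toy illustration of the k-edge end-point idea. The attenuation law and the
# flat spectrum are made up for the sketch; real work would use tabulated
# mass attenuation coefficients (e.g., NIST data) and a Mo-target spectrum.
import numpy as np

def mu(E, k_edge):
    """Crude attenuation: ~E^-3 falloff with a step increase above the k-edge."""
    return (30.0 / E) ** 3 * np.where(E >= k_edge, 4.0, 1.0)

def transmission(kvp, k_edge, thickness=0.05):
    """Spectrum-averaged transmission through one filter element."""
    E = np.linspace(10.0, kvp, 400)
    return np.exp(-mu(E, k_edge) * thickness).mean()

k_edges = np.arange(20.0, 50.0, 1.5)    # filter elements ~1.5 keV apart
for kvp in (25.0, 30.0, 35.0):
    t = np.array([transmission(kvp, k) for k in k_edges])
    # Above the end-point energy the edge no longer falls inside the spectrum,
    # so transmission stops changing between neighbouring elements.
    changing = np.abs(np.diff(t)) > 1e-4
    print(kvp, "kVp -> transmission flattens near",
          k_edges[1:][changing][-1], "keV")
```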
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility, and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
A Bayesian Theory of Sequential Causal Learning and Abstract Transfer.
Lu, Hongjing; Rojas, Randall R; Beckers, Tom; Yuille, Alan L
2016-03-01
Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that learning involves abstract transfer, and such transfer effects involve sequential presentation of distinct sets of causal cues. It has been demonstrated that pre-training (or even post-training) can modulate classic causal learning phenomena such as forward and backward blocking. To account for these effects, we propose a Bayesian theory of sequential causal learning. The theory assumes that humans are able to consider and use several alternative causal generative models, each instantiating a different causal integration rule. Model selection is used to decide which integration rule to use in a given learning environment in order to infer causal knowledge from sequential data. Detailed computer simulations demonstrate that humans rely on the abstract characteristics of outcome variables (e.g., binary vs. continuous) to select a causal integration rule, which in turn alters causal learning in a variety of blocking and overshadowing paradigms. When the nature of the outcome variable is ambiguous, humans select the model that yields the best fit with the recent environment, and then apply it to subsequent learning tasks. Based on sequential patterns of cue-outcome co-occurrence, the theory can account for a range of phenomena in sequential causal learning, including various blocking effects, primacy effects in some experimental conditions, and apparently abstract transfer of causal knowledge. Copyright © 2015 Cognitive Science Society, Inc.
Liu, Xiaoxia; Tian, Miaomiao; Camara, Mohamed Amara; Guo, Liping; Yang, Li
2015-10-01
We present sequential CE analysis of amino acids and an L-asparaginase-catalyzed enzyme reaction, by combining on-line derivatization, optically gated (OG) injection and commercially available UV-Vis detection. Various experimental conditions for sequential OG-UV/vis CE analysis were investigated and optimized by analyzing a standard mixture of amino acids. High reproducibility of the sequential CE analysis was demonstrated with RSD values (n = 20) of 2.23, 2.57, and 0.70% for peak heights, peak areas, and migration times, respectively, and LODs of 5.0 μM (for asparagine) and 2.0 μM (for aspartic acid) were obtained. With the application of the OG-UV/vis CE analysis, a sequential online CE enzyme assay of the L-asparaginase-catalyzed reaction was carried out by automatically and continuously monitoring the substrate consumption and the product formation every 12 s from the beginning to the end of the reaction. The Michaelis constants for the reaction were obtained and were found to be in good agreement with the results of traditional off-line enzyme assays. The study demonstrated the feasibility and reliability of integrating OG injection with UV/vis detection for sequential online CE analysis, which could be of potential value for online monitoring of various chemical reactions and bioprocesses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Effects of space environment on composites: An analytical study of critical experimental parameters
NASA Technical Reports Server (NTRS)
Gupta, A.; Carroll, W. F.; Moacanin, J.
1979-01-01
A generalized methodology currently employed at JPL was used to develop an analytical model for effects of high-energy electrons and interactions between electron and ultraviolet effects. Chemical kinetic concepts were applied in defining quantifiable parameters; the need for determining short-lived transient species and their concentration was demonstrated. The results demonstrate a systematic and cost-effective means of addressing the issues and show applicable qualitative and quantitative relationships between space radiation and simulation parameters. An equally important result is the identification of critical initial experiments necessary to further clarify the relationships. Topics discussed include facility and test design; rastered vs. diffuse continuous e-beam; valid acceleration level; simultaneous vs. sequential exposure to different types of radiation; and interruption of test continuity.
Megajoule Dense Plasma Focus Solid Target Experiments
NASA Astrophysics Data System (ADS)
Podpaly, Y. A.; Falabella, S.; Link, A.; Povilus, A.; Higginson, D. P.; Shaw, B. H.; Cooper, C. M.; Chapman, S.; Bennett, N.; Sipe, N.; Olson, R.; Schmidt, A. E.
2016-10-01
Dense plasma focus (DPF) devices are plasma sources that can produce significant neutron yields from beam into gas interactions. Yield increases, up to approximately a factor of five, have been observed previously on DPFs using solid targets, such as CD2 and D2O ice. In this work, we report on deuterium solid-target experiments at the Gemini DPF. A rotatable target holder and baffle arrangement were installed in the Gemini device which allowed four targets to be deployed sequentially without breaking vacuum. Solid targets of titanium deuteride were installed and systematically studied at a variety of fill pressures, bias voltages, and target positions. Target holder design, experimental results, and comparison to simulations will be presented. Prepared by LLNL under Contract DE-AC52-07NA27344.
Automated microbial metabolism laboratory. [Viking 75 entry vehicle and Mars
NASA Technical Reports Server (NTRS)
1974-01-01
The labeled release concept was advanced to accommodate a post-Viking mission designed to extend the search, to confirm the presence of, and to characterize any Martian life found, and to obtain preliminary information on control of the life detected. The advanced labeled release concept utilizes four test chambers, each of which contains either an active or heat-sterilized sample of the Martian soil. A variety of C-14 labeled organic substrates can be added sequentially to each soil sample and the resulting evolved radioactive gas monitored. The concept can also test effects of various inhibitors and environmental parameters on the experimental response. The current Viking '75 labeled release hardware is readily adaptable to the advanced labeled release concept.
Reversible logic gates on Physarum polycephalum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumann, Andrew
2015-03-10
In this paper, we consider ways to implement asynchronous sequential logic gates and quantum-style reversible logic gates using Physarum polycephalum motions. We show that in asynchronous sequential logic gates we can erase information because of uncertainty in the direction of plasmodium propagation. Therefore quantum-style reversible logic gates are preferable for designing logic circuits on Physarum polycephalum.
ERIC Educational Resources Information Center
Ayalon, Michal; Watson, Anne; Lerman, Steve
2015-01-01
This study investigates students' ways of attending to linear sequential data in two tasks, and conjectures possible relationships between those ways and elements of the task design. Drawing on the substantial literature about such situations, we focus for this paper on linear rate of change, and on covariation and correspondence approaches to…
ERIC Educational Resources Information Center
Ramaswamy, Ravishankar; Dix, Edward F.; Drew, Janet E.; Diamond, James J.; Inouye, Sharon K.; Roehl, Barbara J. O.
2011-01-01
Purpose of the Study: Delirium is a widespread concern for hospitalized seniors, yet is often unrecognized. A comprehensive and sequential intervention (CSI) aiming to effect change in clinician behavior by improving knowledge about delirium was tested. Design and Methods: A 2-day CSI program that consisted of progressive 4-part didactic series,…
Sustained State-Independent Quantum Contextual Correlations from a Single Ion
NASA Astrophysics Data System (ADS)
Leupold, F. M.; Malinowski, M.; Zhang, C.; Negnevitsky, V.; Alonso, J.; Home, J. P.; Cabello, A.
2018-05-01
We use a single trapped-ion qutrit to demonstrate the quantum-state-independent violation of noncontextuality inequalities using a sequence of randomly chosen quantum nondemolition projective measurements. We concatenate 53 × 10^6 sequential measurements of 13 observables, and unambiguously violate an optimal noncontextual bound. We use the same data set to characterize imperfections including signaling and repeatability of the measurements. The experimental sequence was generated in real time with a quantum random number generator integrated into our control system to select the subsequent observable with a latency below 50 μs, which can be used to constrain contextual hidden-variable models that might describe our results. The state-recycling experimental procedure is resilient to noise and independent of the qutrit state, substantiating the fact that the contextual nature of quantum physics is connected to measurements and not necessarily to designated states. The use of extended sequences of quantum nondemolition measurements finds applications in the fields of sensing and quantum information.
Method for universal detection of two-photon polarization entanglement
NASA Astrophysics Data System (ADS)
Bartkiewicz, Karol; Horodecki, Paweł; Lemr, Karel; Miranowicz, Adam; Życzkowski, Karol
2015-03-01
Detecting and quantifying quantum entanglement of a given unknown state poses problems that are fundamentally important for quantum information processing. Surprisingly, no direct (i.e., without quantum tomography) universal experimental implementation of a necessary and sufficient test of entanglement has been designed even for a general two-qubit state. Here we propose an experimental method for detecting a collective universal witness, which is a necessary and sufficient test of two-photon polarization entanglement. It allows us to detect entanglement for any two-qubit mixed state and to establish tight upper and lower bounds on its amount. A distinctive element of this method is the sequential character of its main components, which allows us to obtain relatively complicated information about quantum correlations with the help of simple linear-optical elements. As such, this proposal realizes a universal two-qubit entanglement test within the present state of the art of quantum optics. We show the optimality of our setup with respect to the minimal number of measured quantities.
Novel Designs of Quantum Reversible Counters
NASA Astrophysics Data System (ADS)
Qi, Xuemei; Zhu, Haihong; Chen, Fulong; Zhu, Junru; Zhang, Ziyang
2016-11-01
Reversible logic, as an interesting and important issue, has been widely used in designing combinational and sequential circuits for low-power and high-speed computation. Though a significant amount of work has been done on reversible combinational logic, the realization of reversible sequential circuits is still at a premature stage. The reversible counter is not only an important part of sequential circuits but also an essential part of quantum circuit systems. In this paper, we design two kinds of novel reversible counters. In order to construct the counters, an innovative reversible T Flip-flop Gate (TFG), a T flip-flop block (T_FF) and a JK flip-flop block (JK_FF) are proposed. Based on these blocks and some existing reversible gates, a 4-bit binary-coded decimal (BCD) counter and a controlled Up/Down synchronous counter are designed. With the help of the Verilog hardware description language (Verilog HDL), the counters above have been modeled and verified. According to the simulation results, our circuits' logic structures are validated. Compared to existing designs in terms of quantum cost (QC), delay (DL) and garbage outputs (GBO), it can be concluded that our designs perform better than the others. They can thus serve as important storage components in future low-power computing systems.
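One reason T flip-flops pair well with reversible logic is that the toggle rule q' = q XOR t is itself a bijection: it is exactly the Feynman (CNOT) gate. The sketch below checks that bijectivity and ripples two such toggles into a 2-bit counter; it only illustrates the principle, not the paper's TFG/T_FF construction, and unlike a true reversible circuit it discards the pass-through outputs.

```python
# The toggle rule of a T flip-flop, q' = q XOR t, is the Feynman (CNOT) gate,
# which is reversible by construction. Illustration of the principle only;
# real reversible designs keep the pass-through lines instead of dropping them.
from itertools import product

def feynman(t, q):
    """Feynman/CNOT gate: (t, q) -> (t, t XOR q)."""
    return t, t ^ q

# Bijective on {0,1}^2, i.e., no two inputs collapse to one output:
assert len({feynman(t, q) for t, q in product((0, 1), repeat=2)}) == 4

state = [0, 0]                        # [q0, q1], least significant bit first
for tick in range(1, 6):
    carry = state[0]                  # q1 toggles when q0 rolls over from 1
    _, state[0] = feynman(1, state[0])       # q0 toggles on every tick
    _, state[1] = feynman(carry, state[1])
    print(tick, "->", state[1] * 2 + state[0])   # counts 1, 2, 3, 0, 1
```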
Space Station Human Factors: Designing a Human-Robot Interface
NASA Technical Reports Server (NTRS)
Rochlis, Jennifer L.; Clarke, John Paul; Goza, S. Michael
2001-01-01
The experiments described in this paper are part of a larger joint MIT/NASA research effort and focus on the development of a methodology for designing and evaluating integrated interfaces for highly dexterous and multifunctional telerobots. Specifically, a telerobotic workstation is being designed for an Extravehicular Activity (EVA) anthropomorphic space station telerobot called Robonaut. Previous researchers have designed telerobotic workstations based upon performance of discrete subsets of tasks (for example, peg-in-hole, tracking, etc.) without regard for the transitions that operators go through between tasks performed sequentially in the context of larger integrated tasks. The experiments presented here took an integrated approach to describing teleoperator performance and assessed how subjects operating a full-immersion telerobot perform during fine-position and gross-position tasks. In addition, a Robonaut simulation was developed as part of this research effort and experimentally tested against Robonaut itself to determine its utility. Results show that subject performance of teleoperated tasks using Robonaut and the simulation is virtually identical, with no significant difference between the two. These results indicate that the simulation can be utilized both as a Robonaut training tool and as a powerful design platform for telepresence displays and aids.
A response surface methodology based damage identification technique
NASA Astrophysics Data System (ADS)
Fang, S. E.; Perera, R.
2009-06-01
Response surface methodology (RSM) is a combination of statistical and mathematical techniques to represent the relationship between the inputs and outputs of a physical system by explicit functions. This methodology has been widely employed in many applications such as design optimization, response prediction and model validation. So far, however, the literature related to its application in structural damage identification (SDI) is scarce. This study therefore presents a systematic SDI procedure comprising four sequential steps of feature selection, parameter screening, primary response surface (RS) modeling and updating, and reference-state RS modeling with SDI realization using the factorial design (FD) and the central composite design (CCD). The last two steps imply the implementation of inverse problems by model updating, in which the RS models substitute the FE models. The proposed method was verified against a numerical beam, a tested reinforced concrete (RC) frame and an experimental full-scale bridge, with modal frequencies as the output responses. It was found that the proposed RSM-based method performs well in predicting the damage of both numerical and experimental structures having single and multiple damage scenarios. The screening capacity of the FD can provide quantitative estimation of the significance levels of updating parameters. Meanwhile, the second-order polynomial model established by the CCD provides adequate accuracy in expressing the dynamic behavior of a physical system.
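The core loop, fitting a cheap response surface once and then solving the inverse problem on the surrogate, can be shown in miniature. In the sketch below a synthetic one-parameter "structure" stands in for the FE model, and a second-order polynomial plays the role of the CCD-built RS model; all names are illustrative.

```python
# Minimal sketch of the RS-model-as-surrogate idea (illustrative names and a
# synthetic one-parameter "structure"; the paper uses FD/CCD designs and
# modal frequencies from FE models).
import numpy as np
from scipy.optimize import minimize_scalar

def measured_frequency(stiffness):
    """Stand-in for an expensive FE/modal analysis."""
    return 12.0 * np.sqrt(stiffness) + 0.3 * stiffness

# Step 1: evaluate the "FE model" at a few design points (like a CCD).
x = np.array([0.5, 0.75, 1.0, 1.25, 1.5])
y = measured_frequency(x)

# Step 2: fit a second-order polynomial response surface.
rs = np.poly1d(np.polyfit(x, y, 2))

# Step 3: inverse problem - update the parameter so the RS matches the
# "damaged" measurement, instead of re-running the FE model each iteration.
target = measured_frequency(0.8)          # pretend this is the measured value
res = minimize_scalar(lambda s: (rs(s) - target) ** 2,
                      bounds=(0.5, 1.5), method="bounded")
print("identified stiffness ~", res.x)    # close to 0.8
```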
Rossi, G P; Seccia, T M; Miotto, D; Zucchetta, P; Cecchin, D; Calò, L; Puato, M; Motta, R; Caielli, P; Vincenzi, M; Ramondo, G; Taddei, S; Ferri, C; Letizia, C; Borghi, C; Morganti, A; Pessina, A C
2012-08-01
It is unclear whether revascularization of renal artery stenosis (RAS) by means of percutaneous renal angioplasty and stenting (PTRAS) is advantageous over optimal medical therapy. Hence, we designed a randomized clinical trial based on an optimized patient selection strategy and hard experimental endpoints. The primary objective of this study is to determine whether PTRAS is superior or equivalent to optimal medical treatment for preserving glomerular filtration rate (GFR) in the ischemic kidney as assessed by 99mTc-DTPA sequential renal scintiscan. Secondary objectives are to establish whether the two treatments are equivalent in lowering blood pressure, preserving overall renal function and regressing target organ damage, preventing cardiovascular events and improving quality of life. The study is designed as a prospective multicentre randomized, un-blinded two-arm study. Eligible patients will have clinical and angio-CT evidence of RAS. The inclusion criterion is RAS affecting the main renal artery or its major branches either >70% or, if <70%, with post-stenotic dilatation. Renal function will be assessed with 99mTc-DTPA renal scintigraphy. Patients will be randomized to either arm considering both the resistance index value in the ischemic kidney and the presence of unilateral/bilateral stenosis. The primary experimental endpoint will be the GFR of the ischemic kidney, assessed as a quantitative variable by 99mTc-DTPA, and the loss of the ischemic kidney, defined as a categorical variable.
Crack closure and sequential effects in fatigue: A literature survey
NASA Astrophysics Data System (ADS)
Holmgren, M.
A literature survey of the phenomenon of crack closure is reported here. The state of the art is reviewed and several empirical formulas for determining the crack closure are compared with each other. Their properties, advantages and disadvantages are briefly discussed. Experimental techniques for determining the crack closure stress are presented and experimental data from the literature are reported.
Effects of neostriatal 6-OHDA lesion on performance in a rat sequential reaction time task.
Domenger, D; Schwarting, R K W
2008-10-31
Work in humans and monkeys has provided evidence that the basal ganglia, and the neurotransmitter dopamine therein, play an important role in sequential learning and performance. Compared to primates, experimental work in rodents is rather sparse, largely due to the fact that tasks comparable to the human ones, especially serial reaction time tasks (SRTT), had been lacking until recently. We have developed a rat model of the SRTT, which allows the study of neural correlates of sequential performance and motor sequence execution. Here, we report the effects of dopaminergic neostriatal lesions, performed using bilateral 6-hydroxydopamine injections, on the performance of well-trained rats tested in our SRTT. Sequential behavior was measured in two ways: first, the effects of small violations of otherwise well-trained sequences were examined as a measure of attention and automation; second, sequential versus random performance was compared as a measure of sequential learning. Neurochemically, the lesions led to sub-total dopamine depletions in the neostriatum, which ranged around 60% in the lateral, and around 40% in the medial, neostriatum. These lesions led to a general instrumental impairment in terms of reduced speed (response latencies) and response rate, and these deficits were correlated with the degree of striatal dopamine loss. Furthermore, the violation test indicated that the lesion group conducted less automated responses. The comparison of random versus sequential responding showed that the lesion group did not retain its superior sequential performance in terms of speed, whereas it did in terms of accuracy. Also, rats with lesions did not improve further in overall performance as compared to pre-lesion values, whereas controls did. These results support previous findings that neostriatal dopamine is involved in instrumental behavior in general. Also, these lesions are not sufficient to completely abolish sequential performance, at least when it is acquired before the lesion, as tested here.
Soft robot design methodology for `push-button' manufacturing
NASA Astrophysics Data System (ADS)
Paik, Jamie
2018-06-01
`Push-button' or fully automated manufacturing would enable the production of robots with zero intervention from human hands. Realizing this utopia requires a fundamental shift from a sequential (design-materials-manufacturing) to a concurrent design methodology.
Using timed event sequential data in nursing research.
Pecanac, Kristen E; Doherty-King, Barbara; Yoon, Ju Young; Brown, Roger; Schiefelbein, Tony
2015-01-01
Measuring behavior is important in nursing research, and innovative technologies are needed to capture the "real-life" complexity of behaviors and events. The purpose of this article is to describe the use of timed event sequential data in nursing research and to demonstrate the use of this data in a research study. Timed event sequencing allows the researcher to capture the frequency, duration, and sequence of behaviors as they occur in an observation period and to link the behaviors to contextual details. Timed event sequential data can easily be collected with handheld computers, loaded with a software program designed for capturing observations in real time. Timed event sequential data add considerable strength to analysis of any nursing behavior of interest, which can enhance understanding and lead to improvement in nursing practice.
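Concretely, a timed event record carries a behavior code plus onset and offset times, from which the three summary views the article highlights (frequency, duration, sequence) all follow. A minimal sketch, with invented behavior codes and times:

```python
# What timed event sequential data look like, and the three summary views the
# article highlights: frequency, duration, and sequence. The behavior codes
# and times are invented for illustration.
from collections import Counter
from dataclasses import dataclass

@dataclass
class TimedEvent:
    code: str          # observed behavior, e.g., "walk"
    onset: float       # seconds from start of the observation period
    offset: float

    @property
    def duration(self):
        return self.offset - self.onset

session = [
    TimedEvent("stand", 0.0, 12.5),
    TimedEvent("walk", 12.5, 40.0),
    TimedEvent("assist", 40.0, 55.0),
    TimedEvent("walk", 55.0, 80.0),
]

frequency = Counter(e.code for e in session)
duration = Counter()
for e in session:
    duration[e.code] += e.duration
# Sequence: how often each behavior follows another (contextual transitions).
transitions = Counter((a.code, b.code) for a, b in zip(session, session[1:]))
print(frequency, dict(duration), dict(transitions), sep="\n")
```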
McCarthy, Kathleen M; Mahon, Merle; Rosen, Stuart; Evans, Bronwen G
2014-01-01
The majority of bilingual speech research has focused on simultaneous bilinguals. Yet, in immigrant communities, children are often initially exposed to their family language (L1), before becoming gradually immersed in the host country's language (L2). This is typically referred to as sequential bilingualism. Using a longitudinal design, this study explored the perception and production of the English voicing contrast in 55 children (40 Sylheti-English sequential bilinguals and 15 English monolinguals). Children were tested twice: when they were in nursery (52-month-olds) and 1 year later. Sequential bilinguals' perception and production of English plosives were initially driven by their experience with their L1, but after starting school, changed to match that of their monolingual peers. PMID:25123987
Meta-analyses and adaptive group sequential designs in the clinical development process.
Jennison, Christopher; Turnbull, Bruce W
2005-01-01
The clinical development process can be viewed as a succession of trials, possibly overlapping in calendar time. The design of each trial may be influenced by results from previous studies and other currently proceeding trials, as well as by external information. Results from all of these trials must be considered together in order to assess the efficacy and safety of the proposed new treatment. Meta-analysis techniques provide a formal way of combining the information. We examine how such methods can be used in combining results from: (1) a collection of separate studies, (2) a sequence of studies in an organized development program, and (3) stages within a single study using a (possibly adaptive) group sequential design. We present two examples. The first example concerns the combining of results from a Phase IIb trial using several dose levels or treatment arms with those of the Phase III trial comparing the treatment selected in Phase IIb against a control. This enables a "seamless transition" from Phase IIb to Phase III. The second example examines the use of combination tests to analyze data from an adaptive group sequential trial.
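One concrete combination rule covered by this kind of framework is the weighted inverse-normal method, which underlies many seamless Phase IIb/III designs. A minimal sketch follows; the weights and p-values are illustrative, and in practice the weights must be prespecified before the data are unblinded.

```python
# Weighted inverse-normal combination of independent one-sided p-values
# (a standard combination test; numbers below are illustrative).
from math import sqrt
from scipy.stats import norm

def inverse_normal(p_values, weights):
    """Combine stage-wise p-values into a single z-statistic and p-value."""
    assert abs(sum(w * w for w in weights) - 1.0) < 1e-9   # weights on the unit sphere
    z = sum(w * norm.ppf(1 - p) for w, p in zip(weights, p_values))
    return z, 1 - norm.cdf(z)

# e.g., a Phase IIb stage (p = 0.04) and a Phase III stage (p = 0.03),
# weighted by square roots of planned sample-size fractions 1/3 and 2/3:
z, p = inverse_normal([0.04, 0.03], [sqrt(1 / 3), sqrt(2 / 3)])
print(z, p)
```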
Chen, Chao-Jung; Li, Fu-An; Her, Guor-Rong
2008-05-01
A multiplexed CE-MS interface using four low-flow sheath-liquid ESI sprayers has been developed. Because of the limited space between the low-flow sprayers and the entrance aperture of the ESI source, multichannel analysis is difficult using conventional rotating-plate approaches. Instead, a multiplexed low-flow system was achieved by applying the ESI potential sequentially to the four low-flow sprayers, so that only one sprayer sprays at any given time. The synchronization of the scan event and the voltage relays was accomplished by using the data acquisition signal from the IT mass spectrometer. This synchronization resulted in the ESI voltage being sequentially applied to each of the four sprayers according to the corresponding scan event. With this design, a four-fold increase in analytical throughput was achieved. Because of the use of low-flow interfaces, this multiplexed system offers better sensitivity than a rotating-plate design using conventional sheath-liquid interfaces. The multiplexed design presented has the potential to be applied to other low-flow multiplexed systems, such as multiplexed capillary LC and multiplexed CEC.
Kumar, Piyush; Bhattacharjee, Tanmoy; Ingle, Arvind; Maru, Girish; Krishna, C Murali
2016-10-01
Oral cancers suffer from poor 5-year survival rates, owing to late detection of the disease. Current diagnostic/screening tools need to be upgraded in view of disadvantages like invasiveness, tedious sample preparation, long output times, and interobserver variances. Raman spectroscopy has been shown to distinguish many disease conditions, including oral cancers, from healthy conditions. Further studies exploring sequential changes in oral carcinogenesis are warranted. In this Raman spectroscopy study, sequential progression in experimental oral carcinogenesis in the hamster buccal pouch model was investigated using three approaches: ex vivo, in vivo sequential, and in vivo follow-up. In all these studies, spectral changes show lipid dominance in early stages, while later stages and tumors showed an increased protein-to-lipid ratio and nucleic acids. Along similar lines, the early weeks of the 7,12-dimethylbenz(a)anthracene-treated and control groups showed higher overlap and low classification. The classification efficiency increased progressively, reached a plateau phase, and subsequently increased up to 100% by 14 weeks. The misclassifications between treated and control spectra suggested some changes in controls as well, which was confirmed by a careful reexamination of histopathological slides. These findings suggest Raman spectroscopy may be able to identify microheterogeneity, which may often go unnoticed in conventional biochemistry, wherein tissue extracts are employed, as well as in histopathology. In vivo findings, quite comparable to gold-standard-supported ex vivo findings, give further proof of Raman spectroscopy being a promising label-free, noninvasive diagnostic adjunct for future clinical applications. © The Author(s) 2015.
Toombs, Elaine; Unruh, Anita; McGrath, Patrick
2018-01-01
This study aimed to assess the Parent-Adolescent Communication Toolkit (PACT), an online intervention designed to help improve parent communication with their adolescents. Participant preferences for two module delivery systems (sequential and unrestricted module access) were identified. Usability assessment of the PACT intervention was completed using pretest and posttest comparisons. Usability data, including participant completion and satisfaction ratings, were examined. Parents (N = 18) of adolescents were randomized to a sequential or unrestricted chapter access group. Parent participants completed pretest measures, the PACT intervention and posttest measures. Participants provided feedback on the intervention to improve modules and provided usability ratings. Adolescent pretest and posttest ratings were evaluated. Usability ratings were high and parent feedback was positive. The sequential module access group rated the intervention content higher and completed more content than the unrestricted chapter access group, indicating support for the sequential access design. Parent mean posttest communication scores were significantly higher (p < .05) than pretest scores. No significant differences were detected for adolescent participants. Findings suggest that the Parent-Adolescent Communication Toolkit has potential to improve parent-adolescent communication, but further effectiveness assessment is required.
SMA texture and reorientation: simulations and neutron diffraction studies
NASA Astrophysics Data System (ADS)
Gao, Xiujie; Brown, Donald W.; Brinson, L. Catherine
2005-05-01
With increased usage of shape memory alloys (SMA) for applications in various fields, it is important to understand how the material behavior is affected by factors such as texture, stress state and loading history, especially for complex multiaxial loading states. Using the in-situ neutron diffraction loading facility (SMARTS diffractometer) and the ex-situ inverse pole figure measurement facility (HIPPO diffractometer) at the Los Alamos Neutron Science Center (LANSCE), the macroscopic mechanical behavior and texture evolution of Nickel-Titanium (Nitinol) SMAs under sequential compression in alternating directions were studied. The simplified multivariant model developed at Northwestern University was then used to simulate the macroscopic behavior and the microstructural change of Nitinol under this sequential loading. Pole figures were obtained via post-processing of the multivariant results for volume fraction evolution and compared quantitatively well with the experimental results. The experimental results can also be used to test or verify other SMA constitutive models.
Li, Haiou; Lu, Liyao; Chen, Rong; Quan, Lijun; Xia, Xiaoyan; Lü, Qiang
2014-01-01
Structural information related to protein-peptide complexes can be very useful for novel drug discovery and design. The computational docking of protein and peptide can supplement the structural information available on protein-peptide interactions explored by experimental means. The protein-peptide docking in this paper can be described as three processes that occur in parallel: ab-initio peptide folding, peptide docking with its receptor, and refinement of some flexible areas of the receptor as the peptide approaches. Several existing methods have been used to sample the degrees of freedom in the three processes, which are usually triggered in an organized sequential scheme. In this paper, we propose a parallel approach that combines all three processes during the docking of a folding peptide with a flexible receptor. This approach mimics the actual protein-peptide docking process in a parallel way, and is expected to deliver better performance than sequential approaches. We used 22 unbound protein-peptide docking examples to evaluate our method. Our analysis of the results showed that the explicit refinement of the flexible areas of the receptor facilitated more accurate modeling of the interfaces of the complexes, while combining all of the moves in parallel helped construct energy funnels for prediction.
NASA Astrophysics Data System (ADS)
Jayanthi, Aditya; Coker, Christopher
2016-11-01
In the last decade, CFD simulations have transitioned from the stage where they are used to validate final designs to the mainstream development of products driven by simulation. However, there are still niche areas of application, like oiling simulations, where traditional CFD simulation times are prohibitive for use in product development, forcing reliance on experimental methods, which are expensive. In this paper a unique example of a sprocket-chain simulation will be presented using nanoFluidX, a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The gridless nature of the SPH method has inherent advantages in areas of application with complex geometry, which pose a severe challenge to classical finite-volume CFD methods due to complex moving geometries, moving meshes and high-resolution requirements leading to long simulation times. The simulation times using nanoFluidX can be reduced from weeks to days, allowing the flexibility to run more simulations, and the method can be used in mainstream product development. The example problem under consideration is a classical multiphysics problem, and a sequentially coupled solution of MotionSolve and nanoFluidX will be presented. This abstract is replacing DFD16-2016-000045.
Rangarajan, Srinivas; Maravelias, Christos T.; Mavrikakis, Manos
2017-11-09
Here, we present a general optimization-based framework for (i) ab initio and experimental data driven mechanistic modeling and (ii) optimal catalyst design of heterogeneous catalytic systems. Both cases are formulated as a nonlinear optimization problem that is subject to a mean-field microkinetic model and thermodynamic consistency requirements as constraints, for which we seek sparse solutions through a ridge (L2 regularization) penalty. The solution procedure involves an iterative sequence of forward simulation of the differential algebraic equations pertaining to the microkinetic model using a numerical tool capable of handling stiff systems, sensitivity calculations using linear algebra, and gradient-based nonlinear optimization. A multistart approach is used to explore the solution space, and a hierarchical clustering procedure is implemented for statistically classifying potentially competing solutions. An example of methanol synthesis through hydrogenation of CO and CO2 on a Cu-based catalyst is used to illustrate the framework. The framework is fast, is robust, and can be used to comprehensively explore the model solution and design space of any heterogeneous catalytic system.
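The estimation step can be caricatured in a few lines: minimize a residual sum of squares plus an L2 (ridge) penalty, starting from several initial guesses. The toy forward model below stands in for the stiff microkinetic DAE simulation; it is a generic sketch, not the authors' code.

```python
# Skeleton of a ridge-penalized, multistart parameter fit (generic sketch;
# the toy "model" stands in for forward simulation of the microkinetic DAEs).
import numpy as np
from scipy.optimize import minimize

def simulate_rates(log_k, conditions):
    """Stand-in forward model: predicted rates at each experimental condition."""
    k1, k2 = np.exp(log_k)
    return k1 * conditions / (1.0 + k2 * conditions)

conditions = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
observed = simulate_rates(np.log([2.0, 0.7]), conditions)  # synthetic "data"

def objective(log_k, lam=1e-2):
    resid = simulate_rates(log_k, conditions) - observed
    return np.sum(resid**2) + lam * np.sum(log_k**2)   # L2 (ridge) penalty

# Multistart: explore the solution space from scattered initial guesses.
starts = [np.log([0.1, 0.1]), np.log([1.0, 1.0]), np.log([10.0, 10.0])]
fits = [minimize(objective, s, method="BFGS") for s in starts]
best = min(fits, key=lambda r: r.fun)
print(np.exp(best.x))   # ~ [2.0, 0.7], up to penalty shrinkage
```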
Amorim, Fábio A C; Ferreira, Sérgio L C
2005-02-28
In the present paper, a simultaneous pre-concentration procedure for the sequential determination of cadmium and lead in table salt samples using flame atomic absorption spectrometry is proposed. This method is based on the liquid-liquid extraction of cadmium(II) and lead(II) ions as dithizone complexes and direct aspiration of the organic phase into the spectrometer. The sequential determination of cadmium and lead is possible using a computer program. The optimization step was performed by a two-level fractional factorial design involving the variables pH, dithizone mass, shaking time after addition of dithizone, and shaking time after addition of solvent. Within the studied levels, these variables were not significant. The experimental conditions established call for a sample volume of 250 mL and extraction using 4.0 mL of methyl isobutyl ketone. This way, the procedure allows determination of cadmium and lead in table salt samples with a pre-concentration factor higher than 80, and detection limits of 0.3 ng g(-1) for cadmium and 4.2 ng g(-1) for lead. The precision, expressed as relative standard deviation (n = 10), was 5.6 and 2.6% for cadmium concentrations of 2 and 20 ng g(-1), respectively, and 3.2 and 1.1% for lead concentrations of 20 and 200 ng g(-1), respectively. Recoveries of cadmium and lead in several samples, measured by the standard addition technique, also proved that this procedure is not affected by the matrix and can be applied satisfactorily to the determination of cadmium and lead in saline samples. The method was applied to the evaluation of the concentration of cadmium and lead in table salt samples consumed in Salvador City, Bahia, Brazil.
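For reference, a two-level fractional factorial screening design of the kind described, four factors in eight runs, can be generated by confounding the fourth factor with the three-way interaction. The factor names follow the abstract; the particular generator (D = ABC) is the conventional choice and is assumed here, not stated in the source.

```python
# A 2^(4-1) two-level fractional factorial design with generator D = ABC.
# Factor names follow the abstract; levels are coded -1/+1.
from itertools import product

factors = ["pH", "dithizone mass",
           "shake time (dithizone)", "shake time (solvent)"]
runs = []
for a, b, c in product((-1, 1), repeat=3):
    runs.append((a, b, c, a * b * c))   # 4th factor confounded with ABC

print(" | ".join(f"{f:>24}" for f in factors))
for run in runs:                         # 8 runs instead of the full 16
    print(" | ".join(f"{v:>24}" for v in run))
```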
Favorito, Jessica E; Luxton, Todd P; Eick, Matthew J; Grossl, Paul R
2017-10-01
Selenium is a trace element found in western US soils, where ingestion of Se-accumulating plants has resulted in livestock fatalities. Therefore, a reliable understanding of Se speciation and bioavailability is critical for effective mitigation. Sequential extraction procedures (SEP) are often employed to examine Se phases and speciation in contaminated soils but may be limited by experimental conditions. We examined the validity of a SEP using X-ray absorption spectroscopy (XAS) for both whole and a sequence of extracted soils. The sequence included removal of soluble, PO4-extractable, carbonate, amorphous Fe-oxide, crystalline Fe-oxide, organic, and residual Se forms. For whole soils, XANES analyses indicated Se(0) and Se(-II) predominated, with lower amounts of Se(IV) present, related to carbonates and Fe-oxides. Oxidized Se species were more elevated and residual/elemental Se was lower than previous SEP results from ICP-AES suggested. For soils from the SEP sequence, XANES results indicated only partial recovery of carbonate, Fe-oxide and organic Se. This suggests Se was incompletely removed during designated extractions, possibly due to lack of mineral solubilization or reagent specificity. Selenium fractions associated with Fe-oxides were reduced in amount or removed after using hydroxylamine HCl for most soils examined. XANES results indicate partial dissolution of solid-phases may occur during extraction processes. This study demonstrates why precautions should be taken to improve the validity of SEPs. Mineralogical and chemical characterizations should be completed prior to SEP implementation to identify extractable phases or mineral components that may influence extraction effectiveness. Sequential extraction procedures can be appropriately tailored for reliable quantification of speciation in contaminated soils. Copyright © 2017 Elsevier Ltd. All rights reserved.
Domain-general neural correlates of dependency formation: Using complex tones to simulate language.
Brilmayer, Ingmar; Sassenhagen, Jona; Bornkessel-Schlesewsky, Ina; Schlesewsky, Matthias
2017-08-01
There is an ongoing debate whether the P600 event-related potential component following syntactic anomalies reflects syntactic processes per se, or if it is an instance of the P300, a domain-general ERP component associated with attention and cognitive reorientation. A direct comparison of both components is challenging because of the huge discrepancy in experimental designs and stimulus choice between language and 'classic' P300 experiments. In the present study, we develop a new approach to mimic the interplay of sequential position as well as categorical and relational information in natural language syntax (word category and agreement) in a non-linguistic target detection paradigm using musical instruments. Participants were instructed to (covertly) detect target tones which were defined by instrument change and pitch rise between subsequent tones at the last two positions of four-tone sequences. We analysed the EEG using event-related averaging and time-frequency decomposition. Our results show striking similarities to results obtained from linguistic experiments. We found a P300 that showed sensitivity to sequential position and a late positivity sensitive to stimulus type and position. A time-frequency decomposition revealed significant effects of sequential position on the theta band and a significant influence of stimulus type on the delta band. Our results suggest that the detection of non-linguistic targets defined via complex feature conjunctions in the present study and the detection of syntactic anomalies share the same underlying processes: attentional shift and memory based matching processes that act upon multi-feature conjunctions. We discuss the results as supporting domain-general accounts of the P600 during natural language comprehension. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ng, Wai I; Smith, Graeme Drummond
2017-01-01
Self-management education programs (SMEPs) are potentially effective in the symptomatic management of COPD. Little is presently known about the effectiveness of these programs in Chinese COPD patients. The objective of this study was to evaluate the effectiveness of a specifically designed SMEP on levels of self-efficacy in Chinese patients with COPD. Based on the Medical Research Council framework for evaluating complex interventions, an exploratory phase randomized controlled trial was employed to examine the effects of an SMEP. Self-efficacy was the primary outcome using the COPD Self-efficacy Scale, measured at baseline and 6 months after the program. Qualitative data were sequentially collected from these patients via three focus groups to supplement the quantitative findings. The experimental group displayed significant improvement in their general self-efficacy (Z = -2.44, P = 0.015) and specifically in confronting 1) physical exertion (Z = -2.57, P = 0.01), 2) weather/environment effects (Z = -2.63, P < 0.001) and 3) intense emotions (Z = -2.54, P = 0.01). Three themes emerged from the focus groups: greater disease control, improved psychosocial well-being, and perceived incapability and individuality. The connection of the quantitative and qualitative data demonstrated that individual perceptual constancy of patients could be a determining factor modulating the effectiveness of this type of intervention. These findings highlight the potential benefits of an SMEP in Chinese patients with COPD. Further attention should be given to cultural considerations when developing this type of intervention in Chinese populations with COPD and other chronic diseases. PMID:28790816
Posterior error probability in the Mu-2 Sequential Ranging System
NASA Technical Reports Server (NTRS)
Coyle, C. W.
1981-01-01
An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and falsely indicated errors on 0.2% of the acquisitions.
Violation of the Wiedemann-Franz law in a single-electron transistor.
Kubala, Björn; König, Jürgen; Pekola, Jukka
2008-02-15
We study the influence of Coulomb interaction on the thermoelectric transport coefficients for a metallic single-electron transistor. By performing a perturbation expansion up to second order in the tunnel-barrier conductance, we include sequential and cotunneling processes as well as quantum fluctuations that renormalize the charging energy and the tunnel conductance. We find that Coulomb interaction leads to a strong violation of the Wiedemann-Franz law: the Lorenz ratio becomes gate-voltage dependent for sequential tunneling, and is increased by a factor of 9/5 in the cotunneling regime. Finally, we suggest a measurement scheme for an experimental realization.
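For reference, the Wiedemann-Franz law fixes the Lorenz ratio at the Sommerfeld value; the cotunneling result quoted above corresponds to a 9/5 enhancement of this constant. The relations below are a sketch using the standard Lorenz number; only the 9/5 factor comes from the abstract:

```latex
\[
  L_0 \;=\; \frac{\kappa}{\sigma T}
      \;=\; \frac{\pi^2}{3}\left(\frac{k_B}{e}\right)^2
      \;\approx\; 2.44\times 10^{-8}\ \mathrm{W\,\Omega\,K^{-2}},
  \qquad
  \left.\frac{L}{L_0}\right|_{\text{cotunneling}} \;=\; \frac{9}{5}.
\]
```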
ProperCAD: A portable object-oriented parallel environment for VLSI CAD
NASA Technical Reports Server (NTRS)
Ramkumar, Balkrishna; Banerjee, Prithviraj
1993-01-01
Most parallel algorithms for VLSI CAD proposed to date have one important drawback: they work efficiently only on the machines they were designed for. As a result, algorithms designed to date are dependent on the architecture for which they were developed and do not port easily to other parallel architectures. A new project under way to address this problem is described: a Portable object-oriented parallel environment for CAD algorithms (ProperCAD). The objectives of this research are (1) to develop new parallel algorithms that run in a portable object-oriented environment (CAD algorithms are being developed on CARM, a general-purpose platform for portable parallel programming, together with a truly object-oriented C++ environment specialized for CAD applications); and (2) to design the parallel algorithms around a good sequential algorithm with a well-defined parallel-sequential interface (permitting the parallel algorithm to benefit from future developments in sequential algorithms). One CAD application that has been implemented as part of the ProperCAD project, flat VLSI circuit extraction, is described. The algorithm, its implementation, and its performance on a range of parallel machines are discussed in detail. It currently runs on an Encore Multimax, a Sequent Symmetry, Intel iPSC/2 and i860 hypercubes, an nCUBE 2 hypercube, and a network of Sun Sparc workstations. Performance data are also provided for other applications that were developed, namely test pattern generation for sequential circuits, parallel logic synthesis, and standard cell placement.
ERIC Educational Resources Information Center
Polat, Ahmet; Dogan, Soner; Demir, Selçuk Besir
2016-01-01
The present study was undertaken to investigate the quality of education based on the views of the students attending social studies education departments at the Faculties of Education and to determine the existing problems and present suggestions for their solutions. The study was conducted according to exploratory sequential mixed method. In…
Zhang, Heyi; Cheng, Biao; Lu, Zhan
2018-06-20
A newly designed thiazoline iminopyridine ligand for enantioselective cobalt-catalyzed sequential Nazarov cyclization/electrophilic fluorination was developed. Various chiral α-fluorocyclopentenones were prepared in good yields and with good diastereo- and enantioselectivities. Further derivatizations could be easily carried out to provide chiral cyclopentenols with three contiguous stereocenters. Furthermore, direct deesterification of the fluorinated products could afford chiral cyclopentenones bearing a single α-fluorine substituent.
Diederich, Adele
2008-02-01
Recently, Diederich and Busemeyer (2006) evaluated three hypotheses formulated as particular versions of a sequential-sampling model to account for the effects of payoffs in a perceptual decision task with time constraints. The bound-change hypothesis states that payoffs affect the distance of the starting position of the decision process to each decision bound. The drift-rate-change hypothesis states that payoffs affect the drift rate of the decision process. The two-stage-processing hypothesis assumes two processes, one for processing payoffs and another for processing stimulus information, and that on a given trial, attention switches from one process to the other. The latter hypothesis gave the best account of their data. The present study investigated two questions: (1) Does the experimental setting influence decisions, and consequently affect the fits of the hypotheses? A task was conducted in two experimental settings--either the time limit or the payoff matrix was held constant within a given block of trials, using three different payoff matrices and four different time limits--in order to answer this question. (2) Could it be that participants neglect payoffs on some trials and stimulus information on others? To investigate this idea, a further hypothesis was considered, the mixture-of-processes hypothesis. Like the two-stage-processing hypothesis, it postulates two processes, one for payoffs and another for stimulus information. However, it differs from the previous hypothesis in assuming that on a given trial exactly one of the processes operates, never both. The present design had no effect on choice probability but may have affected choice response times (RTs). Overall, the two-stage-processing hypothesis gave the best account, with respect both to choice probabilities and to observed mean RTs and mean RT patterns within a choice pair.
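To make the competing hypotheses concrete, the sketch below simulates a basic sequential-sampling (diffusion) process in Python. It is illustrative only, with arbitrary parameter values, and contrasts a payoff-induced starting-point shift (bound-change hypothesis) with a payoff-induced drift change (drift-rate-change hypothesis):

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_trial(drift, start, bound=1.0, dt=0.001, sigma=1.0, max_t=3.0):
    """One sequential-sampling trial: evidence drifts from `start`
    until it hits +bound (choice A) or -bound (choice B)."""
    x, t = start, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (x >= bound), t  # (chose A?, response time)

# Bound-change hypothesis: payoffs shift the starting point toward a bound.
biased_start = [diffusion_trial(drift=0.5, start=0.3) for _ in range(2000)]
# Drift-rate-change hypothesis: payoffs instead change the drift rate.
biased_drift = [diffusion_trial(drift=1.0, start=0.0) for _ in range(2000)]

for name, sims in [("start bias", biased_start), ("drift bias", biased_drift)]:
    p_a = np.mean([c for c, _ in sims])
    mean_rt = np.mean([t for _, t in sims])
    print(f"{name}: P(A)={p_a:.2f}, mean RT={mean_rt:.3f}s")
```

Both manipulations raise the probability of choice A, but they produce different response-time signatures, which is why the model fits can discriminate between the hypotheses.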
Multiplexed Holographic Optical Data Storage In Thick Bacteriorhodopsin Films
NASA Technical Reports Server (NTRS)
Downie, John D.; Timucin, Dogan A.; Gary, Charles K.; Ozcan, Meric; Smithey, Daniel T.; Crew, Marshall
1998-01-01
The optical data storage capacity of photochromic bacteriorhodopsin films is investigated by means of theoretical calculations, numerical simulations, and experimental measurements on sequential recording of angularly multiplexed diffraction gratings inside a thick D85N BR film.
Experimental evaluation of P-Y curves considering liquefaction development.
DOT National Transportation Integrated Search
2010-12-01
This report presents details and findings of a test series conducted on a single pile embedded in homogeneous saturated Nevada sand, which was subjected to sequential dynamic shaking and lateral (inertial-equivalent) loading. This report documents th...
An active UHF RFID localization system for fawn saving
NASA Astrophysics Data System (ADS)
Eberhardt, M.; Lehner, M.; Ascher, A.; Allwang, M.; Biebl, E. M.
2015-11-01
We present a localization concept for active UHF RFID transponders which enables mowing machine drivers to detect and localize marked fawns. The whole system design and experimental results with transponders located near the ground in random orientations in a meadow area are shown. The communication flow between reader and transponders is realized as a dynamic master-slave concept. Multiple marked fawns are localized by processing detected transponders sequentially. With an eight-channel receiver with an integrated calibration method, one can estimate the direction of arrival by measuring the phases of the transponder signals up to a range of 50 m in all directions. For further troubleshooting, array manifolds were measured. An additional hand-held two-channel receiver allows a guided approach search without endangering the fawn with the mowing machine.
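The direction-of-arrival principle behind such a phase-measuring receiver can be illustrated with a two-element interferometer. The snippet below is a minimal sketch; the 868 MHz carrier and the half-wavelength spacing are assumptions for illustration, not values from the paper:

```python
import numpy as np

C = 3e8           # speed of light, m/s
F = 868e6         # typical European UHF RFID carrier (assumption)
LAM = C / F       # wavelength, ~0.345 m
D = LAM / 2       # element spacing; <= lambda/2 avoids phase ambiguity

def doa_from_phase(delta_phi_rad):
    """Direction of arrival for a two-element interferometer: a path
    difference d*sin(theta) maps to a phase 2*pi*d*sin(theta)/lambda."""
    s = delta_phi_rad * LAM / (2 * np.pi * D)
    return np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

print(doa_from_phase(np.pi / 4))  # ~14.5 degrees off boresight
```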
A Parallel Saturation Algorithm on Shared Memory Architectures
NASA Technical Reports Server (NTRS)
Ezekiel, Jonathan; Siminiceanu
2007-01-01
Symbolic state-space generators are notoriously hard to parallelize. However, the Saturation algorithm implemented in the SMART verification tool differs from other sequential symbolic state-space generators in that it exploits the locality of firing events in asynchronous system models. This paper explores whether event locality can be utilized to efficiently parallelize Saturation on shared-memory architectures. Conceptually, we propose to parallelize the firing of events within a decision diagram node, which is technically realized via a thread pool. We discuss the challenges involved in our parallel design and conduct experimental studies on its prototypical implementation. On a dual-processor dual-core PC, our studies show speed-ups for several example models, e.g., of up to 50% for a Kanban model, when compared to running our algorithm only on a single core.
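As a rough illustration of the idea of firing a node's local events in parallel via a thread pool, the Python sketch below dispatches independent, hypothetical event firings for a single node to worker threads. It is a toy stand-in, not the SMART implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def fire_event(node_state, event):
    """Hypothetical 'firing': apply one local event to a node's state set.
    In Saturation, events are local to a few decision-diagram levels."""
    return frozenset(s + event for s in node_state)

node_state = frozenset({0, 3, 7})
local_events = [1, 2, 5, 11]   # events whose effect is local to this node

# Fire all locally enabled events for one node concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    successors = list(pool.map(lambda e: fire_event(node_state, e),
                               local_events))

reachable = frozenset().union(*successors) | node_state
print(sorted(reachable))
```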
NASA Astrophysics Data System (ADS)
Qian, Tingting; Wang, Lianlian; Lu, Guanghua
2017-07-01
Radar correlated imaging (RCI) introduces optical correlated imaging techniques into traditional microwave imaging and has attracted widespread attention recently. Conventional RCI methods neglect the structural information of complex extended targets, which keeps the quality of the recovered result from being fully satisfactory; thus, a novel combined negative exponential restraint and total variation (NER-TV) algorithm for extended target imaging is proposed in this paper. Sparsity is measured by a sequential order-one negative exponential function, and the 2D total variation technique is then introduced to design a novel optimization problem for extended target imaging. The well-established alternating direction method of multipliers (ADMM) is applied to solve the new problem. Experimental results show that the proposed algorithm achieves efficient high-resolution imaging of extended targets.
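A plausible form of such an objective, combining a negative-exponential sparsity surrogate with 2D total variation, is sketched below. The notation and the exact weighting are assumptions for illustration, not the paper's formulation:

```latex
% Assumed NER-TV objective: a smoothed l0 surrogate built from a
% negative exponential, plus a 2D total-variation penalty.
\[
  \hat{\boldsymbol{\beta}} \;=\; \arg\min_{\boldsymbol{\beta}}\;
  \tfrac{1}{2}\,\lVert \mathbf{y} - \mathbf{A}\boldsymbol{\beta} \rVert_2^2
  \;+\; \lambda_1 \sum_{i} \bigl(1 - e^{-|\beta_i|/\sigma}\bigr)
  \;+\; \lambda_2\, \mathrm{TV}(\boldsymbol{\beta}),
\]
% where A is the measurement matrix and sigma controls how closely the
% negative-exponential term approximates the l0 pseudo-norm; ADMM splits
% the two regularizers into separate subproblems.
```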
Controlled ecological life support systems (CELSS) flight experimentation
NASA Technical Reports Server (NTRS)
Kliss, M.; Macelroy, R.; Borchers, B.; Farrance, M.; Nelson, T.; Blackwell, C.; Yendler, B.; Tremor, J.
1994-01-01
The NASA CELSS program has the goal of developing life support systems for humans in space based on the use of higher plants. The program has supported research at universities with a primary focus on increasing the productivity of candidate crop plants. To understand the effects of the space environment on plant productivity, the CELSS Test Facility (CTF) has been conceived as an instrument that will permit the evaluation of plant productivity on Space Station Freedom. The CTF will maintain specific environmental conditions and collect data on gas exchange rates and biomass accumulation over the growth period of several crop plants grown sequentially from seed to harvest. The science requirements for the CTF will be described, as will current design concepts and specific technology requirements for operation in micro-gravity.
Using Priced Options to Solve the Exposure Problem in Sequential Auctions
NASA Astrophysics Data System (ADS)
Mous, Lonneke; Robu, Valentin; La Poutré, Han
This paper studies the benefits of using priced options for solving the exposure problem that bidders with valuation synergies face when participating in multiple, sequential auctions. We consider a model in which complementary-valued items are auctioned sequentially by different sellers, who have the choice of either selling their good directly or through a priced option, after fixing its exercise price. We analyze this model from a decision-theoretic perspective and we show, for a setting where the competition is formed by local bidders, that using options can increase the expected profit for both buyers and sellers. Furthermore, we derive the equations that provide minimum and maximum bounds between which a synergy buyer's bids should fall in order for both sides to have an incentive to use the options mechanism. Next, we perform an experimental analysis of a market in which multiple synergy bidders are active simultaneously.
Code of Federal Regulations, 2010 CFR
2010-07-01
... substantial deviations from the design specifications of the sampler specified for reference methods in... general requirements as an ISO 9001-registered facility for the design and manufacture of designated... capable of automatically collecting a series of sequential samples. NO means nitrogen oxide. NO 2 means...
Application and Design Characteristics of Generalized Training Devices.
ERIC Educational Resources Information Center
Parker, Edward L.
This program identified applications and developed design characteristics for generalized training devices. The first of three sequential phases reviewed in detail new developments in Naval equipment technology that influence the design of maintenance training devices: solid-state circuitry, modularization, digital technology, standardization,…
Selecting promising treatments in randomized Phase II cancer trials with an active control.
Cheung, Ying Kuen
2009-01-01
The primary objective of Phase II cancer trials is to evaluate the potential efficacy of a new regimen in terms of its antitumor activity in a given type of cancer. Due to advances in oncology therapeutics and heterogeneity in the patient population, such evaluation can be interpreted objectively only in the presence of a prospective control group of an active standard treatment. This paper deals with the design problem of Phase II selection trials in which several experimental regimens are compared to an active control, with an objective to identify an experimental arm that is more effective than the control or to declare futility if no such treatment exists. Conducting a multi-arm randomized selection trial is a useful strategy to prioritize experimental treatments for further testing when many candidates are available, but the sample size required in such a trial with an active control could raise feasibility concerns. In this study, we extend the sequential probability ratio test for normal observations to the multi-arm selection setting. The proposed methods, allowing frequent interim monitoring, offer high likelihood of early trial termination, and as such enhance enrollment feasibility. The termination and selection criteria have closed form solutions and are easy to compute with respect to any given set of error constraints. The proposed methods are applied to design a selection trial in which combinations of sorafenib and erlotinib are compared to a control group in patients with non-small-cell lung cancer using a continuous endpoint of change in tumor size. The operating characteristics of the proposed methods are compared to that of a single-stage design via simulations: The sample size requirement is reduced substantially and is feasible at an early stage of drug development.
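The classical sequential probability ratio test for normal observations that the authors extend can be sketched in a few lines of Python. The boundaries follow Wald's approximations, and all numerical values are illustrative:

```python
import numpy as np

def sprt_normal(xs, mu0, mu1, sigma, alpha=0.05, beta=0.20):
    """Wald's SPRT for H0: mu = mu0 vs H1: mu = mu1, known sigma.
    Returns ('accept H0' | 'accept H1' | 'continue', n observations used)."""
    log_a = np.log((1 - beta) / alpha)   # upper stopping boundary
    log_b = np.log(beta / (1 - alpha))   # lower stopping boundary
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        # log-likelihood ratio increment for one normal observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        if llr >= log_a:
            return "accept H1", n
        if llr <= log_b:
            return "accept H0", n
    return "continue", len(xs)

rng = np.random.default_rng(1)
data = rng.normal(0.4, 1.0, size=200)   # true effect between mu0 and mu1
print(sprt_normal(data, mu0=0.0, mu1=0.5, sigma=1.0))
```

Because the log-likelihood ratio is monitored after every observation, the test usually stops well before a fixed-sample design of comparable error rates, which is exactly the enrollment-feasibility advantage the abstract describes.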
Werk, Tobias; Mahler, Hanns-Christian; Ludwig, Imke Sonja; Luemkemann, Joerg; Huwyler, Joerg; Hafner, Mathias
Dual-chamber syringes were originally designed to separate a solid substance and its diluent. However, they can also be used to separate liquid formulations of two individual drug products, which cannot be co-formulated due to technical or regulatory issues. A liquid/liquid dual-chamber syringe can be designed to achieve homogenization and mixing of both solutions prior to administration, or it can be used to sequentially inject both solutions. While sequential injection can be easily achieved by a dual-chamber syringe with a bypass located at the needle end of the syringe barrel, mixing of the two fluids may provide more challenges. Within this study, the mixing behavior of surrogate solutions in different dual-chamber syringes is assessed. Furthermore, the influence of parameters such as injection angle, injection speed, agitation, and sample viscosity were studied. It was noted that mixing was poor for the commercial dual-chamber syringes (with a bypass designed as a longitudinal ridge) when the two liquids significantly differ in their physical properties (viscosity, density). However, an optimized dual-chamber syringe design with multiple bypass channels resulted in improved mixing of liquids. © PDA, Inc. 2017.
Experimenters' reference based upon Skylab experiment management
NASA Technical Reports Server (NTRS)
1974-01-01
The methods and techniques for experiment development and integration that evolved during the Skylab Program are described to facilitate transferring this experience to experimenters in future manned space programs. Management responsibilities and the sequential process of experiment evolution from initial concept through definition, development, integration, operation and postflight analysis are outlined and amplified, as appropriate. Emphasis is placed on specific lessons learned on Skylab that are worthy of consideration by future programs.
ERIC Educational Resources Information Center
Ronnlund, Michael; Nilsson, Lars-Goran
2008-01-01
To estimate Flynn effects (FEs) on forms of declarative memory (episodic, semantic) and visuospatial ability (Block Design) time-sequential analyses of data for Swedish adult samples (35-80 years) assessed on either of four occasions (1989, 1994, 1999, 2004; n = 2995) were conducted. The results demonstrated cognitive gains across occasions,…
Sequential design of discrete linear quadratic regulators via optimal root-locus techniques
NASA Technical Reports Server (NTRS)
Shieh, Leang S.; Yates, Robert E.; Ganesan, Sekar
1989-01-01
A sequential method employing classical root-locus techniques has been developed to determine the quadratic weighting matrices and discrete linear quadratic regulators of multivariable control systems. At each recursive step, an intermediate unity-rank state-weighting matrix that contains some invariant eigenvectors of the open-loop matrix is assigned, and an intermediate characteristic equation of the closed-loop system containing the invariant eigenvalues is created.
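One recursive step of such a scheme can be illustrated with standard tools: assign a unity-rank state-weighting matrix, solve the discrete algebraic Riccati equation, and inspect the closed-loop eigenvalues. The plant and weighting direction below are hypothetical, and the sketch shows only the Riccati step, not the paper's root-locus recursion:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # hypothetical discrete plant
B = np.array([[0.0], [0.1]])
R = np.array([[1.0]])

v = np.array([[1.0], [0.5]])              # chosen weighting direction
Q = v @ v.T                               # unity-rank state weighting

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal LQR gain
print(np.abs(np.linalg.eigvals(A - B @ K)))  # closed-loop eigenvalue moduli
```

In the sequential method, the resulting closed-loop eigenvalues and eigenvectors would be carried forward as the invariants of the next recursive step.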
Liu, Ying; ZENG, Donglin; WANG, Yuanjia
2014-01-01
Dynamic treatment regimens (DTRs) are sequential decision rules tailored at each point where a clinical decision is made, based on each patient's time-varying characteristics and intermediate outcomes observed at earlier points in time. The complexity, patient heterogeneity, and chronicity of mental disorders call for learning optimal DTRs to dynamically adapt treatment to an individual's response over time. The Sequential Multiple Assignment Randomized Trial (SMART) design allows for estimating causal effects of DTRs. Modern statistical tools have been developed to optimize DTRs based on personalized variables and intermediate outcomes using rich data collected from SMARTs; these statistical methods can also be used to recommend tailoring variables for designing future SMART studies. This paper introduces DTRs and SMARTs using two examples in mental health studies, discusses two machine learning methods for estimating optimal DTRs from SMART data, and demonstrates the performance of the statistical methods using simulated data. PMID:25642116
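One standard machine learning approach for SMART data, Q-learning by backward induction with linear regression, can be sketched as follows. The simulated two-stage trial and the model terms are illustrative assumptions, not the paper's simulation design:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(size=n)                  # baseline covariate
a1 = rng.choice([-1, 1], size=n)         # stage-1 randomized treatment
x2 = 0.5 * x1 + rng.normal(size=n)       # intermediate outcome
a2 = rng.choice([-1, 1], size=n)         # stage-2 randomized treatment
y = x1 + a1 * (0.3 - x1) + a2 * 0.5 * x2 + rng.normal(size=n)  # final outcome

# Stage 2: model the outcome given the full history and stage-2 treatment.
h2 = np.column_stack([x1, a1, x2, a2, a2 * x2])
q2 = LinearRegression().fit(h2, y)

def v2(x1_, a1_, x2_):
    """Predicted outcome under the best stage-2 action (backward induction)."""
    preds = [q2.predict(np.column_stack(
        [x1_, a1_, x2_, np.full_like(x1_, a), a * x2_])) for a in (-1, 1)]
    return np.maximum(*preds)

# Stage 1: regress the stage-2 optimal value (pseudo-outcome) on stage-1 info.
h1 = np.column_stack([x1, a1, a1 * x1])
q1 = LinearRegression().fit(h1, v2(x1, a1, x2))
b_a1, b_a1x1 = q1.coef_[1], q1.coef_[2]
print(f"stage-1 rule: give a1=+1 when {b_a1:.2f} + {b_a1x1:.2f}*x1 > 0")
```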
Sequential parallel comparison design with binary and time-to-event outcomes.
Silverman, Rachel Kloss; Ivanova, Anastasia; Fine, Jason
2018-04-30
Sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials, especially trials with a potentially high placebo effect. Sequential parallel comparison design is conducted in 2 stages. Participants are randomized between active therapy and placebo in stage 1. Then, stage 1 placebo nonresponders are rerandomized between active therapy and placebo. Data from the 2 stages are pooled to yield a single P value. We consider SPCD with binary and with time-to-event outcomes. For time-to-event outcomes, response is defined as a favorable event prior to the end of follow-up for a given stage of SPCD. We show that for these cases, the usual test statistics from stages 1 and 2 are asymptotically normal and uncorrelated under the null hypothesis, leading to a straightforward combined testing procedure. In addition, we show that the estimators of the treatment effects from the 2 stages are asymptotically normal and uncorrelated under the null and alternative hypotheses, yielding confidence interval procedures with correct coverage. Simulations and real data analysis demonstrate the utility of the binary and time-to-event SPCD. Copyright © 2018 John Wiley & Sons, Ltd.
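Given two stage-wise statistics that are asymptotically standard normal and uncorrelated under the null, a prespecified weighted combination is itself standard normal, which is the sense in which pooling the two stages is straightforward. The weight w is a design choice, not a value from the paper:

```latex
% With Z1, Z2 asymptotically N(0,1) and uncorrelated under H0, and a
% prespecified weight w in (0,1):
\[
  Z_{\mathrm{SPCD}} \;=\; \frac{w\,Z_1 + (1-w)\,Z_2}{\sqrt{w^2 + (1-w)^2}}
  \;\sim\; N(0,1) \quad \text{under } H_0 ,
\]
% so H0 is rejected when Z_SPCD exceeds the usual one-sided critical value.
```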
Vandelanotte, Corneel; De Bourdeaudhuij, Ilse; Sallis, James F; Spittaels, Heleen; Brug, Johannes
2005-04-01
Little evidence exists about the effectiveness of "interactive" computer-tailored interventions or about the combined effectiveness of tailored interventions on physical activity and diet. Furthermore, it is unknown whether they should be executed sequentially or simultaneously. The purpose of this study was to examine (a) the effectiveness of interactive computer-tailored interventions for increasing physical activity and decreasing fat intake and (b) which intervening mode, sequential or simultaneous, is most effective in behavior change. Participants (N = 771) were randomly assigned to receive (a) the physical activity and fat intake interventions simultaneously at baseline, (b) the physical activity intervention at baseline and the fat intake intervention 3 months later, (c) the fat intake intervention at baseline and the physical activity intervention 3 months later, or (d) a place in the control group. Six months postbaseline, the results showed that the tailored interventions produced significantly higher physical activity scores, F(2, 573) = 11.4, p < .001, and lower fat intake scores, F(2, 565) = 31.4, p < .001, in the experimental groups when compared to the control group. For both behaviors, the sequential and simultaneous intervening modes proved effective; however, for the fat intake intervention and for participants who did not meet the physical activity recommendation, the simultaneous mode appeared to work better than the sequential mode.
Lai, Zhihong; La Noce, Anna
2016-01-01
The global development of a biosimilar product is a methodologically complex affair, lined with potential design pitfalls and operational missteps to be avoided. Without careful attention to experimental design and meticulous execution, a development programme may fail to demonstrate equivalence, as would be anticipated for a biosimilar product, and not receive regulatory approval based on current guidance. In order to demonstrate similarity of a biosimilar product versus the originator (ie, the branded product), based on regulatory guidance, a stepwise approach is usually taken, starting with a comprehensive structural and functional characterisation of the new biological moiety. Given the sequential nature of the review process, the extent and nature of the non-clinical in vivo studies and the clinical studies to be performed depend on the level of evidence obtained in these previous step(s). A clinical efficacy trial is often required to further demonstrate biosimilarity of the two products (biosimilar vs branded) in terms of comparative safety and effectiveness. Owing to the focus on demonstrating biosimilarity and not safety and efficacy de novo, designing an adequate phase III (potentially pivotal) clinical efficacy study of a biosimilar may present some unique challenges. Using adalimumab as an example, we highlight design elements that may deserve special attention.
Amosa, Mutiu K
2016-08-01
Sorption optimization and mechanisms of hardness and alkalinity removal on bifunctional empty fruit bunch-based powdered activated carbon (PAC) were studied. The PAC possessed both high surface area and ion-exchange properties, and it was utilized in the treatment of biotreated palm oil mill effluent. Batch adsorption experiments designed with Design-Expert® were conducted to correlate the individual and interactive effects of the three adsorption parameters: PAC dosage, agitation speed and contact time. The sorption trends of the two contaminants were sequentially assessed through a full factorial design with three-factor interaction models and a central composite design with polynomial models of quadratic order. Analysis of variance revealed the significant factors for each design response, with very high R² values indicating good agreement between model and experimental values. The optimum operating conditions for the two contaminants differed due to their different regions of operating interest, thus necessitating the use of a desirability factor to obtain consolidated optimum operating conditions. The equilibrium data for alkalinity and hardness sorption were better represented by the Langmuir isotherm, while the pseudo-second-order kinetic model better described the adsorption rates and behavior. It was concluded that chemisorption was the major contributor to the adsorption process.
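As an illustration of the isotherm analysis described above, the Python sketch below fits the Langmuir model to hypothetical equilibrium data; the numbers are placeholders, not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k_l):
    """Langmuir isotherm: q = q_max * K_L * Ce / (1 + K_L * Ce)."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Hypothetical equilibrium data (mg/L vs mg/g), not from the paper.
c_eq = np.array([5.0, 10.0, 25.0, 50.0, 100.0, 200.0])
q_e = np.array([12.0, 20.0, 33.0, 41.0, 46.0, 48.0])

(q_max, k_l), _ = curve_fit(langmuir, c_eq, q_e, p0=[50.0, 0.05])
ss_res = np.sum((q_e - langmuir(c_eq, q_max, k_l)) ** 2)
r2 = 1 - ss_res / np.sum((q_e - q_e.mean()) ** 2)
print(f"q_max={q_max:.1f} mg/g, K_L={k_l:.3f} L/mg, R^2={r2:.3f}")
```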
BEopt - Building Energy Optimization
NREL - National Renewable Energy Laboratory
The BEopt (Building Energy Optimization) software provides capabilities to evaluate residential building designs and identify cost-optimal efficiency packages. The sequential search optimization technique used by BEopt finds minimum-cost building designs at different...
When to Use What Research Design
ERIC Educational Resources Information Center
Vogt, W. Paul; Gardner, Dianne C.; Haeffele, Lynne M.
2012-01-01
Systematic, practical, and accessible, this is the first book to focus on finding the most defensible design for a particular research question. Thoughtful guidelines are provided for weighing the advantages and disadvantages of various methods, including qualitative, quantitative, and mixed methods designs. The book can be read sequentially or…
Designing User-Computer Dialogues: Basic Principles and Guidelines.
ERIC Educational Resources Information Center
Harrell, Thomas H.
This discussion of the design of computerized psychological assessment or testing instruments stresses the importance of the well-designed computer-user interface. The principles underlying the three main functional elements of computer-user dialogue--data entry, data display, and sequential control--are discussed, and basic guidelines derived…
A sequential linear optimization approach for controller design
NASA Technical Reports Server (NTRS)
Horta, L. G.; Juang, J.-N.; Junkins, J. L.
1985-01-01
A linear optimization approach with a simple real-arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first-order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss-beam finite element model for the optimal sizing and placement of active/passive structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity of the linear optimization approach to initial conditions is also demonstrated.
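The flavor of one linearized design step can be conveyed with a small linear program: given first-order sensitivities of the eigenvalue real parts with respect to the design parameters, choose a bounded parameter step that moves the eigenvalues as far into the stable half-plane as possible. The sensitivity matrix and step bounds below are hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical first-order sensitivities of eigenvalue real parts with
# respect to three design parameters (rows: modes, cols: parameters).
S = np.array([[-0.8, -0.1,  0.3],
              [-0.2, -0.5, -0.4]])

# One continuation step: pick the parameter change that shifts the sum of
# real parts as far left as possible, within linear step bounds.
c = S.sum(axis=0)                        # minimize total real-part shift
res = linprog(c, bounds=[(-0.1, 0.1)] * 3, method="highs")
print("parameter step:", res.x)          # apply step, then re-linearize
```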
EXPERIMENTAL AND MATHEMATICAL MODELING METHODS FOR THE INVESTIGATION OF TOXICOLOGICAL INTERACTIONS
While procedures have been developed and used for many years to assess risk and determine acceptable exposure levels to individual chemicals, most cases of environmental contamination can result in concurrent or sequential exposure to more than one chemical. Toxicological predict...
NASA Technical Reports Server (NTRS)
Gai, E. G.; Curry, R. E.
1978-01-01
An investigation of the behavior of the human decision maker is described for a task related to the problem of a pilot using a traffic situation display to avoid collisions. This sequential signal detection task is characterized by highly correlated signals with time-varying strength. Experimental results are presented, and the behavior of the observers is analyzed using the theory of Markov processes and classical signal detection theory. Mathematical models are developed which describe the main result of the experiment: that correlation in sequential signals induced perseveration in the observers' responses, a strong tendency to repeat their previous decision even when it was wrong.
NASA Astrophysics Data System (ADS)
Zhang, Bin; Gan, Yi; Xu, Chang-Qing
2018-06-01
The field sequential modulation of a Nd:YVO4/MgO:PPLN intra-cavity, frequency-doubling green laser was studied. The modulation frequency was set at 1 kHz and the duty cycle was varied from 20% to CW operation. It was shown that the quasi-phase-matched (QPM) temperature decreases with an increase of the modulation duty cycle, which in turn causes the peak efficiency to rise. It was found that the temperature change in the MgO:PPLN and the thermal lens effect in the Nd:YVO4 crystal were the respective origins of these observed experimental phenomena.
Catalán, Javier; del Valle, Juan Carlos; Kasha, Michael
1999-01-01
The experimental and theoretical bases for a synchronous or concerted double-proton transfer in centro-symmetric H-bonded electronically excited molecular dimers are presented. The prototype model is the 7-azaindole dimer. New research offers confirmation of a concerted mechanism for excited-state biprotonic transfer. Recent femtosecond photoionization and coulombic explosion techniques have given rise to time-of-flight MS observations suggesting sequential two-step biprotonic transfer for the same dimer. We interpret the overall species observed in the time-of-flight experiments as explicable without conflict with the concerted mechanism of proton transfer. PMID:10411876
Grodowska, Katarzyna; Parczewski, Andrzej
2013-01-01
The purpose of the present work was to find optimum conditions for the headspace gas chromatography (HS-GC) determination of residual solvents which usually appear in pharmaceutical products. Two groups of solvents were taken into account in the present examination. Group I consisted of isopropanol, n-propanol, isobutanol, n-butanol and 1,4-dioxane, and group II included cyclohexane, n-hexane and n-heptane. The members of the groups were selected in previous investigations in which experimental design and chemometric methods were applied. Four factors describing the HS conditions were considered in the optimization: sample volume, equilibration time, equilibrium temperature and NaCl concentration in the sample. The relative GC peak area served as the optimization criterion, considered separately for each analyte. A sequential variable-size simplex optimization strategy was used, and the progress of optimization was traced and visualized in various ways simultaneously. The optimum HS conditions appeared different for the two groups of solvents tested, which proves that the influence of experimental conditions (factors) depends on analyte properties. The optimization resulted in significant signal increases (from seven to fifteen times).
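The variable-size simplex strategy is essentially Nelder-Mead search. The sketch below optimizes a stand-in response surface in place of actual HS-GC runs; the factor settings and the surface itself are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def neg_peak_area(hs):
    """Stand-in response surface for the relative GC peak area as a function
    of (sample volume mL, equilibration time min, temperature C, NaCl %).
    A real run would perform an HS-GC injection at these settings."""
    v, t, temp, nacl = hs
    return -(np.exp(-((v - 4) / 2) ** 2) * np.exp(-((t - 20) / 10) ** 2)
             * np.exp(-((temp - 80) / 15) ** 2) * (1 + 0.02 * nacl))

x0 = np.array([2.0, 10.0, 60.0, 5.0])   # starting simplex centre
res = minimize(neg_peak_area, x0, method="Nelder-Mead")
print("optimum HS settings:", np.round(res.x, 1))
```

Minimizing the negative peak area maximizes the signal, mirroring the criterion used in the study.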
Wild, Aaron T; Gandhi, Nishant; Chettiar, Sivarajan T; Aziz, Khaled; Gajula, Rajendra P; Williams, Russell D; Kumar, Rachit; Taparra, Kekoa; Zeng, Jing; Cades, Jessica A; Velarde, Esteban; Menon, Siddharth; Geschwind, Jean F; Cosgrove, David; Pawlik, Timothy M; Maitra, Anirban; Wong, John; Hales, Russell K; Torbenson, Michael S; Herman, Joseph M; Tran, Phuoc T
2013-01-01
Sorafenib (SOR) is the only systemic agent known to improve survival for hepatocellular carcinoma (HCC). However, SOR prolongs survival by less than 3 months and does not alter symptomatic progression. To improve outcomes, several phase I-II trials are currently examining SOR with radiation (RT) for HCC utilizing heterogeneous concurrent and sequential treatment regimens. Our study provides preclinical data characterizing the effects of concurrent versus sequential RT-SOR on HCC cells both in vitro and in vivo. Concurrent and sequential RT-SOR regimens were tested for efficacy among 4 HCC cell lines in vitro by assessment of clonogenic survival, apoptosis, cell cycle distribution, and γ-H2AX foci formation. Results were confirmed in vivo by evaluating tumor growth delay and performing immunofluorescence staining in a hind-flank xenograft model. In vitro, concurrent RT-SOR produced radioprotection in 3 of 4 cell lines, whereas sequential RT-SOR produced decreased colony formation among all 4. Sequential RT-SOR increased apoptosis compared to RT alone, while concurrent RT-SOR did not. Sorafenib induced reassortment into less radiosensitive phases of the cell cycle through G1-S delay and cell cycle slowing. More double-strand breaks (DSBs) persisted 24 h post-irradiation for RT alone versus concurrent RT-SOR. In vivo, sequential RT-SOR produced the greatest tumor growth delay, while concurrent RT-SOR was similar to RT alone. More persistent DSBs were observed in xenografts treated with sequential RT-SOR or RT alone versus concurrent RT-SOR. Sequential RT-SOR additionally produced a greater reduction in xenograft tumor vascularity and mitotic index than either concurrent RT-SOR or RT alone. In conclusion, sequential RT-SOR demonstrates greater efficacy against HCC than concurrent RT-SOR both in vitro and in vivo. These results may have implications for clinical decision-making and prospective trial design. PMID:23762417
Maffei, D F; Sant'Ana, A S; Monteiro, G; Schaffner, D W; Franco, B D G M
2016-06-01
This study evaluated the impact of sodium dichloroisocyanurate (5, 10, 20, 30, 40, 50 and 250 mg l-1) in wash water on the transfer of Salmonella Typhimurium from contaminated lettuce to wash water and then to other noncontaminated lettuces washed sequentially in the same water. Experiments were designed to mimic the conditions commonly seen in minimally processed vegetable (MPV) processing plants in Brazil. The scenarios were as follows: (1) washing one inoculated lettuce portion in nonchlorinated water, followed by washing 10 noninoculated portions sequentially; (2) washing one inoculated lettuce portion in chlorinated water, followed by washing five noninoculated portions sequentially; (3) washing five inoculated lettuce portions in chlorinated water sequentially, followed by washing five noninoculated portions sequentially; (4) washing five noninoculated lettuce portions in chlorinated water sequentially, followed by washing five inoculated portions sequentially and then five noninoculated portions sequentially in the same water. Salm. Typhimurium transfer from inoculated lettuce to wash water and further dissemination to noninoculated lettuces occurred when nonchlorinated water was used (scenario 1). When chlorinated water was used (scenarios 2, 3 and 4), no measurable Salm. Typhimurium transfer occurred if the sanitizer concentration was ≥10 mg l-1. Use of sanitizers at correct concentrations is important to minimize the risk of microbial transfer during MPV washing. In this study, the impact of sodium dichloroisocyanurate in the wash water on the transfer of Salm. Typhimurium from inoculated lettuce to wash water and then to other noninoculated lettuces washed sequentially in the same water was evaluated. The use of chlorinated water at concentrations above 10 mg l-1 effectively prevented Salm. Typhimurium transfer under several different washing scenarios. Conversely, when nonchlorinated water was used, Salm. Typhimurium transfer occurred in at least 10 noninoculated batches of lettuce washed sequentially in the same water. © 2016 The Society for Applied Microbiology.
THRESHOLD LOGIC SYNTHESIS OF SEQUENTIAL MACHINES.
The application of threshold logic to the design of sequential machines is the subject of this research. A single layer of threshold logic units in...advantages of fewer components because of the use of threshold logic, along with very high-speed operation resulting from the use of only a single layer of...logic. In some instances, namely for asynchronous machines, the only delay need be the natural delay of the single layer of threshold elements. It is
MSFC Skylab experimenter's reference
NASA Technical Reports Server (NTRS)
1974-01-01
The methods and techniques for experiment development and integration that evolved during the Skylab Program are described to facilitate transferring this experience to experimenters in future manned space programs. Management responsibilities and the sequential process of experiment evolution from initial concept through definition, development, integration, operation and postflight analysis are outlined in the main text and amplified, as appropriate, in appendixes. Emphasis is placed on specific lessons learned on Skylab that are worthy of consideration by future programs.
An experimental study of perforated muzzle brakes
NASA Astrophysics Data System (ADS)
Dillon, R. E., Jr.; Nagamatsu, H. T.
1984-06-01
A firing test was conducted to examine the recoil efficiency and blast characteristics of perforated muzzle brakes fitted to a 20 mm cannon. Recoil impulse, blast overpressures, muzzle velocity, sequential spark shadowgraphs, and photographs of the muzzle flash structure were obtained. Three different muzzle devices were used, with one device equipped with pressure transducers to measure the static pressure in the brake. Experimental results are compared with the earlier predictions of Dillon and Nagamatsu.
Fracture resistance of retreated roots using different retreatment systems.
Er, Kursat; Tasdemir, Tamer; Siso, Seyda Herguner; Celik, Davut; Cora, Sabri
2011-08-01
This study was designed to evaluate the fracture resistance of roots retreated with different rotary retreatment systems. Forty-eight freshly extracted human canine teeth with single straight root canals were instrumented sequentially from size 30 to size 55 using K-files with a step-back technique. The teeth were randomly divided into three experimental groups and one control group of 12 specimens each. In the experimental groups, the root canals were filled using cold lateral compaction of gutta-percha and AH Plus (Dentsply Detrey, Konstanz, Germany) sealer. Removal of gutta-percha was performed with the following devices and techniques: ProTaper Universal (Dentsply Maillefer, Ballaigues, Switzerland), R-Endo (Micro-Mega, Besançon, France), and Mtwo (Sweden & Martina, Padova, Italy) rotary retreatment systems. Control group specimens were only instrumented, not filled or retreated. The specimens were then mounted in copper rings filled with a self-curing polymethylmethacrylate resin, and the force required to cause vertical root fracture was measured using a universal testing device. The fracture force of the roots was recorded, and the results in the various groups were compared. Statistical analysis was accomplished by one-way ANOVA and post hoc Tukey tests. There were statistically significant differences between the control and experimental groups (P<.05). However, there were no significant differences among the experimental groups. Based on the results, all rotary retreatment techniques used in this in vitro study produced similar root weakness.
Ebrahimi, Ahmad; Kia, Reza; Komijan, Alireza Rashidi
2016-01-01
In this article, a novel integrated mixed-integer nonlinear programming model is presented for designing a cellular manufacturing system (CMS), considering machine layout and part scheduling problems simultaneously as interrelated decisions. The integrated CMS model is formulated to incorporate several design features, including part due dates, material handling time, operation sequence, processing time, an intra-cell layout of unequal-area facilities, and part scheduling. The objective function is to minimize makespan, tardiness penalties, and the material handling costs of inter-cell and intra-cell movements. Two numerical examples are solved with the Lingo software to illustrate the results obtained by the incorporated features. In order to assess the effects and importance of integrating machine layout and part scheduling in designing a CMS, sequential and concurrent approaches are investigated, and the improvement resulting from the concurrent approach is revealed. Also, due to the NP-hardness of the integrated model, an efficient genetic algorithm is designed. Computational results indicate that the best solutions found by the GA are better than those found by B&B in much less time for both the sequential and concurrent approaches. Moreover, comparisons between the objective function values (OFVs) obtained by the sequential and concurrent approaches demonstrate that the OFV improvement is on average around 17% with the GA and 14% with B&B.
Economic Factors in Tunnel Construction
DOT National Transportation Integrated Search
1979-02-01
This report describes a new cost estimating system for tunneling. The system is designed so that it may be used to aid planners, engineers, and designers in evaluating the cost impact of decisions they may make during the sequential stages of plannin...
Many cases of environmental contamination result in concurrent or sequential exposure to more than one chemical. Limitations of available resources prevent experimental toxicology from providing health risk information about all the possible mixtures to which humans or other spec...
Lei, Jie; Peng, Bing; Min, Xiaobo; Liang, Yanjie; You, Yang; Chai, Liyuan
2017-04-16
This study focuses on the modeling and optimization of lime-based stabilization of high alkaline arsenic-bearing sludges (HAABS) and describes the relationship between the arsenic leachate concentration (ALC) and the stabilization parameters in order to develop a prediction model for obtaining the optimal process parameters and conditions. A central composite design (CCD) along with response surface methodology (RSM) was used to model and investigate the stabilization process with three independent variables, the Ca/As mole ratio, reaction time and liquid/solid ratio, along with their interactions. The characteristic changes of the HAABS before and after stabilization were verified by X-ray diffraction (XRD), scanning electron microscopy (SEM), particle size distribution (PSD) and the Community Bureau of Reference (BCR) sequential extraction procedure. A prediction model Y (ALC) with a statistically significant P-value < 0.01 and a high correlation coefficient R² = 93.22% was obtained. The optimal parameters were successfully predicted by the model for a minimum ALC of 0.312 mg/L, which was validated by the experimental result (0.306 mg/L). The XRD, SEM and PSD results indicated that the formation of the crystalline calcium arsenates Ca5(AsO4)3OH and Ca4(OH)2(AsO4)2·4H2O played an important role in minimizing the ALC. The BCR sequential extraction results demonstrated that the treated HAABS were stable in a weakly acidic environment in the short term but posed a potential environmental risk over the long term. The results clearly confirm that the proposed three-factor CCD is an effective approach for modeling the stabilization of HAABS. However, further solidification technology is suggested for use after lime-based stabilization treatment of arsenic-bearing sludges.
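The CCD/RSM workflow (fit a full quadratic model to coded factor settings, then minimize the fitted surface) can be sketched in a few lines. The coded design points and responses below are invented placeholders, not the study's data:

```python
import numpy as np
from itertools import combinations, product

# Hypothetical coded CCD: 8 factorial, 6 axial (alpha = 1.68), 2 center runs
# for (Ca/As ratio, reaction time, liquid/solid ratio); responses are ALC (mg/L).
factorial = np.array(list(product([-1, 1], repeat=3)), dtype=float)
axial = 1.68 * np.vstack([np.eye(3), -np.eye(3)])
center = np.zeros((2, 3))
X = np.vstack([factorial, axial, center])
y = np.array([2.1, 1.8, 1.6, 1.3, 1.1, 0.9, 0.7, 0.5,
              0.8, 1.9, 0.9, 1.7, 1.0, 1.6, 0.35, 0.33])

def quad_model_matrix(X):
    """Columns: intercept, linear, pure quadratic, two-way interactions."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(3)]
    cols += [X[:, i] ** 2 for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
    return np.column_stack(cols)

M = quad_model_matrix(X)
beta, *_ = np.linalg.lstsq(M, y, rcond=None)
pred = M @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"quadratic RSM fit: R^2 = {r2:.3f}")  # minimize fitted surface for optimum
```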
Wang, Jian-ya; Fang, Zhao-lun
2002-02-01
A microchip flow cell was developed for flow injection renewable surface assay with reflectance spectrophotometry. The flow cell was coupled to a sequential injection system and an optical fiber photometric detection system. The flow cell featured a three-layer structure: the flow channel was cut into a silicone rubber membrane forming the middle layer, and a porous filter was inlaid across a widened section of the channel to trap microbeads introduced into the flow cell. The area of the detection window of the flow cell was approximately 3.6 mm2, the volume of the beads trapped in the flow cell was 2.2 microL, and the depth of the bead layer was 600 microns. A multistrand bifurcated optical fiber coupled the incident light, detector and flow cell. The chromogenic reaction of Cr(VI) with 1,5-diphenylcarbohydrazide (DPC) adsorbed on trapped Polysorb C-18 beads was used as a model reaction to optimize the flow cell design and the experimental system. The reflectance of the renewable reaction surface was monitored at 540 nm. With a 100 microL sample loaded and a 1.0 mL.min-1 carrier flow rate, the linear response range was 0-0.6 microgram.mL-1 Cr(VI). A detection limit (3 sigma) of 6 ng.mL-1, a precision of 1.5% RSD (n = 11), and a throughput of 64 samples per hour were achieved. Considerations in system and flow cell design, and the influence of the depth of the bead layer, the weight of beads used, and the carrier stream flow rate on performance are discussed.
Almeida, Teresa Cristina Abreu; Larentis, Ariane Leites; Ferraz, Helen Conceição
2015-01-01
The study of the stability of concentrated oil-in-water emulsions is imperative to provide a scientific approach for an important problem in the beverage industry, contributing to abolish the empiricism still present nowadays. The use of these emulsions would directly imply a reduction of transportation costs between production and the sales points, where dilution takes place. The goal of this research was to evaluate the influence of the main components of a lemon emulsion on its stability, aiming to maximize the concentration of oil in the beverage and to correlate its physicochemical characteristics to product stability, allowing an increase of shelf life of the final product. For this purpose, analyses of surface and interface tension, electrokinetic potential, particle size and rheological properties of the emulsions were conducted. A 2^(4-1) fractional factorial design was performed with the following variables: lemon oil/water ratio (30% to 50%), starch and Arabic gum concentrations (0% to 30%) and dioctyl sodium sulfosuccinate (0 mg/L to 100 mg/L), including an evaluation of the responses at the central conditions of each variable. Sequentially, a full design was prepared to evaluate the two most influential variables obtained in the first plan, in which concentration of starch and gum ranged from 0% to 20%, while concentration of lemon oil/water ratio was fixed at 50%, without dioctyl sodium sulfosuccinate. Concentrated emulsions with stability superior to 15 days were obtained with either starch or Arabic gum and 50% lemon oil. The most stable formulations presented viscosity over 100 cP and ratio between the surface tension of the emulsion and the mucilage of over 1. These two answers were selected, since they better represent the behavior of emulsions in terms of stability and could be used as tools for an initial selection of the most promising formulations. PMID:25793301
ERIC Educational Resources Information Center
Gibson, Michael R.
2016-01-01
"Designing backwards" is presented here as a means to utilize human-centered processes in diverse educational settings to help teachers and students learn to formulate and operate design processes to achieve three sequential and interrelated goals. The first entails teaching them to effectively and empathetically identify, frame and…
Heiles, Sven; Cooper, Richard J.; DiTucci, Matthew J.
2017-01-01
Sequential water molecule binding enthalpies, ΔH_(n,n−1), are important for a detailed understanding of competitive interactions between ions, water and solute molecules, and how these interactions affect physical properties of ion-containing nanodrops that are important in aerosol chemistry. Water molecule binding enthalpies have been measured for small clusters of many different ions, but these values for ion-containing nanodrops containing more than 20 water molecules are scarce. Here, ΔH_(n,n−1) values are deduced from high-precision ultraviolet photodissociation (UVPD) measurements as a function of ion identity, charge state and cluster size between 20 and 500 water molecules, and for ions with +1, +2 and +3 charges. The ΔH_(n,n−1) values are obtained from the number of water molecules lost upon photoexcitation at a known wavelength, and modeling of the release of energy into the translational, rotational and vibrational motions of the products. The ΔH_(n,n−1) values range from 36.82 to 50.21 kJ/mol. For clusters containing more than ∼250 water molecules, the binding enthalpies are between the bulk heat of vaporization (44.8 kJ/mol) and the sublimation enthalpy of bulk ice (51.0 kJ/mol). These values depend on ion charge state for clusters with fewer than 150 water molecules, but there is a negligible dependence at larger size. There is a minimum in the ΔH_(n,n−1) values that depends on the cluster size and ion charge state, which can be attributed to the competing effects of ion solvation and surface energy. The experimental ΔH_(n,n−1) values can be fit to the Thomson liquid drop model (TLDM) using bulk ice parameters. By optimizing the surface tension and the temperature change of the logarithmic partial pressure for the TLDM, the experimental sequential water molecule binding enthalpies can be fit with an accuracy of ±3.3 kJ/mol over the entire range of cluster sizes. PMID:28451364
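As an illustration of the kind of size-dependent fit described above, the sketch below fits a simplified Thomson-type dependence, ΔH(n) ≈ A − B·n^(−1/3), to synthetic binding enthalpies; the functional form is a common simplification of the full TLDM, and the data points and constants are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def tldm(n, a, b):
    # Simplified Thomson-type size dependence: bulk limit `a` minus a
    # surface correction that scales inversely with droplet radius (~ n^(1/3)).
    return a - b * n ** (-1.0 / 3.0)

# Illustrative cluster sizes and binding enthalpies (kJ/mol), not measured data.
n = np.array([20, 50, 100, 250, 500], dtype=float)
dh = np.array([38.0, 42.5, 45.0, 47.5, 49.0])

(a, b), _ = curve_fit(tldm, n, dh, p0=(50.0, 30.0))
print(f"bulk-limit estimate: {a:.1f} kJ/mol, surface term: {b:.1f}")
```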
High throughput screening of CO2 solubility in aqueous monoamine solutions.
Porcheron, Fabien; Gibert, Alexandre; Mougin, Pascal; Wender, Aurélie
2011-03-15
Post-combustion Carbon Capture and Storage (CCS) technology is viewed as an efficient solution to reduce CO2 emissions of coal-fired power stations. In CCS, an aqueous amine solution is commonly used as a solvent to selectively capture CO2 from the flue gas. However, this process generates additional costs, mostly from the reboiler heat duty required to release the carbon dioxide from the loaded solvent solution. In this work, we present thermodynamic results of CO2 solubility in aqueous amine solutions from a 6-reactor High Throughput Screening (HTS) experimental device. This device is fully automated and designed to perform sequential injections of CO2 into stirred-cell reactors containing the solvent solutions. The gas pressure within each reactor is monitored as a function of time, and the resulting transient pressure curves are transformed into CO2 absorption isotherms. Solubility measurements are first performed on monoethanolamine, diethanolamine, and methyldiethanolamine aqueous solutions at T = 313.15 K. Experimental results are compared with existing data in the literature to validate the HTS device. In addition, a comprehensive thermodynamic model is used to represent CO2 solubility variations in different classes of amine structures over a wide range of thermodynamic conditions. This model is used to fit the experimental data and to calculate the cyclic capacity, which is a key parameter for CO2 process design. Solubility measurements are then performed on a set of 50 monoamines, and cyclic capacities are extracted using the thermodynamic model to assess the potential of these molecules for CO2 capture.
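The conversion of transient pressure data into absorption isotherms rests on a simple closed-reactor mass balance: the headspace pressure drop after each injection gives the moles of CO2 absorbed. A minimal sketch under ideal-gas assumptions follows; the volumes and pressures are hypothetical, and the real device presumably applies non-ideality and vapor-pressure corrections.

```python
import numpy as np

R = 8.314  # J/(mol*K)

def moles_absorbed(p0_pa, p_eq_pa, v_gas_m3, t_k):
    # Ideal-gas estimate of CO2 taken up by the solvent: the pressure drop
    # in the closed reactor headspace is converted into absorbed moles.
    return (p0_pa - p_eq_pa) * v_gas_m3 / (R * t_k)

# Hypothetical sequential injections at T = 313.15 K in a 50 mL headspace.
p0 = np.array([2.0e5, 2.0e5, 2.0e5])    # pressure right after each injection (Pa)
peq = np.array([0.4e5, 0.9e5, 1.5e5])   # equilibrium pressure after absorption (Pa)
uptake = moles_absorbed(p0, peq, 50e-6, 313.15)
print(np.cumsum(uptake))  # cumulative loading, one point per isotherm step
```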
Zhang, Nan; Chen, Tian-Hu; Zhou, Yue-Fei; Li, Shao-Jie; Jin, Jie; Wang, Yan-Ming
2012-04-01
The mine tailings in Xiangsi Valley, Tongling, China, are typical skarn-type tailings with a high carbonate content. This study designed dynamic leaching experiments to investigate the efficiency of these tailings in treating acid mine drainage (AMD). During the 80-day trial period, the physical and chemical properties of the influents were fixed and the effluents were monitored. After the trial, the speciation of Fe, Cu and Zn in the solid was analyzed. The results showed that during the trial period, the pH value remained above 7.5. Moreover, the concentrations of Cu, Zn and Fe ions in the effluents stayed below 0.1, 0.4 and 1 mg/L, respectively. In addition, the permeability coefficient of the experimental column kept decreasing during the experimental period (from 0.23 cm/s to 0.10 cm/s). A five-step sequential extraction method was employed to study the distribution of elements at different depths. The results showed that Cu2+ and Zn2+ were removed mainly through sorption and precipitation. This study indicates that the Tongling skarn mine tailings have strong acid-neutralization as well as heavy-metal-binding capacities. Therefore, the authors suggest that these mine tailings, long regarded as waste, have potential in AMD control and treatment.
Orphan therapies: making best use of postmarket data.
Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling
2014-08-01
Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.
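The sample size savings ratio defined above is a one-line calculation; the sketch below shows it with hypothetical sample sizes, not figures from the study.

```python
def savings_ratio(n_sequential, n_fixed):
    # Proportion of sample size saved by running the sequential study
    # instead of the non-sequential (fixed sample size) study.
    return 1.0 - n_sequential / n_fixed

# Hypothetical sizes: a sequential design that stops, on average, after
# 1800 patients versus a fixed design needing 2400.
print(f"{savings_ratio(1800, 2400):.0%} of the sample size saved")
```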
2007-02-01
which is used by the model to drive the normal activities of the crew (Figure C.1-2). These routines consist of a sequential list of high-level… separately. Figure C.1-3: Resources & Logic Sheet. C.1.1.4 Scenario: The scenario that is performed during a model run is a sequential list of all… were marked with a white fore and aft lineup stripe on both landing spots. The current Sea Fighter design does not provide a hangar; however, there…
Brown, Peter; Pullan, Wayne; Yang, Yuedong; Zhou, Yaoqi
2016-02-01
The three-dimensional tertiary structure of a protein at near atomic-level resolution provides insight into its function and evolution. As protein structure decides functionality, similarity in structure usually implies similarity in function. As such, structure alignment techniques are often useful in the classification of protein function. Given the rapidly growing rate of new, experimentally determined structures being made available from repositories such as the Protein Data Bank, fast and accurate computational structure comparison tools are required. This paper presents SPalignNS, a non-sequential protein structure alignment tool using a novel asymmetrical greedy search technique. The performance of SPalignNS was evaluated against existing sequential and non-sequential structure alignment methods by performing trials with commonly used datasets. These benchmark datasets used to gauge alignment accuracy include (i) 9538 pairwise alignments implied by the HOMSTRAD database of homologous proteins; (ii) a subset of 64 difficult alignments from set (i) that have low structure similarity; (iii) 199 pairwise alignments of proteins with similar structure but different topology; and (iv) a subset of 20 pairwise alignments from the RIPC set. SPalignNS is shown to achieve greater alignment accuracy (lower or comparable root-mean-square distance with increased structure overlap coverage) for all datasets, and the highest agreement with reference alignments from the challenging dataset (iv) above, when compared with both sequentially constrained alignments and other non-sequential alignments. SPalignNS was implemented in C++. The source code, binary executable, and a web server version are freely available at http://sparks-lab.org. Contact: yaoqi.zhou@griffith.edu.au.
Shi, Ruijia; Xu, Cunshuan
2011-06-01
The study of rat proteins is an indispensable task in experimental medicine and drug development. The function of a rat protein is closely related to its subcellular location. Based on this concept, we construct a benchmark rat protein dataset and develop a combined approach for predicting the subcellular localization of rat proteins. From the protein primary sequence, multiple sequential features are obtained using discrete Fourier analysis, a position conservation scoring function and increment of diversity, and these sequential features are used as input parameters of a support vector machine. By the jackknife test, the overall success rate of prediction is 95.6% on the rat protein dataset. When our method is applied to the apoptosis protein dataset and the Gram-negative bacterial protein dataset with the jackknife test, the overall success rates are 89.9% and 96.4%, respectively. These results indicate that our proposed method is quite promising and may play a complementary role to the existing predictors in this area.
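The jackknife test mentioned here is a leave-one-out evaluation. A minimal sketch follows, with a nearest class-centroid rule standing in for the paper's support vector machine and random vectors standing in for the Fourier/conservation features; accuracy on such toy data means nothing beyond illustrating the protocol.

```python
import numpy as np

def jackknife_accuracy(x, y):
    # Leave-one-out ("jackknife") test: hold out each protein in turn,
    # train on the rest, and score the held-out prediction. A nearest
    # class-centroid rule stands in for the paper's SVM.
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        centroids = {c: x[mask & (y == c)].mean(axis=0) for c in np.unique(y[mask])}
        pred = min(centroids, key=lambda c: np.linalg.norm(x[i] - centroids[c]))
        correct += pred == y[i]
    return correct / len(y)

# Two synthetic "localization classes" with 5-dimensional stand-in features.
rng = np.random.default_rng(3)
x = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(2, 1, (20, 5))])
y = np.repeat([0, 1], 20)
print(jackknife_accuracy(x, y))
```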
NASA Technical Reports Server (NTRS)
Wightman, J. M.
1973-01-01
Sequential band-6 imagery of the Zambesi Basin of southern Africa recorded substantial changes in burn patterns resulting from late dry season grass fires. One example from northern Botswana, indicates that a fire consumed approximately 70 square miles of grassland over a 24-hour period. Another example from western Zambia indicates increased fire activity over a 19-day period. Other examples clearly define the area of widespread grass fires in Angola, Botswana, Rhodesia and Zambia. From the fire patterns visible on the sequential portions of the imagery, and the time intervals involved, the rates of spread of the fires are estimated and compared with estimates derived from experimental burning plots in Zambia and Canada. It is concluded that sequential ERTS-1 imagery, of the quality studied, clearly provides the information needed to detect and map grass fires and to monitor their rates of spread in this region during the late dry season.
Hsieh, Tsung-Yu; Huang, Chi-Kai; Su, Tzu-Sen; Hong, Cheng-You; Wei, Tzu-Chien
2017-03-15
Crystal morphology and structure are important for improving the properties of organic-inorganic lead halide perovskite semiconductors in optoelectronic, electronic, and photovoltaic devices. In particular, crystal growth and dissolution are the two major phenomena determining the morphology of methylammonium lead iodide perovskite in the sequential deposition method for fabricating a perovskite solar cell. In this report, the effect of immersion time in the second step (methylammonium iodide immersion) on the morphological, structural, optical, and photovoltaic evolution is extensively investigated. Supported by experimental evidence, a five-staged, time-dependent evolution of the morphology of methylammonium lead iodide perovskite crystals is established and is well connected to the photovoltaic performance. This result is beneficial for engineering the optimal time for methylammonium iodide immersion and converging the solar cell performance in the sequential deposition route. Meanwhile, our result suggests that large, well-faceted methylammonium lead iodide perovskite single crystals may be incubated by a solution process. This offers a low-cost route for synthesizing perovskite single crystals.
Niu, Shanzhou; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua
2016-01-01
Cerebral perfusion x-ray computed tomography (PCT) is an important functional imaging modality for evaluating cerebrovascular diseases and has been widely used in clinics over the past decades. However, due to the protocol of PCT imaging with repeated dynamic sequential scans, the associated radiation dose unavoidably increases compared with that used in conventional CT examinations. Minimizing the radiation exposure in PCT examination is a major task in the CT field. In this paper, considering the rich similarity redundancy information among enhanced sequential PCT images, we propose a low-dose PCT image restoration model by incorporating the low-rank and sparse matrix characteristics of sequential PCT images. Specifically, the sequential PCT images are first stacked into a matrix (i.e., a low-rank matrix), and then a non-convex spectral norm/regularization and a spatio-temporal total variation norm/regularization are built on the low-rank matrix to describe the low rank and sparsity of the sequential PCT images, respectively. Subsequently, an improved split Bregman method is adopted to minimize the associated objective function with a reasonable convergence rate. Both qualitative and quantitative studies were conducted using a digital phantom and clinical cerebral PCT datasets to evaluate the present method. Experimental results show that the presented method achieves images with several noticeable advantages over the existing methods in terms of noise reduction and universal quality index. More importantly, the present method can produce more accurate kinetic enhanced details and diagnostic hemodynamic parameter maps. PMID:27440948
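The low-rank ingredient of such a model can be illustrated compactly: stack the sequential frames as rows of a matrix and shrink its singular values. The sketch below shows only this convex singular-value soft-thresholding step on synthetic frames; the paper's actual objective uses a non-convex spectral penalty plus spatio-temporal total variation, minimized inside a split Bregman loop.

```python
import numpy as np

def sv_soft_threshold(frames, tau):
    # Stack each frame (h x w) as one row, shrink the singular values of
    # the resulting matrix, and unstack. This is only the convex low-rank
    # step, not the authors' full non-convex + TV objective.
    t, h, w = frames.shape
    u, s, vt = np.linalg.svd(frames.reshape(t, h * w), full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return ((u * s) @ vt).reshape(t, h, w)

# Synthetic "sequential scans": a rank-2 signal plus noise.
rng = np.random.default_rng(0)
signal = rng.standard_normal((12, 2)) @ rng.standard_normal((2, 32 * 32))
frames = signal.reshape(12, 32, 32) + 0.1 * rng.standard_normal((12, 32, 32))
restored = sv_soft_threshold(frames, tau=5.0)
print(np.linalg.matrix_rank(restored.reshape(12, -1)))  # low rank after shrinkage
```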
Cholesteric metronomes with flexoelectrically programmable amplitude
NASA Astrophysics Data System (ADS)
Joshi, Vinay; Varanytsia, A.; Chang, Kai-Han; Paterson, Daniel A.; Storey, John M. D.; Imrie, Corrie T.; Chien, Liang-Chy
2018-02-01
We experimentally demonstrate fast flexoelectro-optic switching in a liquid crystal cell containing a bimesogen-doped and polymer-stabilized cholesteric. The device exhibits a response time of less than 0.7 ms with low hysteresis and color dispersion, making it suitable for potential applications including field-sequential color displays.
Zhang, Jia-yu; Wang, Zi-jian; Li, Yun; Liu, Ying; Cai, Wei; Li, Chen; Lu, Jian-qiu; Qiao, Yan-jiang
2016-01-15
The analytical methodologies for evaluating multi-component systems in traditional Chinese medicines (TCMs) have been inadequate or unacceptable. As a result, the unresolved multi-component composition hinders sufficient interpretation of their bioactivities. In this paper, an ultra-high-performance liquid chromatography coupled with linear ion trap-Orbitrap (UPLC-LTQ-Orbitrap)-based strategy focused on the comprehensive identification of TCM sequential constituents was developed. The strategy was characterized by molecular design, multiple ion monitoring (MIM), targeted database hits and mass spectral trees similarity filter (MTSF), and, further, isomerism discrimination. It was successfully applied in the HRMS data acquisition and processing of chlorogenic acids (CGAs) in Flos Lonicerae Japonicae (FLJ), and a total of 115 chromatographic peaks attributed to 18 categories were characterized, allowing a comprehensive revelation of CGAs in FLJ for the first time. This demonstrated that MIM based on molecular design could improve the efficiency of triggering MS/MS fragmentation reactions. Targeted database hits and MTSF searching greatly facilitated the processing of extremely large information data. Besides, the introduction of diagnostic product ion (DPI) discrimination, ClogP analysis, and molecular simulation raised the efficiency and accuracy of characterizing sequential constituents, especially positional and geometric isomers. In conclusion, the results expanded our understanding of CGAs in FLJ, and the strategy could be exemplary for future research on the comprehensive identification of sequential constituents in TCMs. Meanwhile, it may propose a novel idea for analyzing sequential constituents, and is promising for the quality control and evaluation of TCMs.
Learning Sequential Composition Control.
Najafi, Esmaeil; Babuska, Robert; Lopes, Gabriel A D
2016-11-01
Sequential composition is an effective supervisory control method for addressing control problems in nonlinear dynamical systems. It executes a set of controllers sequentially to achieve a control specification that cannot be realized by a single controller. As these controllers are designed offline, sequential composition cannot address unmodeled situations that might occur during runtime. This paper proposes a learning approach to augment the standard sequential composition framework by using online learning to handle unforeseen situations. New controllers are acquired via learning and added to the existing supervisory control structure. In the proposed setting, learning experiments are restricted to take place within the domain of attraction (DOA) of the existing controllers. This guarantees that the learning process is safe (i.e., the closed loop system is always stable). In addition, the DOA of the new learned controller is approximated after each learning trial. This keeps the learning process short as learning is terminated as soon as the DOA of the learned controller is sufficiently large. The proposed approach has been implemented on two nonlinear systems: 1) a nonlinear mass-damper system and 2) an inverted pendulum. The results show that in both cases a new controller can be rapidly learned and added to the supervisory control structure.
Optical design of system for a lightship
NASA Astrophysics Data System (ADS)
Chirkov, M. A.; Tsyganok, E. A.
2017-06-01
This article presents the results of the optical design of an illuminating optical system for a lightship using a freeform surface. It describes an algorithm for the optical design of a side-emitting lens for a point source using the Freeform Z function in Zemax non-sequential mode, optimization of the calculation results, and testing of the optical system with a real diode.
Sequential two-column electro-Fenton-photolytic reactor for the treatment of winery wastewater.
Díez, A M; Sanromán, M A; Pazos, M
2017-01-01
The high amount of winery wastewaters produced each year makes their treatment a priority issue due to their problematic characteristics, such as acid pH, high concentration of organic load and colourful compounds. Furthermore, some of these effluents can contain dissolved pesticides, due to previous grape treatments, which are recalcitrant to conventional treatments. Recently, the photo-electro-Fenton process has been reported as an effective procedure to mineralize different organic contaminants and a promising technology for the treatment of these complex matrices. However, the reactors available for applying this process are scarce and they show several limitations. In this study, a sequential two-column reactor for the photo-electro-Fenton treatment was designed and evaluated for the treatment of different pesticides, pirimicarb and pyrimethanil, used in wine production. Both studied pesticides were efficiently removed, and the transformation products were determined. Finally, the treatment of a complex aqueous matrix composed of winery wastewater and the previously studied pesticides was carried out in the designed sequential reactor. The high TOC and COD removals achieved and the low energy consumption demonstrated the efficiency of this new configuration.
NASA DOE POD NDE Capabilities Data Book
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2015-01-01
This data book contains the Directed Design of Experiments for Validating Probability of Detection (POD) Capability of NDE Systems (DOEPOD) analyses of the nondestructive inspection data presented in the NTIAC Nondestructive Evaluation (NDE) Capabilities Data Book, 3rd ed., NTIAC DB-97-02. DOEPOD is designed as a decision support system to validate that inspection systems, personnel, and protocols demonstrate 0.90 POD with 95% confidence at critical flaw sizes (a90/95). The test methodology used in DOEPOD is based on the field of statistical sequential analysis founded by Abraham Wald. Sequential analysis is a method of statistical inference whose characteristic feature is that the number of observations required by the procedure is not determined in advance of the experiment. The decision to terminate the experiment depends, at each stage, on the results of the observations previously made. A merit of the sequential method, as applied to testing statistical hypotheses, is that test procedures can be constructed which require, on average, a substantially smaller number of observations than equally reliable test procedures based on a predetermined number of observations.
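Wald's sequential test, on which the DOEPOD methodology rests, can be sketched for a binomial detection probability as below; the hypotheses, error rates, and hit counts are illustrative assumptions, not DOEPOD's actual decision rules.

```python
import math

def sprt_update(hits, trials, p0=0.90, p1=0.98, alpha=0.05, beta=0.05):
    # Wald's sequential probability ratio test for a binomial proportion:
    # keep sampling until the log-likelihood ratio crosses a boundary.
    llr = hits * math.log(p1 / p0) + (trials - hits) * math.log((1 - p1) / (1 - p0))
    upper = math.log((1 - beta) / alpha)   # accept p1 (POD demonstrated)
    lower = math.log(beta / (1 - alpha))   # accept p0
    if llr >= upper:
        return "accept p1"
    if llr <= lower:
        return "accept p0"
    return "continue"

print(sprt_update(hits=29, trials=29))  # 29/29 detections so far
```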
Multi-arm group sequential designs with a simultaneous stopping rule.
Urach, S; Posch, M
2016-12-30
Multi-arm group sequential clinical trials are efficient designs to compare multiple treatments to a control. They allow one to test for treatment effects already in interim analyses and can have a lower average sample number than fixed sample designs. Their operating characteristics depend on the stopping rule: We consider simultaneous stopping, where the whole trial is stopped as soon as for any of the arms the null hypothesis of no treatment effect can be rejected, and separate stopping, where only recruitment to arms for which a significant treatment effect could be demonstrated is stopped, but the other arms are continued. For both stopping rules, the family-wise error rate can be controlled by the closed testing procedure applied to group sequential tests of intersection and elementary hypotheses. The group sequential boundaries for the separate stopping rule also control the family-wise error rate if the simultaneous stopping rule is applied. However, we show that for the simultaneous stopping rule, one can apply improved, less conservative stopping boundaries for local tests of elementary hypotheses. We derive corresponding improved Pocock and O'Brien type boundaries as well as optimized boundaries to maximize the power or average sample number and investigate the operating characteristics and small sample properties of the resulting designs. To control the power to reject at least one null hypothesis, the simultaneous stopping rule requires a lower average sample number than the separate stopping rule. This comes at the cost of a lower power to reject all null hypotheses. Some of this loss in power can be regained by applying the improved stopping boundaries for the simultaneous stopping rule. The procedures are illustrated with clinical trials in systemic sclerosis and narcolepsy.
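A Pocock-type boundary (a constant critical value across looks) can be calibrated by simulating the canonical joint distribution of the interim z-statistics; the sketch below does this for four equally spaced looks. This is generic group sequential machinery for context, not the improved simultaneous-stopping boundaries derived in the paper.

```python
import numpy as np

def pocock_constant(k_looks=4, alpha=0.05, n_sim=200_000, seed=1):
    # Under H0 the z-statistics at equally spaced looks form a scaled
    # Gaussian random walk, Z_k = S_k / sqrt(k); find the constant c with
    # P(max_k |Z_k| > c) = alpha.
    rng = np.random.default_rng(seed)
    increments = rng.standard_normal((n_sim, k_looks))
    info = np.arange(1, k_looks + 1)
    z_paths = np.cumsum(increments, axis=1) / np.sqrt(info)
    max_abs = np.abs(z_paths).max(axis=1)
    return np.quantile(max_abs, 1 - alpha)

print(f"Pocock-type constant for 4 looks: {pocock_constant():.2f}")  # approx. 2.36
```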
Qi, Wenqiang; Chen, Taojing; Wang, Liang; Wu, Minghong; Zhao, Quanyu; Wei, Wei
2017-03-01
In this study, a sequential process of anaerobic fermentation followed by microalgae cultivation was evaluated from both nutrient and energy recovery standpoints. The effects of different fermentation types on biogas generation, broth metabolite composition, algal growth and nutrient utilization, and the energy conversion efficiencies of the whole process are discussed. When the fermentation was designed to produce hydrogen-dominated biogas, the total energy conversion efficiency (TECE) of the sequential process was higher than that of the methane fermentation one. With the production of hydrogen in anaerobic fermentation, more organic carbon metabolites were left in the broth to support better algal growth with more efficient incorporation of ammonia nitrogen. By applying the sequential process, the heat value conversion efficiency (HVCE) for the wastewater could reach 41.2% if methane was avoided in the fermentation biogas. The removal efficiencies of organic metabolites and NH4+-N in the better case were 100% and 98.3%, respectively.
Sequential Feedback Scheme Outperforms the Parallel Scheme for Hamiltonian Parameter Estimation.
Yuan, Haidong
2016-10-14
Measurement and estimation of parameters are essential for science and engineering, where the main quest is to find the highest achievable precision with the given resources and design schemes to attain it. Two schemes, the sequential feedback scheme and the parallel scheme, are usually studied in the quantum parameter estimation. While the sequential feedback scheme represents the most general scheme, it remains unknown whether it can outperform the parallel scheme for any quantum estimation tasks. In this Letter, we show that the sequential feedback scheme has a threefold improvement over the parallel scheme for Hamiltonian parameter estimations on two-dimensional systems, and an order of O(d+1) improvement for Hamiltonian parameter estimation on d-dimensional systems. We also show that, contrary to the conventional belief, it is possible to simultaneously achieve the highest precision for estimating all three components of a magnetic field, which sets a benchmark on the local precision limit for the estimation of a magnetic field.
Robust Feedback Control of Flow Induced Structural Radiation of Sound
NASA Technical Reports Server (NTRS)
Heatwole, Craig M.; Bernhard, Robert J.; Franchek, Matthew A.
1997-01-01
A significant component of the interior noise of aircraft and automobiles is a result of turbulent boundary layer excitation of the vehicular structure. In this work, active robust feedback control of the noise due to this non-predictable excitation is investigated. Both an analytical model and experimental investigations are used to determine the characteristics of the flow-induced structural sound radiation problem. The problem is shown to be broadband in nature, with large system uncertainties associated with the various operating conditions. Furthermore, the delay associated with sound propagation is shown to restrict the use of microphone feedback. State-of-the-art control methodologies, IL synthesis and adaptive feedback control, are evaluated and shown to have limited success in solving this problem. A robust frequency-domain controller design methodology is developed for the problem of sound radiated from turbulent-flow-driven plates. The control design methodology uses frequency-domain sequential loop shaping techniques. System uncertainty, sound pressure level reduction performance, and actuator constraints are included in the design process. Using this design method, phase lag was added using non-minimum phase zeros such that the beneficial plant dynamics could be used. This general control approach has application to lightly damped vibration and sound radiation problems where there are high-bandwidth control objectives requiring a low controller DC gain and low controller order.
Sample Size Calculations for Micro-randomized Trials in mHealth
Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A.
2015-01-01
The use and development of mobile interventions are experiencing rapid growth. In “just-in-time” mobile interventions, treatments are provided via a mobile device, and they are intended to help an individual make healthy decisions “in the moment,” and thus have a proximal, near-future impact. Currently the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a “micro-randomized” trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized on hundreds or thousands of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. PMID:26707831
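The defining feature of a micro-randomized trial, randomizing each participant at every decision point, is easy to picture in code. The sketch below draws per-occasion treatment assignments for one participant; the schedule and randomization probability are invented for illustration.

```python
import numpy as np

# Per-occasion randomization for one participant: treatments are drawn
# independently at each decision point (here 5 per day for 42 days), so a
# single person contributes hundreds of randomizations. All numbers are
# assumptions, not the HeartSteps protocol.
rng = np.random.default_rng(7)
n_days, per_day, p_treat = 42, 5, 0.4
assignments = rng.random((n_days, per_day)) < p_treat
print(int(assignments.sum()), "treatment occasions out of", assignments.size)
```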
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM-2.5
Code of Federal Regulations, 2010 CFR
2010-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM-2.5
Code of Federal Regulations, 2011 CFR
2011-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM−2.5.
Code of Federal Regulations, 2012 CFR
2012-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
Sequential establishment of stripe patterns in an expanding cell population.
Liu, Chenli; Fu, Xiongfei; Liu, Lizhong; Ren, Xiaojing; Chau, Carlos K L; Li, Sihong; Xiang, Lu; Zeng, Hualing; Chen, Guanhua; Tang, Lei-Han; Lenz, Peter; Cui, Xiaodong; Huang, Wei; Hwa, Terence; Huang, Jian-Dong
2011-10-14
Periodic stripe patterns are ubiquitous in living organisms, yet the underlying developmental processes are complex and difficult to disentangle. We describe a synthetic genetic circuit that couples cell density and motility. This system enabled programmed Escherichia coli cells to form periodic stripes of high and low cell densities sequentially and autonomously. Theoretical and experimental analyses reveal that the spatial structure arises from a recurrent aggregation process at the front of the continuously expanding cell population. The number of stripes formed could be tuned by modulating the basal expression of a single gene. The results establish motility control as a simple route to establishing recurrent structures without requiring an extrinsic pacemaker.
Sequential measurement of conjugate variables as an alternative quantum state tomography.
Di Lorenzo, Antonio
2013-01-04
It is shown how it is possible to reconstruct the initial state of a one-dimensional system by sequentially measuring two conjugate variables. The procedure relies on the quasicharacteristic function, the Fourier transform of the Wigner quasiprobability. The proper characteristic function, obtained by Fourier transforming the experimentally accessible joint probability of observing "position" then "momentum" (or vice versa), can be expressed as a product of the quasicharacteristic functions of the two detectors and the unknown one of the quantum system. This allows state reconstruction through the sequence (1) data collection, (2) Fourier transform, (3) algebraic operation, and (4) inverse Fourier transform. The strength of the measurement should be intermediate for the procedure to work.
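The four-step sequence is essentially deconvolution in the characteristic-function domain. The sketch below mimics it in one classical dimension: a known detector kernel is divided out in Fourier space and the inverse transform recovers the original distribution. The grid and kernel are arbitrary choices; the quantum version divides by the detectors' quasicharacteristic functions instead.

```python
import numpy as np

# Schematic classical analogue of the four-step reconstruction: the
# characteristic function of the observed data factorizes into the
# detector kernel's transform times the unknown distribution's.
n = 1024
x = np.linspace(-10, 10, n)
true_dist = np.exp(-x**2 / 2)
true_dist /= true_dist.sum()            # unknown "system" distribution

blur = np.zeros(n)
blur[[0, 1, -1]] = [0.6, 0.2, 0.2]      # assumed detector smearing kernel

observed = np.real(np.fft.ifft(np.fft.fft(true_dist) * np.fft.fft(blur)))  # 1: data
chi_obs = np.fft.fft(observed)          # 2: Fourier transform
chi_sys = chi_obs / np.fft.fft(blur)    # 3: divide out the detector term
recovered = np.real(np.fft.ifft(chi_sys))                                  # 4: invert
print(np.max(np.abs(recovered - true_dist)))  # near machine precision
```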
Hall, David B; Meier, Ulrich; Diener, Hans-Cristoph
2005-06-01
The trial objective was to test whether a new mechanism of action would effectively treat migraine headaches and to select a dose range for further investigation. The motivation for a group sequential, adaptive, placebo-controlled trial design was (1) limited information about where across the range of seven doses to focus attention, (2) a need to limit sample size for a complicated inpatient treatment and (3) a desire to reduce exposure of patients to ineffective treatment. A design based on group sequential and up-and-down designs was developed, and its operating characteristics were explored by trial simulation. The primary outcome was headache response at 2 h after treatment. Groups of four treated and two placebo patients were assigned to one dose. Adaptive dose selection was based on response rates of 60% seen with other migraine treatments. If more than 60% of treated patients responded, then the next dose was the next lower dose; otherwise, the dose was increased. A stopping rule of at least five groups at the target dose, and at least four groups at that dose with more than 60% response, was developed to ensure that a selected dose would be statistically significantly (p = 0.05) superior to placebo. Simulations indicated good characteristics in terms of control of type 1 error, sufficient power, modest expected sample size and modest bias in estimation. The trial design is attractive for phase 2 clinical trials when the response is acute and simple (ideally binary), a placebo comparator is required, and patient accrual is relatively slow, allowing for the collection and processing of results as a basis for the adaptive assignment of patients to dose groups. The acute migraine trial based on this design was successful in both proof of concept and dose range selection.
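The dose-assignment and stopping logic described above translates directly into the kind of simulation the authors used to explore operating characteristics. The sketch below implements a simplified reading of the rule (step down after a group response rate above 60%, otherwise step up; stop after five groups at a dose with at least four above target); the dose-response probabilities are invented.

```python
import numpy as np

def simulate_trial(true_resp, n_group=4, target=0.6, seed=0):
    # Up-and-down assignment: after each group of four treated patients,
    # step down the dose ladder if the group's response rate exceeds the
    # 60% target, otherwise step up. Simplified stopping rule: five groups
    # at one dose with at least four of them above target.
    rng = np.random.default_rng(seed)
    dose, groups = 0, [[] for _ in true_resp]
    for _ in range(60):                      # cap on the number of groups
        responders = rng.binomial(n_group, true_resp[dose])
        groups[dose].append(responders)
        hits = sum(r / n_group > target for r in groups[dose])
        if len(groups[dose]) >= 5 and hits >= 4:
            return dose                      # selected dose index
        if responders / n_group > target:
            dose = max(dose - 1, 0)
        else:
            dose = min(dose + 1, len(true_resp) - 1)
    return None                              # no dose met the rule

# Hypothetical response probabilities across the seven doses.
print(simulate_trial([0.2, 0.3, 0.4, 0.55, 0.65, 0.75, 0.85]))
```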
Discovering the Sequential Structure of Thought
ERIC Educational Resources Information Center
Anderson, John R.; Fincham, Jon M.
2014-01-01
Multi-voxel pattern recognition techniques combined with Hidden Markov models can be used to discover the mental states that people go through in performing a task. The combined method identifies both the mental states and how their durations vary with experimental conditions. We apply this method to a task where participants solve novel…
Many cases of environmental contamination result in concurrent or sequential exposure to more than one chemical. However, limitations of available resources make it unlikely that experimental toxicology will provide health risk information about all the possible mixtures to which...
Dismal: A Spreadsheet for Sequential Data Analysis and HCI Experimentation
2002-01-24
Hambly, Alder, Wyatt-Millington, Shrayane, Crawshaw, et al., 1996). Table 2 provides some example data. An automatically generated header comes first… Shrayane, N. M., Crawshaw, C. M., & Hockey, G. R. J. (1996). Investigating the human-computer interface using the Datalogger. Behavior Research Methods, Instruments, & Computers, 28(4), 603-606.
Nonlinear interferometry approach to photonic sequential logic
NASA Astrophysics Data System (ADS)
Mabuchi, Hideo
2011-10-01
Motivated by rapidly advancing capabilities for extensive nanoscale patterning of optical materials, I propose an approach to implementing photonic sequential logic that exploits circuit-scale phase coherence for efficient realizations of fundamental components such as a NAND-gate-with-fanout and a bistable latch. Kerr-nonlinear optical resonators are utilized in combination with interference effects to drive the binary logic. Quantum-optical input-output models are characterized numerically using design parameters that yield attojoule-scale energy separation between the latch states.
ERIC Educational Resources Information Center
Cunningham, Jennifer L.
2013-01-01
The purpose of this sequential, explanatory mixed methods research study was to understand what factors influenced African American maternal intentions to get their daughters aged 9 years to 12 years vaccinated in Alabama. In the first, quantitative phase of the study, the research questions focused on identifying the predictive power of eleven…
Alfawal, Alaa M H; Hajeer, Mohammad Y; Ajaj, Mowaffak A; Hamadah, Omar; Brad, Bassel
2018-02-17
To evaluate the effectiveness of two minimally invasive surgical procedures in the acceleration of canine retraction: piezocision and laser-assisted flapless corticotomy (LAFC). Trial design: A single-centre randomized controlled trial with a compound design (two-arm parallel-group design and a split-mouth design for each arm). Participants: 36 Class II division I patients (12 males, 24 females; age range: 15 to 27 years) requiring first upper premolar extraction followed by canine retraction. Interventions: piezocision group (PG; n = 18) and laser-assisted flapless corticotomy group (LG; n = 18). A split-mouth design was applied for each group, where the flapless surgical intervention was randomly allocated to one side and the other side served as a control side. Outcomes: the rate of canine retraction (primary outcome), anchorage loss and canine rotation, assessed at 1, 2, 3 and 4 months following the onset of canine retraction; the duration of canine retraction was also recorded. Random sequence: computer-generated random numbers. Allocation concealment: sequentially numbered, opaque, sealed envelopes. Blinding: single blinded (outcomes' assessor). Seventeen patients in each group were enrolled in the statistical analysis. The rate of canine retraction was significantly greater on the experimental side than on the control side in both groups, by two-fold in the first month and 1.5-fold in the second month (p < 0.001). The overall canine retraction duration was also significantly reduced on the experimental side compared with the control side in both groups, by about 25% (p ≤ 0.001). There were no significant differences between the experimental and control sides regarding loss of anchorage and upper canine rotation in either group (p > 0.05). There were no significant differences between the two flapless techniques regarding the studied variables at any evaluation time (p > 0.05). Piezocision and laser-assisted flapless corticotomy appear to be effective treatment methods for accelerating canine retraction without any significant untoward effect on anchorage or canine rotation during rapid retraction. ClinicalTrials.gov (Identifier: NCT02606331).
Preparing the Teacher of Tomorrow
ERIC Educational Resources Information Center
Hemp, Paul E.
1976-01-01
Suggested ways of planning and conducting high quality teacher preparation programs are discussed under major headings of student selection, sequential courses and experiences, and program design. (HD)
NASA Astrophysics Data System (ADS)
Jiao, J.; Trautz, A.; Zhang, Y.; Illangasekera, T.
2017-12-01
Subsurface flow and transport characterization under data-sparse conditions is addressed by a new and computationally efficient inverse theory that simultaneously estimates parameters, state variables, and boundary conditions. Uncertainty in static data can be accounted for, while the parameter structure can be complex due to process uncertainty. The approach has been successfully extended to inverting transient and unsaturated flows as well as contaminant source identification under unknown initial and boundary conditions. In one example, by sampling numerical experiments simulating two-dimensional steady-state flow in which a tracer migrates, a sequential inversion scheme first estimates the flow field and permeability structure before the evolution of the tracer plume and the dispersivities are jointly estimated. Compared to traditional inversion techniques, the theory does not use forward simulations to assess model-data misfits; thus, knowledge of the difficult-to-determine site boundary condition is not required. To test the general applicability of the theory, data generated during high-precision intermediate-scale experiments (i.e., at a scale intermediate between the field and column scales) in large synthetic aquifers can be used. The design of such experiments is not trivial, as laboratory conditions have to be selected to mimic natural systems in order to provide useful data, thus requiring a variety of sensors and data collection strategies. This paper presents the design of such an experiment in a synthetic, multi-layered aquifer with dimensions of 242.7 x 119.3 x 7.7 cm.
Chen, Zhen; Geng, Feng; Zeng, An-Ping
2015-02-01
Protein engineering to expand the substrate spectrum of native enzymes opens new possibilities for the bioproduction of valuable chemicals from non-natural pathways. No natural microorganism can directly use sugars to produce 1,3-propanediol (PDO). Here, we present a de novo route for the biosynthesis of PDO from sugar, which may overcome this limitation by expanding the homoserine synthesis pathway. The pathway from homoserine to PDO is accomplished by protein engineering of glutamate dehydrogenase (GDH) and pyruvate decarboxylase to sequentially convert homoserine to 4-hydroxy-2-ketobutyrate and 3-hydroxypropionaldehyde. The latter is finally converted to PDO by a native alcohol dehydrogenase. In this work, we report the experimental accomplishment of this non-natural pathway, especially the protein engineering of GDH for the key step of converting homoserine to 4-hydroxy-2-ketobutyrate. These results show the feasibility and significance of protein engineering for de novo pathway design and the overproduction of desired industrial products.
Novel cylindrical illuminator tip for ultraviolet light delivery
NASA Astrophysics Data System (ADS)
Shangguan, HanQun; Haw, Thomas E.; Gregory, Kenton W.; Casperson, Lee W.
1993-06-01
The design, processing, and sequential testing of a novel cylindrical diffusing optical fiber tip for ultraviolet light delivery are described. This device has been shown to uniformly (±15%) illuminate angioplasty balloons, 20 mm in length, that are used in an experimental photochemotherapeutic treatment of swine intimal hyperplasia. Our experiments show that uniform diffusing tips of less than 400 µm diameter can be reliably constructed for this and other interstitial applications. Modeling results indicate that this design is scalable to smaller diameters. The diffusing tips are made by stripping the protective buffer and etching away the cladding over a length of 20 mm from the fiber tip, and replacing it with a thin layer of optical epoxy mixed with Al2O3 powder. To improve uniformity and ease of fabrication, we have evaluated a new device configuration in which the tip is etched into a modified conical shape and the distal end face is polished and then coated with an optically opaque epoxy. This is shown to uniformly scatter approximately 70% of the light launched into the fiber without forward transmission.
Cost Optimal Design of a Power Inductor by Sequential Gradient Search
NASA Astrophysics Data System (ADS)
Basak, Raju; Das, Arabinda; Sanyal, Amarnath
2018-05-01
Power inductors are used for compensating the VAR generated by long EHV transmission lines and in electronic circuits. For EHV lines, the rating of the inductor is decided by techno-economic considerations on the basis of the line susceptance. It is a high-voltage, high-current device, absorbing little active power and a large amount of reactive power. The cost is quite high; hence the design should be made cost-optimally. The 3-phase power inductor is similar in construction to a 3-phase core-type transformer, with the exception that it has only one winding per phase and each limb is provided with an air gap, the length of which is decided by the inductance required. In this paper, a design methodology based on the sequential gradient search technique, and the corresponding algorithm leading to a cost-optimal design of a 3-phase EHV power inductor, is presented. The case study has been made on a 220 kV long line of NHPC running from Chukha HPS to Birpara in Coochbihar.
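The core idea, adjusting one design variable at a time along its negative cost gradient, can be sketched generically as below; the quadratic stand-in cost, step size, and tolerances are placeholders, not the authors' inductor cost model.

```python
import numpy as np

def sequential_gradient_search(cost, x0, step=0.1, tol=1e-8, max_iter=500):
    # Adjust one design variable at a time along its negative partial
    # derivative (estimated by central differences) until the cost stalls.
    x = np.asarray(x0, dtype=float)
    h = 1e-6
    for _ in range(max_iter):
        prev = cost(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            grad_i = (cost(x + e) - cost(x - e)) / (2 * h)
            x[i] -= step * grad_i
        if abs(prev - cost(x)) < tol:
            break
    return x

# Stand-in for a material-plus-loss cost with a known minimum at (2, -1).
cost = lambda x: (x[0] - 2) ** 2 + 3 * (x[1] + 1) ** 2
print(sequential_gradient_search(cost, [0.0, 0.0]))
```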
Spacecraft Data Simulator for the test of level zero processing systems
NASA Technical Reports Server (NTRS)
Shi, Jeff; Gordon, Julie; Mirchandani, Chandru; Nguyen, Diem
1994-01-01
The Microelectronic Systems Branch (MSB) at Goddard Space Flight Center (GSFC) has developed a Spacecraft Data Simulator (SDS) to support the development, test, and verification of prototype and production Level Zero Processing (LZP) systems. Based on a disk array system, the SDS is capable of generating large test data sets up to 5 Gigabytes and outputting serial test data at rates up to 80 Mbps. The SDS supports data formats including NASA Communication (Nascom) blocks, Consultative Committee for Space Data Systems (CCSDS) Version 1 and 2 frames and packets, and all the Advanced Orbiting Systems (AOS) services. The capability to simulate both sequential and non-sequential time-ordered downlink data streams with errors and gaps is crucial for testing LZP systems. This paper describes the system architecture, hardware and software designs, and test data designs. Examples of test data designs are included to illustrate the application of the SDS.
Collaborative, Sequential and Isolated Decisions in Design
NASA Technical Reports Server (NTRS)
Lewis, Kemper; Mistree, Farrokh
1997-01-01
The Massachusetts Institute of Technology (MIT) Commission on Industrial Productivity, in their report Made in America, found that six recurring weaknesses were hampering American manufacturing industries. The two weaknesses most relevant to product development were 1) technological weakness in development and production, and 2) failures in cooperation. The remedies to these weaknesses are considered the essential twin pillars of CE: 1) improved development process, and 2) closer cooperation. In the MIT report, it is recognized that total cooperation among teams in a CE environment is rare in American industry, while the majority of the design research in mathematically modeling CE has assumed total cooperation. In this paper, we present mathematical constructs, based on game theoretic principles, to model degrees of collaboration characterized by approximate cooperation, sequential decision making and isolation. The design of a pressure vessel and a passenger aircraft are included as illustrative examples.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-04-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
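The kind of simulation used here can be sketched by drawing cluster-level prevalences from a Beta distribution (which induces intracluster correlation) and applying a threshold decision rule; the Beta parameters and the decision threshold below are illustrative assumptions, not the study's.

```python
import numpy as np

def prob_classified_high(prev, n_clusters=67, m=3, beta_a=8.0,
                         threshold=10, n_sim=20_000, seed=0):
    # Cluster-level prevalences are drawn from a Beta distribution, which
    # induces intracluster correlation; the lot is classified "high" when
    # total cases in the 67 x 3 sample exceed the decision threshold.
    rng = np.random.default_rng(seed)
    beta_b = beta_a * (1 - prev) / prev      # keeps the Beta mean at `prev`
    p_clusters = rng.beta(beta_a, beta_b, size=(n_sim, n_clusters))
    cases = rng.binomial(m, p_clusters).sum(axis=1)
    return (cases > threshold).mean()

# Classification errors under a hypothetical rule: the chance of a false
# "high" at 3% GAM, and of missing a true "high" at 10% GAM.
print(prob_classified_high(0.03), 1 - prob_classified_high(0.10))
```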
ADS: A FORTRAN program for automated design synthesis: Version 1.10
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1985-01-01
A new general-purpose optimization program for engineering design is described. ADS (Automated Design Synthesis - Version 1.10) is a FORTRAN program for the solution of nonlinear constrained optimization problems. The program is segmented into three levels: strategy, optimizer, and one-dimensional search. At each level, several options are available, so that a total of over 100 possible combinations can be created. Examples of available strategies are sequential unconstrained minimization, the Augmented Lagrange Multiplier method, and Sequential Linear Programming. Available optimizers include variable metric methods and the Method of Feasible Directions as examples, and one-dimensional search options include polynomial interpolation and the Golden Section method as examples. Emphasis is placed on ease of use of the program. All information is transferred via a single parameter list. Default values are provided for all internal program parameters such as convergence criteria, and the user is given a simple means to override these, if desired.
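One of the one-dimensional search options listed, the Golden Section method, is compact enough to sketch in full; this is the textbook algorithm, not ADS's FORTRAN implementation.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    # Golden Section search: shrink the bracket [a, b] around the minimum
    # of a unimodal function, reusing one interior evaluation per step.
    inv_phi = (math.sqrt(5) - 1) / 2
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2

print(golden_section(lambda x: (x - 1.7) ** 2, 0.0, 5.0))  # approx. 1.7
```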
Students' Preferences and Opinions on Design of a Mobile Marketing Education Application
ERIC Educational Resources Information Center
Ozata, Zeynep; Ozdama Keskin, Nilgun
2014-01-01
The purpose of this study was to define and better understand business school students' opinions and preferences on the design of a mobile marketing education application. To accomplish this purpose an explanatory mixed methods study design was used and the data was collected sequentially. First, a questionnaire was conducted with 168 business…
Pre-Modeling Ensures Accurate Solid Models
ERIC Educational Resources Information Center
Gow, George
2010-01-01
Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…
ERIC Educational Resources Information Center
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.
2007-01-01
A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…
Numerical simulation of double‐diffusive finger convection
Hughes, Joseph D.; Sanford, Ward E.; Vacher, H. Leonard
2005-01-01
A hybrid finite element, integrated finite difference numerical model is developed for the simulation of double‐diffusive and multicomponent flow in two and three dimensions. The model is based on a multidimensional, density‐dependent, saturated‐unsaturated transport model (SUTRA), which uses one governing equation for fluid flow and another for solute transport. The solute‐transport equation is applied sequentially to each simulated species. Density coupling of the flow and solute‐transport equations is accounted for and handled using a sequential implicit Picard iterative scheme. High‐resolution data from a double‐diffusive Hele‐Shaw experiment, initially in a density‐stable configuration, is used to verify the numerical model. The temporal and spatial evolution of simulated double‐diffusive convection is in good agreement with experimental results. Numerical results are very sensitive to discretization and correspond closest to experimental results when element sizes adequately define the spatial resolution of observed fingering. Numerical results also indicate that differences in the molecular diffusivity of sodium chloride and the dye used to visualize experimental sodium chloride concentrations are significant and cause inaccurate mapping of sodium chloride concentrations by the dye, especially at late times. As a result of reduced diffusion, simulated dye fingers are better defined than simulated sodium chloride fingers and exhibit more vertical mass transfer.
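The sequential implicit Picard scheme alternates between the flow and transport solves, re-evaluating the density coupling until the iterates stop changing. The toy sketch below shows that fixed-point structure on two scalar "solves"; the update formulas are stand-ins, not SUTRA's discretized equations.

```python
def picard_coupled(tol=1e-10, max_iter=100):
    # Toy analogue of the sequential implicit Picard scheme: solve the
    # "flow" unknown with the density frozen, update the "transport"
    # unknown (which sets the density), and repeat until both settle.
    h, c = 0.0, 0.0                    # head-like and concentration-like unknowns
    for k in range(max_iter):
        rho = 1.0 + 0.025 * c          # density depends on concentration
        h_new = 1.0 / rho              # stand-in flow solve, given density
        c_new = 0.5 * h_new            # stand-in transport solve, given flow
        if abs(h_new - h) + abs(c_new - c) < tol:
            return h_new, c_new, k     # converged after k iterations
        h, c = h_new, c_new
    return h, c, max_iter

print(picard_coupled())
```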
GilPavas, Edison; Dobrosz-Gómez, Izabela; Gómez-García, Miguel Ángel
2017-04-15
In this study, an industrial textile wastewater was treated using a chemical-based technique (coagulation-flocculation, C-F) sequentially with an advanced oxidation process (AOP: Fenton or photo-Fenton). During the C-F step, Al2(SO4)3 was used as the coagulant and its optimal dose was determined using the jar test. The following operational conditions of C-F, maximizing organic matter removal, were determined: 700 mg/L of Al2(SO4)3 at pH = 9.96. Thus, the C-F step removed 98% of turbidity and 48% of Chemical Oxygen Demand (COD), and increased the BOD5/COD ratio from 0.137 to 0.212. Subsequently, the C-F effluent was treated using each of the AOPs. Their performance was optimized by Response Surface Methodology (RSM) coupled with a Box-Behnken experimental design (BBD). The following optimal conditions for both the Fenton (Fe2+/H2O2) and photo-Fenton (Fe2+/H2O2/UV) processes were found: Fe2+ concentration = 1 mM, H2O2 dose = 2 mL/L (19.6 mM), and pH = 3. The combination of the C-F pre-treatment with the Fenton reagent, under optimized conditions, removed 74% of COD within 90 min of treatment. The C-F step sequential with the photo-Fenton process reached 87% COD removal in the same time. Moreover, the BOD5/COD ratio increased from 0.212 to 0.68 and from 0.212 to 0.74 using the Fenton and photo-Fenton processes, respectively. Thus, the enhancement of biodegradability with the physico-chemical treatment was proved. The depletion of H2O2 was monitored during the kinetic study. Strategies for improving the reaction efficiency, based on the H2O2 evolution, were also tested.
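Response Surface Methodology boils down to fitting a second-order polynomial to coded factor levels by least squares and optimizing over the fitted surface. The sketch below fits such a surface to hypothetical runs in two coded factors (a Box-Behnken design proper needs at least three factors, so this is a simplified stand-in); all numbers are invented.

```python
import numpy as np

def quadratic_design_matrix(x):
    # Second-order RSM model in two coded factors:
    # y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# Hypothetical coded runs (e.g. Fe2+ level and H2O2 level) and COD removals.
runs = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                 [0, 0], [0, 0], [-1, 0], [1, 0], [0, -1], [0, 1]], float)
cod_removal = np.array([52, 61, 58, 70, 74, 73, 60, 68, 59, 66], float)

beta, *_ = np.linalg.lstsq(quadratic_design_matrix(runs), cod_removal, rcond=None)
print(np.round(beta, 2))  # intercept, linear, interaction, quadratic terms
```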
Solution Exchange Lithography: A Versatile Tool for Sequential Surface Engineering
NASA Astrophysics Data System (ADS)
Pester, Christian; Mattson, Kaila; Bothman, David; Klinger, Daniel; Lee, Kenneth; Discekici, Emre; Narupai, Benjaporn; Hawker, Craig
The covalent attachment of polymers has emerged as a viable strategy for the preparation of multi-functional surfaces. Patterned, surface-grafted polymer brushes provide spatial control over wetting, mechanical, biological or electronic properties, and allow fabrication of 'intelligent' substrates which selectively adapt to their environment. However, the route towards patterned polymer brush surfaces often remains challenging, creating a demand for more efficient and less complicated fabrication strategies. We describe the design and application of a novel experimental setup to combine light-mediated and flow chemistry for the fabrication of hierarchical surface-grafted polymer brushes. Using light-mediated, surface-initiated controlled radical polymerization and post-functionalization via well-established and highly efficient chemistries, polymer brush films of previously unimaginable complexity are now shown to be accessible. This methodology allows full flexibility to exchange both lithographic photomasks and chemical environments in-situ, readily affording multidimensional thin film architectures, all from uniformly functionalized substrates.
The development and functional control of reading-comprehension behavior.
Rosenbaum, M S; Breiling, J
1976-01-01
Reading comprehension, indicated by motor behavior and multiple-choice picture selection called for in written instructions, was taught to an autistic child using verbal prompts, modelling, and physical guidance. The child was rewarded for correct behaviors to training items; nonrewarded probes were used to assess generalization. Probable maintaining events were assessed through their sequential removal in a reversal design. Results showed: (a) following acquisition, performance was maintained at a near-100% level when candy, praise, attention, and training were removed, (b) absence of other persons was correlated with a marked decrease in performance, whereas their presence was associated with performance at near 100%, and (c) performance generalized to probes and across experimenters. Rewards, which may have been reinforcing during acquisition, did not appear necessary to maintain later performance. Instead, presence of others (a setting event) was demonstrated to have control over maintained performance.
Recovery of zinc and manganese from alkaline and zinc-carbon spent batteries
NASA Astrophysics Data System (ADS)
De Michelis, I.; Ferella, F.; Karakaya, E.; Beolchini, F.; Vegliò, F.
This paper concerns the recovery of zinc and manganese from alkaline and zinc-carbon spent batteries. The metals were dissolved by reductive acid leaching with sulphuric acid in the presence of oxalic acid as reductant. Leaching tests were carried out according to a full factorial design, and simple regression equations for Mn, Zn and Fe extraction were then determined from the experimental data as a function of pulp density, sulphuric acid concentration, temperature and oxalic acid concentration. The main effects and interactions were investigated by analysis of variance (ANOVA). This analysis identified the best operating conditions for the reductive acid leaching: 70% of manganese and 100% of zinc were extracted after 5 h at 80 °C, with 20% pulp density, 1.8 M sulphuric acid and 59.4 g L-1 of oxalic acid. Manganese and zinc extraction yields higher than 96% were both obtained by using two sequential leaching steps.
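For readers unfamiliar with factorial analysis, the sketch below shows how main effects and an interaction are estimated from a two-level full factorial leaching experiment. The factor subset and the extraction yields are invented placeholders, not the study's data (which used four factors).

```python
import numpy as np
from itertools import product

# 2^3 design in coded units: pulp density, H2SO4 conc., oxalic acid conc.
X = np.array(list(product([-1, 1], repeat=3)), dtype=float)
y = np.array([42, 55, 48, 64, 51, 66, 58, 79], dtype=float)  # % Mn extracted (made up)

for name, col in zip(["pulp density", "H2SO4", "oxalic acid"], X.T):
    effect = y[col == 1].mean() - y[col == -1].mean()
    print(f"main effect of {name}: {effect:+.1f} percentage points")

# Two-factor interaction, e.g. H2SO4 x oxalic acid:
inter = X[:, 1] * X[:, 2]
print("H2SO4 x oxalic acid interaction:",
      round(y[inter == 1].mean() - y[inter == -1].mean(), 1))
```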
A novel processing platform for post tape out flows
NASA Astrophysics Data System (ADS)
Vu, Hien T.; Kim, Soohong; Word, James; Cai, Lynn Y.
2018-03-01
As the computational requirements for post tape out (PTO) flows increase at the 7nm and below technology nodes, there is a need to increase the scalability of the computational tools in order to reduce the turn-around time (TAT) of the flows. Utilization of design hierarchy has been one proven method to provide sufficient partitioning to enable PTO processing. However, as the data is processed through the PTO flow, its effective hierarchy is reduced. The reduction is necessary to achieve the desired accuracy. Also, the sequential nature of the PTO flow is inherently non-scalable. To address these limitations, we are proposing a quasi-hierarchical solution that combines multiple levels of parallelism to increase the scalability of the entire PTO flow. In this paper, we describe the system and present experimental results demonstrating the runtime reduction through scalable processing with thousands of computational cores.
NASA Astrophysics Data System (ADS)
Yang, Chao; Song, Jian; Li, Liang; Li, Shengbo; Cao, Dongpu
2016-08-01
This paper presents an economical launching and accelerating mode, comprising four ordered phases: pure electrical driving, clutch engagement and engine start-up, engine active charging, and engine driving, which can adapt to alternating conditions and improve the fuel economy of a hybrid electric bus (HEB) in typical city-bus driving scenarios. By utilizing the fast response of the electric motor (EM), an adaptive controller for the EM is designed to meet the power demand during the pure electrical driving, engine starting and engine active charging modes. Concurrently, the smoothness issue induced by the sequential mode transitions is solved with a coordinated control logic for the engine, EM and clutch. Simulation and experimental results show that the proposed launching and accelerating mode and its control methods are effective in improving fuel economy and ensuring drivability during fast transitions between the operating modes of the HEB.
Erosion and flow of hydrophobic granular materials
NASA Astrophysics Data System (ADS)
Utter, Brian; Benns, Thomas; Mahler, Joseph
2013-11-01
We experimentally investigate submerged granular flows of hydrophobic and hydrophilic grains both in a rotating drum geometry and under erosion by a surface water flow. While slurry and suspension flows are common in nature and industry, effects of surface chemistry on flow behavior have received relatively little attention. In the rotating drum, we use varying concentrations of hydrophobic and hydrophilic grains of sand submerged in water rotated at a constant angular velocity. Sequential images of the resulting avalanches are taken and analyzed. High concentrations of hydrophobic grains result in an effectively cohesive interaction between the grains forming aggregates, with aggregate size and repose angle increasing with hydrophobic concentration. However, the formation and nature of the aggregates depends significantly on the presence of air in the system. We present results from a related experiment on erosion by a surface water flow designed to characterize the effects of heterogeneous granular surfaces on channelization and erosion. Supported by NSF CBET Award 1067598.
Erosion and flow of hydrophobic granular materials
NASA Astrophysics Data System (ADS)
Utter, Brian; Benns, Thomas; Foltz, Benjamin; Mahler, Joseph
2015-03-01
We experimentally investigate submerged granular flows of hydrophobic and hydrophilic grains both in a rotating drum geometry and under erosion by a surface water flow. While slurry and suspension flows are common in nature and industry, effects of surface chemistry on flow behavior have received relatively little attention. In the rotating drum, we use varying concentrations of hydrophobic and hydrophilic grains of sand submerged in water rotated at a constant angular velocity. Sequential images of the resulting avalanches are taken and analyzed. High concentrations of hydrophobic grains result in an effectively cohesive interaction between the grains forming aggregates, with aggregate size and repose angle increasing with hydrophobic concentration. However, the formation and nature of the aggregates depends significantly on the presence of air in the system. We present results from a related experiment on erosion by a surface water flow designed to characterize the effects of heterogeneous granular surfaces on channelization and erosion.
HiVy automated translation of stateflow designs for model checking verification
NASA Technical Reports Server (NTRS)
Pingree, Paula
2003-01-01
The HiVy tool set enables model checking of finite state machine designs. This is achieved by translating state-chart specifications into the input language of the Spin model checker. An abstract syntax of hierarchical sequential automata (HSA) is provided as the tool set's intermediate format.
Commercial Art I and Commercial Art II: An Instructional Guide.
ERIC Educational Resources Information Center
Montgomery County Public Schools, Rockville, MD.
A teacher's guide for two sequential one-year commercial art courses for high school students is presented. Commercial Art I contains three units: visual communication, product design, and environmental design. Students study visual communication by analyzing advertising techniques, practicing fundamental drawing and layout techniques, creating…
Fracture Resistance of Retreated Roots Using Different Retreatment Systems
Er, Kursat; Tasdemir, Tamer; Siso, Seyda Herguner; Celik, Davut; Cora, Sabri
2011-01-01
Objectives: This study was designed to evaluate the fracture resistance of roots retreated with different rotary retreatment systems. Methods: Forty-eight freshly extracted human canine teeth with single straight root canals were instrumented sequentially from size 30 to size 55 using K-files with a step-back technique. The teeth were randomly divided into three experimental groups and one control group of 12 specimens each. In the experimental groups, the root canals were filled using cold lateral compaction of gutta-percha and AH Plus (Dentsply Detrey, Konstanz, Germany) sealer. Removal of gutta-percha was performed with the following devices and techniques: ProTaper Universal (Dentsply Maillefer, Ballaigues, Switzerland), R-Endo (Micro-Mega, Besançon, France), and Mtwo (Sweden & Martina, Padova, Italy) rotary retreatment systems. Control-group specimens were only instrumented, not filled or retreated. The specimens were then mounted in copper rings filled with a self-curing polymethylmethacrylate resin, and the force required to cause vertical root fracture was measured using a universal testing device. The fracture forces of the roots were recorded and compared across groups. Statistical analysis was accomplished by one-way ANOVA and post hoc Tukey tests. Results: There were statistically significant differences between the control and experimental groups (P<.05). However, there were no significant differences among the experimental groups. Conclusions: Based on the results, all rotary retreatment techniques used in this in vitro study produced similar root weakening. PMID:21912497
Schittek Janda, M; Tani Botticelli, A; Mattheos, N; Nebel, D; Wagner, A; Nattestad, A; Attström, R
2005-05-01
Video-based instructions for clinical procedures have been used frequently during the preceding decades. The aim was to investigate, in a randomised controlled trial, the learning effectiveness of fragmented videos vs. a complete sequential video, and to analyse users' attitudes towards video as a learning aid. An instructional video on surgical hand wash was produced and made available in two forms on two separate web pages: one as a sequential video and one fragmented into eight short clips. Twenty-eight dental students in the second semester were randomised into an experimental (n = 15) and a control group (n = 13). The experimental group used the fragmented form of the video and the control group watched the complete one. The use of the videos was logged, and the students were videotaped whilst undertaking a test hand wash. The videos were analysed systematically and blindly by two independent clinicians. The students also completed a written test on learning outcome from the videos and answered an attitude questionnaire. The students in the experimental group watched the video significantly longer than the control group. There were no significant differences between the groups with regard to the ratings and scores when performing the hand wash. The experimental group had significantly better results in the written test compared with the control group. There was no significant difference between the groups with regard to attitudes towards the use of video for learning, as measured by Visual Analogue Scales. Most students in both groups expressed satisfaction with the use of video for learning. The students demonstrated positive attitudes and acceptable learning outcomes from viewing CAL videos as part of their pre-clinical training. Videos that are part of computer-based learning settings would ideally be presented to the students both segmented and whole, giving students the option to choose the form of video that suits their individual learning style.
Paliwoda, Rebecca E; Li, Feng; Reid, Michael S; Lin, Yanwen; Le, X Chris
2014-06-17
Functionalizing nanomaterials for diverse analytical, biomedical, and therapeutic applications requires determination of surface coverage (or density) of DNA on nanomaterials. We describe a sequential strand displacement beacon assay that is able to quantify specific DNA sequences conjugated or coconjugated onto gold nanoparticles (AuNPs). Unlike the conventional fluorescence assay that requires the target DNA to be fluorescently labeled, the sequential strand displacement beacon method is able to quantify multiple unlabeled DNA oligonucleotides using a single (universal) strand displacement beacon. This unique feature is achieved by introducing two short unlabeled DNA probes for each specific DNA sequence and by performing sequential DNA strand displacement reactions. Varying the relative amounts of the specific DNA sequences and spacing DNA sequences during their coconjugation onto AuNPs results in different densities of the specific DNA on AuNP, ranging from 90 to 230 DNA molecules per AuNP. Results obtained from our sequential strand displacement beacon assay are consistent with those obtained from the conventional fluorescence assays. However, labeling of DNA with some fluorescent dyes, e.g., tetramethylrhodamine, alters DNA density on AuNP. The strand displacement strategy overcomes this problem by obviating direct labeling of the target DNA. This method has broad potential to facilitate more efficient design and characterization of novel multifunctional materials for diverse applications.
Cressman, Erik N K; Shenoi, Mithun M; Edelman, Theresa L; Geeslin, Matthew G; Hennings, Leah J; Zhang, Yan; Iaizzo, Paul A; Bischof, John C
2012-01-01
To investigate simultaneous and sequential injection thermochemical ablation in a porcine model, and compare them with sham and acid-only ablation. This IACUC-approved study involved 11 pigs in an acute setting. Ultrasound was used to guide placement of a thermocouple probe and a coaxial device designed for thermochemical ablation. Solutions of 10 M acetic acid and NaOH were used in the study. Four injections per pig were performed in identical order at a total rate of 4 mL/min: saline sham, simultaneous, sequential, and acid only. Volume and sphericity of the zones of coagulation were measured, and fixed specimens were examined by H&E staining. Average coagulation volumes were 11.2 mL (simultaneous), 19.0 mL (sequential) and 4.4 mL (acid only). The highest temperature, 81.3°C, was obtained with simultaneous injection. Average temperatures were 61.1°C (simultaneous), 47.7°C (sequential) and 39.5°C (acid only). Sphericity coefficients (0.83-0.89) showed no statistically significant difference among conditions. Thermochemical ablation produced substantial volumes of coagulated tissue relative to the amounts of reagents injected, considerably greater than acid alone with either technique. The largest volumes were obtained with sequential injection, yet this came at a price: one case of cardiac arrest. Simultaneous injection yielded the highest recorded temperatures and may be tolerated as well as or better than acid injection alone. Although this pilot study did not show a clear advantage for either the sequential or the simultaneous method, the results indicate that thermochemical ablation merits further investigation with regard to both safety and efficacy.
Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.
2011-01-01
Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations approximating sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV than for SRS, and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend simulations tailored to the application of interest as highly useful for evaluating designs in preparation for sampling rare and clustered populations.
Sequential Changes in Alanine Metabolism Following Partial Hepatectomy in the Rat
1990-11-01
… complete semipurified diet for 10 days before and after experimentation. Food was removed … ad libitum and the second subgroup was pair-fed with HX rats. Nine … amino acid to form pyruvate which can enter the tricarboxylic acid (TCA) … the ketogenic pathway. Indeed, reduced ketogenesis after partial …
Shift of Manual Preference in Right-Handers Following Unimanual Practice
ERIC Educational Resources Information Center
Teixeira, Luis Augusto; Teixeira, Maria Candida Tocci
2007-01-01
The effect of unimanual practice of the non-preferred hand on manual asymmetry and manual preference for sequential finger movements was evaluated in right-handers before, immediately after, and 30 days following practice. The results demonstrate that unimanual practice induced a persistent shift of manual preference for the experimental task in…
The Aggregation of Single-Case Results Using Hierarchical Linear Models
ERIC Educational Resources Information Center
Van den Noortgate, Wim; Onghena, Patrick
2007-01-01
To investigate the generalizability of the results of single-case experimental studies evaluating the effect of one or more treatments, various simultaneous and sequential replication strategies are used in applied research. We discuss one approach for aggregating the results of single cases: the use of hierarchical linear models. This approach…
ERIC Educational Resources Information Center
Blayney, Paul; Kalyuga, Slava; Sweller, John
2010-01-01
This study investigated interactions between the isolated-interactive elements effect and levels of learner expertise with first year undergraduate university accounting students. The isolated-interactive elements effect occurs when learning is facilitated by initially presenting elements of information sequentially in an isolated form rather than…
Descriptive and Experimental Analyses of Potential Precursors to Problem Behavior
Borrero, Carrie S.W; Borrero, John C
2008-01-01
We conducted descriptive observations of severe problem behavior for 2 individuals with autism to identify precursors to problem behavior. Several comparative probability analyses were conducted in addition to lag-sequential analyses using the descriptive data. Results of the descriptive analyses showed that the probability of the potential precursor was greater given problem behavior compared to the unconditional probability of the potential precursor. Results of the lag-sequential analyses showed a marked increase in the probability of a potential precursor in the 1-s intervals immediately preceding an instance of problem behavior, and that the probability of problem behavior was highest in the 1-s intervals immediately following an instance of the precursor. We then conducted separate functional analyses of problem behavior and the precursor to identify respective operant functions. Results of the functional analyses showed that both problem behavior and the precursor served the same operant functions. These results replicate prior experimental analyses on the relation between problem behavior and precursors and extend prior research by illustrating a quantitative method to identify precursors to more severe problem behavior. PMID:18468281
NASA Astrophysics Data System (ADS)
Maekawa, F.; Verzilov, Y. M.; Smith, D. L.; Ikeda, Y.
2000-12-01
Except for 3H and 14C, no radioactive nuclide is produced by neutron-induced reactions with lithium in lithium-containing materials such as Li2O and Li2CO3. However, when the lithium-containing materials are irradiated by 14 MeV neutrons, radioactive 7Be is produced by sequential charged particle reactions (SCPR). In this study, we measured effective 7Be production cross-sections in several lithium-containing samples at 14 MeV: the cross-sections are on the order of μb. Estimation of the effective cross-sections is attempted, and the estimated values agreed well with the experimental data. It was shown that the 7Be activity in a unit volume of lithium-containing materials in D-T fusion reactors can exceed the total activity of the same unit volume of SiC structural material at certain cooling times. Consequently, careful consideration of 7Be production by SCPR is required to assess radioactive inventories in lithium-containing D-T fusion blanket materials.
Imbs, Diane-Charlotte; El Cheikh, Raouf; Boyer, Arnaud; Ciccolini, Joseph; Mascaux, Céline; Lacarelle, Bruno; Barlesi, Fabrice; Barbolosi, Dominique; Benzekry, Sébastien
2018-01-01
Concomitant administration of bevacizumab and pemetrexed-cisplatin is a common treatment for advanced nonsquamous non-small cell lung cancer (NSCLC). Vascular normalization following bevacizumab administration may transiently enhance drug delivery, suggesting improved efficacy with sequential administration. To investigate optimal scheduling, we conducted a study in NSCLC-bearing mice. First, experiments demonstrated improved efficacy when using sequential vs. concomitant scheduling of bevacizumab and chemotherapy. Combining these data with a mathematical model of tumor growth under therapy accounting for the normalization effect, we predicted an optimal delay of 2.8 days between bevacizumab and chemotherapy. This prediction was confirmed experimentally, with tumor growth reduced by 38% compared to concomitant scheduling, and prolonged survival (74 vs. 70 days). An alternate sequencing gap of 8 days failed to achieve a similar increase in efficacy, emphasizing the utility of modeling support to identify optimal scheduling. The model could also be a useful tool in the clinic to tailor regimen sequences to individual patients. © 2017 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
Spatiotemporal stochastic models for earth science and engineering applications
NASA Astrophysics Data System (ADS)
Luo, Xiaochun
1998-12-01
Spatiotemporal processes occur in many areas of the earth sciences and engineering. However, most of the available theoretical tools and techniques of space-time data processing have been designed to operate exclusively in time or in space, and the importance of spatiotemporal variability was not fully appreciated until recently. To address this problem, a systematic framework of spatiotemporal random field (S/TRF) models for geoscience/engineering applications is presented and developed in this thesis. Space-time continuity characterization is one of the most important aspects of S/TRF modelling: space-time continuity is displayed with experimental spatiotemporal variograms, summarized in terms of space-time continuity hypotheses, and modelled using spatiotemporal variogram functions. Permissible spatiotemporal covariance/variogram models are addressed through permissibility criteria appropriate to spatiotemporal processes. The estimation of spatiotemporal processes is developed in terms of spatiotemporal kriging techniques. Particular emphasis is given to the singularity analysis of spatiotemporal kriging systems. The impacts of covariance functions, trend forms, and data configurations on the singularity of spatiotemporal kriging systems are discussed. In addition, the tensorial invariance of universal spatiotemporal kriging systems is investigated in terms of the space-time trend. The conditional simulation of spatiotemporal processes is developed through sequential group Gaussian simulation (SGGS), a series of sequential simulation algorithms associated with different group sizes. The simulation error is analyzed with different covariance models and simulation grids. A simulated annealing technique honoring experimental variograms is also proposed, providing a way of performing conditional simulation without the covariance model fitting that is a prerequisite for most simulation algorithms. The proposed techniques were first applied to modelling of the pressure system in a carbonate reservoir, and then to modelling of spring-water contents in the Dyle watershed. The results of these case studies, as well as the theory, suggest that these techniques are realistic and feasible.
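The experimental spatiotemporal variogram that anchors this framework is straightforward to compute; the sketch below bins squared differences of a synthetic space-time dataset by spatial and temporal lag. The toy field, bin edges, and sample sizes are arbitrary choices for illustration, not data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
xy = rng.uniform(0, 100, size=(n, 2))       # spatial coordinates
t = rng.uniform(0, 30, size=n)              # observation times
z = np.sin(xy[:, 0] / 15) + 0.1 * t + rng.normal(0, 0.3, n)  # toy field

s_bins = np.linspace(0, 50, 6)              # spatial lag bin edges
t_bins = np.linspace(0, 15, 4)              # temporal lag bin edges
gamma = np.zeros((len(s_bins) - 1, len(t_bins) - 1))
npairs = np.zeros_like(gamma)

for i in range(n):                          # all pairs of observations
    for j in range(i + 1, n):
        h = np.hypot(*(xy[i] - xy[j]))      # spatial lag
        u = abs(t[i] - t[j])                # temporal lag
        si = np.searchsorted(s_bins, h) - 1
        ti = np.searchsorted(t_bins, u) - 1
        if 0 <= si < gamma.shape[0] and 0 <= ti < gamma.shape[1]:
            gamma[si, ti] += 0.5 * (z[i] - z[j]) ** 2
            npairs[si, ti] += 1

gamma /= np.maximum(npairs, 1)              # semivariance per (h, u) bin
print(np.round(gamma, 3))   # rows: spatial lags, columns: temporal lags
```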
Approximations for Quantitative Feedback Theory Designs
NASA Technical Reports Server (NTRS)
Henderson, D. K.; Hess, R. A.
1997-01-01
The computational requirements for obtaining the results summarized in the preceding section were very modest and were easily accomplished using computer-aided control system design software. Of special significance is the ability of the PDT to indicate a loop closure sequence for MIMO QFT designs that employ sequential loop closure. Although discussed as part of a 2 x 2 design, the PDT is obviously applicable to designs with a greater number of inputs and system responses.
Dark sequential Z' portal: Collider and direct detection experiments
NASA Astrophysics Data System (ADS)
Arcadi, Giorgio; Campos, Miguel D.; Lindner, Manfred; Masiero, Antonio; Queiroz, Farinaldo S.
2018-02-01
We revisit the status of a Majorana fermion as a dark matter candidate when a sequential Z' gauge boson dictates the dark matter phenomenology. Direct dark matter detection signatures arise from dark matter-nucleus scatterings at bubble chamber and liquid xenon detectors, and from the flux of neutrinos from the Sun measured by the IceCube experiment, which is governed by the spin-dependent dark matter-nucleus scattering. On the collider side, LHC searches for dilepton and monojet + missing energy signals play an important role. The relic density and perturbativity requirements are also addressed. By exploiting dark matter complementarity we outline the region of parameter space where one can successfully have a Majorana dark matter particle in light of current and planned experimental sensitivities.
Multiple ionization of neon by soft x-rays at ultrahigh intensity
NASA Astrophysics Data System (ADS)
Guichard, R.; Richter, M.; Rost, J.-M.; Saalmann, U.; Sorokin, A. A.; Tiedtke, K.
2013-08-01
At the free-electron laser FLASH, multiple ionization of neon atoms was quantitatively investigated at photon energies of 93.0 and 90.5 eV. For ion charge states up to 6+, we compare the respective absolute photoionization yields with results from a minimal model and an elaborate description including standard sequential and direct photoionization channels. Both approaches are based on rate equations and take into account a Gaussian spatial intensity distribution of the laser beam. From the comparison we conclude that photoionization up to a charge of 5+ can be described by the minimal model which we interpret as sequential photoionization assisted by electron shake-up processes. For higher charges, the experimental ionization yields systematically exceed the elaborate rate-based prediction.
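A rate-equation description of sequential photoionization of the kind invoked above can be sketched in a few lines. The cross sections, peak flux, and pulse duration below are illustrative placeholders rather than the neon values; a fuller model would also average over the Gaussian spatial intensity profile, as noted in the comment.

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma = np.array([8.0, 6.0, 4.0, 2.5, 1.5, 0.8]) * 1e-18   # cm^2, made up
qmax = len(sigma)                       # highest charge reached here: 6+

def rhs(t, p, peak_flux, tau):
    """dp_q/dt = F(t) (sigma_{q-1} p_{q-1} - sigma_q p_q), Gaussian F(t)."""
    flux = peak_flux * np.exp(-(t / tau) ** 2)   # photons / (cm^2 s)
    dp = np.zeros_like(p)
    for q in range(qmax):
        loss = flux * sigma[q] * p[q]
        dp[q] -= loss
        dp[q + 1] += loss
    return dp

p0 = np.zeros(qmax + 1)
p0[0] = 1.0                             # all atoms start neutral
sol = solve_ivp(rhs, (-300e-15, 300e-15), p0, args=(2e30, 100e-15),
                rtol=1e-8, atol=1e-12)
# A fuller model would also average these results over the Gaussian
# *spatial* intensity profile by weighting runs at different peak fluxes.
print("final charge-state fractions 0..6+:", np.round(sol.y[:, -1], 4))
```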
NASA Astrophysics Data System (ADS)
Shaw, Darren; Stone, Kevin; Ho, K. C.; Keller, James M.; Luke, Robert H.; Burns, Brian P.
2016-05-01
Forward-looking ground penetrating radar (FLGPR) has the benefit of detecting objects at a significant standoff distance. The FLGPR signal is radiated over a large surface area and the radar signal return is often weak. Improving detection, especially for targets buried in roads, while maintaining an acceptable false alarm rate remains a challenging task. Various kinds of features have been developed over the years to increase FLGPR detection performance. This paper focuses on investigating the use of as many features as possible for detecting buried targets and uses the sequential feature selection technique to automatically choose the features that contribute most to improving performance. Experimental results using data collected at a government test site are presented.
A Proposed Conceptual Framework for Curriculum Design in Physical Fitness.
ERIC Educational Resources Information Center
Miller, Peter V.; Beauchamp, Larry S.
A physical fitness curriculum, designed to provide cumulative benefits in a sequential pattern, is based upon a framework of a conceptual structure. The curriculum's ultimate goal is the achievement of greater physiological efficiency through a holistic approach that would strengthen circulatory-respiratory, mechanical, and neuro-muscular…
Design and Development of the Aircraft Instrument Comprehension Program.
ERIC Educational Resources Information Center
Higgins, Norman C.
The Aircraft Instrument Comprehension (AIC) Program is a self-instructional program designed to teach undergraduate student pilots to read instruments that indicate the position of the aircraft in flight, based on sequential instructional stages of information, prompted practice, and unprompted practice. The program includes a 36-item multiple…
NASA Technical Reports Server (NTRS)
Reda, Daniel C.; Muratore, Joseph J., Jr.; Heineck, James T.
1993-01-01
Time and flow-direction responses of shear-stress-sensitive liquid crystal coatings were explored experimentally. For the time-response experiments, coatings were exposed to transient, compressible flows created during the startup and off-design operation of an injector-driven supersonic wind tunnel. Flow transients were visualized with a focusing Schlieren system and recorded with a 1000 frame/sec color video camera. Liquid crystal responses to these changing-shear environments were then recorded with the same video system, documenting color-play response times equal to, or faster than, the time interval between sequential frames (i.e., 1 millisecond). For the flow-direction experiments, a planar test surface was exposed to equal-magnitude, known-direction surface shear stresses generated by both normal and tangential subsonic jet-impingement flows. Under shear, the sense of the angular displacement of the liquid crystal dispersed (reflected) spectrum was found to be a function of the instantaneous direction of the applied shear. This technique thus renders dynamic flow reversals or flow divergences visible over entire test surfaces at image recording rates up to 1 kHz. Extensions of the technique to visualize relatively small changes in surface shear stress direction appear feasible.
NASA Astrophysics Data System (ADS)
Zhang, Wei; Huang, Wei; Gao, Yubo; Qi, Yafei; Hypervelocity Impact Research Center Team
2015-06-01
Laboratory-scale oblique water entry experiments on trajectory stability in the water column were performed with four differently nosed projectiles at velocities ranging from 20 m/s to 250 m/s. The slender projectiles were designed with flat, ogival, hemispherical, and truncated-ogival noses to compare trajectory deviations when launched at vertical and oblique impact angles (0°-25°). Two high-speed cameras, positioned orthogonal to each other and normal to the column, were employed to capture the entire penetration process. From the experimental results, sequential images in two planes are presented to compare the trajectory deviations of the different impact tests, and 3D trajectory models are extracted from the locations recorded by the cameras. Considering the effects of impact velocity and projectile nose shape, it can be concluded that trajectory deviation is affected most by impact angle and least by impact velocity. Additionally, ogival projectiles tend to be more sensitive to oblique angles and experience the largest attitude changes. National Natural Science Foundation of China (NO.: 11372088).
The effect of sequential information on consumers' willingness to pay for credence food attributes.
Botelho, A; Dinis, I; Lourenço-Gomes, L; Moreira, J; Costa Pinto, L; Simões, O
2017-11-01
The use of experimental methods to determine consumers' willingness to pay for "quality" food has been gaining importance in scientific research. In most of the empirical literature on this issue the experimental design starts with blind tasting, after which information is introduced. It is assumed that this approach allows consumers to elicit the real value that they attach to each of the features added through specific information. In this paper, the starting hypothesis is that this technique overestimates the weight of the features introduced by information in consumers' willingness to pay when compared to a real market situation, in which consumers are confronted with all the information at once. The data obtained through contingent valuation in an in-store setting was used to estimate a hedonic model aiming at assessing consumers' willingness to pay (WTP) for the feature "geographical origin of the variety" of pears and apples in different information scenarios: i) blind tasting followed by extrinsic information and ii) full information provided at once. The results show that, in fact, features are more valued when gradually added to background information than when consumers receive all the information from the beginning. Copyright © 2017 Elsevier Ltd. All rights reserved.
Longhi, Daniel Angelo; Martins, Wiaslan Figueiredo; da Silva, Nathália Buss; Carciofi, Bruno Augusto Mattar; de Aragão, Gláucia Maria Falcão; Laurindo, João Borges
2017-01-02
In predictive microbiology, model parameters have been estimated using the sequential two-step modeling (TSM) approach, in which primary models are fitted to the microbial growth data, and secondary models are then fitted to the primary model parameters to represent their dependence on the environmental variables (e.g., temperature). The Optimal Experimental Design (OED) approach allows reducing the experimental workload and costs and improves model identifiability, because primary and secondary models are fitted simultaneously from non-isothermal data. Lactobacillus viridescens was selected for this study because it is a lactic acid bacterium of great interest for meat product preservation. The objectives of this study were to estimate the growth parameters of L. viridescens in culture medium with the TSM and OED approaches and to evaluate, for each approach, the number of experimental data points and the time needed, as well as the confidence intervals of the model parameters. Experimental data for estimating the model parameters with the TSM approach were obtained at six temperatures (total experimental time of 3540 h and 196 experimental data points of microbial growth). Data for the OED approach were obtained from four optimal non-isothermal profiles (total experimental time of 588 h and 60 experimental data points of microbial growth), two profiles with increasing temperatures (IT) and two with decreasing temperatures (DT). The Baranyi and Roberts primary model and the square root secondary model were used to describe the microbial growth, in which the parameters b and Tmin (±95% confidence interval) were estimated from the experimental data. The parameters obtained with the TSM approach were b = 0.0290 (±0.0020) [1/(h^0.5 °C)] and Tmin = -1.33 (±1.26) [°C], with R^2 = 0.986 and RMSE = 0.581, and the parameters obtained with the OED approach were b = 0.0316 (±0.0013) [1/(h^0.5 °C)] and Tmin = -0.24 (±0.55) [°C], with R^2 = 0.990 and RMSE = 0.436. The parameters obtained with the OED approach had smaller confidence intervals and better statistical indices than those from the TSM approach. Moreover, fewer experimental data points and less time were needed to estimate the model parameters with OED than with TSM, and the OED model parameters were validated against non-isothermal experimental data with great accuracy. In this way, the OED approach is feasible and a very useful tool for improving the prediction of microbial growth under non-isothermal conditions. Copyright © 2016 Elsevier B.V. All rights reserved.
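The square-root secondary model referred to above, sqrt(mu_max) = b (T - Tmin), can be evaluated directly from the reported OED estimates. A minimal sketch follows, using b = 0.0316 1/(h^0.5 °C) and Tmin = -0.24 °C from the abstract; the printed temperatures are an arbitrary illustration.

```python
b, t_min = 0.0316, -0.24   # OED estimates reported in the abstract

def mu_max(temp_c):
    """Predicted specific growth rate (1/h) at temperature temp_c (°C)."""
    return (b * (temp_c - t_min)) ** 2 if temp_c > t_min else 0.0

for T in (4, 8, 12, 16, 20):
    print(f"T = {T:2d} °C -> mu_max = {mu_max(T):.4f} 1/h")
```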
ERIC Educational Resources Information Center
Heyvaert, Mieke; Deleye, Maarten; Saenen, Lore; Van Dooren, Wim; Onghena, Patrick
2018-01-01
When studying a complex research phenomenon, a mixed methods design allows researchers to answer a broader set of research questions and to tap into different aspects of this phenomenon, compared to a monomethod design. This paper reports on how a sequential equal status design (QUAN → QUAL) was used to examine students' reasoning processes when solving…
Capsule Escape Tests - Wallops Island
1959-05-14
Caption: Off the pad abort shot at Wallops using Langley PARD designed full scale capsule with Recruit rocket and extended skirt main parachute. Shows sequential images of launch and capsule splashdown.
Sample size determination in group-sequential clinical trials with two co-primary endpoints
Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi
2014-01-01
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
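One way to make the first decision framework concrete is a Monte Carlo sketch: simulate two correlated endpoints over two equally spaced looks and declare success only when both z-statistics cross the boundary at the same look. The effect sizes, correlation, stage size, and the flat boundary value below are all assumptions for illustration, not the paper's derivations.

```python
import numpy as np

rng = np.random.default_rng(1)
n_per_stage = 60                     # per group, per stage (assumed)
delta = np.array([0.35, 0.30])       # standardized effect sizes (assumed)
rho = 0.5                            # correlation between endpoints (assumed)
crit = 2.178                         # flat 2-look boundary, placeholder value
cov = np.array([[1.0, rho], [rho, 1.0]])

def one_trial():
    cum = np.zeros(2)                # cumulative sum of treated-control diffs
    for k in (1, 2):                 # two looks
        diffs = rng.multivariate_normal(delta, 2 * cov, size=n_per_stage)
        cum += diffs.sum(axis=0)
        z = cum / np.sqrt(2 * k * n_per_stage)   # two-sample mean z-statistics
        if np.all(z > crit):
            return True              # both co-primary endpoints win together
    return False

power = np.mean([one_trial() for _ in range(10000)])
print(f"simulated power: {power:.3f}")
```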
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, K.M.
1992-10-01
Sequential indicator simulation (SIS) is a geostatistical technique designed to aid in the characterization of uncertainty about the structure or behavior of natural systems. This report discusses a simulation experiment designed to study the quality of uncertainty bounds generated using SIS. The results indicate that, while SIS may produce reasonable uncertainty bounds in many situations, factors like the number and location of available sample data, the quality of variogram models produced by the user, and the characteristics of the geologic region to be modeled can all have substantial effects on the accuracy and precision of estimated confidence limits. It is recommended that users of SIS conduct validation studies for the technique on their particular regions of interest before accepting the output uncertainty bounds.
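For readers unfamiliar with SIS, the sketch below shows the bare sequential logic on a 1-D grid: nodes are visited along a random path, and each conditional probability comes from a single-neighbor simple-indicator-kriging estimate with an assumed exponential correlation. A real SIS run would solve a full indicator-kriging system with many neighbors and data-derived variograms; the marginal probability and range here are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, a = 200, 0.3, 15.0          # grid nodes, P(indicator=1), corr. range
sim = np.full(n, -1)              # -1 marks "not yet simulated"

for node in rng.permutation(n):   # random visiting path
    informed = np.flatnonzero(sim >= 0)
    if informed.size == 0:
        prob = p                  # first node: marginal probability
    else:                         # single-neighbor simple IK estimate:
        nearest = informed[np.argmin(np.abs(informed - node))]
        rho = np.exp(-abs(nearest - node) / a)
        prob = p + rho * (sim[nearest] - p)
    sim[node] = rng.random() < prob

print("simulated proportion of 1s:", round(sim.mean(), 3))
print("".join("#" if v else "." for v in sim[:80]))
```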
Guieysse, Benoit; Norvill, Zane N
2014-02-28
When direct biological treatment of a wastewater is unfeasible, a cost- and resource-efficient alternative to direct chemical treatment consists of combining biological treatment with a chemical pre-treatment that converts the hazardous pollutants into more biodegradable compounds. Whereas the principles and advantages of sequential treatment have been demonstrated for a broad range of pollutants and process configurations, recent progress (2011-present) in the field provides the basis for refining assessments of feasibility, costs, and environmental impacts. This paper thus reviews recent real-wastewater demonstrations at pilot and full scale as well as new process configurations. It also discusses new insights into the potential impacts of microbial community dynamics on process feasibility, design and operation. Finally, it sheds light on a critical issue that has not yet been properly addressed in the field: integration requires complex and tailored optimization and, of paramount importance to full-scale application, is sensitive to uncertainty and variability in the inputs used for process design and operation. Future research is therefore critically needed to improve process control and better assess the real potential of sequential chemical-biological processes for industrial wastewater treatment. Copyright © 2013 Elsevier B.V. All rights reserved.
PC_Eyewitness and the sequential superiority effect: computer-based lineup administration.
MacLin, Otto H; Zimmerman, Laura A; Malpass, Roy S
2005-06-01
Computer technology has become an increasingly important tool for conducting eyewitness identifications. In the area of lineup identifications, computerized administration offers several advantages for researchers and law enforcement. PC_Eyewitness is designed specifically to administer lineups. To assess this new lineup technology, two studies were conducted in order to replicate the results of previous studies comparing simultaneous and sequential lineups. One hundred twenty university students participated in each experiment. Experiment 1 used traditional paper-and-pencil lineup administration methods to compare simultaneous to sequential lineups. Experiment 2 used PC_Eyewitness to administer simultaneous and sequential lineups. The results of these studies were compared to the meta-analytic results reported by N. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001). No differences were found between paper-and-pencil and PC_Eyewitness lineup administration methods. The core findings of the N. Steblay et al. (2001) meta-analysis were replicated by both administration procedures. These results show that computerized lineup administration using PC_Eyewitness is an effective means for gathering eyewitness identification data.
Ye, Jiawen; Yeung, Dannii Y; Liu, Elaine S C; Rochelle, Tina L
2018-04-03
Past research has often focused on the effects of emotional intelligence and received social support on subjective well-being yet paid limited attention to the effects of provided social support. This study adopted a longitudinal design to examine the sequential mediating effects of provided and received social support on the relationship between trait emotional intelligence and subjective happiness. A total of 214 Hong Kong Chinese undergraduates were asked to complete two assessments with a 6-month interval in between. The results of the sequential mediation analysis indicated that the trait emotional intelligence measured in Time 1 indirectly influenced the level of subjective happiness in Time 2 through a sequential pathway of social support provided for others in Time 1 and social support received from others in Time 2. These findings highlight the importance of trait emotional intelligence and the reciprocal exchanges of social support in the subjective well-being of university students. © 2018 International Union of Psychological Science.
Exogenous nitric oxide can control SIRS and downregulate NFkappaB.
Lozano, Francisco S; Barros, Marcello B; García-Criado, Francisco J; Gomez-Alonso, Alberto
2005-03-01
Nitric oxide (NO) participates in inflammation and affects almost all steps of its development. Several experimental studies have unveiled the beneficial effects of NO through modulation of the Systemic Inflammatory Response Syndrome (SIRS). In this sense, in the present work we attempted to evaluate the beneficial effects of exogenous NO and its levels of action (biochemical and cellular) in a model of SIRS induced by two sequential insults: Dacron graft implantation (first insult) and subsequent administration of Zymosan A (second insult) in Wistar rats. The animals were divided into four groups: 1) No manipulation (Basal); 2) Laparotomy (L) + mineral oil (Sham); 3) L + Graft-Zymosan (GZ) (Control); and 4) L + GZ + NO (Assay). Determinations: Survival, TNF-alpha, SOA, ICAM-1, and NFkappaB. The model established (Control) induced a mortality rate of 20%. It also significantly increased the levels of TNF-alpha (P <0.001) and SOA (P <0.01), ICAM-1 expression, and NFkappaB levels (P <0.05). Treatment with NO reduced mortality to 0%, significantly decreasing TNF-alpha (P <0.001) and SOA (P <0.01) levels, ICAM-1 expression, and NFkappaB levels (P <0.05). The exogenous administration of NO before the two sequential insults controlled SIRS at the biochemical level (TNF-alpha, SOA) and at the cellular level (transcription) in a lasting manner. The cascade-like interrelationship of both levels and the study design do not allow us to pinpoint the key to its modulation.
Wang, Yang; Wang, Lu; Tian, Tian; Hu, Xiaoya; Yang, Chun; Xu, Qin
2012-05-21
In this study, an automated sequential injection lab-on-valve (SI-LOV) system was designed for the on-line matrix removal and preconcentration of quercetin. Octadecyl-functionalized magnetic silica nanoparticles were prepared and packed into the microcolumn of the LOV as adsorbents. After being adsorbed through hydrophobic interaction, the analyte was eluted and subsequently introduced into the electrochemical flow cell for voltammetric quantification. The main parameters affecting the performance of the solid-phase extraction, such as sample pH and flow rate, eluent solution and volume, and accumulation potential and time, were investigated in detail. Under the optimum experimental conditions, a linear calibration curve was obtained in the range of 1.0 × 10(-8) to 1 × 10(-5) mol L(-1) with R(2) = 0.9979. The limit of detection (LOD) and limit of quantitation (LOQ) were 1.3 × 10(-9) and 4.3 × 10(-9) mol L(-1), respectively. The relative standard deviation (RSD) for the determination of 1.0 × 10(-6) mol L(-1) quercetin was found to be 2.9% (n = 11), along with a sampling frequency of 40 h(-1). The applicability and reliability of the automated method were demonstrated by the determination of quercetin in human urine and red wine samples through recovery experiments, and the obtained results were in good agreement with those obtained by the HPLC method.
Optimization of the gypsum-based materials by the sequential simplex method
NASA Astrophysics Data System (ADS)
Doleželová, Magdalena; Vimmrová, Alena
2017-11-01
The application of the sequential simplex optimization method to the design of gypsum-based materials is described. The principles of the simplex method are explained, and several examples of its use for the optimization of lightweight gypsum and ternary gypsum-based materials are given. By this method, lightweight gypsum-based materials with the desired properties and a ternary gypsum-based material with higher strength (16 MPa) were successfully developed. The simplex method is a useful tool for optimizing gypsum-based materials, but the objective of the optimization has to be formulated appropriately.
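A minimal sketch of the sequential simplex idea follows, using scipy's Nelder-Mead implementation. In a real materials study each function evaluation would be a prepared and tested mixture, so a smooth toy "strength" surface stands in here, and the factor names and optimum location are invented.

```python
import numpy as np
from scipy.optimize import minimize

def negative_strength(x):
    """Toy response: compressive strength (MPa) vs two mixture factors.
    Peaks near x = (0.4, 0.6); purely illustrative, not measured data."""
    return -(16.0 - 40 * (x[0] - 0.4) ** 2 - 25 * (x[1] - 0.6) ** 2)

result = minimize(negative_strength, x0=[0.2, 0.2], method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 1e-3})
print("best factor settings:", np.round(result.x, 3))
print("predicted strength (MPa):", round(-result.fun, 2))
```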
Sequential Reactions of Surface-Tethered Glycolytic Enzymes
Mukai, Chinatsu; Bergkvist, Magnus; Nelson, Jacquelyn L.; Travis, Alexander J.
2014-01-01
The development of complex hybrid organic-inorganic devices faces several challenges, including how they can generate energy. Cells face similar challenges regarding local energy production. Mammalian sperm solve this problem by generating ATP down the flagellar principal piece by means of glycolytic enzymes, several of which are tethered to a cytoskeletal support via germ cell-specific targeting domains. Inspired by this design, we have produced recombinant hexokinase type 1 and glucose-6-phosphate isomerase capable of oriented immobilization on a nickel-nitrilotriacetic acid modified surface. Specific activities of enzymes tethered via this strategy were substantially higher than when randomly adsorbed. Furthermore, these enzymes showed sequential activities when tethered onto the same surface. This is the first demonstration of surface-tethered pathway components showing sequential enzymatic activities, and it provides a first step toward reconstitution of glycolysis on engineered hybrid devices. PMID:19778729
Effect of metaphorical verbal instruction on modeling of sequential dance skills by young children.
Sawada, Misako; Mori, Shiro; Ishii, Motonobu
2002-12-01
Metaphorical verbal instruction was compared to specific verbal instruction about movement in the modeling of sequential dance skills by young children. Two groups of participants (Younger, mean age 5:3 yr., n = 30; Older, mean age 6:2 yr., n = 30) were randomly assigned to conditions in a 2 (sex) x 2 (age [Younger and Older]) x 3 (verbal instruction [Metaphorical, Movement-relevant, and None]) factorial design. Order scores were calculated for the performance and recognition tests, comprising five acquisition trials and two retention trials after 24 hr., respectively. Analysis of variance indicated that the groups given metaphorical instruction performed better than those given the other two instructions, for both younger and older children. The results suggest that metaphorical verbal instruction aids the recognition and performance of sequential dance skills in young children.
Sequential associative memory with nonuniformity of the layer sizes.
Teramae, Jun-Nosuke; Fukai, Tomoki
2007-01-01
Sequence retrieval is of fundamental importance in information processing by the brain and has been studied extensively in neural network models. Most previous sequential associative memory models embed sequences of memory patterns of nearly equal sizes. It was recently shown that local cortical networks display many diverse yet repeatable, precise temporal sequences of neuronal activity, termed "neuronal avalanches." Interestingly, these avalanches display size and lifetime distributions that obey power laws. Inspired by these experimental findings, here we consider an associative memory model of binary neurons that stores sequences of memory patterns with highly variable sizes. Our analysis includes the case where the statistics of these size variations obey the above-mentioned power laws. We study the retrieval dynamics of such memory systems by analytically deriving the equations that govern the time evolution of macroscopic order parameters. We calculate the critical sequence length beyond which the network cannot retrieve memory sequences correctly. As an application of the analysis, we show how the present variability in sequential memory patterns degrades the power-law lifetime distribution of retrieved neural activities.
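The classic construction behind such models can be sketched briefly: an asymmetric Hebbian coupling matrix maps each stored pattern onto its successor, so synchronous updates walk the network through the sequence. The toy below keeps all patterns the same size, unlike the variable-size case analyzed in the paper; the network size and pattern count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
n_neurons, n_patterns = 400, 5
xi = rng.choice([-1, 1], size=(n_patterns, n_neurons))

# Asymmetric Hebbian couplings: pattern mu is mapped onto mu+1 (cyclic).
W = sum(np.outer(xi[(mu + 1) % n_patterns], xi[mu])
        for mu in range(n_patterns)) / n_neurons

s = xi[0].copy()                        # start the network on pattern 0
for step in range(n_patterns + 1):
    overlaps = xi @ s / n_neurons       # m_mu: overlap with each pattern
    print(f"step {step}: overlaps =", np.round(overlaps, 2))
    s = np.sign(W @ s)
    s[s == 0] = 1                       # deterministic tie-breaking
```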
Li, Ke; Ping, Xueliang; Wang, Huaqing; Chen, Peng; Cao, Yi
2013-06-21
A novel intelligent fault diagnosis method for motor roller bearings operating under unsteady rotating speed and load is proposed in this paper. The pseudo Wigner-Ville distribution (PWVD) and the relative crossing information (RCI) methods are used to extract the feature spectra from the non-stationary vibration signal measured for condition diagnosis. The RCI is used to automatically extract the feature spectrum from the time-frequency distribution of the vibration signal. The extracted feature spectrum is instantaneous and not correlated with the rotation speed and load. Using the ant colony optimization (ACO) clustering algorithm, synthesizing symptom parameters (SSP) for condition diagnosis are obtained. The experimental results show that the diagnostic sensitivity of the SSP is higher than that of the original symptom parameter (SP), and the SSP can sensitively reflect the characteristics of the feature spectrum for precise condition diagnosis. Finally, a fuzzy diagnosis method based on sequential inference and possibility theory is also proposed, by which the conditions of the machine can be identified sequentially.
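A compact numpy sketch of the pseudo Wigner-Ville distribution is given below; truncating the lag kernel to a finite half-window is what makes it "pseudo". The window length, sampling rate, and chirp test signal are arbitrary, and a production implementation would add explicit lag-window smoothing.

```python
import numpy as np
from scipy.signal import hilbert

def pwvd(x, fs, half_window=64):
    """Discrete pseudo Wigner-Ville distribution of a real 1-D signal."""
    z = hilbert(x)                        # analytic signal avoids aliasing
    n, L, M = len(x), half_window, 2 * half_window
    tfr = np.zeros((M, n))
    for t in range(n):
        tmax = min(L - 1, t, n - 1 - t)   # truncated lag window -> "pseudo"
        taus = np.arange(-tmax, tmax + 1)
        r = np.zeros(M, dtype=complex)
        r[taus % M] = z[t + taus] * np.conj(z[t - taus])
        tfr[:, t] = np.fft.fft(r).real    # conjugate-symmetric kernel -> real
    freqs = np.arange(M) * fs / (2 * M)   # lag step spans 2 samples
    return tfr, freqs

fs = 1000.0
t = np.arange(1024) / fs
sig = np.cos(2 * np.pi * (50 * t + 40 * t ** 2))  # chirp, IF = 50 + 80 t Hz
tfr, freqs = pwvd(sig, fs)
mid = 512
print("peak at t = %.3f s: %.1f Hz (instantaneous frequency there: %.1f Hz)"
      % (t[mid], freqs[np.argmax(tfr[:, mid])], 50 + 80 * t[mid]))
```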
Yang, Yong; Christakos, George; Huang, Wei; Lin, Chengda; Fu, Peihong; Mei, Yang
2016-04-12
Because of the rapid economic growth in China, many regions are subjected to severe particulate matter pollution. Thus, improving the methods of determining the spatiotemporal distribution and uncertainty of air pollution can provide considerable benefits when developing risk assessments and environmental policies. The uncertainty assessment methods currently in use include the sequential indicator simulation (SIS) and indicator kriging techniques. However, these methods cannot be employed to assess multi-temporal data. In this work, a spatiotemporal sequential indicator simulation (STSIS) based on a non-separable spatiotemporal semivariogram model was used to assimilate multi-temporal data in the mapping and uncertainty assessment of PM2.5 distributions in a contaminated atmosphere. PM2.5 concentrations recorded throughout 2014 in Shandong Province, China were used as the experimental dataset. Based on the number of STSIS procedures, we assessed various types of mapping uncertainties, including single-location uncertainties over one day and multiple days and multi-location uncertainties over one day and multiple days. A comparison of the STSIS technique with the SIS technique indicates that better performance was obtained with the STSIS method.
Metal Big Area Additive Manufacturing: Process Modeling and Validation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan; Nycz, Andrzej; Noakes, Mark W
Metal Big Area Additive Manufacturing (mBAAM) is a new additive manufacturing (AM) technology for printing large-scale 3D objects. mBAAM is based on the gas metal arc welding process and uses a continuous feed of welding wire to manufacture an object. An electric arc forms between the wire and the substrate, which melts the wire and deposits a bead of molten metal along the predetermined path. In general, the welding process parameters and local conditions determine the shape of the deposited bead. The sequence of bead deposition and the corresponding thermal history of the manufactured object determine the long-range effects, such as thermally induced distortions and residual stresses. Therefore, the resulting performance or final properties of the manufactured object depend on its geometry and the deposition path, in addition to the basic welding process parameters. Physical testing is critical for gaining the necessary knowledge for quality prints, but traversing the process parameter space in order to develop an optimized build strategy for each new design is impractical by purely experimental means. Computational modeling and optimization may accelerate the development of a build process strategy and save time and resources. Because computational modeling provides these opportunities, we have developed a physics-based Finite Element Method (FEM) simulation framework and numerical models to support the development and design of the mBAAM process. In this paper, we performed a sequentially coupled heat transfer and stress analysis for predicting the final deformation of a small rectangular structure printed using mild steel welding wire. Using the new simulation technologies, material was progressively added into the FEM simulation as the arc weld traversed the build path. In the sequentially coupled heat transfer and stress analysis, the heat transfer analysis was performed to calculate the temperature evolution, which was used in a stress analysis to evaluate the residual stresses and distortions. In this formulation, we assume that the physics is directionally coupled, i.e., the effect of the stress of the component on the temperatures is negligible. The experiment instrumentation (measurement types, sensor types, sensor locations, sensor placements, measurement intervals) and the measurements are presented. The temperatures and distortions from the simulations show good correlation with experimental measurements. Ongoing modeling work is also briefly discussed.
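The "progressively added material" idea can be illustrated with a drastically simplified 1-D explicit heat-conduction sketch in which one cell is activated at melt temperature per deposition interval. All material and process numbers below are invented; the actual work uses a full FEM formulation with measured parameters.

```python
import numpy as np

n_cells, dx, dt = 50, 0.01, 0.05     # bar cells, cell size (m), time step (s)
alpha, h_loss = 1.2e-5, 0.02         # diffusivity (m^2/s), ambient-loss rate (1/s)
T_melt, T_amb = 1500.0, 25.0         # deposition and ambient temperature (°C)
deposit_every = 40                   # time steps between bead deposits

assert alpha * dt / dx ** 2 < 0.5    # explicit-scheme stability condition
T = np.full(n_cells, T_amb)
active = 1                           # number of cells deposited so far
T[0] = T_melt

for step in range(1, deposit_every * n_cells):
    if step % deposit_every == 0 and active < n_cells:
        T[active] = T_melt           # activate the next cell, deposited hot
        active += 1
    Ta = T[:active]                  # view: only active cells conduct heat
    lap = np.zeros_like(Ta)
    lap[1:-1] = Ta[2:] - 2 * Ta[1:-1] + Ta[:-2]
    if active >= 2:                  # one-neighbor ends of the active region
        lap[0] = Ta[1] - Ta[0]
        lap[-1] = Ta[-2] - Ta[-1]
    Ta += alpha * dt / dx ** 2 * lap + h_loss * dt * (T_amb - Ta)

print("first / last deposited cell: %.0f / %.0f °C" % (T[0], T[-1]))
```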
Best Bang for the Buck: Part 1 – The Size of Experiments Relative to Design Performance
Anderson-Cook, Christine Michaela; Lu, Lu
2016-10-01
There are many choices to make when designing an experiment for a study, such as: what design factors to consider, which levels of the factors to use and which model to focus on. One aspect of design, however, is often left unquestioned: the size of the experiment. When learning about design of experiments, problems are often posed as "select a design for a particular objective with N runs." It’s tempting to consider the design size as a given constraint in the design-selection process. If you think of learning through designed experiments as a sequential process, however, strategically planning for the use of resources at different stages of data collection can be beneficial: Saving experimental runs for later is advantageous if you can efficiently learn with less in the early stages. Alternatively, if you’re too frugal in the early stages, you might not learn enough to proceed confidently with the next stages. Therefore, choosing the right-sized experiment is important—not too large or too small, but with a thoughtful balance to maximize the knowledge gained given the available resources. It can be a great advantage to think about the design size as flexible and include it as an aspect for comparisons. Sometimes you’re asked to provide a small design that is too ambitious for the goals of the study. Finally, if you can show quantitatively how the suggested design size might be inadequate or lead to problems during analysis—and also offer a formal comparison to some alternatives of different (likely larger) sizes—you may have a better chance to ask for additional resources to deliver statistically sound and satisfying results.
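A quick way to make such a design-size comparison quantitative is to compute power across candidate sizes. A minimal sketch, assuming a two-sided two-sample comparison and a normal approximation (effect size and sample sizes below are illustrative):

```python
from scipy.stats import norm

def power_two_sample(n_per_group, d, alpha=0.05):
    """Approximate power of a two-sided two-sample z test for a
    standardized effect size d with n observations per group."""
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5     # noncentrality for equal n
    return 1 - norm.cdf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# Tabulating power against n makes "is this design size adequate?"
# a concrete, comparable question rather than a fixed constraint.
for n in (6, 12, 24, 48):
    print(n, round(power_two_sample(n, d=0.8), 3))
```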
A novel method for the sequential removal and separation of multiple heavy metals from wastewater.
Fang, Li; Li, Liang; Qu, Zan; Xu, Haomiao; Xu, Jianfang; Yan, Naiqiang
2018-01-15
A novel method was developed and applied for the treatment of simulated wastewater containing multiple heavy metals. A sorbent of ZnS nanocrystals (NCs) was synthesized and showed extraordinary performance for the removal of Hg2+, Cu2+, Pb2+ and Cd2+. The removal efficiencies of Hg2+, Cu2+, Pb2+ and Cd2+ were 99.9%, 99.9%, 90.8% and 66.3%, respectively. Meanwhile, it was determined that the solubility product (Ksp) of heavy metal sulfides was closely related to the adsorption selectivity of the various heavy metals on the sorbent. The removal efficiency of Hg2+ was higher than that of Cd2+, while the Ksp of HgS was lower than that of CdS. This indicated that preferential adsorption of a heavy metal occurred when the Ksp of its sulfide was lower. In addition, the differences in the Ksp of heavy metal sulfides allowed for the exchange of heavy metals, indicating the potential application for the sequential removal and separation of heavy metals from wastewater. According to the cumulative adsorption experimental results, multiple heavy metals were sequentially adsorbed and separated from the simulated wastewater in the order of the Ksp of their sulfides. This method holds the promise of sequentially removing and separating multiple heavy metals from wastewater. Copyright © 2017 Elsevier B.V. All rights reserved.
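The reported selectivity rule is easy to operationalize: sort the metals by the solubility product of their sulfides. The Ksp figures below are illustrative order-of-magnitude values from common solubility tables, not numbers from the study:

```python
# Illustrative solubility products for the metal sulfides (order-of-
# magnitude values; exact figures vary between reference tables).
ksp_sulfide = {"Hg2+": 2e-53, "Cu2+": 6e-37, "Pb2+": 3e-28, "Cd2+": 8e-27}

# Lower Ksp of the sulfide -> stronger, earlier capture on the ZnS NCs,
# which matches the removal-efficiency ordering reported above.
removal_order = sorted(ksp_sulfide, key=ksp_sulfide.get)
print(removal_order)   # ['Hg2+', 'Cu2+', 'Pb2+', 'Cd2+']
```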
Dinavahi, Saketh S; Noory, Mohammad A; Gowda, Raghavendra; Drabick, Joseph J; Berg, Arthur; Neves, Rogerio I; Robertson, Gavin P
2018-03-01
Drug combinations acting synergistically to kill cancer cells have become increasingly important in melanoma as an approach to manage the recurrent resistant disease. Protein kinase B (AKT) is a major target in this disease but its inhibitors are not effective clinically, which is a major concern. Targeting AKT in combination with WEE1 (mitotic inhibitor kinase) seems to have potential to make AKT-based therapeutics effective clinically. Since agents targeting AKT and WEE1 have been tested individually in the clinic, the quickest way to move the drug combination to patients would be to combine these agents sequentially, enabling the use of existing phase I clinical trial toxicity data. Therefore, a rapid preclinical approach is needed to evaluate whether simultaneous or sequential drug treatment has maximal therapeutic efficacy, which is based on a mechanistic rationale. To develop this approach, melanoma cell lines were treated with the AKT inhibitor AZD5363 [4-amino-N-[(1S)-1-(4-chlorophenyl)-3-hydroxypropyl]-1-(7H-pyrrolo[2,3-d]pyrimidin-4-yl)piperidine-4-carboxamide] and the WEE1 inhibitor AZD1775 [2-allyl-1-(6-(2-hydroxypropan-2-yl)pyridin-2-yl)-6-((4-(4-methylpiperazin-1-yl)phenyl)amino)-1H-pyrazolo[3,4-d]pyrimidin-3(2H)-one] using simultaneous and sequential dosing schedules. Simultaneous treatment synergistically reduced melanoma cell survival and tumor growth. In contrast, sequential treatment was antagonistic and had a minimal tumor inhibitory effect compared with individual agents. Mechanistically, simultaneous targeting of AKT and WEE1 enhanced deregulation of the cell cycle and DNA damage repair pathways by modulating the transcription factors p53 and forkhead box M1, which was not observed with sequential treatment. Thus, this study identifies a rapid approach to assess drug combinations with a mechanistic basis for selection, which suggests that combining AKT and WEE1 inhibitors is needed for maximal efficacy. Copyright © 2018 by The American Society for Pharmacology and Experimental Therapeutics.
Auxetic metamaterials from disordered networks
NASA Astrophysics Data System (ADS)
Reid, Daniel R.; Pashine, Nidhi; Wozniak, Justin M.; Jaeger, Heinrich M.; Liu, Andrea J.; Nagel, Sidney R.; de Pablo, Juan J.
2018-02-01
Recent theoretical work suggests that systematic pruning of disordered networks consisting of nodes connected by springs can lead to materials that exhibit a host of unusual mechanical properties. In particular, global properties such as Poisson’s ratio or local responses related to deformation can be precisely altered. Tunable mechanical responses would be useful in areas ranging from impact mitigation to robotics and, more generally, for creation of metamaterials with engineered properties. However, experimental attempts to create auxetic materials based on pruning-based theoretical ideas have not been successful. Here we introduce a more realistic model of the networks, which incorporates angle-bending forces and the appropriate experimental boundary conditions. A sequential pruning strategy of select bonds in this model is then devised and implemented that enables engineering of specific mechanical behaviors upon deformation, both in the linear and in the nonlinear regimes. In particular, it is shown that Poisson’s ratio can be tuned to arbitrary values. The model and concepts discussed here are validated by preparing physical realizations of the networks designed in this manner, which are produced by laser cutting 2D sheets and are found to behave as predicted. Furthermore, by relying on optimization algorithms, we exploit the networks’ susceptibility to tuning to design networks that possess a distribution of stiffer and more compliant bonds and whose auxetic behavior is even greater than that of homogeneous networks. Taken together, the findings reported here serve to establish that pruned networks represent a promising platform for the creation of unique mechanical metamaterials.
Tam, A M W; Qi, G; Srivastava, A K; Wang, X Q; Fan, F; Chigrinov, V G; Kwok, H S
2014-06-10
In this paper, we present a novel design configuration of a double deformed-helix ferroelectric liquid crystal (DHFLC) wave plate continuously tunable Lyot filter, which exhibits a rapid response time of 185 μs while maintaining a high contrast ratio between the passband and stop band throughout a wide tunable range. A DHFLC tunable filter with a high contrast ratio is attractive for realizing high-speed optical processing devices, such as multispectral and hyperspectral imaging systems, real-time remote sensing, field sequential color displays, and wavelength demultiplexing in the metro network. In this work, an experimental prototype for a single-stage DHFLC Lyot filter of this design has been fabricated using photoalignment technology. We have demonstrated that the filter has a continuous tunable range of 30 nm for a blue wavelength, 45 nm for a green wavelength, and more than 50 nm for a red wavelength when the applied voltage gradually increases from 0 to 8 V. Within this tunable range, the contrast ratio of the proposed double wave plate configuration is maintained above 20 with small deviation in the transmittance level. Simulation and experimental results showed that the proposed double DHFLC wave plate configuration enhances the contrast ratio of the tunable filter and thus increases its tunable range when compared with a Lyot filter using a single DHFLC wave plate. Moreover, we have proposed a polarization-insensitive configuration for which the efficiency of the existing prototype can theoretically be doubled by the use of polarization beam splitters.
A protein-dependent side-chain rotamer library.
Bhuyan, Md Shariful Islam; Gao, Xin
2011-12-14
The protein side-chain packing problem has remained one of the key open problems in bioinformatics. The three main components of protein side-chain prediction methods are a rotamer library, an energy function and a search algorithm. Rotamer libraries summarize the existing knowledge of experimentally determined structures quantitatively. Depending on how much contextual information is encoded, there are backbone-independent and backbone-dependent rotamer libraries. Backbone-independent libraries only encode sequential information, whereas backbone-dependent libraries encode both sequential and locally structural information. However, side-chain conformations are determined by spatially local information, rather than sequentially local information. Since in the side-chain prediction problem the backbone structure is given, spatially local information should ideally be encoded into the rotamer libraries. In this paper, we propose a new type of backbone-dependent rotamer library, which encodes structural information of all the spatially neighboring residues. We call these protein-dependent rotamer libraries. Given any rotamer library and a protein backbone structure, we first model the protein structure as a Markov random field. Then the marginal distributions are estimated by inference algorithms, without doing global optimization or search. The rotamers from the given library are then re-ranked and associated with the updated probabilities. Experimental results demonstrate that the proposed protein-dependent libraries significantly outperform the widely used backbone-dependent libraries in terms of side-chain prediction accuracy and rotamer ranking ability. Furthermore, without global optimization/search, the side-chain prediction power of the protein-dependent library is still comparable to that of global-search-based side-chain prediction methods.
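The re-ranking step can be illustrated on a toy two-residue system. The sketch below computes exact marginals by enumerating the joint Boltzmann distribution, standing in for the inference algorithms the paper applies to a full Markov random field; all energies are invented for illustration:

```python
import itertools, math

# Toy energies (invented): unary terms play the role of library energies,
# pairwise terms the interactions between spatially neighboring residues.
unary = {"A": {0: 0.0, 1: 0.4, 2: 1.1},      # residue A: 3 rotamers
         "B": {0: 0.2, 1: 0.0}}              # residue B: 2 rotamers
pair  = {(0, 0): 0.0, (0, 1): 1.5, (1, 0): 0.3,
         (1, 1): 0.0, (2, 0): 2.0, (2, 1): 0.1}
kT = 0.6

# Exact marginals by enumerating the joint Boltzmann distribution.
weights = {}
for ra, rb in itertools.product(unary["A"], unary["B"]):
    e = unary["A"][ra] + unary["B"][rb] + pair[(ra, rb)]
    weights[(ra, rb)] = math.exp(-e / kT)
Z = sum(weights.values())

marg_A = {ra: sum(w for (a, b), w in weights.items() if a == ra) / Z
          for ra in unary["A"]}
# Re-rank residue A's rotamers by context-aware marginal probability;
# this can differ from the ranking by unary (library) energy alone.
print(sorted(marg_A, key=marg_A.get, reverse=True))
```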
Software For Drawing Design Details Concurrently
NASA Technical Reports Server (NTRS)
Crosby, Dewey C., III
1990-01-01
Software system containing five computer-aided-design programs enables more than one designer to work on same part or assembly at same time. Reduces time necessary to produce design by implementing concept of parallel or concurrent detailing, in which all detail drawings documenting three-dimensional model of part or assembly produced simultaneously, rather than sequentially. Keeps various detail drawings consistent with each other and with overall design by distributing changes in each detail to all other affected details.
NASA Astrophysics Data System (ADS)
Osonga, Francis Juma
Flavonoids exhibit arrays of biological effects that are beneficial to humans, including anti-viral, anti-oxidative, anti-inflammatory and anti-carcinogenic effects. However, these applications have been hindered by their poor stability and solubility in common solvents. Consequently, there is significant interest in the modification of flavonoids to improve their solubility. This poor solubility is also believed to be responsible for their poor permeability and bioavailability. Hence the central goal of this work is to design synthetic strategies for the sequential protection of the -OH groups in order to produce phosphorylated quercetin and apigenin derivatives. This work is divided into two parts: the first part presents the design, synthesis, and characterization of novel flavonoid derivatives via global and sequential phosphorylation. The second part focuses on the application of the synthesized derivatives for greener nanoparticle synthesis. This work shows for the first time that sequential phosphorylation of quercetin is feasible through the design of four new derivatives, namely 5,4'-O-Quercetin Diphosphate (QDPI), 4'-O-phosphate Quercetin (4'-QPI), 5,4'-Quercetin Diphosphate (5,4'-QDP) and the monophosphate 4-QP. The synthesis of 4'-QP and 5,4'-QDP was successful, with 85% and 60.5% yields respectively. In addition, the progress towards the total synthesis of apigenin phosphate derivatives (7,4'-ADP and 7-AP) is presented. The synthesized derivatives were characterized using 1H, 13C, and 31P NMR. The phosphorylated derivatives were subsequently explored as reducing agents for sustainable synthesis of gold, silver and copper nanoparticles. We have successfully demonstrated the photochemical synthesis of gold nanoplates of sizes ranging from 10 to 200 nm using water-soluble QDP in the presence of sunlight. This work contributes immensely to promoting the ideals of green nanosynthesis by (i) eliminating the use of organic solvents in the nanosynthesis, (ii) exploiting naturally derived flavonoids as reducing and stabilizing reagents without any other extraneous reagents, and (iii) achieving anisotropic nanosynthesis using sunlight and at room temperature.
NASA Technical Reports Server (NTRS)
Carroll, Chester C.; Youngblood, John N.; Saha, Aindam
1987-01-01
Improvements and advances in the development of computer architecture now provide innovative technology for the recasting of traditional sequential solutions into high-performance, low-cost, parallel systems to increase system performance. Research conducted in the development of specialized computer architecture for the algorithmic execution of an avionics system guidance and control problem in real time is described. A comprehensive treatment of both the hardware and software structures of a customized computer which performs real-time computation of guidance commands with updated estimates of target motion and time-to-go is presented. An optimal, real-time allocation algorithm was developed which maps the algorithmic tasks onto the processing elements. This allocation is based on critical path analysis. The final stage is the design and development of the hardware structures suitable for the efficient execution of the allocated task graph. The processing element is designed for rapid execution of the allocated tasks. Fault tolerance is a key feature of the overall architecture. Parallel numerical integration techniques, task definitions, and allocation algorithms are discussed. The parallel implementation is analytically verified and the experimental results are presented. The design of the data-driven computer architecture, customized for the execution of the particular algorithm, is discussed.
Brühlmann, David; Sokolov, Michael; Butté, Alessandro; Sauer, Markus; Hemberger, Jürgen; Souquet, Jonathan; Broly, Hervé; Jordan, Martin
2017-07-01
Rational and high-throughput optimization of mammalian cell culture media has a great potential to modulate recombinant protein product quality. We present a process design method based on parallel design-of-experiment (DoE) of CHO fed-batch cultures in 96-deepwell plates to modulate monoclonal antibody (mAb) glycosylation using medium supplements. To reduce the risk of losing valuable information in an intricate joint screening, 17 compounds were separated into five different groups, considering their mode of biological action. The concentration ranges of the medium supplements were defined according to information encountered in the literature and in-house experience. The screening experiments produced wide glycosylation pattern ranges. Multivariate analysis including principal component analysis and decision trees was used to select the best performing glycosylation modulators. Subsequent D-optimal quadratic design with four factors (three promising compounds and temperature shift) in shake tubes confirmed the outcome of the selection process and provided a solid basis for sequential process development at a larger scale. The glycosylation profile with respect to the specifications for biosimilarity was greatly improved in shake tube experiments: 75% of the conditions were equally close or closer to the specifications for biosimilarity than the best 25% in 96-deepwell plates. Biotechnol. Bioeng. 2017;114: 1448-1458. © 2017 Wiley Periodicals, Inc.
A Study on the Spatial Abilities of Prospective Social Studies Teachers: A Mixed Method Research
ERIC Educational Resources Information Center
Yurt, Eyüp; Tünkler, Vural
2016-01-01
This study investigated prospective social studies teachers' spatial abilities. It was conducted with 234 prospective teachers attending Social Studies Teaching departments at Education Faculties of two universities in Central and Southern Anatolia. This study, designed according to the explanatory-sequential design, is a mixed research method,…
Child Welfare Strategy in the Coming Years.
ERIC Educational Resources Information Center
Kadushin, Alfred; And Others
This collection of policy papers by a dozen national experts in subject areas related to child welfare is designed to assist public and voluntary agency program directors in their efforts to update current programs or to design new ones. Sequentially the chapters: (1) set a framework for the following papers, (2) examine the provision of foster…
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
15 CFR 743.1 - Wassenaar Arrangement.
Code of Federal Regulations, 2011 CFR
2011-01-01
...' are defined as “focal plane arrays” designed for use with a scanning optical system that images a scene in a sequential manner to produce an image. 'Staring Arrays' are defined as “focal plane arrays” designed for use with a non-scanning optical system that images a scene. h. Gallium Arsenide or...
NASA Astrophysics Data System (ADS)
Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori
2017-12-01
It is very important to design electrical machinery with high efficiency from the standpoint of saving energy. Therefore, topology optimization (TO) is occasionally used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO can achieve a design with a much higher degree of freedom in terms of structure, there is a possibility of deriving a novel structure that would be quite different from the conventional one. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. A magnetic shielding problem, which has many local minima, is first employed as a benchmark for performance evaluation among several mathematical programming methods. Second, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.
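The optimization driver in this class of methods can be sketched generically: linearize the objective at the current design, solve a linear program restricted to a move limit, and adapt the limit depending on whether the true objective improved. The Python sketch below applies this to a smooth toy function in place of the FEM-based magnetic objective; it illustrates SLP with an adaptive move limit, not the authors' code:

```python
import numpy as np
from scipy.optimize import linprog

def f(x):      # toy smooth objective standing in for the FEM response
    return (x[0] - 1) ** 2 + 0.5 * (x[1] + 2) ** 2 + x[0] * x[1]

def grad(x, h=1e-6):
    g = np.zeros_like(x)               # central finite differences
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x, ml = np.array([3.0, 3.0]), 1.0      # start point and move limit
for it in range(50):
    g = grad(x)
    # Linearized subproblem: minimize g . d subject to |d_i| <= move limit.
    d = linprog(g, bounds=[(-ml, ml)] * len(x)).x
    if f(x + d) < f(x):                # accept the step, relax the limit
        x, ml = x + d, min(1.5 * ml, 1.0)
    else:                              # reject the step, tighten the limit
        ml *= 0.5
    if ml < 1e-6:
        break
print(x, f(x))
```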
ERIC Educational Resources Information Center
Igo, L. Brent; Kiewra, Kenneth A.; Bruning, Roger
2008-01-01
In this study, qualitative themes and quantitative findings from previous research were used to justify the exploration of four experimental, note-taking conditions and the impact of those conditions on student learning from Web-based text. However, puzzling results obtained from dependent measures of student learning were quite inconsistent with…
Memory Activation and the Availability of Explanations in Sequential Diagnostic Reasoning
ERIC Educational Resources Information Center
Mehlhorn, Katja; Taatgen, Niels A.; Lebiere, Christian; Krems, Josef F.
2011-01-01
In the field of diagnostic reasoning, it has been argued that memory activation can provide the reasoner with a subset of possible explanations from memory that are highly adaptive for the task at hand. However, few studies have experimentally tested this assumption. Even less empirical and theoretical work has investigated how newly incoming…
Research on parallel algorithm for sequential pattern mining
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao
2008-03-01
Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time period by finding frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data for sequential pattern mining have the following characteristics: massive data volume and distributed storage. Most existing sequential pattern mining algorithms do not consider both of these characteristics together. Motivated by these traits and drawing on parallel computing theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying the frequent-pattern concept and search space partition theory, and the second task is to build frequent sequences using depth-first search at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and several designed data structures, we simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.
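The pattern-reduction, divide-and-conquer idea can be shown with a compact prefix-projection miner for single-item event sequences (PrefixSpan-style). This is not the SPP implementation itself; roughly as in SPP's parallelization, the independent top-level branches of the recursion below are what could be distributed across processors:

```python
def mine_sequences(db, minsup):
    """Prefix-projection sequential pattern mining (single-item events).

    Each recursive call mines one branch of the search space on a
    projected (reduced) database, so no candidate sequences are ever
    generated; top-level branches are independent and parallelizable.
    """
    patterns = []

    def mine(prefix, projected):
        support = {}
        for seq in projected:
            for item in set(seq):               # count once per sequence
                support[item] = support.get(item, 0) + 1
        for item, sup in sorted(support.items()):
            if sup < minsup:
                continue
            patterns.append((prefix + [item], sup))
            suffixes = []
            for seq in projected:               # project on the new prefix
                if item in seq:
                    rest = seq[seq.index(item) + 1:]
                    if rest:
                        suffixes.append(rest)
            mine(prefix + [item], suffixes)

    mine([], db)
    return patterns

db = [["a", "b", "c"], ["a", "c", "b", "c"], ["b", "a", "c"]]
print(mine_sequences(db, minsup=2))
```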
The Modeling, Simulation and Comparison of Interconnection Networks for Parallel Processing.
1987-12-01
performs better at a lower hardware cost than do the single-stage cube and mesh networks. As a result, the designer of a parallel processing system is...attempted, and in most cases succeeded, in designing and implementing faster, more powerful systems. Due to design innovations and technological advances...largely to the computational complexity of the algorithms executed. In the von Neumann machine, instructions must be executed in a sequential manner. Design...
Auyeung, S Freda; Long, Qi; Royster, Erica Bruce; Murthy, Smitha; McNutt, Marcia D; Lawson, David; Miller, Andrew; Manatunga, Amita; Musselman, Dominique L
2009-10-01
Interferon-alpha therapy, which is used to treat metastatic malignant melanoma, can cause patients to develop two distinct neurobehavioral symptom complexes: a mood syndrome and a neurovegetative syndrome. Interferon-alpha effects on serotonin metabolism appear to contribute to the mood and anxiety syndrome, while the neurovegetative syndrome appears to be related to interferon-alpha effects on dopamine. Our goal is to propose a sequential, multiple assignment, randomized trial design for patients with malignant melanoma to test the relative efficacy of drugs that target serotonin versus dopamine metabolism during 4 weeks of intravenous, then 8 weeks of subcutaneous, interferon-alpha therapy. Patients will be offered participation in a double-blinded, randomized, controlled, 14-week trial involving two treatment phases. During the first month of intravenous interferon-alpha therapy, we will test the hypotheses that escitalopram will be more effective in reducing depressed mood, anxiety, and irritability, whereas methylphenidate will be more effective in diminishing interferon-alpha-induced neurovegetative symptoms, such as fatigue and psychomotor slowing. During the next 8 weeks of subcutaneous interferon therapy, participants whose symptoms do not improve significantly will be randomized to the alternate agent alone versus escitalopram and methylphenidate together. We present a prototype for a single-center, sequential, multiple assignment, randomized trial, which seeks to determine the efficacy of sequenced and targeted treatment for the two distinct symptom complexes suffered by patients treated with interferon-alpha. Because we cannot completely control for external factors, a relevant question is whether or not 'short-term' neuropsychiatric interventions can increase the number of interferon-alpha doses tolerated and improve long-term survival. This sequential, multiple assignment, randomized trial proposes a framework for developing optimal treatment strategies; however, additional studies are needed to determine the best strategy for treating or preventing neurobehavioral symptoms induced by the immunotherapy interferon-alpha.
Sequentially reweighted TV minimization for CT metal artifact reduction.
Zhang, Xiaomeng; Xing, Lei
2013-07-01
Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems where weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
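The sequential reweighting scheme is easiest to see in a denoising setting: minimize a smoothed weighted TV plus a data term, then recompute the weights from the current image gradients and repeat. The numpy sketch below follows that outer/inner structure with a non-negativity projection; the paper's version instead enforces consistency with CT projection data via POCS, so treat this as a structural sketch only:

```python
import numpy as np

def dx(u):  g = np.zeros_like(u); g[:-1] = u[1:] - u[:-1]; return g
def dy(u):  g = np.zeros_like(u); g[:, :-1] = u[:, 1:] - u[:, :-1]; return g
def dxT(p): g = np.empty_like(p); g[0] = -p[0]; g[1:] = p[:-1] - p[1:]; return g
def dyT(p): g = np.empty_like(p); g[:, 0] = -p[:, 0]; g[:, 1:] = p[:, :-1] - p[:, 1:]; return g

def reweighted_tv(f, lam=0.1, eps=0.1, outer=4, inner=150, tau=0.02):
    """Sequentially reweighted smoothed-TV denoising (structural sketch)."""
    u = f.copy()
    w = np.ones_like(f)                       # first pass: plain TV
    for _ in range(outer):
        for _ in range(inner):                # inner minimization pass
            gx, gy = dx(u), dy(u)
            mag = np.sqrt(gx**2 + gy**2 + eps**2)
            g = (u - f) + lam * (dxT(w * gx / mag) + dyT(w * gy / mag))
            u = np.maximum(u - tau * g, 0.0)  # gradient step + u >= 0
        gx, gy = dx(u), dy(u)
        w = 1.0 / (np.sqrt(gx**2 + gy**2) + eps)  # reweight from current IG
    return u

# Piecewise-constant test image with noise: reweighting sharpens edges
# relative to plain TV while suppressing the noise in flat regions.
f = np.kron(np.eye(4), np.ones((16, 16))) \
    + 0.2 * np.random.default_rng(0).normal(size=(64, 64))
u = reweighted_tv(np.clip(f, 0, None))
```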
Leveraging Hypoxia-Activated Prodrugs to Prevent Drug Resistance in Solid Tumors.
Lindsay, Danika; Garvey, Colleen M; Mumenthaler, Shannon M; Foo, Jasmine
2016-08-01
Experimental studies have shown that one key factor in driving the emergence of drug resistance in solid tumors is tumor hypoxia, which leads to the formation of localized environmental niches where drug-resistant cell populations can evolve and survive. Hypoxia-activated prodrugs (HAPs) are compounds designed to penetrate to hypoxic regions of a tumor and release cytotoxic or cytostatic agents; several of these HAPs are currently in clinical trial. However, preliminary results have not shown a survival benefit in several of these trials. We hypothesize that the efficacy of treatments involving these prodrugs depends heavily on identifying the correct treatment schedule, and that mathematical modeling can be used to help design potential therapeutic strategies combining HAPs with standard therapies to achieve long-term tumor control or eradication. We develop this framework in the specific context of EGFR-driven non-small cell lung cancer, which is commonly treated with the tyrosine kinase inhibitor erlotinib. We develop a stochastic mathematical model, parametrized using clinical and experimental data, to explore a spectrum of treatment regimens combining a HAP, evofosfamide, with erlotinib. We design combination toxicity constraint models and optimize treatment strategies over the space of tolerated schedules to identify specific combination schedules that lead to optimal tumor control. We find that (i) combining these therapies delays resistance longer than any monotherapy schedule with either evofosfamide or erlotinib alone, (ii) sequentially alternating single doses of each drug leads to minimal tumor burden and maximal reduction in probability of developing resistance, and (iii) strategies minimizing the length of time after an evofosfamide dose and before erlotinib confer further benefits in reduction of tumor burden. These results provide insights into how hypoxia-activated prodrugs may be used to enhance therapeutic effectiveness in the clinic.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-01-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error that is associated with LQAS analysis. PMID:20011037
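A simulation of the kind the study describes is straightforward to set up: induce within-cluster correlation with a beta-binomial model and estimate classification probabilities by Monte Carlo. In the sketch below the decision rule d_rule and the 10% prevalence scenario are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(1)

def p_classify_high(p_true, clusters, per_cluster, icc, d_rule, reps=20000):
    """Monte Carlo probability that a clustered LQAS sample is classified
    'above threshold' (total cases >= d_rule).  Correlation within clusters
    is induced by a beta-binomial model: each cluster's prevalence is drawn
    from a Beta with mean p_true and intracluster correlation icc."""
    a = p_true * (1 - icc) / icc
    b = (1 - p_true) * (1 - icc) / icc
    pc = rng.beta(a, b, size=(reps, clusters))          # cluster prevalences
    cases = rng.binomial(per_cluster, pc).sum(axis=1)   # children per cluster
    return (cases >= d_rule).mean()

# 67x3 design: with a true prevalence of 10% and a hypothetical decision
# rule of 30 cases out of 201, this is the misclassification rate in a
# scenario where 10% should be classified as 'below threshold'.
print(p_classify_high(p_true=0.10, clusters=67, per_cluster=3,
                      icc=0.05, d_rule=30))
```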
Achieving integration in mixed methods designs-principles and practices.
Fetters, Michael D; Curry, Leslie A; Creswell, John W
2013-12-01
Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed methods designs (exploratory sequential, explanatory sequential, and convergent) and through four advanced frameworks (multistage, intervention, case study, and participatory). Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent to which the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.
Shin, Seung-Hwa; Lee, Jangwook; Lim, Kwang Suk; Rhim, Taiyoun; Lee, Sang Kyung; Kim, Yong-Hee; Lee, Kuen Yong
2013-02-28
Ischemic disease is associated with high mortality and morbidity rates, and therapeutic angiogenesis via systemic or local delivery of protein drugs is one potential approach to treat the disease. In this study, we hypothesized that combined delivery of TAT-HSP27 (HSP27 fused with transcriptional activator) and VEGF could enhance the therapeutic efficacy in an ischemic mouse model, and that sequential release could be critical in therapeutic angiogenesis. Alginate hydrogels containing TAT-HSP27 as an anti-apoptotic agent were prepared, and porous PLGA microspheres loaded with VEGF as an angiogenic agent were incorporated into the hydrogels to prepare microsphere/hydrogel hybrid delivery systems. Sequential in vitro release of TAT-HSP27 and VEGF was achieved by the hybrid systems. TAT-HSP27 was depleted from alginate gels in 7 days, while VEGF was continually released for 28 days. The release rate of VEGF was attenuated by varying the porous structures of PLGA microspheres. Sequential delivery of TAT-HSP27 and VEGF was critical to protect against muscle degeneration and fibrosis, as well as to promote new blood vessel formation in the ischemic site of a mouse model. This approach to controlling the sequential release behaviors of multiple drugs could be useful in the design of novel drug delivery systems for therapeutic angiogenesis. Copyright © 2012 Elsevier B.V. All rights reserved.
Ale, Cesar E; Farías, Marta E; Strasser de Saad, Ana M; Pasteris, Sergio E
2014-07-01
Growth and fermentation patterns of Saccharomyces cerevisiae, Kloeckera apiculata, and Oenococcus oeni strains cultured in grape juice medium were studied. In pure, sequential and simultaneous cultures, the strains reached the stationary growth phase between 2 and 3 days. Pure and mixed K. apiculata and S. cerevisiae cultures used mainly glucose, producing ethanol, organic acids, and 4.0 and 0.1 mM glycerol, respectively. In sequential cultures, O. oeni achieved about 1 log unit of growth at 3 days, using mainly fructose and L-malic acid. The highest sugar consumption was detected in K. apiculata supernatants, lactic acid being the major end-product. Glycerol (8.0 mM) was found in 6-day culture supernatants. In simultaneous cultures, total sugars and L-malic acid were used at 3 days and 98% of the ethanol and glycerol were detected. This study represents the first report of the population dynamics and metabolic behavior of yeasts and O. oeni in sequential and simultaneous cultures and contributes to the selection of indigenous strains to design starter cultures for winemaking, also considering the inclusion of K. apiculata. The sequential inoculation of yeasts and O. oeni would enhance glycerol production, which confers desirable organoleptic characteristics to wines, while organic acid levels would not affect their sensory profile. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Momota, Yutaka; Shimada, Kenichiro; Gin, Azusa; Matsubara, Takako; Azakami, Daigo; Ishioka, Katsumi; Nakamura, Yuka; Sako, Toshinori
2016-10-01
A closed chamber evaporimeter is suitable for measuring transepidermal water loss (TEWL) in cats because of the compact device size, tolerance to sudden movement and short measuring time. TEWL is a representative parameter for skin barrier dysfunction, which is one of the clinical signs of atopic dermatitis in humans and dogs. Measurement of feline TEWL has been reported, but applicability of this parameter has not been validated. The aims of this study were to determine if tape stripping is a valid experimental model in cats for studying TEWL and to determine if a closed chambered system is a suitable measurement tool for cats. Ten clinically normal cats. In order to evaluate variation of the measured values, TEWL was measured at the right and left side of the three clipped regions (axillae, lateral thigh and groin). Subsequently, TEWL was measured using sequential tape stripping of the stratum corneum as a model of acute barrier disruption. The variations between both sides of the three regions showed no significant difference. Sequential tape stripping was associated with increasing values for TEWL. Feline TEWL was shown to reflect changes in the skin barrier in an experimental model using a closed chamber system and has the potential for evaluating skin barrier function in cats with skin diseases. © 2016 ESVD and ACVD.
Cuevas Rivera, Dario; Bitzer, Sebastian; Kiebel, Stefan J.
2015-01-01
The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an ‘intelligent coincidence detector’, which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena. PMID:26451888
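The core dynamical ingredient, generalized Lotka-Volterra competition with asymmetric inhibition, is simple to simulate. The sketch below uses an invented inhibition matrix in which each unit weakly inhibits one neighbor, producing the winnerless, sequentially switching firing-rate patterns the model builds on; parameters are illustrative, and the Bayesian inference readout is omitted:

```python
import numpy as np

N, steps, dt = 6, 4000, 0.01
rng = np.random.default_rng(2)

# Invented asymmetric inhibition matrix: every pair inhibits strongly
# except one neighbor, which is inhibited weakly and therefore takes
# over when the current unit saturates -> cyclic sequential activity.
rho = np.full((N, N), 1.5)
np.fill_diagonal(rho, 1.0)
for i in range(N):
    rho[i, (i + 1) % N] = 0.5

x = rng.uniform(0.01, 0.1, N)
trace = np.empty((steps, N))
for t in range(steps):
    dxdt = x * (1.0 - rho @ x) + 1e-6   # generalized Lotka-Volterra rates
    x = np.clip(x + dt * dxdt, 1e-9, None)
    trace[t] = x
# Columns of `trace` dominate one after another: a firing-rate sequence
# of the winnerless-competition type used to model the antennal lobe.
```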
The potential application of the blackboard model of problem solving to multidisciplinary design
NASA Technical Reports Server (NTRS)
Rogers, James L.
1989-01-01
The potential application of the blackboard model of problem solving to multidisciplinary design is discussed. Multidisciplinary design problems are complex, poorly structured, and lack a predetermined decision path from the initial starting point to the final solution. The final solution is achieved using data from different engineering disciplines. Ideally, for the final solution to be the optimum solution, there must be a significant amount of communication among the different disciplines plus intradisciplinary and interdisciplinary optimization. In reality, this is not what happens in today's sequential approach to multidisciplinary design. Therefore it is highly unlikely that the final solution is the true optimum solution from an interdisciplinary optimization standpoint. A multilevel decomposition approach is suggested as a technique to overcome the problems associated with the sequential approach, but no tool currently exists with which to fully implement this technique. A system based on the blackboard model of problem solving appears to be an ideal tool for implementing this technique because it offers an incremental problem solving approach that requires no a priori determined reasoning path. Thus it has the potential of finding a more optimum solution for the multidisciplinary design problems found in today's aerospace industries.
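The control architecture is easier to see in code than in prose. Below is a minimal blackboard skeleton in Python: independent knowledge sources watch a shared data store, fire opportunistically when their inputs are present, and no reasoning path is fixed in advance. The "disciplines" and formulas are entirely hypothetical placeholders:

```python
class Blackboard:
    """Shared design state read and updated by every knowledge source."""
    def __init__(self):
        self.data = {"span": 30.0}        # seed requirement (illustrative)

def aero(bb):          # each knowledge source fires when its inputs exist
    if "span" in bb.data and "lift" not in bb.data:
        bb.data["lift"] = 1.2 * bb.data["span"]; return True
    return False

def structures(bb):
    if "lift" in bb.data and "spar_mass" not in bb.data:
        bb.data["spar_mass"] = 0.05 * bb.data["lift"]; return True
    return False

def controls(bb):
    if "spar_mass" in bb.data and "actuator" not in bb.data:
        bb.data["actuator"] = "sized"; return True
    return False

sources = [controls, structures, aero]    # no a priori ordering required
bb = Blackboard()
progress = True
while progress:                           # opportunistic control loop:
    progress = any(ks(bb) for ks in sources)   # fire whoever can contribute
print(bb.data)
```

The point of the pattern is the loop at the end: contributions accumulate incrementally in whatever order the data allow, which is exactly the property the sequential design approach lacks.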
Mechanistic studies on a sequential PDT protocol
NASA Astrophysics Data System (ADS)
Kessel, David
2016-03-01
A low (~LD15) PDT dose resulting in selective lysosomal photodamage can markedly promote photokilling by subsequent photodamage targeted to mitochondria. Experimental data are consistent with the proposal that cleavage of the autophagy-associated protein ATG5 to a pro-apoptotic fragment is responsible for this effect. This process is known to be dependent on the proteolytic activity of calpain. We have proposed that Ca2+ released from photodamaged lysosomes is the trigger for ATG5 cleavage. We can now document the conversion of ATG5 to the truncated form after lysosomal photodamage. Photofrin, a photosensitizer that targets both mitochondria and lysosomes, can be used for either phase of the sequential PDT process. The ability of Photofrin to target both loci may explain the well-documented efficacy of this agent.
NASA Astrophysics Data System (ADS)
Alam, Rabeka; Zylstra, Joshua; Fontaine, Danielle M.; Branchini, Bruce R.; Maye, Mathew M.
2013-05-01
Sequential bioluminescence resonance energy transfer (BRET) and fluorescence resonance energy transfer (FRET) from firefly luciferase to red fluorescent proteins using quantum dot or rod acceptor/donor linkers is described. The effect of morphology and tuned optical properties on the efficiency of this unique BRET-FRET system was evaluated.
Ashengroph, Morahem; Nahvi, Iraj; Amini, Jahanshir
2013-01-01
For all industrial processes, modeling, optimization and control are the keys to enhancing productivity and ensuring product quality. In the current study, the optimization of process parameters for improving the conversion of isoeugenol to vanillin by Psychrobacter sp. CSW4 was investigated by means of the Taguchi approach and Box-Behnken statistical design under resting cell conditions. The Taguchi design was employed for screening the significant variables in the bioconversion medium. Sequentially, Box-Behnken design experiments under Response Surface Methodology (RSM) were used for further optimization. Four factors (initial concentrations of isoeugenol, NaCl, biomass and Tween 80), which have significant effects on vanillin yield, were selected from ten variables by the Taguchi experimental design. With regression coefficient analysis in the Box-Behnken design, a relationship between vanillin production and the four significant variables was obtained, and the optimum levels of the four variables were as follows: initial isoeugenol concentration 6.5 g/L, initial Tween 80 concentration 0.89 g/L, initial NaCl concentration 113.2 g/L and initial biomass concentration 6.27 g/L. Under these optimized conditions, the maximum predicted concentration of vanillin was 2.25 g/L. These optimized values of the factors were validated in a triplicate shaking-flask study, and an average of 2.19 g/L vanillin, corresponding to a molar yield of 36.3%, was obtained after a 24 h bioconversion. The present work is the first to report the application of Taguchi design and response surface methodology for optimizing the bioconversion of isoeugenol into vanillin under resting cell conditions.
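For reference, a Box-Behnken design is mechanical to construct: every pair of factors takes all four ±1 combinations while the remaining factors sit at their midpoints, plus replicated center runs. A self-contained sketch for the four-factor case, with illustrative (not the study's) concentration ranges:

```python
from itertools import combinations

def box_behnken(k, centers=3):
    """Box-Behnken design in coded units (-1, 0, +1) for k factors:
    all +/-1 combinations for each factor pair, others at 0, plus
    replicated center runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k for _ in range(centers)]
    return runs

# Scale coded units to the four optimized factors from the study
# (isoeugenol, Tween 80, NaCl, biomass); these ranges are invented.
lows  = [2.0, 0.2, 50.0, 2.0]
highs = [8.0, 1.0, 150.0, 8.0]
design = [[lo + (c + 1) / 2 * (hi - lo)
           for c, lo, hi in zip(row, lows, highs)]
          for row in box_behnken(4)]
print(len(design), "runs")   # 6 pairs x 4 combos + 3 centers = 27 runs
```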
Breaking from binaries - using a sequential mixed methods design.
Larkin, Patricia Mary; Begley, Cecily Marion; Devane, Declan
2014-03-01
To outline the traditional worldviews of healthcare research and discuss the benefits and challenges of using mixed methods approaches in contributing to the development of nursing and midwifery knowledge. There has been much debate about the contribution of mixed methods research to nursing and midwifery knowledge in recent years. A sequential exploratory design is used as an exemplar of a mixed methods approach. The study discussed used a combination of focus-group interviews and a quantitative instrument to obtain a fuller understanding of women's experiences of childbirth. In the mixed methods study example, qualitative data were analysed using thematic analysis and quantitative data using regression analysis. Polarised debates about the veracity, philosophical integrity and motivation for conducting mixed methods research have largely abated. A mixed methods approach can contribute to a deeper, more contextual understanding of a variety of subjects and experiences; as a result, it furthers knowledge that can be used in clinical practice. The purpose of the research study should be the main instigator when choosing from an array of mixed methods research designs. Mixed methods research offers a variety of models that can augment investigative capabilities and provide richer data than can a discrete method alone. This paper offers an example of an exploratory, sequential approach to investigating women's childbirth experiences. A clear framework for the conduct and integration of the different phases of the mixed methods research process is provided. This approach can be used by practitioners and policy makers to improve practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, S; Lu, WG; Chen, YP
2015-03-11
A unique strategy, sequential linker installation (SLI), has been developed to construct multivariate MOFs with functional groups precisely positioned. PCN-700, a Zr-MOF with eight-connected Zr6O4(OH)8(H2O)4 clusters, has been judiciously designed; the Zr6 clusters in this MOF are arranged in such a fashion that, by replacement of terminal OH-/H2O ligands, subsequent insertion of linear dicarboxylate linkers is achieved. We demonstrate that linkers with distinct lengths and functionalities can be sequentially installed into PCN-700. Single-crystal to single-crystal transformation is realized so that the positions of the subsequently installed linkers are pinpointed via single-crystal X-ray diffraction analyses. This methodology provides a powerful tool to construct multivariate MOFs with precisely positioned functionalities in the desired proximity, which would otherwise be difficult to achieve.
Xu, Yan; Wu, Qian; Shimatani, Yuji; Yamaguchi, Koji
2015-10-07
Due to the lack of regeneration methods, the reusability of nanofluidic chips is a significant technical challenge impeding the efficient and economic promotion of both fundamental research and practical applications on nanofluidics. Herein, a simple method for the total regeneration of glass nanofluidic chips was described. The method consists of sequential thermal treatment with six well-designed steps, which correspond to four sequential thermal and thermochemical decomposition processes, namely, dehydration, high-temperature redox chemical reaction, high-temperature gasification, and cooling. The method enabled the total regeneration of typical 'dead' glass nanofluidic chips by eliminating physically clogged nanoparticles in the nanochannels, removing chemically reacted organic matter on the glass surface and regenerating permanent functional surfaces of dissimilar materials localized in the nanochannels. The method provides a technical solution to significantly improve the reusability of glass nanofluidic chips and will be useful for the promotion and acceleration of research and applications on nanofluidics.
Sequential Injection Analysis for Optimization of Molecular Biology Reactions
Allen, Peter B.; Ellington, Andrew D.
2011-01-01
In order to automate the optimization of complex biochemical and molecular biology reactions, we developed a Sequential Injection Analysis (SIA) device and combined this with a Design of Experiment (DOE) algorithm. This combination of hardware and software automatically explores the parameter space of the reaction and provides continuous feedback for optimizing reaction conditions. As an example, we optimized the endonuclease digest of a fluorogenic substrate, and showed that the optimized reaction conditions also applied to the digest of the substrate outside of the device, and to the digest of a plasmid. The sequential technique quickly arrived at optimized reaction conditions with less reagent use than a batch process (such as a fluid handling robot exploring multiple reaction conditions in parallel) would have. The device and method should now be amenable to much more complex molecular biology reactions whose variable spaces are correspondingly larger. PMID:21338059
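The closed feedback loop (propose conditions, run them, use the outcome to pick the next proposal) can be sketched with a mock instrument in place of the SIA hardware. The greedy neighborhood search below is a deliberately simple stand-in for the DOE algorithm, and the response function and its optimum are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

def run_reaction(enzyme_units, salt_mM):
    """Mock instrument readout standing in for the SIA fluorescence assay;
    the peak location (4 units, 50 mM) is an invented 'true optimum'."""
    signal = np.exp(-((enzyme_units - 4.0) / 2.0) ** 2
                    - ((salt_mM - 50.0) / 30.0) ** 2)
    return signal + rng.normal(0, 0.01)        # measurement noise

# Greedy sequential optimization: propose neighbors of the current best,
# run them, keep the winner; a DOE engine would propose smarter batches.
x = np.array([1.0, 120.0])                     # starting conditions
step = np.array([1.0, 20.0])                   # initial step sizes
best = run_reaction(*x)
for round_ in range(40):
    candidates = [x + d * step for d in ([1, 0], [-1, 0], [0, 1], [0, -1])]
    scores = [run_reaction(*c) for c in candidates]
    if max(scores) > best:
        best, x = max(scores), candidates[int(np.argmax(scores))]
    else:
        step = step * 0.5                      # refine around the optimum
    if step.max() < 1e-2:
        break
print(x, best)
```

Because each proposal is conditioned on all previous measurements, the loop typically reaches the optimum with far fewer runs than a fixed parallel grid, which is the efficiency argument the abstract makes.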
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoupin, Stanislav, E-mail: sstoupin@aps.anl.gov; Shvyd’ko, Yuri; Trakhtenberg, Emil
2016-07-27
We report progress on implementation and commissioning of sequential X-ray diffraction topography at 1-BM Optics Testing Beamline of the Advanced Photon Source to accommodate growing needs of strain characterization in diffractive crystal optics and other semiconductor single crystals. The setup enables evaluation of strain in single crystals in the nearly-nondispersive double-crystal geometry. Si asymmetric collimator crystals of different crystallographic orientations were designed, fabricated and characterized using in-house capabilities. Imaging the exit beam using digital area detectors permits rapid sequential acquisition of X-ray topographs at different angular positions on the rocking curve of a crystal under investigation. Results on sensitivity and spatial resolution are reported based on experiments with high-quality Si and diamond crystals. The new setup complements laboratory-based X-ray topography capabilities of the Optics group at the Advanced Photon Source.
ERIC Educational Resources Information Center
North Carolina State Dept. of Public Instruction, Raleigh. Div. of Vocational Education.
This guide is intended for use in teaching a course in the sequential tasks that change a designer's idea into a completed product. Emphasis is placed on the design of a product and the manufacturing system needed to produce it. The first two sections discuss the guide's development within the framework of North Carolina's efforts to improve…
The Effect of Portfolio Assessments on Metacognitive Skills and on Attitudes toward a Course
ERIC Educational Resources Information Center
Gencel, Ilke Evin
2017-01-01
The aim of this study is to determine through teacher candidates' thoughts the effects of a portfolio assessment implementation on their metacognitive skills and attitudes towards a course on measurement and evaluation. Exploratory sequential mixed-methods design is employed within the study. The pretest/posttest control group design was used in…
Unorganized Cognitive Structures of Illiterate as the Key Factor in Rural E-Learning Design
ERIC Educational Resources Information Center
Katre, Dinesh S.
2006-01-01
Cognitive Structures and Linguistic Sequential Memory or Memory of Serial Order are not very well developed among illiterate people contrary to educated people. It affects the comprehension of abstract ideas and the usability of the system. Therefore the cognitive limitations of illiterate must be considered for instructional design and user…
Making Sense of Phenomena from Sequential Images versus Illustrated Text
ERIC Educational Resources Information Center
Scalco, Karina C.; Talanquer, Vicente; Kiill, Keila B.; Cordeiro, Marcia R.
2018-01-01
We present the results of a qualitative research study designed to explore differences in the types of reasoning triggered by information presented to chemistry students in two different formats. One group of students was asked to analyze a sequence of images designed to represent critical elements in the explanation of a target phenomenon.…
ERIC Educational Resources Information Center
Lim, Janine M.
2016-01-01
A course design question for self-paced courses includes whether or not technological measures should be used in course design to force students to follow the sequence intended by the course author. This study examined learner behavior to understand whether the sequence of student assignment submissions in a self-paced distance course is related…
Statistical Engineering in Air Traffic Management Research
NASA Technical Reports Server (NTRS)
Wilson, Sara R.
2015-01-01
NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.
Interprofessional mental health training in rural primary care: findings from a mixed methods study.
Heath, Olga; Church, Elizabeth; Curran, Vernon; Hollett, Ann; Cornish, Peter; Callanan, Terrence; Bethune, Cheri; Younghusband, Lynda
2015-05-01
The benefits of interprofessional care in providing mental health services have been widely recognized, particularly in rural communities where access to health services is limited. There continues to be a need for more continuing interprofessional education in mental health intervention in rural areas. There have been few reports of rural programs in which mental health content has been combined with training in collaborative practice. The current study used a sequential mixed-method and quasi-experimental design to evaluate the impact of an interprofessional, intersectoral education program designed to enhance collaborative mental health capacity in six rural sites. Quantitative results reveal a significant increase in positive attitudes toward interprofessional mental health care teams and self-reported increases in knowledge and understanding about collaborative mental health care delivery. The analysis of qualitative data collected following completion of the program reinforced the value of teaching mental health content within the context of collaborative practice and revealed practice changes, including more interprofessional and intersectoral collaboration. This study suggests that embedding explicit training in collaborative care in content-focused continuing professional education for more complex and chronic health issues may increase the likelihood that professionals will work together to effectively meet client needs.
Biomechanics of fencing sport: A scoping review
Chen, Tony Lin-Wei; Wong, Duo Wai-Chi; Wang, Yan; Ren, Sicong; Yan, Fei
2017-01-01
Objectives: The aim of our scoping review was to identify and summarize current evidence on the biomechanics of fencing to inform athlete development and injury prevention. Design: Scoping review. Method: Peer-reviewed research was identified from electronic databases using a structured keyword search. Details regarding experimental design, study group characteristics and measured outcomes were extracted from retrieved studies, summarized and regrouped under themes for analysis. The methodological quality of the evidence was evaluated. Results: Thirty-seven peer-reviewed studies were retrieved, the majority being observational studies conducted with experienced and elite athletes. The methodological quality of the evidence was “fair” due to the limited scope of research. Male fencers were the most studied group, with the lunge performed with a foil being the principal movement evaluated. Motion capture and pedobarography were the most frequently used data collection techniques. Conclusions: Elite fencers exhibited sequential coordination of upper and lower limb movements with coherent patterns of muscle activation, compared to novice fencers. These elite features of neuromuscular coordination resulted in higher magnitudes of forward linear velocity of the body center of mass and weapon. Training should focus on explosive power. Sex- and equipment-specific effects could not be evaluated based on the available research. PMID:28187164
From SOPs to Reports to Evaluations: Learning and Memory ...
In an era of global trade and regulatory cooperation, consistent and scientifically based interpretation of developmental neurotoxicity (DNT) studies is essential. Because there is flexibility in the selection of test method(s), consistency can be especially challenging for learning and memory tests required by EPA and OECD DNT guidelines (chemicals and pesticides) and recommended for ICH prenatal/postnatal guidelines (pharmaceuticals). A well reasoned uniform approach is particularly important for variable endpoints and if non-standard tests are used. An understanding of the purpose behind the tests and expected outcomes is critical, and attention to elements of experimental design, conduct, and reporting can improve study design by the investigator as well as accuracy and consistency of interpretation by evaluators. This understanding also directs which information must be clearly described in study reports. While missing information may be available in standardized operating procedures (SOPs), if not clearly reflected in report submissions there may be questions and misunderstandings by evaluators which could impact risk assessments. A practical example will be presented to provide insights into important variables and reporting approaches. Cognitive functions most often tested in guidelines studies include associative, positional, sequential, and spatial learning and memory in weanling and adult animals. These complex behaviors tap different bra
NASA Technical Reports Server (NTRS)
Jones, Erick C.; Richards, Casey; Herstein, Kelli; Franca, Rodrigo; Yagoda, Evan L.; Vasquez, Reuben
2008-01-01
Current inventory management techniques for consumables and supplies aboard space vehicles are burdensome and time consuming. Inventory of food, clothing, and supplies is taken periodically by manually scanning the barcode on each item. Barcode reading is inaccurate, and the excessive time astronauts spend performing this function would be better spent on scientific experiments. Therefore, NASA astronauts need an alternative method of inventory control. Radio Frequency Identification (RFID) is an automatic data capture technology with the potential to create a more effective and user-friendly inventory management system (IMS). In this paper we introduce a Design for Six Sigma Research (DFSS-R) methodology that allows for reliability testing of RFID systems. The research methodology uses a modified sequential design-of-experiments process to test and evaluate the quality of commercially available RFID technology. The results from the experimentation are compared to the requirements provided by NASA to evaluate the feasibility of using passive Generation 2 RFID technology to improve inventory control aboard crew exploration vehicles.
High performance genetic algorithm for VLSI circuit partitioning
NASA Astrophysics Data System (ADS)
Dinu, Simona
2016-12-01
Partitioning is one of the biggest challenges in computer-aided design for VLSI circuits (very large-scale integrated circuits). This work addresses the min-cut balanced circuit partitioning problem: dividing the graph that models the circuit into k almost equal-sized sub-graphs while minimizing the number of edges cut, i.e., the number of edges connecting the sub-graphs. The problem may be formulated as a combinatorial optimization problem and is known to be NP-hard, so an efficient heuristic algorithm is needed to solve it. The approach proposed in this study is a parallel implementation of a genetic algorithm, namely an island model. The information exchange between the evolving subpopulations is governed by a fuzzy controller, which determines an optimal balance between exploration and exploitation of the solution space. Simulation results show that the proposed algorithm outperforms the standard sequential genetic algorithm in both solution quality and convergence speed. As a direction for future study, this research can be extended to incorporate local search operators that embody problem-specific knowledge. The adaptive configuration of mutation and crossover rates is another avenue for future research.
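To make the island model concrete, here is a minimal sketch of an island-model genetic algorithm for balanced bipartitioning (k = 2). It is an illustration under simplifying assumptions, not the authors' algorithm: the fuzzy migration controller is replaced by a fixed migration schedule, crossover is omitted in favor of balance-preserving swap mutations, and all names (cut_size, island_ga, the ring-graph example) are invented for this sketch.

```python
import random

def cut_size(partition, edges):
    # Number of edges whose endpoints fall in different blocks.
    return sum(1 for u, v in edges if partition[u] != partition[v])

def balanced_individual(n):
    # Random bisection: exactly half the vertices in each block.
    p = [0] * (n // 2) + [1] * (n - n // 2)
    random.shuffle(p)
    return p

def mutate(p):
    # Swap a 0-vertex with a 1-vertex so the balance is preserved.
    q = p[:]
    i = random.choice([k for k, b in enumerate(q) if b == 0])
    j = random.choice([k for k, b in enumerate(q) if b == 1])
    q[i], q[j] = q[j], q[i]
    return q

def evolve_island(pop, edges, generations=50):
    for _ in range(generations):
        pop.sort(key=lambda p: cut_size(p, edges))
        survivors = pop[: len(pop) // 2]          # keep the better half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(len(pop) - len(survivors))]
    return pop

def island_ga(n, edges, islands=4, pop_size=20, epochs=10):
    pops = [[balanced_individual(n) for _ in range(pop_size)]
            for _ in range(islands)]
    for _ in range(epochs):
        pops = [evolve_island(pop, edges) for pop in pops]
        # Migration: each island receives its neighbor's best individual.
        bests = [min(pop, key=lambda p: cut_size(p, edges)) for pop in pops]
        for k, pop in enumerate(pops):
            pop[-1] = bests[(k - 1) % len(pops)]
    best = min((p for pop in pops for p in pop),
               key=lambda p: cut_size(p, edges))
    return best, cut_size(best, edges)

# Example: a ring of 8 vertices; an optimal balanced cut has size 2.
edges = [(i, (i + 1) % 8) for i in range(8)]
best, cut = island_ga(8, edges)
```

In the full approach, the migration interval and rates would be tuned by the fuzzy controller to trade off exploration against exploitation rather than fixed as above.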
Neben, Nicole; Lenarz, Thomas; Schuessler, Mark; Harpel, Theo; Buechner, Andreas
2013-05-01
Speech recognition in noise showed no advantage for a new research coding strategy, designed to introduce the virtual channel effect, over MP3000™. Although statistically significantly smaller just noticeable differences (JNDs) were obtained, the findings for pitch ranking proved to have little clinical impact. The aim of this study was to explore whether modifications to MP3000 that include sequential virtual channel stimulation would lead to further improvements in hearing, particularly for speech recognition in background noise and in competing-talker conditions, and to compare results for pitch perception and melody recognition, as well as to informally collect subjective impressions on strategy preference. Nine experienced cochlear implant subjects were recruited for the prospective study. Two variants of the experimental strategy were compared to MP3000. The study design was a single-blinded ABCCBA cross-over trial paradigm with 3 weeks of take-home experience for each user condition. Comparing results of pitch ranking, a significantly reduced JND was identified. No significant effect of coding strategy on speech understanding in noise or competing-talker materials was found. Melody recognition skills were the same under all user conditions.
Design and model for the giant magnetostrictive actuator used on an electronic controlled injector
NASA Astrophysics Data System (ADS)
Xue, Guangming; Zhang, Peilin; He, Zhongbo; Li, Ben; Rong, Ce
2017-05-01
A giant magnetostrictive actuator (GMA) is a promising candidate for driving an electronically controlled injector, since giant magnetostrictive material (GMM) offers large output, fast response and high operating stability. To meet the driving requirement of the injector, the GMA should produce maximal shortening displacement when energized. An unbiased GMA with a 'T'-shaped output rod is designed to reach this target. Furthermore, an open-hold-fall type driving voltage is applied to the actuator coil to accelerate the response of the coil current. The actuator displacement is modeled by establishing sub-models of the coil current, the magnetic field within the GMM rod, the magnetization and the magnetostrictive strain, sequentially. Two modifications make the model more accurate. First, because the basic model fails to compute the transient-state response precisely, a dead zone and a delay link are embedded into the coil current sub-model. Second, since the magnetization and magnetostrictive strain sub-models influence only the shape of the transient-state response, a linear magnetostrictive strain-magnetic field sub-model is introduced. Experimental results show that the modified model with the linear magnetostrictive strain expression predicts the actuator displacement quite effectively.
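The sub-model chain described above (drive voltage -> coil current -> magnetic field -> magnetostrictive strain -> displacement) can be sketched in a few lines. This is a conceptual illustration only, assuming a first-order RL coil with the paper's two corrections (a dead zone and a pure delay), a field proportional to current, and the linear strain-field simplification; every constant (R, L_coil, d33, n_per_m, rod_len) is an illustrative placeholder, not a fitted value.

```python
import numpy as np

def gma_displacement(v_drive, dt=1e-5, R=2.0, L_coil=5e-3,
                     delay_s=2e-4, dead_zone_v=0.5,
                     n_per_m=8000.0, d33=1.5e-8, rod_len=0.05):
    # Coil current: first-order RL response with a voltage dead zone,
    # followed by a pure delay, mirroring the paper's two corrections.
    delay_steps = int(delay_s / dt)
    i, current = 0.0, []
    for v in v_drive:
        v_eff = v - np.sign(v) * dead_zone_v if abs(v) > dead_zone_v else 0.0
        i += dt / L_coil * (v_eff - R * i)       # L di/dt = v - R i
        current.append(i)
    current = [0.0] * delay_steps + current[:len(current) - delay_steps]
    # Field in the rod, then the linear strain-field sub-model.
    H = n_per_m * np.asarray(current)            # A/m
    strain = d33 * H                             # linear magnetostriction
    return strain * rod_len                      # rod displacement, m

# Example: an "open-hold-fall" style drive, 24 V opening pulse then 6 V hold.
t = np.arange(0, 0.01, 1e-5)
v = np.where(t < 0.002, 24.0, 6.0)
disp = gma_displacement(v)
```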
Evaluation of an antibiotic intravenous to oral sequential therapy program.
Pablos, Ana I; Escobar, Ismael; Albiñana, Sandra; Serrano, Olga; Ferrari, José M; Herreros de Tejada, Alberto
2005-01-01
This study was designed to analyse the difference in drug consumption and the economic impact of an antibiotic sequential therapy program focused on quinolones. We studied the consumption of quinolones (ofloxacin/levofloxacin and ciprofloxacin) during the 6 months before and after the implementation of a sequential therapy program in hospitalised patients. Consumption was calculated for each antibiotic, in its oral and intravenous forms, in defined daily doses (DDD/100 stays per day) and in economic terms (drug acquisition cost). At the beginning of the program ofloxacin was replaced by levofloxacin and, since their clinical uses are similar, the consumption of both drugs was compared over the period. In economic terms, the consumption of intravenous quinolones decreased by 60% whereas the consumption of oral quinolones increased by 66%. In DDD/100 stays per day, consumption of intravenous forms decreased by 53% and consumption of oral forms increased by 36%. Focusing on quinolones, the implementation of a sequential therapy program based on promoting an early switch from the intravenous to the oral regimen proved its capacity to alter the utilisation profile of these antibiotics. The program saved the hospital a total of $41,420 for these drugs during the period considered. Copyright (c) 2004 John Wiley & Sons, Ltd.
Pixa, Nils H.; Steinberg, Fabian; Doppelmayr, Michael
2017-01-01
Many daily activities, such as tying one’s shoe laces, opening a jar of jam or performing a free throw in basketball, require the skillful coordinated use of both hands. Even though the non-invasive method of transcranial direct current stimulation (tDCS) has been repeatedly shown to improve unimanual motor performance, little is known about its effects on bimanual motor performance. More knowledge about how tDCS may improve bimanual behavior would be relevant to motor recovery, e.g., in persons with bilateral impairment of hand function. We therefore examined the impact of high-definition anodal tDCS (HD-atDCS) on the performance of a bimanual sequential sensorimotor task. Thirty-two volunteers (age M = 24.25; SD = 2.75; 14 females) participated in this double-blind study and performed sport stacking in six experimental sessions. In sport stacking, 12 specially designed cups must be stacked (stacked up) and dismantled (stacked down) in predefined patterns as fast as possible. During a pretest, posttest and follow-up test, two sport stacking formations (3-6-3 stack and 1-10-1 stack) were performed. Between the pretest and posttest, all participants were trained in sport stacking with concurrent brain stimulation for three consecutive days. The experimental group (STIM-M1) received HD-atDCS over both primary motor cortices (M1), while the control group received a sham stimulation (SHAM). Three-way analysis of variance (ANOVA) revealed a significant main effect of TIME and a significant interaction of TIME × GROUP. No significant effects were found for GROUP, nor for the three-way interaction of TIME × GROUP × FORMATION. Further two-way ANOVAs showed a significant main effect of TIME and a non-significant main effect for GROUP in both sport stacking formations. A significant interaction between TIME × GROUP was found only for the 3-6-3 formation, indicating superior performance gains for the experimental group (STIM-M1). To account and control for baseline influences on the outcome measurements, ANCOVAs treating pretest scores as covariates revealed a significant effect of the stimulation. From this, we conclude that bilateral HD-atDCS over both M1 improves motor performance in a bimanual sequential sensorimotor task. These results may indicate a beneficial use of tDCS for learning and recovery of bimanual motor skills. PMID:28747875
Aerts, Sam; Deschrijver, Dirk; Joseph, Wout; Verloock, Leen; Goeminne, Francis; Martens, Luc; Dhaene, Tom
2013-05-01
Human exposure to background radiofrequency electromagnetic fields (RF-EMF) has been increasing with the introduction of new technologies. There is a definite need for the quantification of RF-EMF exposure but a robust exposure assessment is not yet possible, mainly due to the lack of a fast and efficient measurement procedure. In this article, a new procedure is proposed for accurately mapping the exposure to base station radiation in an outdoor environment based on surrogate modeling and sequential design, an entirely new approach in the domain of dosimetry for human RF exposure. We tested our procedure in an urban area of about 0.04 km² for Global System for Mobile Communications (GSM) technology at 900 MHz (GSM900) using a personal exposimeter. Fifty measurement locations were sufficient to obtain a coarse street exposure map, locating regions of high and low exposure; 70 measurement locations were sufficient to characterize the electric field distribution in the area and build an accurate predictive interpolation model. Hence, accurate GSM900 downlink outdoor exposure maps (for use in, e.g., governmental risk communication and epidemiological studies) are developed by combining the proven efficiency of sequential design with the speed of exposimeter measurements and their ease of handling. Copyright © 2013 Wiley Periodicals, Inc.
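The sequential-design loop the authors describe can be sketched as follows, assuming a scikit-learn Gaussian-process surrogate: each new exposimeter measurement is taken at the candidate location where the surrogate is currently most uncertain. The field model in measure_exposure and all parameter values are placeholders for this sketch, not the authors' setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def measure_exposure(xy):
    # Placeholder for a real exposimeter reading at location xy.
    return np.exp(-np.sum((xy - 0.5) ** 2) * 10.0)

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 1.0, size=(500, 2))   # possible street locations
X = list(rng.uniform(0.0, 1.0, size=(5, 2)))        # small initial design
y = [measure_exposure(x) for x in X]

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2))
for _ in range(45):                                 # grow the design to 50 points
    gp.fit(np.array(X), np.array(y))
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)]             # most uncertain location
    X.append(x_next)
    y.append(measure_exposure(x_next))
```

Picking the point of maximal predictive uncertainty is one common sequential-design criterion; the surrogate-modeling literature offers several alternatives, and the article's exact criterion may differ.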
Chen, Huachao; Wang, Yurong; Yao, Yongrong; Qiao, Shenglin; Wang, Hao; Tan, Ninghua
2017-01-01
A programmed drug delivery system that can achieve sequential release of multiple therapeutics under different stimuli holds great promise for enhancing treatment efficacy and overcoming multi-drug resistance (MDR) in tumors. Herein, multi-organelle-targeted and pH/cytochrome c (Cyt c) dual-responsive nanoparticles were designed for combination therapy against resistant tumors. In this system (designated DGLipo NPs), doxorubicin (Dox) was intercalated into a DNA duplex containing a Cyt c aptamer, which was subsequently loaded into the dendrigraft poly-L-lysine (DGL) cores of the DGLipo NPs, while the cyclopeptide RA-V was doped into the pH-sensitive liposomal shells. After dual modification with c(RGDfK) and a mitochondria-penetrating peptide (MPP), DGLipo NPs could successively deliver the two drugs into the lysosomes and mitochondria of cancer cells, achieving sequential drug release by virtue of the unique characteristics of these two organelles. The organelle-specific and spatiotemporally controlled release of Dox and RA-V led to enhanced therapeutic outcomes in MDR tumors. More significantly, the DGLipo NPs were successfully applied to monitor Cyt c release during mitochondria-mediated apoptosis. This work represents a versatile strategy for precise combination therapy against resistant tumors with spatiotemporal control, and provides a potential tool for Cyt c-related apoptotic studies. PMID:29109776
ERIC Educational Resources Information Center
Lorber, Fred; Feifer, Irwin
Although Neighborhood Youth Corps (NYC) training is conducted either in NYC centers, governmental and non-profit agencies or private industry, there is no commitment for employment after training. The Mobilization for Youth-Experimental Manpower Laboratory (MFY-EML) is exploring the feasibility of linking NYC to other government manpower training…
NASA Astrophysics Data System (ADS)
Cherkashin, N.; Daghbouj, N.; Seine, G.; Claverie, A.
2018-04-01
Sequential He+ + H+ ion implantation, being more effective than implantation of H+ or He+ alone, is used by many to transfer thin layers of silicon onto different substrates. However, because the basic mechanisms involved in this process are poorly understood, the implantation parameters required for efficient delamination of a superficial layer are still subject to debate. In this work, using various experimental techniques, we have studied how the relative depth distributions of He and H, imposed by the ion energies, affect the result of sequential implantation and annealing of the same fluences of He and H ions. By analyzing the characteristics of the blister populations observed after annealing and deducing the composition of the gas they contain from FEM simulations, we show that the trapping efficiency of He atoms in platelets and blisters during annealing depends on the behavior of the vacancies generated by the two implants within the H-rich region before and after annealing. Maximum efficiency of the sequential ion implantation is obtained when the H-rich region is able to trap all implanted He ions while the vacancies it generates are not available to favor the formation of V-rich complexes after implantation and then He-filled nano-bubbles after annealing. A technological option is to implant the He+ ions first, at such an energy that the damage they generate is located on the deeper side of the H profile.
Burkness, Eric C; Hutchison, W D
2009-10-01
Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for the upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine each parameter's influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: an action threshold of 0.1 proportion of plants infested, a tally threshold of 1, alpha = beta = 0.1, an upper boundary of 0.15, a lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, managing T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size while maintaining a high level of correct decisions (>95%) to treat or not treat.
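Wald's sequential probability ratio test for presence/absence sampling is simple to implement. The sketch below uses the parameters reported above (boundary proportions 0.05 and 0.15, alpha = beta = 0.1, tally threshold of 1, i.e., a plant counts as infested if it carries at least one larva); sample_plant is a stand-in for inspecting a randomly chosen plant.

```python
import math, random

def sprt_sampling(sample_plant, p0=0.05, p1=0.15, alpha=0.1, beta=0.1,
                  max_n=200):
    upper = math.log((1 - beta) / alpha)      # decide "treat" (p >= p1)
    lower = math.log(beta / (1 - alpha))      # decide "do not treat" (p <= p0)
    llr = 0.0                                 # cumulative log-likelihood ratio
    for n in range(1, max_n + 1):
        infested = sample_plant()             # 1 if the plant is infested, else 0
        if infested:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "treat", n
        if llr <= lower:
            return "no treatment", n
    return "undecided", max_n                 # truncated without a decision

# Example: a field whose true infestation level is 0.02 (well below threshold).
decision, n_used = sprt_sampling(lambda: random.random() < 0.02)
```

The average sample number function the authors evaluate is simply the expectation of n_used as the true infestation proportion varies, which resampling software estimates by repeating this loop many times.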
The Doctrine of Original Antigenic Sin: Separating Good From Evil.
Monto, Arnold S; Malosh, Ryan E; Petrie, Joshua G; Martin, Emily T
2017-06-15
The term "original antigenic sin" was coined approximately 60 years ago to describe the imprinting by the initial first influenza A virus infection on the antibody response to subsequent vaccination. These studies did not suggest a reduction in the response to current antigens but instead suggested anamnestic recall of antibody to earlier influenza virus strains. Then, approximately 40 years ago, it was observed that sequential influenza vaccination might lead to reduced vaccine effectiveness (VE). This conclusion was largely dismissed after an experimental study involving sequential administration of then-standard influenza vaccines. Recent observations have provided convincing evidence that reduced VE after sequential influenza vaccination is a real phenomenon. We propose that such reduction in VE be termed "negative antigenic interaction," given that there is no age cohort effect. In contrast, the potentially positive protective effect of early influenza virus infection later in life continues to be observed. It is essential that we understand better the immunologic factors underlying both original antigenic sin and negative antigenic interaction, to support development of improved influenza vaccines and vaccination strategies. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.
Gönner, Lorenz; Vitay, Julien; Hamker, Fred H.
2017-01-01
Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions. PMID:29075187
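The Bayesian decoding step used for comparison with experimental data is standard and can be sketched briefly: assuming Poisson spiking and known tuning curves, the posterior over position given a vector of spike counts follows directly from Bayes' rule with a flat prior. The Gaussian tuning curves and all parameters below are illustrative, not taken from the model.

```python
import numpy as np

def decode_position(spike_counts, tuning, dt):
    # spike_counts: (n_cells,) counts in one time bin.
    # tuning: (n_cells, n_positions) expected firing rate per position.
    # Poisson log-likelihood per candidate position (constants dropped).
    log_like = (spike_counts[:, None] * np.log(tuning + 1e-12)
                - tuning * dt).sum(axis=0)
    post = np.exp(log_like - log_like.max())  # stabilize before normalizing
    return post / post.sum()

# Example: 50 cells with Gaussian tuning over 100 track positions.
rng = np.random.default_rng(0)
positions = np.linspace(0, 1, 100)
centers = rng.uniform(0, 1, 50)
tuning = 20.0 * np.exp(-((positions[None, :] - centers[:, None]) ** 2) / 0.01)
counts = rng.poisson(tuning[:, 30] * 0.1)     # spikes with the animal at bin 30
posterior = decode_position(counts, tuning, dt=0.1)
```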
Sequential Objective Structured Clinical Examination based on item response theory in Iran.
Hejri, Sara Mortaz; Jalili, Mohammad
2017-01-01
In a sequential objective structured clinical examination (OSCE), all students initially take a short screening OSCE. Examinees who pass are excused from further testing, but an additional OSCE is administered to the remaining examinees. Previous investigations of sequential OSCE were based on classical test theory. We aimed to design and evaluate screening OSCEs based on item response theory (IRT). We carried out a retrospective observational study. At each station of a 10-station OSCE, the students' performance was graded on a Likert-type scale. Since the data were polytomous, the difficulty parameters, discrimination parameters, and students' ability were calculated using a graded response model. To design several screening OSCEs, we identified the 5 most difficult stations and the 5 most discriminative ones. For each test, 5, 4, or 3 stations were selected. Normal and stringent cut-scores were defined for each test. We compared the results of each of the 12 screening OSCEs to the main OSCE and calculated the positive and negative predictive values (PPV and NPV), as well as the exam cost. A total of 253 students (95.1%) passed the main OSCE, while 72.6% to 94.4% of examinees passed the screening tests. The PPV values ranged from 0.98 to 1.00, and the NPV values ranged from 0.18 to 0.59. Two tests effectively predicted the results of the main exam, resulting in financial savings of 34% to 40%. If stations with the highest IRT-based discrimination values and stringent cut-scores are utilized in the screening test, sequential OSCE can be an efficient and convenient way to conduct an OSCE.
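The predictive values reported above come from a simple 2x2 comparison of screening decisions against the full exam. A minimal sketch with illustrative inputs: PPV is the proportion of screening passers who also pass the main OSCE, and NPV is the proportion of screening failures who also fail it.

```python
def predictive_values(screen_pass, main_pass):
    # screen_pass, main_pass: parallel lists of booleans, one per examinee.
    tp = sum(s and m for s, m in zip(screen_pass, main_pass))
    fp = sum(s and not m for s, m in zip(screen_pass, main_pass))
    tn = sum((not s) and (not m) for s, m in zip(screen_pass, main_pass))
    fn = sum((not s) and m for s, m in zip(screen_pass, main_pass))
    ppv = tp / (tp + fp)   # P(pass main exam | passed screening)
    npv = tn / (tn + fn)   # P(fail main exam | failed screening)
    return ppv, npv

# Example with three examinees: screening agrees with the main exam.
ppv, npv = predictive_values([True, True, False], [True, True, False])
```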
Du, Dongping; Yang, Hui; Ednie, Andrew R; Bennett, Eric S
2016-09-01
Glycan structures account for up to 35% of the mass of cardiac sodium (Nav) channels. To examine whether and how reduced sialylation affects Nav activity and cardiac electrical signaling, we conducted a series of in vitro experiments on ventricular apex myocytes under two different glycosylation conditions: reduced protein sialylation (ST3Gal4(-/-)) and full glycosylation (control). Although aberrant electrical signaling is observed under reduced sialylation, a full understanding of the mechanistic details of pathological variations in INa and AP is difficult without in silico studies. However, computer models of Nav channels and cardiac myocytes involve considerable complexity, e.g., a high-dimensional parameter space and nonlinear, nonconvex equations. Traditional linear and nonlinear optimization methods have encountered many difficulties in model calibration. This paper presents a new statistical metamodeling approach for efficient computer experiments and optimization of Nav models. First, we utilize a fractional factorial design to identify control variables from the large set of model parameters, thereby reducing the dimensionality of the parametric space. Next, we develop a Gaussian process model as a surrogate for the expensive and time-consuming computer model and then identify the next best design point as the one that yields the maximal probability of improvement. This process iterates until convergence, and the performance is evaluated and validated with real-world experimental data. Experimental results show the proposed algorithm achieves superior performance in modeling the kinetics of Nav channels under a variety of glycosylation conditions. As a result, in silico models provide a better understanding of glyco-altered mechanistic details in the state transitions and distributions of Nav channels. Notably, ST3Gal4(-/-) myocytes are shown to have higher probabilities accumulated in intermediate inactivation during repolarization and to yield a shorter refractory period than wild-type myocytes. The proposed statistical design of computer experiments is readily extensible to many other disciplines that involve large-scale and computationally expensive models.
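The metamodeling loop has a compact generic form: fit a Gaussian-process surrogate to the design points evaluated so far, score candidate points by their probability of improving on the best observed value, and evaluate the expensive model at the top-scoring candidate. The sketch below follows that generic recipe with scikit-learn; expensive_model is a stand-in for the Nav-channel simulation, and the dimensions and iteration counts are arbitrary.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_model(x):
    # Placeholder objective: discrepancy between model output and data.
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(1)
X = list(rng.uniform(0, 1, size=(5, 3)))   # initial design, 3 control variables
y = [expensive_model(x) for x in X]

gp = GaussianProcessRegressor()
for _ in range(20):
    gp.fit(np.array(X), np.array(y))
    cand = rng.uniform(0, 1, size=(1000, 3))
    mu, sd = gp.predict(cand, return_std=True)
    best = min(y)
    pi = norm.cdf((best - mu) / (sd + 1e-12))   # probability of improvement
    x_next = cand[np.argmax(pi)]
    X.append(x_next)
    y.append(expensive_model(x_next))
```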
Optimality, sample size, and power calculations for the sequential parallel comparison design.
Ivanova, Anastasia; Qaqish, Bahjat; Schoenfeld, David A
2011-10-15
The sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials in therapeutic areas where a high placebo response is a concern. The trial is run in two stages, and subjects are randomized into three groups: (i) placebo in both stages; (ii) placebo in the first stage and drug in the second stage; and (iii) drug in both stages. We consider the case of binary response data (response/no response). In the SPCD, all first-stage data, together with the second-stage data from placebo subjects who failed to respond in the first stage, are utilized in the efficacy analysis. We develop one- and two-degree-of-freedom score tests for treatment effect in the SPCD. We give formulae for asymptotic power and for sample size computations and evaluate their accuracy via simulation studies. We compute the optimal allocation ratio between drug and placebo in stage 1 for the SPCD to determine, from a theoretical viewpoint, whether a single-stage design, a two-stage design with placebo only in the first stage, or the SPCD is the best design for a given set of response rates. Because response rates are not known before the trial, a two-stage approach with allocation to active drug in both stages is a robust design choice. Copyright © 2011 John Wiley & Sons, Ltd.
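A quick way to appreciate the design's operating characteristics is simulation. The sketch below estimates power for a binary-outcome SPCD using a simple weighted combination of the stage-1 and stage-2 z-statistics rather than the score tests developed in this paper, and it assumes, purely for illustration, that stage-2 response rates among placebo non-responders equal the overall rates; all parameter values are placeholders.

```python
import numpy as np

def ztest(a, b):
    # Two-sample z-statistic for a difference in proportions.
    if len(a) == 0 or len(b) == 0:
        return 0.0
    pooled = np.concatenate([a, b]).mean()
    se = np.sqrt(pooled * (1 - pooled) * (1 / len(a) + 1 / len(b)))
    return (a.mean() - b.mean()) / se if se > 0 else 0.0

def spcd_power(n=300, p_p=0.4, p_d=0.55, w=0.6, n_sim=5000, seed=0):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        # Stage 1: two thirds of subjects start on placebo (groups i and ii).
        n_p, n_d = 2 * n // 3, n - 2 * n // 3
        resp_p = rng.binomial(1, p_p, n_p)
        resp_d = rng.binomial(1, p_d, n_d)
        z1 = ztest(resp_d, resp_p)
        # Stage 2: placebo non-responders, half on drug and half on placebo.
        n_nr = int((resp_p == 0).sum())
        m = n_nr // 2
        z2 = ztest(rng.binomial(1, p_d, m), rng.binomial(1, p_p, n_nr - m))
        # Weighted combination, rescaled back to unit variance.
        z = (w * z1 + (1 - w) * z2) / np.sqrt(w ** 2 + (1 - w) ** 2)
        rejections += abs(z) > 1.96     # two-sided 5% level, approximately
    return rejections / n_sim
```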
Design and evaluation of a hybrid storage system in HEP environment
NASA Astrophysics Data System (ADS)
Xu, Qi; Cheng, Yaodong; Chen, Gang
2017-10-01
Nowadays, High Energy Physics experiments produce large amounts of data. These data are stored in mass storage systems that must balance cost, performance and manageability. In this paper, a hybrid storage system combining SSDs (solid-state drives) and HDDs (hard disk drives) is designed to accelerate data analysis while maintaining a low cost. File access performance is a decisive factor for the HEP computing system. A new deployment model for a hybrid storage system in High Energy Physics is proposed and shown to deliver higher I/O performance. Detailed evaluation methods and results concerning the SSD/HDD ratio and the logical block size are also given. In all evaluations, sequential-read, sequential-write, random-read and random-write workloads are tested to obtain comprehensive results. The results show that the hybrid storage system performs well in areas such as accessing large files in HEP.
Advanced Turbo-Charging Research and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2008-02-27
The objective of this project is to conduct analysis, design, procurement, and testing of a high-pressure-ratio, wide-flow-range, high-EGR system with two stages of turbocharging. The system needs to meet the stringent 2010MY emissions regulations at 20%+ better fuel economy than its nearest gasoline competitor while allowing equivalent vehicle launch characteristics and higher torque capability than its nearest gasoline competitor. The system will also need to meet light truck/SUV life requirements, which will require validation or development of components traditionally used only in passenger car applications. The conceived system is termed a 'series-sequential turbocharger' because the turbocharger system operates in series at appropriate times and also sequentially when required. This is accomplished using intelligent design and control of flow passages and valves. Components of the series-sequential system will also be applicable to parallel-sequential systems, which are also expected to be in use for future light truck/SUV applications.
Automatic exposure control for space sequential camera
NASA Technical Reports Server (NTRS)
Mcatee, G. E., Jr.; Stoap, L. J.; Solheim, C. D.; Sharpsteen, J. T.
1975-01-01
The final report for the automatic exposure control study for space sequential cameras, prepared for the NASA Johnson Space Center, is presented. The material is presented in the sequence in which the work was performed. The purpose of the automatic exposure control is to automatically adjust both the lens iris and the camera shutter so that the subject is properly exposed on the film. A study of design approaches is presented. Analysis of the luminance range to be covered indicates that the practical range would be from approximately 20 to 6,000 foot-lamberts, or about nine f-stops. Observation of film available from space flights shows that optimum scene illumination is apparently not achieved in vehicle-interior photography or in vehicle-to-vehicle situations. The evaluation test procedure for a breadboard, and the results, which informed the design of a brassboard, are given.
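As a quick check on the "about nine f-stops" figure: each f-stop corresponds to a factor of two in luminance, so the quoted range converts directly.

```python
# The span from 20 to 6,000 foot-lamberts covers log2(6000/20) stops,
# about 8.2, i.e. roughly nine f-stops once rounded to whole stops.
import math
stops = math.log2(6000 / 20)
print(round(stops, 1))  # 8.2
```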
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favorito, Jessica E.; Luxton, Todd P.; Eick, Matthew J.
Selenium is a trace element found in western US soils, where ingestion of Se-accumulating plants has resulted in livestock fatalities. Therefore, a reliable understanding of Se speciation and bioavailability is critical for effective mitigation. Sequential extraction procedures (SEP) are often employed to examine Se phases and speciation in contaminated soils but may be limited by experimental conditions. We examined the validity of an SEP using X-ray absorption spectroscopy (XAS) for both whole soils and a sequence of extracted soils. The sequence included removal of soluble, PO4-extractable, carbonate, amorphous Fe-oxide, crystalline Fe-oxide, organic, and residual Se forms. For whole soils, XANES analyses indicated that Se(0) and Se(-II) predominated, with lower amounts of Se(IV) present, associated with carbonates and Fe-oxides. Oxidized Se species were more abundant, and residual/elemental Se less abundant, than previous SEP results from ICP-AES had suggested. For soils from the SEP sequence, XANES results indicated only partial recovery of carbonate, Fe-oxide, and organic Se. This suggests that Se was incompletely removed during the designated extractions, possibly owing to a lack of mineral solubilization or reagent specificity. Selenium fractions associated with Fe-oxides were reduced in amount or removed after using hydroxylamine HCl for most soils examined. XANES results indicate that partial dissolution of solid phases may occur during the extraction process. This study demonstrates why precautions should be taken to improve the validity of SEPs. Mineralogical and chemical characterizations should be completed prior to SEP implementation to identify extractable phases or mineral components that may influence extraction effectiveness. Sequential extraction procedures can then be appropriately tailored for reliable quantification of speciation in contaminated soils.
Principles of engineering design
Penny, R. K.
1970-01-01
The paper sets out procedures used in engineering design by listing the various steps in a sequential pattern. This pattern is not universally applicable, and the variants on it depend on the type of problem involved and the information available. Of critical importance is the way in which models, physical or mathematical, can be constructed; depending on these, three design methods are described. These types are illustrated by reference to a number of medical aids which have been designed. PMID:5476130
F-4 Beryllium Rudders; A Precis of the Design, Fabrication, Ground and Flight Test Demonstrations
1975-05-01
Air Force Flight Dynamics Laboratory, Air Force Systems Command, Wright-Patterson Air Force Base, Ohio 45433. These sequential ground tests include: a 50,000-cycle fatigue test of the upper balance weight support structure; a static test to... (The remaining fragments are section headings: Design Details; Design Analysis; Rudder Mass Balance; Rudder Moment of Inertia; Rudder Weight; Rudder Fabrication and Assembly.)
Taghizadeh, Ziba; Vedadhir, Abouali; Behmanesh, Fereshteh; Ebadi, Abbas; Pourreza, Abulghasem; Abbasi-Shavazi, Mohammad Jalal
2015-09-18
Nowadays, nearly half of the world's population lives in societies with low or below-replacement fertility. This can lead to a shrinking workforce and an aging population, given the overall increase in life expectancy and standard of living. Hence, population transitions, including fertility decline, have become a topic of intense debate in agenda-setting and policy-making in both developed and developing countries. Responding to fertility decline in practice requires effectively addressing the determinants of fertility change. According to the literature, the pattern of marriage is among the factors that potentially affect reproductive practices, since different societies recognize different conventions for marriage. This study examines women's reproductive practices across different patterns of marriage using an explanatory sequential mixed-methods design (the follow-up explanations variant) with two strands, implemented in two distinct phases. In the first phase, a cross-sectional quantitative study will be conducted using a cluster sampling strategy on 850 married women aged 15-49 years living in Babol city, Iran. To obtain a deeper understanding of the results of the quantitative phase, the researchers will conduct a qualitative study in the second phase, providing an explanation of the quantitative results using qualitative evidence. As patterns of marriage have implications for the status of women, their health and fertility, the results of this study can inform the health-related interventions and policies required to steer demographic change at the micro and macro levels and to improve women's reproductive practices.
Optimal decision making on the basis of evidence represented in spike trains.
Zhang, Jiaxiang; Bogacz, Rafal
2010-05-01
Experimental data indicate that perceptual decision making involves the integration of sensory evidence in certain cortical areas. Theoretical studies have proposed that the computation in neural decision circuits approximates statistically optimal decision procedures (e.g., the sequential probability ratio test) that maximize the reward rate in sequential choice tasks. However, these previous studies assumed that the sensory evidence was represented by continuous values drawn from Gaussian distributions with the same variance across alternatives. In this article, we make the more realistic assumption that sensory evidence is represented in spike trains described by Poisson processes, which naturally satisfy the mean-variance relationship observed in sensory neurons. We show that for such a representation, neural circuits involving cortical integrators and the basal ganglia can approximate the optimal decision procedures for two- and multiple-alternative choice tasks.
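Under the Poisson representation, the SPRT takes a particularly clean form for two alternatives: if the two evidence streams fire at rates r_high and r_low depending on which alternative holds, the per-bin log-likelihood-ratio increment reduces to (n1 - n2) * log(r_high / r_low), where n1 and n2 are the spike counts. A minimal sketch, with all rates and thresholds as illustrative placeholders rather than values from the article:

```python
import math, random

def poisson_sample(rng, lam):
    # Knuth's method; adequate for the small per-bin rates used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def poisson_sprt(r_high=40.0, r_low=20.0, dt=0.01, alpha=0.05,
                 true_alt=1, seed=0, max_steps=10_000):
    rng = random.Random(seed)
    upper = math.log((1 - alpha) / alpha)   # decide alternative 1
    lower = -upper                          # decide alternative 2
    rates = (r_high, r_low) if true_alt == 1 else (r_low, r_high)
    llr = 0.0
    for step in range(1, max_steps + 1):
        # Poisson spike counts in one time bin for each evidence stream.
        n1 = poisson_sample(rng, rates[0] * dt)
        n2 = poisson_sample(rng, rates[1] * dt)
        # Rate terms cancel across hypotheses, leaving the count difference.
        llr += (n1 - n2) * math.log(r_high / r_low)
        if llr >= upper:
            return 1, step * dt             # choice and decision time (s)
        if llr <= lower:
            return 2, step * dt
    return 0, max_steps * dt                # no decision within the limit

choice, t_decision = poisson_sprt()
```

The count difference n1 - n2 acts as the decision variable, which is why a cortical integrator accumulating the difference of two spike trains can implement this computation.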