Science.gov

Sample records for experimental design techniques

  1. Image processing of correlated data by experimental design techniques

    SciTech Connect

    Stern, D.

    1987-01-01

New classes of algorithms are developed for processing two-dimensional image data embedded in correlated noise. The algorithms are based on modifications of standard analysis of variance (ANOVA) techniques that ensure their proper operation in dependent noise. The approach taken in developing the procedures is deductive. First, the theory of modified ANOVA (MANOVA) techniques involving one- and two-way layouts is considered for noise models whose autocorrelation matrix (ACM) is formed by direct multiplication of row and column correlation matrices, i.e., tensored correlation matrices (TCM), stressing the special case of the first-order Markov process. Next, the techniques are generalized to arbitrary wide-sense stationary (WSS) processes. This permits dealing with diagonal masks, which have an ACM of general form even for TCM. As a further extension, the theory of Latin square (LS) masks is generalized to dependent noise with TCM. This permits dealing with three different effects of m levels using only m^2 observations rather than m^3. Since replication of data is possible in many image-processing problems, the masking techniques are generalized to replicated data for which the replication is TCM dependent. For all procedures developed, algorithms are implemented which ensure real-time processing of images.
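
    The Latin-square economy mentioned above (three effects at m levels from m^2 rather than m^3 observations) rests on a simple combinatorial construction, sketched here in Python with a hypothetical cyclic layout:

    ```python
    # Sketch of the layout behind Latin square (LS) masks: an m x m square in
    # which each of m treatment levels appears exactly once per row and column,
    # so row, column, and treatment effects are covered by m^2 observations.

    def latin_square(m):
        """Cyclic Latin square: cell (i, j) holds treatment (i + j) mod m."""
        return [[(i + j) % m for j in range(m)] for i in range(m)]

    square = latin_square(3)
    # Verify the defining property: each treatment once per row and per column.
    for row in square:
        assert sorted(row) == list(range(3))
    for col in zip(*square):
        assert sorted(col) == list(range(3))
    ```

    Each cell of the square is one observation; the treatment it receives is read off the square, so three m-level factors are crossed without the full m^3 factorial.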

  2. Chemometric experimental design based optimization techniques in capillary electrophoresis: a critical review of modern applications.

    PubMed

    Hanrahan, Grady; Montes, Ruthy; Gomez, Frank A

    2008-01-01

    A critical review of recent developments in the use of chemometric experimental design based optimization techniques in capillary electrophoresis applications is presented. Current advances have led to enhanced separation capabilities of a wide range of analytes in such areas as biological, environmental, food technology, pharmaceutical, and medical analysis. Significant developments in design, detection methodology and applications from the last 5 years (2002-2007) are reported. Furthermore, future perspectives in the use of chemometric methodology in capillary electrophoresis are considered.

  3. An Artificial Intelligence Technique to Generate Self-Optimizing Experimental Designs.

    DTIC Science & Technology

    1983-02-01

...pattern or a binary chopping technique in the space of decision variables while carrying out a sequence of controlled experiments on the strategy... (OCR fragment of report AD-A127 764, Arizona State University, Tempe, Group for Computer Studies; the remainder of the abstract is not recoverable.)

  4. Taking evolutionary circuit design from experimentation to implementation: some useful techniques and a silicon demonstration

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Zebulum, R. S.; Guo, X.; Keymeulen, D.; Ferguson, M. I.; Duong, V.

    2004-01-01

Current techniques in evolutionary synthesis of analogue and digital circuits designed at the transistor level have focused on achieving the desired functional response, without paying sufficient attention to issues needed for a practical implementation of the resulting solution. No silicon fabrication of circuits with topologies designed by evolution has been done before, leaving open questions on the feasibility of the evolutionary circuit design approach, as well as on how high-performance, robust, or portable such designs could be when implemented in hardware. It is argued that moving from evolutionary 'design-for-experimentation' to 'design-for-implementation' requires, beyond inclusion in the fitness function of measures indicative of circuit evaluation factors such as power consumption and robustness to temperature variations, the addition of certain evaluation techniques that are not common in conventional design. Several such techniques that were found useful in evolving designs for implementation are presented; some are general, and some are particular to the problem domain of transistor-level logic design, used here as a target application. The example used here is a multifunction NAND/NOR logic gate circuit, for which evolution obtained a creative circuit topology more compact than what has been achieved by multiplexing a NAND and a NOR gate. The circuit was fabricated in a 0.5 µm CMOS technology, and silicon tests showed good correspondence with the simulations.

  6. Nonmedical influences on medical decision making: an experimental technique using videotapes, factorial design, and survey sampling.

    PubMed Central

    Feldman, H A; McKinlay, J B; Potter, D A; Freund, K M; Burns, R B; Moskowitz, M A; Kasten, L E

    1997-01-01

    OBJECTIVE: To study nonmedical influences on the doctor-patient interaction. A technique using simulated patients and "real" doctors is described. DATA SOURCES: A random sample of physicians, stratified on such characteristics as demographics, specialty, or experience, and selected from commercial and professional listings. STUDY DESIGN: A medical appointment is depicted on videotape by professional actors. The patient's presenting complaint (e.g., chest pain) allows a range of valid interpretation. Several alternative versions are taped, featuring the same script with patient-actors of different age, sex, race, or other characteristics. Fractional factorial design is used to select a balanced subset of patient characteristics, reducing costs without biasing the outcome. DATA COLLECTION: Each physician is shown one version of the videotape appointment and is asked to describe how he or she would diagnose or treat such a patient. PRINCIPAL FINDINGS: Two studies using this technique have been completed to date, one involving chest pain and dyspnea and the other involving breast cancer. The factorial design provided sufficient power, despite limited sample size, to demonstrate with statistical significance various influences of the experimental and stratification variables, including the patient's gender and age and the physician's experience. Persistent recruitment produced a high response rate, minimizing selection bias and enhancing validity. CONCLUSION: These techniques permit us to determine, with a degree of control unattainable in observational studies, whether medical decisions as described by actual physicians and drawn from a demographic or professional group of interest, are influenced by a prescribed set of nonmedical factors. PMID:9240285
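
    The fractional factorial idea used in the videotape studies, selecting a balanced subset of patient characteristics rather than filming every combination, can be sketched as follows (a hypothetical 2^(4-1) half-fraction; the attribute names and coding are illustrative, not those of the study):

    ```python
    from itertools import product

    # Four two-level patient attributes (illustrative), coded -1/+1.
    # A half-fraction aliases the fourth factor with the three-way interaction
    # (D = A*B*C), halving the number of videotapes while keeping every factor
    # balanced, so main effects remain estimable without bias from the design.
    levels = (-1, +1)
    fraction = [(a, b, c, a * b * c) for a, b, c in product(levels, repeat=3)]

    assert len(fraction) == 8            # 8 tapes instead of 16
    for k in range(4):                   # each attribute balanced across tapes
        assert sum(run[k] for run in fraction) == 0
    ```

    Each tuple is one taped version of the appointment; balance across the subset is what lets the reduced design avoid biasing the outcome.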

  7. Synthesis of designed materials by laser-based direct metal deposition technique: Experimental and theoretical approaches

    NASA Astrophysics Data System (ADS)

    Qi, Huan

Direct metal deposition (DMD), a laser-cladding-based solid freeform fabrication technique, is capable of depositing multiple materials at desired compositions, which makes it a flexible method for fabricating heterogeneous components or functionally graded structures. The inherently rapid cooling rate associated with the laser cladding process enables extended solid solubility in nonequilibrium phases, offering the possibility of tailoring new materials with advanced properties. This technical advantage opens the area of synthesizing a new class of materials designed by the topology optimization method, which have performance-based material properties. For better understanding of the fundamental phenomena occurring in multi-material laser cladding with coaxial powder injection, a self-consistent 3-D transient model was developed. Physical phenomena including laser-powder interaction, heat transfer, melting, solidification, mass addition, liquid metal flow, and species transport were modeled and solved with a controlled-volume finite difference method. A level-set method was used to track the evolution of the liquid free surface. The distribution of species concentration in the cladding layer was obtained using a nonequilibrium partition coefficient model. Simulation results were compared with experimental observations and found to be in reasonable agreement. Multi-phase material microstructures with negative coefficients of thermal expansion were studied for their DMD manufacturability. The pixel-based topology-optimal designs are boundary-smoothed by Bézier functions to facilitate toolpath design. It is found that the inevitable diffusion interface between different material phases degrades the negative thermal expansion property of the whole microstructure. A new design method is proposed for DMD manufacturing. Experimental approaches include identification of laser beam characteristics during different laser-powder-substrate interaction conditions, an

  8. Optimization and enhancement of soil bioremediation by composting using the experimental design technique.

    PubMed

    Sayara, Tahseen; Sarrà, Montserrat; Sánchez, Antoni

    2010-06-01

The objective of this study was the application of the experimental design technique to optimize the conditions for the bioremediation of contaminated soil by means of composting. A low-cost material, compost from the Organic Fraction of Municipal Solid Waste, was used as amendment, with pyrene as model pollutant. The effect of three factors was considered: pollutant concentration (0.1-2 g/kg), soil:compost mixing ratio (1:0.5-1:2 w/w) and compost stability measured as respiration index (0.78, 2.69 and 4.52 mg O2 g(-1) Organic Matter h(-1)). Stable compost permitted almost complete degradation of pyrene in a short time (10 days). Results indicated that compost stability is a key parameter for optimizing PAH biodegradation. A factor analysis indicated that the optimal conditions for bioremediation after 10, 20 and 30 days of process were (1.4, 0.78, 1:1.4), (1.4, 2.18, 1:1.3) and (1.3, 2.18, 1:1.3) for concentration (g/kg), compost stability (mg O2 g(-1) Organic Matter h(-1)) and soil:compost mixing ratio, respectively.

  9. Return to Our Roots: Raising Radishes to Teach Experimental Design. Methods and Techniques.

    ERIC Educational Resources Information Center

    Stallings, William M.

    1993-01-01

    Reviews research in teaching applied statistics. Concludes that students should analyze data from studies they have designed and conducted. Describes an activity in which students study germination and growth of radish seeds. Includes a table providing student instructions for both the experimental procedure and data analysis. (CFR)

  11. Benefits applications and data analysis techniques for linewidth multilevel experimental design

    NASA Astrophysics Data System (ADS)

    Barbieri, Anthony

    1996-05-01

In contrast to other experimental methods, which have just two or three settings per variable, the rationale is presented for using a large number of stepper exposures at poly or active area for certain applications (such as obtaining high correlation to E-TEST variables). How variables that depend on linewidth relate to each other can also be determined to high correlation; even the linear correlation of measured poly linewidth to speed had an R^2 value of 0.96. This experimental method is useful for numerous applications, such as process characterization, budgeting of CD linewidths, and correlating process variables to electrical data. Useful data analysis techniques are also shown. The experimental method is also cost-effective, requiring a small number of wafers.
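
    The R^2 = 0.96 quoted above is the usual coefficient of determination of a linear fit; for reference, a minimal computation on made-up linewidth/speed numbers (not the paper's data):

    ```python
    import numpy as np

    # Illustrative measurements: poly linewidth (um) vs. device speed (MHz).
    linewidth = np.array([0.35, 0.36, 0.37, 0.38, 0.39, 0.40])
    speed = np.array([105.0, 101.5, 98.2, 95.1, 91.8, 88.6])

    slope, intercept = np.polyfit(linewidth, speed, 1)   # linear fit
    pred = slope * linewidth + intercept
    ss_res = np.sum((speed - pred) ** 2)                 # residual sum of squares
    ss_tot = np.sum((speed - speed.mean()) ** 2)         # total sum of squares
    r_squared = 1 - ss_res / ss_tot                      # coefficient of determination
    ```

    With many exposure levels per variable, the same computation can be run for each linewidth-dependent variable pair, which is the multilevel design's advantage over two- or three-setting methods.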

  12. Development of a fast, lean and agile direct pelletization process using experimental design techniques.

    PubMed

    Politis, Stavros N; Rekkas, Dimitrios M

    2017-04-01

A novel hot-melt direct pelletization method was developed, characterized, and optimized using statistical thinking and experimental design tools. Mixtures of carnauba wax (CW) and HPMC K100M were spheronized using melted Gelucire 50/13 as a binding material (BM). Experimentation was performed sequentially: a fractional factorial design was set up initially to screen the factors affecting the process, namely spray rate, quantity of BM, rotor speed, type of rotor disk, lubricant-glidant presence, additional spheronization time, and powder feeding rate and quantity. Of the eight factors assessed, three were further studied during process optimization (spray rate, quantity of BM, and powder feeding rate), at different ratios of the solid mixture of CW and HPMC K100M. The study demonstrated that the novel hot-melt process is fast, efficient, reproducible, and predictable. Therefore, it can be adopted in a lean and agile manufacturing setting for the production of flexible pellet dosage forms with various release rates, easily customized between immediate and modified delivery.
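
    Screening many factors with a fractional factorial, as done here for the eight process factors, comes down to contrasting mean responses at each factor's high and low levels. A minimal sketch with invented runs and responses (not the paper's data):

    ```python
    # A 2^(3-1) screening design in three coded factors (e.g. spray rate,
    # binder quantity, feed rate -- illustrative) with made-up responses.
    runs = [(-1, -1, +1), (-1, +1, -1), (+1, -1, -1), (+1, +1, +1)]
    response = [12.0, 15.0, 14.0, 21.0]

    def main_effect(k):
        """Mean response at factor k's high level minus at its low level."""
        hi = [y for x, y in zip(runs, response) if x[k] == +1]
        lo = [y for x, y in zip(runs, response) if x[k] == -1]
        return sum(hi) / len(hi) - sum(lo) / len(lo)

    effects = [main_effect(k) for k in range(3)]
    # Factors with the largest effects are carried forward to optimization.
    ```

    This is how a screening stage narrows eight candidate factors down to the three studied during optimization.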

  13. Comparison of neuropsychological rehabilitation techniques for unilateral neglect: an ABACADAEAF single-case experimental design.

    PubMed

    Tunnard, Catherine; Wilson, Barbara A

    2014-01-01

    Unilateral neglect is a debilitating attentional disorder whereby patients fail to report, respond or orient to information presented on one side of space. Previous studies have demonstrated improvements in neglect symptoms using rehabilitation techniques, such as anchoring or limb activation. We investigated the effectiveness of five interventions in reducing the unilateral neglect observed in patient F.P. A single-case ABACADAEAF design was used to investigate the effectiveness of musical stimulation (B), anchoring (C), vibratory stimulation (D), limb activation (E), and anchoring and vibratory stimulation combined (F), compared to baseline (A). Severity of neglect was measured using star cancellation, line crossing and line bisection tests. Tau-U statistical analyses were used to investigate significant differences between conditions. All interventions resulted in improvements in F.P.'s neglect. Anchoring (C), vibratory stimulation (D) and the combination of these two techniques (F) led to greatest improvements on all three tests of neglect. Musical stimulation led to improvements on the line bisection task only. Anchoring and vibratory stimulation were the most effective techniques for reducing neglect for this patient. Further research is needed to investigate whether the observed gains can be sustained on a longer-term basis, generalised to other tasks, and replicated in larger samples.

  14. Experimental evaluation of shape memory alloy actuation technique in adaptive antenna design concepts

    NASA Technical Reports Server (NTRS)

    Kefauver, W. Neill; Carpenter, Bernie F.

    1994-01-01

Creation of an antenna system that could autonomously adapt the contours of its reflecting surfaces to compensate for structural loads induced by a variable environment would maximize the performance of space-based communication systems. Design of such a system requires the comprehensive development and integration of advanced actuator, sensor, and control technologies. As an initial step in this process, a test has been performed to assess the use of a shape memory alloy as a potential actuation technique. For this test, an existing offset Cassegrain antenna system was retrofitted with a subreflector equipped with shape memory alloy actuators for surface contour control. The impacts that the actuators had on both the subreflector contour and the antenna system patterns were measured. The results of this study indicate the potential for using shape memory alloy actuation techniques to adaptively control antenna performance; both variations in gain and beam steering capabilities were demonstrated. Future development effort is required to evolve this potential into a useful technology for satellite applications.

  15. Experimental evaluation of shape memory alloy actuation technique in adaptive antenna design concepts

    NASA Astrophysics Data System (ADS)

    Kefauver, W. Neill; Carpenter, Bernie F.

    1994-09-01

Creation of an antenna system that could autonomously adapt the contours of its reflecting surfaces to compensate for structural loads induced by a variable environment would maximize the performance of space-based communication systems. Design of such a system requires the comprehensive development and integration of advanced actuator, sensor, and control technologies. As an initial step in this process, a test has been performed to assess the use of a shape memory alloy as a potential actuation technique. For this test, an existing offset Cassegrain antenna system was retrofitted with a subreflector equipped with shape memory alloy actuators for surface contour control. The impacts that the actuators had on both the subreflector contour and the antenna system patterns were measured. The results of this study indicate the potential for using shape memory alloy actuation techniques to adaptively control antenna performance; both variations in gain and beam steering capabilities were demonstrated. Future development effort is required to evolve this potential into a useful technology for satellite applications.

  16. Effect of an experimental design for evaluating the nonlinear optimal formulation of theophylline tablets using a bootstrap resampling technique.

    PubMed

    Arai, Hiroaki; Suzuki, Tatsuya; Kaseda, Chosei; Takayama, Kozo

    2009-06-01

The optimal solutions of theophylline tablet formulations based on datasets from 4 experimental designs (Box and Behnken design, central composite design, D-optimal design, and full factorial design) were calculated by the response surface method incorporating multivariate spline interpolation (RSM(S)). Reliability of these solutions was evaluated by a bootstrap (BS) resampling technique. The optimal solutions derived from the Box and Behnken design, D-optimal design, and full factorial design datasets were similar. The distributions of the BS optimal solutions calculated for these datasets were symmetrical; thus, the accuracy and reproducibility of the optimal solutions could be evaluated quantitatively from the deviations of these distributions. However, the distribution of the BS optimal solutions calculated for the central composite design dataset was asymmetrical, and basic statistical analysis of this distribution could not be conducted. The reason for this problem was considered to be the mixing of global and local optima. Therefore, self-organizing map (SOM) clustering was applied to identify the global optimal solutions. The BS optimal solutions were divided into 4 clusters by SOM clustering, the accuracy and reproducibility of the optimal solutions in each cluster were quantitatively evaluated, and the cluster containing the global optima was identified. SOM clustering was therefore considered to reinforce the BS resampling method for evaluating the reliability of optimal solutions irrespective of the dataset style.
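
    The bootstrap evaluation used above can be sketched in miniature: resample the experimental runs with replacement, refit the response model, and examine the spread of the refitted optima (a hypothetical 1-D quadratic response, not the tablet data):

    ```python
    import numpy as np

    # Hypothetical response surface: optimum at x = 0.6, small measurement noise.
    rng = np.random.default_rng(0)
    x = np.tile(np.linspace(0.0, 1.0, 5), 4)      # 20 runs: 5 settings x 4 reps
    y = -(x - 0.6) ** 2 + rng.normal(0.0, 0.01, x.size)

    def fitted_optimum(xs, ys):
        a, b, _ = np.polyfit(xs, ys, 2)           # fit y ~ a x^2 + b x + c
        return -b / (2.0 * a)                     # vertex of the fitted parabola

    # Bootstrap: resample runs with replacement, refit, collect the optima.
    boots = []
    for _ in range(200):
        idx = rng.integers(0, x.size, x.size)
        boots.append(fitted_optimum(x[idx], y[idx]))

    spread = float(np.std(boots))   # tight, symmetric spread -> reliable optimum
    ```

    A symmetric, tight distribution of `boots` corresponds to the well-behaved designs above; a skewed or multimodal one signals the mixing of global and local optima that motivated the SOM clustering.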

  17. Axisymmetric and non-axisymmetric exhaust jet induced effects on a V/STOL vehicle design. Part 3: Experimental technique

    NASA Technical Reports Server (NTRS)

    Schnell, W. C.

    1982-01-01

The jet-induced effects of several exhaust nozzle configurations (axisymmetric, and vectoring/modulating variants) on the aeropropulsive performance of a twin-engine V/STOL fighter design were determined. A 1/8-scale model was tested in an 11 ft transonic tunnel at static conditions and over a range of Mach numbers from 0.4 to 1.4. The experimental aspects of the static and wind-on programs are discussed. Jet-effects test techniques in general, flow-through balance calibrations and tare force corrections, ASME nozzle thrust and mass flow calibrations, and test problems and solutions are emphasized.

  18. Modern Experimental Techniques in Turbine Engine Testing

    NASA Technical Reports Server (NTRS)

    Lepicovsky, J.; Bruckner, R. J.; Bencic, T. J.; Braunscheidel, E. P.

    1996-01-01

The paper describes the application of two modern experimental techniques, thin-film thermocouples and pressure-sensitive paint, to measurements in turbine engine components. A growing trend toward using computational codes in turbomachinery design and development requires experimental techniques to refocus from overall performance testing to the acquisition of detailed data on flow and heat transfer physics to validate these codes for design applications. The discussed experimental techniques satisfy this shift in focus. Both techniques are nonintrusive in practical terms. The thin-film thermocouple technique improves the accuracy of surface temperature and heat transfer measurements. The pressure-sensitive paint technique supplies areal surface pressure data rather than discrete point values only. The paper summarizes our experience with these techniques and suggests improvements to ease their application in future turbomachinery research and code verification.

  19. Experimental techniques and measurement accuracies

    SciTech Connect

    Bennett, E.F.; Yule, T.J.; DiIorio, G.; Nakamura, T.; Maekawa, H.

    1985-02-01

    A brief description of the experimental tools available for fusion neutronics experiments is given. Attention is paid to error estimates mainly for the measurement of tritium breeding ratio in simulated blankets using various techniques.

  20. Experimental design and husbandry.

    PubMed

    Festing, M F

    1997-01-01

    Rodent gerontology experiments should be carefully designed and correctly analyzed so as to provide the maximum amount of information for the minimum amount of work. There are five criteria for a "good" experimental design. These are applicable both to in vivo and in vitro experiments: (1) The experiment should be unbiased so that it is possible to make a true comparison between treatment groups in the knowledge that no one group has a more favorable "environment." (2) The experiment should have high precision so that if there is a true treatment effect there will be a good chance of detecting it. This is obtained by selecting uniform material such as isogenic strains, which are free of pathogenic microorganisms, and by using randomized block experimental designs. It can also be increased by increasing the number of observations. However, increasing the size of the experiment beyond a certain point will only marginally increase precision. (3) The experiment should have a wide range of applicability so it should be designed to explore the sensitivity of the observed experimental treatment effect to other variables such as the strain, sex, diet, husbandry, and age of the animals. With in vitro data, variables such as media composition and incubation times may also be important. The importance of such variables can often be evaluated efficiently using "factorial" experimental designs, without any substantial increase in the overall number of animals. (4) The experiment should be simple so that there is little chance of groups becoming muddled. Generally, formal experimental designs that are planned before the work starts should be used. (5) The experiment should provide the ability to calculate uncertainty. In other words, it should be capable of being statistically analyzed so that the level of confidence in the results can be quantified.
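
    Criterion 2's randomized block design is mechanically simple: each block (e.g. a litter) supplies one animal to every treatment, with the assignment randomized within the block. A sketch with hypothetical animal and treatment names:

    ```python
    import random

    random.seed(42)   # reproducible allocation for illustration
    treatments = ["control", "low_dose", "high_dose"]
    blocks = {f"litter_{i}": [f"animal_{i}{j}" for j in "abc"] for i in range(4)}

    allocation = {}
    for block, animals in blocks.items():
        # Randomize the order of treatments independently within each block.
        for animal, trt in zip(animals, random.sample(treatments, len(treatments))):
            allocation[animal] = trt

    # Every block now contributes exactly one animal to each treatment group,
    # so litter-to-litter variation cancels out of the treatment comparison.
    ```

    Because each treatment appears once per block, block effects drop out of the treatment contrast, which is the precision gain the abstract describes.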

  1. Teaching experimental design.

    PubMed

    Fry, Derek J

    2014-01-01

    Awareness of poor design and published concerns over study quality stimulated the development of courses on experimental design intended to improve matters. This article describes some of the thinking behind these courses and how the topics can be presented in a variety of formats. The premises are that education in experimental design should be undertaken with an awareness of educational principles, of how adults learn, and of the particular topics in the subject that need emphasis. For those using laboratory animals, it should include ethical considerations, particularly severity issues, and accommodate learners not confident with mathematics. Basic principles, explanation of fully randomized, randomized block, and factorial designs, and discussion of how to size an experiment form the minimum set of topics. A problem-solving approach can help develop the skills of deciding what are correct experimental units and suitable controls in different experimental scenarios, identifying when an experiment has not been properly randomized or blinded, and selecting the most efficient design for particular experimental situations. Content, pace, and presentation should suit the audience and time available, and variety both within a presentation and in ways of interacting with those being taught is likely to be effective. Details are given of a three-day course based on these ideas, which has been rated informative, educational, and enjoyable, and can form a postgraduate module. It has oral presentations reinforced by group exercises and discussions based on realistic problems, and computer exercises which include some analysis. Other case studies consider a half-day format and a module for animal technicians. © The Author 2014. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  2. Analysis of a PEMFC durability test under low humidity conditions and stack behaviour modelling using experimental design techniques

    NASA Astrophysics Data System (ADS)

    Wahdame, Bouchra; Candusso, Denis; Harel, Fabien; François, Xavier; Péra, Marie-Cécile; Hissel, Daniel; Kauffmann, Jean-Marie

A polymer electrolyte membrane fuel cell (PEMFC) stack was operated under low humidity conditions for 1000 h. The fuel cell characterisation is based both on polarisation curves and on electrochemical impedance spectra recorded for various stoichiometry rates, performed regularly throughout the ageing process. Design of experiment (DoE) techniques, in particular the response surface methodology (RSM), are employed to analyse the results of the ageing test and to propose numerical/statistical laws for modelling the degradation of stack performance. These mathematical relations are used to optimise the fuel cell operating conditions versus ageing time and to gain a deeper understanding of the ageing mechanisms. The test results are compared with those obtained from another stack operated in stationary regime at roughly nominal conditions for 1000 h (reference test). The final objective is to ensure proper operating conditions, leading to extended lifetimes, for the next fuel cell systems.
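
    Response surface methodology of the kind applied here reduces, at its core, to least-squares fitting of a quadratic model in the operating factors. A minimal sketch with simulated data (the coded factors and coefficients are invented, not the stack measurements):

    ```python
    import numpy as np

    # Simulated response y = 5 - 2*x1^2 - x2^2 + 0.5*x1*x2 + noise, where x1
    # and x2 stand in for coded operating factors (e.g. stoichiometry, ageing time).
    rng = np.random.default_rng(1)
    x1 = rng.uniform(-1.0, 1.0, 30)
    x2 = rng.uniform(-1.0, 1.0, 30)
    y = 5 - 2 * x1**2 - x2**2 + 0.5 * x1 * x2 + rng.normal(0.0, 0.05, 30)

    # Full quadratic model matrix: 1, x1, x2, x1^2, x2^2, x1*x2.
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares coefficients
    ```

    The fitted surface can then be searched for operating conditions that maximise performance at a given ageing time, which is how the abstract's "numerical/statistical laws" are put to use.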

  3. Seismic Monitoring with Small Aperture Arrays Under Strong Noise Conditions: Algorithms, Technique, System Design and Experimental Data Processing

    DTIC Science & Technology

    1995-05-01

Fragments of the report survive OCR: tasking Moscow IRIS Data Center as follows: investigate array processing techniques and apply them to small seismic events and to events with weak surface...; estimation of plane wave apparent velocities based on data from three...; program "arloc": event locating based on single array data.

  4. Model for vaccine design by prediction of B-epitopes of IEDB given perturbations in peptide sequence, in vivo process, experimental techniques, and source or host organisms.

    PubMed

    González-Díaz, Humberto; Pérez-Montoto, Lázaro G; Ubeira, Florencio M

    2014-01-01

Perturbation methods add variation terms to a known experimental solution of one problem to approach the solution of a related problem without a known exact solution. One problem of this type in immunology is the prediction of the possible epitope activity of a peptide after a perturbation or variation in the structure of a known peptide and/or other boundary conditions (host organism, biological process, and experimental assay). However, to the best of our knowledge, there are no reports of general-purpose perturbation models to solve this problem. In a recent work, we introduced a new quantitative structure-property relationship theory for the study of perturbations in complex biomolecular systems. In this work, we developed the first model able to classify more than 200,000 cases of perturbations with accuracy, sensitivity, and specificity >90% both in training and validation series. The perturbations include structural changes in >50,000 peptides determined in experimental assays with boundary conditions involving >500 source organisms, >50 host organisms, >10 biological processes, and >30 experimental techniques. The model may be useful for the prediction of new epitopes or the optimization of known peptides towards computational vaccine design.

  5. Design and experimental demonstration of low-power CMOS magnetic cell manipulation platform using charge recycling technique

    NASA Astrophysics Data System (ADS)

    Niitsu, Kiichi; Yoshida, Kohei; Nakazato, Kazuo

    2016-03-01

We present the world's first charge-recycling-based low-power technique for complementary metal-oxide-semiconductor (CMOS) magnetic cell manipulation. CMOS magnetic cell manipulation with magnetic beads is a promising tool for on-chip biomedical-analysis applications such as drug screening, because CMOS can integrate control electronics and electrochemical sensors. However, conventional CMOS cell manipulation requires considerable power consumption. In this work, by concatenating multiple unit circuits and recycling electric charge among them, power consumption is reduced by a factor of the number of concatenated unit circuits (1/N). To verify the effectiveness, a test chip was fabricated in a 0.6-µm CMOS process. The chip successfully manipulates magnetic microbeads while achieving a 49% power reduction (from 51 to 26.2 mW). Even considering the additional series resistance of the concatenated inductors, a nearly theoretical power reduction effect is confirmed.

  6. Designing an Experimental "Accident"

    ERIC Educational Resources Information Center

    Picker, Lester

    1974-01-01

    Describes an experimental "accident" that resulted in much student learning, seeks help in the identification of nematodes, and suggests biology teachers introduce similar accidents into their teaching to stimulate student interest. (PEB)

  7. Digital Filter Design Techniques.

    DTIC Science & Technology

    1988-03-01

    Covers digital filter design techniques, including the design method of McClellan and the minimum p-error IIR filter design method of Deczky.

  8. True Experimental Design.

    ERIC Educational Resources Information Center

    Huck, Schuyler W.

    1991-01-01

    This poem, with stanzas in limerick form, refers humorously to the many threats to validity posed by problems in research design, including problems of sample selection, data collection, and data analysis. (SLD)

  9. A case study integrating CBT with narrative therapy externalizing techniques with a child with OCD: How to flush away the Silly Gremlin. A single-case experimental design.

    PubMed

    Banting, Rosemary; Lloyd, Susannah

    2017-05-29

    Evidence exists for the use of cognitive behavioral therapy (CBT) combined with externalizing techniques from narrative therapy for pediatric obsessive compulsive disorder (OCD); however, no research gives a detailed account of what the externalizing process looks like in session or how it is incorporated into conceptualization. The literature is appraised with respect to the referral, assessment, formulation, intervention and outcome. The case describes a 10-year-old boy who was referred with severe OCD. The evidence-based CBT model for OCD in child and adolescent populations was applied to the case and integrated with the externalizing technique from narrative therapy. Using these models, a shared formulation of the difficulties was developed and a new narrative created. The intervention was assessed using a single-case experimental design. On all but one routine outcome measure, clinically significant positive changes were made, and the young person reached his therapeutic goals. Gains were maintained over a one-month follow-up period. The use of externalizing was an effective and developmentally appropriate intervention and is discussed further. The case highlighted the need for more research detailing externalizing processes. © 2017 Wiley Periodicals, Inc.

  10. Design approaches to experimental mediation☆

    PubMed Central

    Pirlott, Angela G.; MacKinnon, David P.

    2016-01-01

    Identifying causal mechanisms has become a cornerstone of experimental social psychology, and editors in top social psychology journals champion the use of mediation methods, particularly innovative ones when possible (e.g. Halberstadt, 2010, Smith, 2012). Commonly, studies in experimental social psychology randomly assign participants to levels of the independent variable and measure the mediating and dependent variables, and the mediator is assumed to causally affect the dependent variable. However, participants are not randomly assigned to levels of the mediating variable(s), i.e., the relationship between the mediating and dependent variables is correlational. Although researchers likely know that correlational studies pose a risk of confounding, this problem seems forgotten when thinking about experimental designs randomly assigning participants to levels of the independent variable and measuring the mediator (i.e., “measurement-of-mediation” designs). Experimentally manipulating the mediator provides an approach to solving these problems, yet these methods contain their own set of challenges (e.g., Bullock, Green, & Ha, 2010). We describe types of experimental manipulations targeting the mediator (manipulations demonstrating a causal effect of the mediator on the dependent variable and manipulations targeting the strength of the causal effect of the mediator) and types of experimental designs (double randomization, concurrent double randomization, and parallel), provide published examples of the designs, and discuss the strengths and challenges of each design. Therefore, the goals of this paper include providing a practical guide to manipulation-of-mediator designs in light of their challenges and encouraging researchers to use more rigorous approaches to mediation because manipulation-of-mediator designs strengthen the ability to infer causality of the mediating variable on the dependent variable. PMID:27570259
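    The confounding problem the authors describe is easy to demonstrate by simulation. The data-generating model below is hypothetical (not from the paper): a confounder U drives both the measured mediator M and the outcome Y, and M has no causal effect on Y at all.

```python
import random
random.seed(0)

N = 20000

def slope(ms, ys):
    """OLS slope of y regressed on m."""
    mb, yb = sum(ms) / len(ms), sum(ys) / len(ys)
    num = sum((m - mb) * (y - yb) for m, y in zip(ms, ys))
    return num / sum((m - mb) ** 2 for m in ms)

# measurement-of-mediation: M and Y share a confounder U; M does nothing
us = [random.gauss(0, 1) for _ in range(N)]
m_obs = [u + random.gauss(0, 1) for u in us]
y_obs = [u + random.gauss(0, 1) for u in us]
biased = slope(m_obs, y_obs)      # spurious "effect" of about 0.5

# manipulation-of-mediator: M randomly assigned, same (null) outcome model
m_rand = [random.choice([0.0, 1.0]) for _ in range(N)]
y_rand = [random.gauss(0, 1) for _ in range(N)]
unbiased = slope(m_rand, y_rand)  # near the true effect of 0

print(round(biased, 2), round(unbiased, 2))
```

Regressing Y on the merely measured mediator recovers the confound, not the causal effect; randomly assigning the mediator does not.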

  11. Experimental Techniques for Thermodynamic Measurements of Ceramics

    NASA Technical Reports Server (NTRS)

    Jacobson, Nathan S.; Putnam, Robert L.; Navrotsky, Alexandra

    1999-01-01

    Experimental techniques for thermodynamic measurements on ceramic materials are reviewed. For total molar quantities, calorimetry is used. Total enthalpies are determined with combustion calorimetry or solution calorimetry. Heat capacities and entropies are determined with drop calorimetry, differential thermal methods, and adiabatic calorimetry. Three major techniques for determining partial molar quantities are discussed: gas equilibration techniques, Knudsen cell methods, and electrochemical techniques. Throughout this report, issues unique to ceramics are emphasized. Ceramic materials encompass a wide range of stabilities, and this must be considered. In general, data at high temperatures are required, and the need for inert container materials presents a particular challenge.
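    As a concrete example of the partial molar techniques mentioned, the Knudsen cell method infers vapour pressure from the mass effusing through a small orifice via the standard working equation p = (Δm / (A·t))·√(2πRT/M). A minimal sketch, with hypothetical numbers:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def knudsen_pressure(mass_loss_kg, orifice_area_m2, time_s,
                     temp_K, molar_mass_kg_per_mol):
    """Vapour pressure from Knudsen effusion mass loss (ideal orifice)."""
    rate = mass_loss_kg / (orifice_area_m2 * time_s)  # kg m^-2 s^-1
    return rate * math.sqrt(2 * math.pi * R * temp_K / molar_mass_kg_per_mol)

# hypothetical run: 1 mg lost in 1 h through a 1 mm^2 orifice at 1500 K,
# effusing species with molar mass 60 g/mol
p = knudsen_pressure(1e-6, 1e-6, 3600.0, 1500.0, 0.060)
print(f"{p:.3f} Pa")
```

Real measurements also apply a Clausing correction for finite orifice geometry, omitted here for brevity.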

  12. New experimental techniques for solar cells

    NASA Technical Reports Server (NTRS)

    Lenk, R.

    1993-01-01

    Solar cell capacitance has special importance for an array controlled by shunting. Experimental measurements of solar cell capacitance in the past have shown disagreements of orders of magnitude. Correct measurement technique depends on maintaining the excitation voltage less than the thermal voltage. Two different experimental methods are shown to match theory well, and two effective capacitances are defined for quantifying the effect of the solar cell capacitance on the shunting system.

  13. Elements of Bayesian experimental design

    SciTech Connect

    Sivia, D.S.

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.
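    A common computational handle on the "inferential value" of a candidate design is the determinant of the information matrix (D-optimality), which under Gaussian noise tracks the expected information gain. The toy straight-line model below is illustrative only, not the resolution peak-shape problem the paper treats:

```python
# D-optimality comparison for the model y = a + b*x: the better design
# maximizes det(X^T X), i.e. the volume of information about (a, b).

def det_information(xs):
    """det of X^T X for the design matrix of y = a + b*x."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    return n * sxx - sx * sx

spread = [0.0, 1.0, 2.0, 3.0]    # measurement points spread over the range
clumped = [1.4, 1.5, 1.6, 1.7]   # same number of points, bunched together

print(det_information(spread) > det_information(clumped))  # -> True
```

Spreading the measurement points yields far more information about the slope, which is the intuition behind Bayesian and classical optimal design alike.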

  14. Involving students in experimental design: three approaches.

    PubMed

    McNeal, A P; Silverthorn, D U; Stratton, D B

    1998-12-01

    Many faculty want to involve students more actively in laboratories and in experimental design. However, just "turning them loose in the lab" is time-consuming and can be frustrating for both students and faculty. We describe three different ways of providing structures for labs that require students to design their own experiments but guide the choices. One approach emphasizes invertebrate preparations and classic techniques that students can learn fairly easily. Students must read relevant primary literature and learn each technique in one week, and then design and carry out their own experiments in the next week. Another approach provides a "design framework" for the experiments so that all students are using the same technique and the same statistical comparisons, whereas their experimental questions differ widely. The third approach involves assigning the questions or problems but challenging students to design good protocols to answer these questions. In each case, there is a mixture of structure and freedom that works for the level of the students, the resources available, and our particular aims.

  15. Graphical Models for Quasi-Experimental Designs

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter M.; Hall, Courtney E.; Su, Dan

    2016-01-01

    Experimental and quasi-experimental designs play a central role in estimating cause-effect relationships in education, psychology, and many other fields of the social and behavioral sciences. This paper presents and discusses the causal graphs of experimental and quasi-experimental designs. For quasi-experimental designs the authors demonstrate…

  16. The novel chamber hardware design to improve the thin film deposition quality in both 12″ (300 mm) and 18″ (450 mm) wafers with the development of 3D full chamber modeling and experimental visual technique

    NASA Astrophysics Data System (ADS)

    Liao, M.-H.; Chen, C.-H.

    2013-07-01

    The thin film deposition quality and the process differences arising during the wafer size migration from 12″ (300 mm) to 18″ (450 mm) in Chemical Vapor Deposition (CVD) equipment are improved and reduced, respectively, when the chamber hardware is designed with the help of the 3D full chamber model and the 3D experimental visual technique developed in this work. The accuracy of the 3D chamber simulation model is demonstrated against experimental visual technique measurements. With a CVD chamber hardware design that places the inlet position and optimizes the distance between the susceptor edge and the reactor wall appropriately, better thin film deposition properties and larger process compatibility during the wafer size migration from 12″ (300 mm) to 18″ (450 mm) can be achieved for industry cost reduction. The non-dimensional Nusselt number is also found to be an effective indicator for monitoring thin film deposition quality.

  17. Statistical problems in design technique validation

    SciTech Connect

    Cohen, J.S.

    1980-04-01

    This work is concerned with the statistical validation process for measuring the accuracy of design techniques for solar energy systems. This includes a discussion of the statistical variability inherent in the design and measurement processes and the way in which this variability can dictate the choice of experimental design, choice of data, accuracy of the results, and choice of questions that can be reliably answered in such a study. The approach here is primarily concerned with design procedure validation in the context of the realistic process of system design, where the discrepancy between measured and predicted results is due to limitations in the mathematical models employed by the procedures and inaccuracies in the input data. A set of guidelines for successful validation methodologies is discussed, and a simplified validation methodology for domestic hot water heaters is presented.

  18. Animal husbandry and experimental design.

    PubMed

    Nevalainen, Timo

    2014-01-01

    If the scientist needs to contact the animal facility after any study to inquire about husbandry details, this represents a lost opportunity, which can ultimately interfere with the study results and their interpretation. There is a clear tendency for authors to describe methodological procedures down to the smallest detail, but at the same time to provide minimal information on animals and their husbandry. Controlling all major variables as far as possible is the key issue when establishing an experimental design. The other common mechanism affecting study results is a change in the variation. Factors causing bias or variation changes are also detectable within husbandry. Our lives and the lives of animals are governed by cycles: the seasons, the reproductive cycle, the weekend-working days, the cage change/room sanitation cycle, and the diurnal rhythm. Some of these may be attributable to routine husbandry, and the rest are cycles, which may be affected by husbandry procedures. Other issues to be considered are consequences of in-house transport, restrictions caused by caging, randomization of cage location, the physical environment inside the cage, the acoustic environment audible to animals, olfactory environment, materials in the cage, cage complexity, feeding regimens, kinship, and humans. Laboratory animal husbandry issues are an integral but underappreciated part of investigators' experimental design, which if ignored can cause major interference with the results. All researchers should familiarize themselves with the current routine animal care of the facility serving them, including their capabilities for the monitoring of biological and physicochemical environment.

  19. Quasi-Experimental Designs for Causal Inference

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter

    2016-01-01

    When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…

  1. Aerodynamic prediction techniques for hypersonic configuration design

    NASA Technical Reports Server (NTRS)

    1981-01-01

    An investigation of approximate theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds was performed. Emphasis was placed on approaches that would be responsive to preliminary configuration design level of effort. Potential theory was examined in detail to meet this objective. Numerical pilot codes were developed for relatively simple three dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with higher order solutions and experimental results for a variety of wing, body, and wing-body shapes for values of the hypersonic similarity parameter M delta approaching one.

  2. Minimisation of instrumental noise in the acquisition of FT-NIR spectra of bread wheat using experimental design and signal processing techniques.

    PubMed

    Foca, G; Ferrari, C; Sinelli, N; Mariotti, M; Lucisano, M; Caramanico, R; Ulrici, A

    2011-02-01

    Spectral resolution (R) and number of repeated scans (S) have a significant effect on the S/N ratio of Fourier transform-near infrared (FT-NIR) spectra, but the optimal values of these two parameters have to be determined empirically for a specific problem, considering separately both the nature of the analysed matrix and the specific instrumental setup. To achieve this aim, the instrumental noise of replicated FT-NIR spectra of wheat samples was modelled as a function of R and S by means of a Doehlert design. The noise levels corresponding to the different experimental conditions were estimated by analysing the variance signals derived from replicate measurements with two different signal processing tools, Savitzky-Golay (SG) filtering and the fast wavelet transform (FWT), in order to separate the "pure" instrumental noise from other variability sources, which are essentially connected to sample inhomogeneity. Results confirmed that the R and S values leading to minimum instrumental noise can vary considerably depending on the type of analysed food matrix and on the instrumental setup, and helped in the selection of the optimal measuring conditions for the subsequent acquisition of a wide spectral dataset.
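    For reference, the two-factor Doehlert layout used here is a hexagon plus a centre point (seven runs, five levels for one factor and three for the other). The `scale` helper and the example centres and steps below are illustrative assumptions, not the paper's actual R and S levels:

```python
import math

def doehlert_2factor():
    """Coded coordinates of the two-factor Doehlert design (7 runs)."""
    h = math.sqrt(3) / 2  # 0.866: step of the second factor
    return [(0.0, 0.0), (1.0, 0.0), (-1.0, 0.0),
            (0.5, h), (-0.5, h), (0.5, -h), (-0.5, -h)]

def scale(points, centers, steps):
    """Map coded points onto real factor ranges (assumed, illustrative)."""
    return [tuple(c + x * s for x, c, s in zip(p, centers, steps))
            for p in points]

# e.g. resolution centred on 8 cm-1 (step 4), scans centred on 32 (step 16)
runs = scale(doehlert_2factor(), centers=(8.0, 32.0), steps=(4.0, 16.0))
print(len(runs))  # -> 7
```

Each experimental condition in the noise model corresponds to one of these seven (R, S) runs, plus whatever replicates are measured at each point.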

  3. Orbit determination error analysis and comparison of station-keeping costs for Lissajous and halo-type libration point orbits and sensitivity analysis using experimental design techniques

    NASA Technical Reports Server (NTRS)

    Gordon, Steven C.

    1993-01-01

    Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and future L1 orbital missions are also being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions, there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce varying error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for these orbits may differ. For this spacecraft tracking and control simulation problem, experimental design methods can also be used to determine the most significant uncertainties. That is, these methods can identify the error sources in the tracking and control problem that most impact the control cost (output); they also produce an equation that gives the approximate functional relationship between the error inputs and the output.
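    The screening idea in the closing sentences can be sketched with a generic two-level full factorial and main-effect estimates. The `cost` model below is invented for illustration (the real study drives an orbit tracking-and-control simulation):

```python
from itertools import product

# Two-level full factorial screening: run the simulator at every +/-1
# combination of the error sources, then estimate each source's main
# effect (mean response at +1 minus mean response at -1) on the output.

def main_effects(factors, response):
    designs = list(product([-1, 1], repeat=len(factors)))
    ys = [response(dict(zip(factors, row))) for row in designs]
    effects = {}
    for i, f in enumerate(factors):
        hi = [y for row, y in zip(designs, ys) if row[i] == 1]
        lo = [y for row, y in zip(designs, ys) if row[i] == -1]
        effects[f] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

# hypothetical cost model: solar pressure dominates, planetary mass barely matters
cost = lambda e: 10 * e["solar_pressure"] + 0.1 * e["planet_mass"] + 5
fx = main_effects(["solar_pressure", "planet_mass"], cost)
print(fx)
```

Ranking the effect magnitudes identifies the dominant uncertainty, and the effects themselves give the approximate linear input-output equation the abstract mentions.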

  4. Design for reliability of BEoL and 3-D TSV structures – A joint effort of FEA and innovative experimental techniques

    SciTech Connect

    Auersperg, Jürgen; Vogel, Dietmar; Auerswald, Ellen; Rzepka, Sven; Michel, Bernd

    2014-06-19

    Copper-TSVs for 3D-IC-integration generate novel challenges for reliability analysis and prediction, e.g. the need to master multiple failure criteria for combined loading including residual stress, interface delamination, cracking and fatigue issues. The thermal expansion mismatch between copper and silicon leads to a stress situation in the silicon surrounding the TSVs which influences the electron mobility and, as a result, the transient behavior of transistors. Furthermore, pumping and protrusion of copper is a challenge for Back-end of Line (BEoL) layers of advanced CMOS technologies already during manufacturing. These effects depend highly on the temperature dependent elastic-plastic behavior of the TSV-copper and the residual stresses determined by the electro-deposition chemistry and annealing conditions. This is why the authors pursued combined simulative/experimental approaches to extract the Young’s modulus, initial yield stress and hardening coefficients of copper-TSVs from nanoindentation experiments, as well as the temperature dependent initial yield stress and hardening coefficients from bow measurements of electroplated thin copper films on silicon under thermal cycling conditions. A FIB trench technique combined with digital image correlation is furthermore used to capture the residual stress state near the surface of TSVs. The extracted properties are discussed and used to investigate the pumping and protrusion of copper-TSVs during thermal cycling. Moreover, the cracking and delamination risks caused by the elevated temperature variation during BEoL ILD deposition are investigated with the help of fracture mechanics approaches.

  5. Control design for the SERC experimental testbeds

    NASA Technical Reports Server (NTRS)

    Jacques, Robert; Blackwood, Gary; Macmartin, Douglas G.; How, Jonathan; Anderson, Eric

    1992-01-01

    Viewgraphs on control design for the Space Engineering Research Center experimental testbeds are presented. Topics covered include: SISO control design and results; sensor and actuator location; model identification; control design; experimental results; preliminary LAC experimental results; active vibration isolation problem statement; base flexibility coupling into isolation feedback loop; cantilever beam testbed; and closed loop results.

  6. Optimisation of supercritical fluid extraction of indole alkaloids from Catharanthus roseus using experimental design methodology--comparison with other extraction techniques.

    PubMed

    Verma, Arvind; Hartonen, Kari; Riekkola, Marja-Liisa

    2008-01-01

    Response surface modelling, using MODDE 6 software for Design of Experiments and Optimisation, was applied to optimise supercritical fluid extraction (SFE) conditions for the extraction of indole alkaloids from the dried leaves of Catharanthus roseus. The effects of pressure (200-400 bar), temperature (40-80 degrees C), modifier concentration (2.2-6.6 vol%) and dynamic extraction time (20-60 min) on the yield of alkaloids were evaluated. The extracts were analysed by high-performance liquid chromatography and the analytes were identified using ion trap-electrospray ionisation-mass spectrometry. The method was linear for alkaloid concentration in the range 0.18-31 microg/mL. The limits of detection and quantification for catharanthine, vindoline, vinblastine and vincristine were 0.2, 0.15, 0.1 and 0.08 microg/mL and 2.7, 2.0, 1.3 and 1.1 microg/g, respectively. The dry weight contents of the major alkaloids in the plant material were compared using different extraction methods, i.e. SFE, Soxhlet extraction, solid-liquid extraction with sonication and hot water extraction at various temperatures. The extraction techniques were also compared in terms of reproducibility, selectivity and analyte recoveries. Relative standard deviations for the major alkaloids varied from 4.1 to 17.5% across the extraction methods. The best recoveries (100%) for catharanthine were obtained by SFE at 250 bar and 80 degrees C using 6.6 vol% methanol as modifier for 40 min, for vindoline by Soxhlet extraction using dichloromethane in a reflux for 16 h, and for 3',4'-anhydrovinblastine by solid-liquid extraction using a solution of 0.5 M sulphuric acid and methanol (3:1 v/v) in an ultrasonic bath for 3 h.

  7. Sequential experimental design based generalised ANOVA

    SciTech Connect

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-15

    Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.
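    The sequential idea can be sketched with a simple density-weighted maximin rule in one dimension. This is not the authors' DA-SED algorithm, only an illustration of its core move: adding training points one at a time where the current design is sparsest, with candidates drawn from the input distribution so high-probability regions receive more runs.

```python
import random
random.seed(7)

def sequential_design(sample_input, n_points, n_candidates=200):
    """Grow a 1-D design one point at a time (density-weighted maximin)."""
    design = [sample_input()]
    while len(design) < n_points:
        cands = [sample_input() for _ in range(n_candidates)]
        # pick the candidate farthest from its nearest existing design point
        best = max(cands, key=lambda c: min(abs(c - d) for d in design))
        design.append(best)
    return design

# input distribution assumed standard normal for this sketch
pts = sequential_design(lambda: random.gauss(0.0, 1.0), n_points=10)
print(len(pts))  # -> 10
```

A surrogate (e.g. the G-ANOVA expansion of the paper) would be refit after each added point, which is what makes the design "sequential" rather than fixed in advance.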

  9. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
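    Of the listed methods, the two-level screening families (fractional factorial, Plackett-Burman) can be generated from Hadamard matrices. A minimal sketch using the Sylvester construction, which covers power-of-two run counts only (real Plackett-Burman designs also cover other multiples of four):

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = [[1]]
    while len(H) < n:
        # [[H, H], [H, -H]] doubling step
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def screening_design(n_runs, n_factors):
    """Two-level screening design: drop the all-ones column, keep n_factors."""
    H = hadamard(n_runs)
    return [row[1:1 + n_factors] for row in H]

design = screening_design(8, 5)  # 8 runs, 5 two-level factors
print(len(design), len(design[0]))  # -> 8 5
```

Each row is one run; the orthogonal, balanced columns (equal numbers of +1 and -1 levels) are what make main effects estimable from so few runs.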

  10. Experimental Design and Some Threats to Experimental Validity: A Primer

    ERIC Educational Resources Information Center

    Skidmore, Susan

    2008-01-01

    Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…

  11. CMOS array design automation techniques

    NASA Technical Reports Server (NTRS)

    Lombardi, T.; Feller, A.

    1976-01-01

    The design considerations and the circuit development for a 4096-bit CMOS SOS ROM chip, the ATL078 are described. Organization of the ATL078 is 512 words by 8 bits. The ROM was designed to be programmable either at the metal mask level or by a directed laser beam after processing. The development of a 4K CMOS SOS ROM fills a void left by available ROM chip types, and makes the design of a totally major high speed system more realizable.

  12. Winglet design using multidisciplinary design optimization techniques

    NASA Astrophysics Data System (ADS)

    Elham, Ali; van Tooren, Michel J. L.

    2014-10-01

    A quasi-three-dimensional aerodynamic solver is integrated with a semi-analytical structural weight estimation method inside a multidisciplinary design optimization framework to design and optimize a winglet for a passenger aircraft. The winglet is optimized for minimum drag and minimum structural weight. The Pareto front between those two objective functions is found applying a genetic algorithm. The aircraft minimum take-off weight and the aircraft minimum direct operating cost are used to select the best winglets among those on the Pareto front.

  13. Optimizing Experimental Designs: Finding Hidden Treasure.

    USDA-ARS?s Scientific Manuscript database

    Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...
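    The blocking approach the abstract refers to can be sketched as a randomized complete block design (RCBD): every treatment appears once per block, in a fresh random order within each block. Treatment names below are placeholders.

```python
import random
random.seed(42)

def rcbd(treatments, n_blocks):
    """Randomized complete block design: one replicate of each
    treatment per block, independently randomized within each block."""
    layout = []
    for _ in range(n_blocks):
        order = list(treatments)
        random.shuffle(order)  # fresh randomization inside this block
        layout.append(order)
    return layout

plan = rcbd(["A", "B", "C", "D"], n_blocks=3)
for block in plan:
    print(block)
```

Blocks are chosen to coincide with the spatial gradient (e.g. one block per field strip), so block-to-block variation is removed from the treatment comparison.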

  14. Graphical Models for Quasi-Experimental Designs

    ERIC Educational Resources Information Center

    Steiner, Peter M.; Kim, Yongnam; Hall, Courtney E.; Su, Dan

    2017-01-01

    Randomized controlled trials (RCTs) and quasi-experimental designs like regression discontinuity (RD) designs, instrumental variable (IV) designs, and matching and propensity score (PS) designs are frequently used for inferring causal effects. It is well known that the features of these designs facilitate the identification of a causal estimand…

  15. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between the computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in terms of the number of function evaluations required. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as search variables, the system geometry is modeled in terms of truncated series of orthogonal space-functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique reduces the number of search variables considerably and has a potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
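    A minimal sketch of the approach, assuming an invented stand-in for the eigenfrequency computation: simulated annealing searches over a few amplitude coefficients (as it would over Fourier-series coefficients of the shape) so that the model's modal frequencies approach a target set.

```python
import math, random
random.seed(1)

def freqs(coeffs):
    """Toy stand-in for an eigenfrequency computation (illustrative only)."""
    return [100.0 * (k + 1) * (1.0 + 0.1 * c) for k, c in enumerate(coeffs)]

def error(coeffs, target):
    """Squared mismatch between computed and target modal frequencies."""
    return sum((f - t) ** 2 for f, t in zip(freqs(coeffs), target))

def anneal(target, n_coeffs=3, steps=20000, temp=1.0, cool=0.9995):
    x = [0.0] * n_coeffs
    e = error(x, target)
    for _ in range(steps):
        cand = [c + random.gauss(0, 0.05) for c in x]  # perturb coefficients
        ce = error(cand, target)
        # accept improvements always, worse moves with Boltzmann probability
        if ce < e or random.random() < math.exp(-(ce - e) / temp):
            x, e = cand, ce
        temp *= cool
    return x, e

best, best_err = anneal([105.0, 210.0, 285.0])
print(len(best), best_err < 50.0)
```

Optimizing a handful of series coefficients instead of every local geometric parameter is what keeps the number of annealing variables, and hence function evaluations, manageable.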

  16. Telecommunications Systems Design Techniques Handbook

    NASA Technical Reports Server (NTRS)

    Edelson, R. E. (Editor)

    1972-01-01

    The Deep Space Network (DSN) increasingly supports deep space missions sponsored and managed by organizations without long experience in DSN design and operation. The document is intended as a textbook for those DSN users inexperienced in the design and specification of a DSN-compatible spacecraft telecommunications system. For experienced DSN users, the document provides a reference source of telecommunication information which summarizes knowledge previously available only in a multitude of sources. Extensive references are quoted for those who wish to explore specific areas more deeply.

  17. Experimental Investigation of Centrifugal Compressor Stabilization Techniques

    NASA Technical Reports Server (NTRS)

    Skoch, Gary J.

    2003-01-01

    Results from a series of experiments to investigate techniques for extending the stable flow range of a centrifugal compressor are reported. The research was conducted in a high-speed centrifugal compressor at the NASA Glenn Research Center. The stabilizing effect of steadily flowing air-streams injected into the vaneless region of a vane-island diffuser through the shroud surface is described. Parametric variations of injection angle, injection flow rate, number of injectors, injector spacing, and injection versus bleed were investigated for a range of impeller speeds and tip clearances. Both the compressor discharge and an external source were used for the injection air supply. The stabilizing effect of flow obstructions created by tubes that were inserted into the diffuser vaneless space through the shroud was also investigated. Tube immersion into the vaneless space was varied in the flow obstruction experiments. Results from testing done at impeller design speed and tip clearance are presented. Surge margin improved by 1.7 points using injection air that was supplied from within the compressor. Externally supplied injection air was used to return the compressor to stable operation after being throttled into surge. The tubes, which were capped to prevent mass flux, provided 9.3 points of additional surge margin over the baseline surge margin of 11.7 points.

  18. Experimental Design for the Evaluation of Detection Techniques of Hidden Corrosion Beneath the Thermal Protective System of the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Kemmerer, Catherine C.; Jacoby, Joseph A.; Lomness, Janice K.; Hintze, Paul E.; Russell, Richard W.

    2007-01-01

    The detection of corrosion beneath the Space Shuttle Orbiter thermal protective system is traditionally accomplished by removing the Reusable Surface Insulation tiles and performing a visual inspection of the aluminum substrate and corrosion protection system. This process is time consuming and has the potential to damage high-cost tiles. To evaluate non-intrusive NDE methods, a Proof of Concept (PoC) experiment was designed and test panels were manufactured. The objective of the test plan was three-fold: establish the ability to detect corrosion hidden from view by tiles; determine the key factors affecting detectability; and roughly quantify the detection threshold. The plan consisted of artificially inducing dimensionally controlled corrosion spots in two panels and rebonding tile over the spots to model the thermal protective system of the orbiter. The corrosion spot diameters ranged from 0.100" to 0.600" and the depths ranged from 0.003" to 0.020". One panel consisted of a complete factorial array of corrosion spots with and without tile coverage. The second panel consisted of randomized factorial points, replicated and hidden by tile. Conventional methods such as ultrasonics, infrared, eddy current and microwave methods have shortcomings: ultrasonics and IR cannot sufficiently penetrate the tiles, while eddy current and microwaves have inadequate resolution. As such, the panels were interrogated using Backscatter Radiography and Terahertz Imaging. The terahertz system successfully detected artificially induced corrosion spots under orbiter tile, and functional testing is in work in preparation for implementation.

  19. Experimental Design for the LATOR Mission

    NASA Technical Reports Server (NTRS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.

    2004-01-01

    This paper discusses experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach unprecedented accuracy of 1 part in 10(exp 8) in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (proportional to G(exp 2)) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J(sub 2), and will improve determination of a variety of relativistic effects, including Lense-Thirring precession. The mission will benefit from recent progress in optical communication technologies, the immediate and natural step beyond standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in physical law. There are no analogs to the LATOR experiment; it is unique and is a natural culmination of solar system gravity experiments.

  20. OSHA and Experimental Safety Design.

    ERIC Educational Resources Information Center

    Sichak, Stephen, Jr.

    1983-01-01

    Suggests that a governmental agency, most likely the Occupational Safety and Health Administration (OSHA), be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses such topics as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and the role/regulations of other agencies.…

  2. Experimental Comparison of Efficacy for Three Handfeeding Techniques in Dementia.

    PubMed

    Batchelor-Murphy, Melissa K; McConnell, Eleanor S; Amella, Elaine J; Anderson, Ruth A; Bales, Connie W; Silva, Susan; Barnes, Angel; Beck, Cornelia; Colon-Emeric, Cathleen S

    2017-04-01

    Nursing home (NH) residents who require assistance during mealtimes are at risk for malnutrition. Supportive handfeeding is recommended, yet there is limited evidence supporting use of a specific handfeeding technique to increase meal intake. To compare efficacy of three handfeeding techniques for assisting NH residents with dementia with meals: Direct Hand (DH), Over Hand (OH), and Under Hand (UH). A prospective pilot study using a within-subjects experimental Latin square design with randomization to one of three handfeeding technique sequences. 30 residents living with advanced dementia in 11 U.S. NHs. Time required for assistance; meal intake (% eaten); and feeding behaviors, measured by the Edinburgh Feeding Evaluation in Dementia (EdFED) scale. Research assistants provided feeding assistance for 18 video-recorded meals per resident (N = 540 meals). Residents were assisted with one designated technique for 6 consecutive meals, changing technique every 2 days. Mean time spent providing meal assistance did not differ significantly between techniques. Mean meal intake was greater for DH (67 ± 15.2%) and UH (65 ± 15.0%), both significantly greater than OH (60 ± 15.1%). Feeding behaviors were more frequent with OH (8.3 ± 1.8), relative to DH (8.0 ± 1.8) and UH (7.7 ± 1.8). All three techniques are time neutral. UH and DH are viable options to increase meal intake among NH residents with advanced dementia and to reduce feeding behaviors relative to OH feeding. © 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.
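The within-subjects Latin square design described above can be sketched as follows. The technique labels come from the abstract; the cyclic-square construction and the resident assignment are illustrative assumptions, not the study's actual randomization code.

```python
# Minimal sketch of a within-subjects Latin square for ordering the three
# handfeeding techniques (DH, OH, UH per the abstract). Each row is one
# sequence; every technique appears once per row and once per column, so
# order effects are balanced across sequences.
import random

techniques = ["DH", "OH", "UH"]

def latin_square(items):
    """Cyclic Latin square: row i is the item list rotated by i."""
    n = len(items)
    return [[items[(i + j) % n] for j in range(n)] for i in range(n)]

square = latin_square(techniques)        # three technique sequences

# Illustrative assignment: each resident is randomized to one of the three
# sequences, then assisted with each technique for 6 consecutive meals.
rng = random.Random(42)
residents = {f"resident_{r}": rng.choice(square) for r in range(30)}
```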

  3. Experimental Design: Review and Comment.

    DTIC Science & Technology

    1984-02-01

    …early work in the subject was done by Wald (1943), Hotelling (1944), and Elfving (1952). The major contributions to the area, however, were made by Kiefer (1958, 1959) and Kiefer and Wolfowitz (1959, 1960), who synthesized and greatly extended the previous work. Although the ideas of optimal… design theory is the general equivalence theorem (Kiefer and Wolfowitz 1960), which links D- and G-optimality. The theorem is phrased in terms of…

  4. Light Experimental Supercruiser Conceptual Design

    DTIC Science & Technology

    1976-07-01

    Contract F33615-75-C-3150. Wind tunnel tests were conducted under a Boeing independent research program to verify performance results. The Light… Two-Dimensional Airframe Integrated Exhaust System. The air vehicle design is based on NASA SCAT 15 arrow wing research and is powered by a modified…

  5. Eigenspace design techniques for active flutter suppression

    NASA Technical Reports Server (NTRS)

    Garrard, W. L.; Liebst, B. S.

    1984-01-01

    The application of eigenspace design techniques to an active flutter suppression system for the DAST ARW-2 research drone is examined. Eigenspace design techniques allow the control system designer to determine feedback gains which place controllable eigenvalues in specified configurations and which shape eigenvectors to achieve desired dynamic response. Eigenspace techniques have been applied to the control of lateral and longitudinal dynamic response of aircraft; however, little has been published on their application to aeroelastic control problems. This discussion focuses primarily on methodology for the design of full-state and limited-state (output) feedback controllers. Most of the states in aeroelastic control problems are not directly measurable, and some type of dynamic compensator is necessary to convert sensor outputs to control inputs. Compensator design is accomplished by use of a Kalman filter, modified if necessary by the Doyle-Stein procedure for full-state loop transfer function recovery, by some other type of observer, or by transfer function matching.

  6. Experimental Design For Photoresist Characterization

    NASA Astrophysics Data System (ADS)

    Luckock, Larry

    1987-04-01

    In processing a semiconductor product (from discrete devices up to the most complex products produced) we find more photolithographic steps in wafer fabrication than any other kind of process step. Thus, the success of a semiconductor manufacturer hinges on the optimization of its photolithographic processes. Yet we find few companies that have taken the time to properly characterize this critical operation; they are sitting in the "passenger's seat," waiting to see what will come out, hoping that the yields will improve someday. There is no "black magic" involved in setting up a process at its optimum conditions (i.e., minimum sensitivity to all variables at the same time). This paper gives an example of a real-world situation for optimizing a photolithographic process by the use of a properly designed experiment, followed by adequate multidimensional analysis of the data. Basic SPC practices like plotting control charts will not, by themselves, improve yields; the control charts are, however, among the necessary tools used in the determination of the process capability and in the formulation of the problems to be addressed. The example we shall consider is the twofold objective of shifting the process average, while tightening the variance, of polysilicon line widths. This goal was identified from a Pareto analysis of yield-limiting mechanisms, plus inspection of the control charts. A key issue in a characterization of this type of process is the number of interactions between variables; this example rules out two-level full factorial and three-level fractional factorial designs (which cannot detect all of the interactions). We arrive at an experiment with five factors at five levels each. A full factorial design for five factors at five levels would require 3125 wafers. Instead, we will use a design that allows us to run this experiment with only 25 wafers, for a significant reduction in time, materials, and manufacturing interruption in order to complete the…
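A 25-run design for five factors at five levels is consistent with a strength-2 orthogonal array built from a Graeco-Latin-square construction over Z5. The sketch below builds such an array and checks its pairwise balance; it is one standard way to realize a 25-run design of this kind, not necessarily the array the author used.

```python
# 25-run design for five factors at five levels: a strength-2 orthogonal
# array over Z5. Rows are indexed by (i, j); factor columns are the linear
# forms i, j, i+j, i+2j, i+3j (mod 5), which are pairwise independent.
from itertools import combinations

runs = []
for i in range(5):
    for j in range(5):
        runs.append((i, j, (i + j) % 5, (i + 2 * j) % 5, (i + 3 * j) % 5))

# Strength 2: every pair of factors takes all 25 level combinations exactly
# once, so main effects stay unconfounded with only 25 runs instead of 3125.
for a, b in combinations(range(5), 2):
    assert len({(run[a], run[b]) for run in runs}) == 25
```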

  7. New Theoretical Technique for Alloy Design

    NASA Technical Reports Server (NTRS)

    Ferrante, John

    2005-01-01

    During the last 2 years, there has been a breakthrough in alloy design at the NASA Lewis Research Center. A new semi-empirical theoretical technique for alloys, the BFS Theory (Bozzolo, Ferrante, and Smith), has been used to design alloys on a computer. BFS was used, along with Monte Carlo techniques, to predict the phases of ternary alloys of NiAl with Ti or Cr additions. High concentrations of each additive were used to demonstrate the resulting structures.

  8. Experimental Techniques Applicable to Turbulent Flows.

    DTIC Science & Technology

    1977-01-01

    …dent laser and Stokes radiation respectively; e the electron charge, R the load resistance, E the energy of the scattered… measurements of methane using the spontaneous Raman effect… "Scattering of a Laser Beam", AIAA J. 9, 1971; PIBAL Rep. No. 69-46, Nov.… developed Laser Raman and Laser Doppler techniques may be ideally… several species of interest in a flame, their individual temperatures as well as…

  9. Techniques in Experimental Mechanics Applicable to Forest Products Research

    Treesearch

    Leslie H. Groom; Audrey G. Zink

    1994-01-01

    The title of this publication-Techniques in Experimental Mechanics Applicable to Forest Products Research-is the theme of this plenary session from the 1994 Annual Meeting of the Forest Products Society (FPS). Although this session focused on experimental techniques that can be of assistance to researchers in the field of forest products, it is hoped that the...

  10. Experimental and numerical techniques to assess catalysis

    NASA Astrophysics Data System (ADS)

    Herdrich, G.; Fertig, M.; Petkow, D.; Steinbeck, A.; Fasoulas, S.

    2012-01-01

    Catalytic heating can be a significant portion of the thermal load experienced by a body during re-entry. Under the auspices of the NATO Research and Technology Organisation Applied Vehicle Technologies Panel Task Group AVT-136, an assessment of the current state of the art in the experimental characterization and numerical simulation of catalysis on high-temperature material surfaces has been conducted. This paper gives an extract of the final report for this effort, showing the facilities and capabilities worldwide used to assess catalysis data. A corresponding summary of the modeling activities is referenced in this article.

  11. GAS CURTAIN EXPERIMENTAL TECHNIQUE AND ANALYSIS METHODOLOGIES

    SciTech Connect

    J. R. KAMM; ET AL

    2001-01-01

    The qualitative and quantitative relationship of numerical simulation to the physical phenomena being modeled is of paramount importance in computational physics. If the phenomena are dominated by irregular (i. e., nonsmooth or disordered) behavior, then pointwise comparisons cannot be made and statistical measures are required. The problem we consider is the gas curtain Richtmyer-Meshkov (RM) instability experiments of Rightley et al. (13), which exhibit complicated, disordered motion. We examine four spectral analysis methods for quantifying the experimental data and computed results: Fourier analysis, structure functions, fractal analysis, and continuous wavelet transforms. We investigate the applicability of these methods for quantifying the details of fluid mixing.

  12. GAS CURTAIN EXPERIMENTAL TECHNIQUE AND ANALYSIS METHODOLOGIES.

    SciTech Connect

    Kamm, J. R.; Rider, William; Rightley, P. M.; Prestridge, K. P.; Benjamin, R. F.; Vorobieff, P. V.

    2001-01-01

    The qualitative and quantitative relationship of numerical simulation to the physical phenomena being modeled is of paramount importance in computational physics. If the phenomena are dominated by irregular (i.e., nonsmooth or disordered) behavior, then pointwise comparisons cannot be made and statistical measures are required. The problem we consider is the gas curtain Richtmyer-Meshkov (RM) instability experiments of Rightley et al. [13], which exhibit complicated, disordered motion. We examine four spectral analysis methods for quantifying the experimental data and computed results: Fourier analysis, structure functions, fractal analysis, and continuous wavelet transforms. We investigate the applicability of these methods for quantifying the details of fluid mixing.
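One of the four measures named above, the second-order structure function, is easy to sketch on a 1-D periodic signal. For a unit-amplitude sine sampled over one period the result is known in closed form, S2(r) = 1 - cos(2*pi*r/N), which gives the sketch a built-in correctness check; the choice of signal is an illustration, not the gas-curtain data.

```python
# Sketch of the second-order structure function
#   S2(r) = < (f(x + r) - f(x))^2 >
# for a 1-D periodic signal, one of the four statistical measures used to
# compare disordered experimental and computed mixing fields.
import numpy as np

def structure_function(f, r):
    """Second-order structure function at integer lag r (periodic data)."""
    return np.mean((np.roll(f, -r) - f) ** 2)

N = 256
k = np.arange(N)
f = np.sin(2 * np.pi * k / N)

# For a unit sine over one full period, S2(r) = 1 - cos(2*pi*r/N) exactly.
for r in (1, 8, 64):
    expected = 1.0 - np.cos(2 * np.pi * r / N)
    assert abs(structure_function(f, r) - expected) < 1e-12
```

On real mixing data the scaling of S2 with lag, rather than its absolute value, is what characterizes the disorder.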

  13. Atmospheric Chemistry: Fundamentals and Experimental Techniques

    NASA Astrophysics Data System (ADS)

    Ronneau, C.

    This book by B. J. Finlayson-Pitts and J. N. Pitts appears at a time of remarkable achievement in atmospheric chemistry. This relatively new discipline was given its first impetus in the 1950s, when Haagen-Smit and his coworkers published their now-classical papers about the Los Angeles photochemical smog. In less than 3 decades, atmospheric chemistry has matured and is now able to cope with the major challenges of air pollution. It has grown into an elaborate science, covering a broad range of experimental and theoretical approaches.

  14. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983

  15. Experimental model of intravitreal injection techniques.

    PubMed

    Hubschman, Jean-Pierre; Coffee, Robert E; Bourges, Jean-Louis; Yu, Fei; Schwartz, Steven D

    2010-01-01

    To evaluate the amount of drug reflux and vitreous leakage from the needle tract after various intravitreal (IVT) injection techniques in porcine cadaver eyes. The reflux after IVT injection was quantified by methylene blue injection through the pars plana of fresh pig eyes (0.05 mL per eye, n = 150) and the vitreous incarceration measured after balanced salt solution (BSS) IVT injection (0.05 mL per eye, n = 150) into eyes with vitreous previously stained with methylene blue. Blue spots observed on the ocular surface after injection quantified both reflux and vitreous incarceration. We tested different needle sizes (27, 30, and 32 gauge) and different techniques (depth and speed of injection). We used an ocular endoscope to observe the flow and diffusion of injected methylene blue and the vitreous incarceration at the puncture site after IVT injection using the different techniques. Thirty-gauge needles showed less drug reflux than the 32-gauge or 27-gauge needles (P < 0.01). Thirty-two-gauge needles demonstrated less incarceration of vitreous at the tract site (P < 0.01), but with the endoscope, all needle tracts showed vitreous incarceration at their internal aspect. Deep IVT injection showed less reflux than superficial IVT injection, but vitreous incarceration did not differ. The delay between the scleral puncture and the injection did not modify the reflux or the vitreous incarceration. Thirty-gauge needles and deep placement of the needle tip into the vitreous before injection may reduce reflux and vitreous incarceration. This could maximize the therapeutic effect of IVT injection and may decrease the rates of severe complications such as retinal detachment and endophthalmitis.

  16. Practical Techniques for Language Design and Prototyping

    DTIC Science & Technology

    2005-01-01

    CIAO, a Calculus of Imperative Active Objects, a core language for concurrent object-oriented programming. It is especially designed to allow the representation of practically relevant sublanguages of common object…

  17. Digital imaging techniques in experimental stress analysis

    NASA Technical Reports Server (NTRS)

    Peters, W. H.; Ranson, W. F.

    1982-01-01

    Digital imaging techniques are utilized as a measure of surface displacement components in laser speckle metrology. An image scanner, interfaced to a computer, records and stores in memory the laser speckle patterns of an object in a reference and a deformed configuration. Subsets of the deformed images are numerically correlated with the references as a measure of surface displacements. Discrete values are determined around a closed contour for plane problems, which then become input into a boundary integral equation method in order to calculate the surface tractions on the contour. Stresses are then calculated within this boundary. The solution procedure is illustrated by a numerical example of a case of uniform tension.
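The subset-correlation step described above can be sketched in miniature: recover an integer-pixel rigid displacement by maximizing the zero-normalized cross-correlation between a reference subset and candidate subsets of the deformed image. The random "speckle" pattern, subset size, and search window are assumptions for the sketch; a real system would add subpixel interpolation.

```python
# Sketch of digital image correlation: find the displacement of a subset
# by maximizing zero-normalized cross-correlation (ZNCC) over a search
# window in the deformed image.
import numpy as np

rng = np.random.default_rng(0)
ref = rng.random((64, 64))               # stand-in for a speckle pattern
true_shift = (2, 3)                      # imposed rigid shift (rows, cols)
deformed = np.roll(ref, true_shift, axis=(0, 1))

def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

y0, x0, s = 24, 24, 16                   # reference subset location and size
subset = ref[y0:y0 + s, x0:x0 + s]

best, best_shift = -np.inf, None
for dy in range(-5, 6):
    for dx in range(-5, 6):
        cand = deformed[y0 + dy:y0 + dy + s, x0 + dx:x0 + dx + s]
        c = zncc(subset, cand)
        if c > best:
            best, best_shift = c, (dy, dx)
```

Repeating this for subsets placed around a closed contour yields the discrete displacement values that feed the boundary integral method.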

  18. Advanced Experimental Techniques in Crack Tip Analysis.

    DTIC Science & Technology

    1983-06-01

    The shear stress in the xy plane is then given by B sin 20 (3) axy =2 3 Clark, Mignogna and Sanford E18 ) used the above relations to measure the...directly using crack tip measurements in contrast to the ASTM 12 _ designated far-field procedure which is based on many simplifying assumptions. 2-0...effect of Crack Front Curvature in an ASTM Compact Tension Speciment", Proc. of the Fourth Brazilian Congress of Mechanical Engineering, 1977, pp 13 - 26

  19. Drought Adaptation Mechanisms Should Guide Experimental Design.

    PubMed

    Gilbert, Matthew E; Medina, Viviana

    2016-08-01

    The mechanism, or hypothesis, of how a plant might be adapted to drought should strongly influence experimental design. For instance, an experiment testing for water conservation should be distinct from a damage-tolerance evaluation. We define here four new, general mechanisms for plant adaptation to drought such that experiments can be more easily designed based upon the definitions. A series of experimental methods are suggested together with appropriate physiological measurements related to the drought adaptation mechanisms. The suggestion is made that the experimental manipulation should match the rate, length, and severity of soil water deficit (SWD) necessary to test the hypothesized type of drought adaptation mechanism.

  20. Yakima Hatchery Experimental Design : Annual Progress Report.

    SciTech Connect

    Busack, Craig; Knudsen, Curtis; Marshall, Anne

    1991-08-01

    This progress report details the results and status of Washington Department of Fisheries' (WDF) pre-facility monitoring, research, and evaluation efforts, through May 1991, designed to support the development of an Experimental Design Plan (EDP) for the Yakima/Klickitat Fisheries Project (YKFP), previously termed the Yakima/Klickitat Production Project (YKPP or Y/KPP). This pre-facility work has been guided by planning efforts of various research and quality control teams of the project that are annually captured as revisions to the experimental design and pre-facility work plans. The current objectives are as follows: to develop a genetic monitoring and evaluation approach for the Y/KPP; to evaluate stock identification monitoring tools, approaches, and opportunities available to meet specific objectives of the experimental plan; and to evaluate adult and juvenile enumeration and sampling/collection capabilities in the Y/KPP necessary to measure experimental response variables.

  1. Presentation and Impact of Experimental Techniques in Chemistry

    ERIC Educational Resources Information Center

    Sojka, Zbigniew; Che, Michel

    2008-01-01

    Laboratory and practical courses, where students become familiar with experimental techniques and learn to interpret data and relate them to appropriate theory, play a vital role in chemical education. In the large panoply of currently available techniques, it is difficult to find a rational and easy way to classify the techniques in relation to…

  2. Symposium: experimental design for poultry production and genomics research.

    PubMed

    Pesti, Gene M; Aggrey, Samuel E; Fancher, Bryan I

    2013-09-01

    This symposium dealt with the theoretical and practical aspects of choosing and evaluating experimental designs, and how experimental results may be related to poultry production through modeling. Additionally, recent advances in techniques for generating high-throughput genomic sequencing data, genomic breeding values, genomics selection, and genome-wide association studies have provided unique computational challenges to the poultry industry. Such challenges were presented and discussed.

  3. Two-stage microbial community experimental design

    PubMed Central

    Tickle, Timothy L; Segata, Nicola; Waldron, Levi; Weingart, Uri; Huttenhower, Curtis

    2013-01-01

    Microbial community samples can be efficiently surveyed in high throughput by sequencing markers such as the 16S ribosomal RNA gene. Often, a collection of samples is then selected for subsequent metagenomic, metabolomic or other follow-up. Two-stage study design has long been used in ecology but has not yet been studied in-depth for high-throughput microbial community investigations. To avoid ad hoc sample selection, we developed and validated several purposive sample selection methods for two-stage studies (that is, biological criteria) targeting differing types of microbial communities. These methods select follow-up samples from large community surveys, with criteria including samples typical of the initially surveyed population, targeting specific microbial clades or rare species, maximizing diversity, representing extreme or deviant communities, or identifying communities distinct or discriminating among environment or host phenotypes. The accuracies of each sampling technique and their influences on the characteristics of the resulting selected microbial community were evaluated using both simulated and experimental data. Specifically, all criteria were able to identify samples whose properties were accurately retained in 318 paired 16S amplicon and whole-community metagenomic (follow-up) samples from the Human Microbiome Project. Some selection criteria resulted in follow-up samples that were strongly non-representative of the original survey population; diversity maximization particularly undersampled community configurations. Only selection of intentionally representative samples minimized differences in the selected sample set from the original microbial survey. An implementation is provided as the microPITA (Microbiomes: Picking Interesting Taxa for Analysis) software for two-stage study design of microbial communities. PMID:23949665

  4. Two-stage microbial community experimental design.

    PubMed

    Tickle, Timothy L; Segata, Nicola; Waldron, Levi; Weingart, Uri; Huttenhower, Curtis

    2013-12-01

    Microbial community samples can be efficiently surveyed in high throughput by sequencing markers such as the 16S ribosomal RNA gene. Often, a collection of samples is then selected for subsequent metagenomic, metabolomic or other follow-up. Two-stage study design has long been used in ecology but has not yet been studied in-depth for high-throughput microbial community investigations. To avoid ad hoc sample selection, we developed and validated several purposive sample selection methods for two-stage studies (that is, biological criteria) targeting differing types of microbial communities. These methods select follow-up samples from large community surveys, with criteria including samples typical of the initially surveyed population, targeting specific microbial clades or rare species, maximizing diversity, representing extreme or deviant communities, or identifying communities distinct or discriminating among environment or host phenotypes. The accuracies of each sampling technique and their influences on the characteristics of the resulting selected microbial community were evaluated using both simulated and experimental data. Specifically, all criteria were able to identify samples whose properties were accurately retained in 318 paired 16S amplicon and whole-community metagenomic (follow-up) samples from the Human Microbiome Project. Some selection criteria resulted in follow-up samples that were strongly non-representative of the original survey population; diversity maximization particularly undersampled community configurations. Only selection of intentionally representative samples minimized differences in the selected sample set from the original microbial survey. An implementation is provided as the microPITA (Microbiomes: Picking Interesting Taxa for Analysis) software for two-stage study design of microbial communities.
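Two of the purposive criteria above, typical samples versus maximally diverse samples, can be sketched in a toy setting. Euclidean distance on synthetic abundance vectors stands in for the ecological distances (e.g. Bray-Curtis) that a real tool such as microPITA would use, and the greedy max-min rule is one simple diversity heuristic, not necessarily microPITA's algorithm.

```python
# Toy sketch of two purposive selection criteria for two-stage designs:
# "typical" = samples nearest the survey centroid; "diverse" = greedy
# max-min distance selection, which chases extreme communities.
import numpy as np

rng = np.random.default_rng(1)
community = rng.normal(0.0, 1.0, size=(40, 5))   # 40 samples, 5 "taxa"
community[0] = 10.0                              # two extreme communities
community[1] = -10.0

def typical(X, k):
    """Indices of the k samples closest to the population centroid."""
    d = np.linalg.norm(X - X.mean(axis=0), axis=1)
    return list(np.argsort(d)[:k])

def diverse(X, k):
    """Greedy max-min selection, seeded with the most extreme sample."""
    d0 = np.linalg.norm(X - X.mean(axis=0), axis=1)
    chosen = [int(np.argmax(d0))]
    while len(chosen) < k:
        dmin = np.min(
            [np.linalg.norm(X - X[c], axis=1) for c in chosen], axis=0)
        dmin[chosen] = -1.0                      # never re-pick a sample
        chosen.append(int(np.argmax(dmin)))
    return chosen

follow_up_typical = typical(community, 4)
follow_up_diverse = diverse(community, 4)
```

The contrast mirrors the paper's finding: diversity maximization deliberately over-weights deviant communities, while the "typical" criterion keeps the follow-up set representative of the survey.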

  5. Optimal experimental design strategies for detecting hormesis.

    PubMed

    Dette, Holger; Pepelyshev, Andrey; Wong, Weng Kee

    2011-12-01

    Hormesis is a widely observed phenomenon in many branches of life sciences, ranging from toxicology studies to agronomy, with obvious public health and risk assessment implications. We address optimal experimental design strategies for determining the presence of hormesis in a controlled environment using the recently proposed Hunt-Bowman model. We propose alternative models that have an implicit hormetic threshold, discuss their advantages over current models, and construct and study properties of optimal designs for (i) estimating model parameters, (ii) estimating the threshold dose, and (iii) testing for the presence of hormesis. We also determine maximin optimal designs that maximize the minimum of the design efficiencies when we have multiple design criteria or there is model uncertainty where we have a few plausible models of interest. We apply these optimal design strategies to a teratology study and show that the proposed designs outperform the implemented design by a wide margin for many situations.

  6. Using experimental design to define boundary manikins.

    PubMed

    Bertilsson, Erik; Högberg, Dan; Hanson, Lars

    2012-01-01

    When evaluating human-machine interaction it is central to consider anthropometric diversity to ensure intended accommodation levels. A well-known method is the use of boundary cases, where manikins with extreme but likely measurement combinations are derived by mathematical treatment of anthropometric data. The supposition behind that method is that the use of these manikins will facilitate accommodation of the expected part of the total, less extreme, population. Literature sources differ in how many manikins should be defined and in what way. A field similar to the boundary case method is experimental design, where relationships between the factors affecting a process are studied by a systematic approach. This paper examines the possibilities of adopting methodology used in experimental design to define a group of manikins. Different experimental designs were adopted for use together with a confidence region and its axes. The results from the study show that it is possible to adapt the methodology of experimental design when creating groups of manikins. The size of these groups of manikins depends heavily on the number of key measurements but also on the type of experimental design chosen.
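The confidence-region-and-axes idea above can be sketched for two measurements. The snippet below fits a 90% confidence ellipse to synthetic bivariate anthropometric data and places boundary manikins at the ends of its principal axes; the variables (stature, weight), the numbers, and the 90% level are assumptions for the sketch, not data from the paper.

```python
# Sketch of boundary manikins from the axes of a confidence ellipse fitted
# to bivariate anthropometric data (synthetic stature/weight values).
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(7)
stature = rng.normal(1750.0, 70.0, 500)              # mm
weight = 0.05 * stature + rng.normal(0.0, 8.0, 500)  # kg, correlated
X = np.column_stack([stature, weight])

mean = X.mean(axis=0)
cov = np.cov(X, rowvar=False)
evals, evecs = np.linalg.eigh(cov)       # principal axes of the ellipse

# Scale the ellipse to enclose ~90% of a bivariate normal population.
radius = np.sqrt(chi2.ppf(0.90, df=2))

# Boundary manikins: extreme cases at both ends of each principal axis.
manikins = [mean + s * radius * np.sqrt(l) * v
            for l, v in zip(evals, evecs.T) for s in (-1.0, 1.0)]
```

With more key measurements the region becomes a higher-dimensional ellipsoid, which is exactly why the group of manikins grows with the number of measurements.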

  7. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. The results obtained show significant improvements in the aircraft's aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  8. A free lunch in linearized experimental design?

    NASA Astrophysics Data System (ADS)

    Coles, Darrell; Curtis, Andrew

    2011-08-01

    The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms. This paper examines the influence of the NFL theorems on linearized statistical experimental design (SED). We consider four design algorithms with three different design objective functions to examine their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent to the study of transverse isotropy in many disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, is generally independent of the criterion-algorithm pairing. We discuss differences in the performance of each design algorithm, providing a guideline for selecting design algorithms for other problems. As a by-product we demonstrate and discuss the principle of diminishing returns in SED, namely, that the value of experimental design decreases with experiment size. Another outcome of this study is a simple rule-of-thumb for prescribing optimal experiments for ellipse fitting, which bypasses the computational expense of SED. This is used to define a template for optimizing survey designs, under simple assumptions, for Amplitude Variations with Azimuth and Offset (AVAZ) seismics in the specialized problem of fracture characterization, such as is of interest in the petroleum industry. Finally, we discuss the scope of our conclusions for the NFL theorems as they apply to nonlinear and Bayesian SED.
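
    One criterion-algorithm pairing of the kind the paper compares can be sketched as greedy sequential optimization of a D-type (log-determinant) criterion for a linearized ellipse-fitting model. The trigonometric parameterization and the small ridge term below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

# Linearized ellipse fit: d(theta) ~ m0 + m1*cos(2*theta) + m2*sin(2*theta).
def info(angles):
    a = np.asarray(angles, float)
    G = np.column_stack([np.ones_like(a), np.cos(2 * a), np.sin(2 * a)])
    return G.T @ G

def log_det(angles):
    # A small ridge keeps the criterion finite for under-determined designs.
    return np.linalg.slogdet(info(angles) + 1e-9 * np.eye(3))[1]

candidates = np.linspace(0, np.pi, 180, endpoint=False)

# Greedy (sequential) design algorithm: add the measurement angle that
# most increases the D-criterion, one observation at a time.
design = [float(candidates[0])]
while len(design) < 6:
    design.append(max(candidates, key=lambda t: log_det(design + [t])))

rng = np.random.default_rng(1)
random_design = list(rng.choice(candidates, 6, replace=False))
print(log_det(design), log_det(random_design))
```

    Swapping the criterion (e.g. A-optimality) or the algorithm (e.g. exchange, global search) into this loop is how the interdependency question is probed.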

  9. Multiobjective optimization techniques for structural design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    Multiobjective programming techniques are important in the design of complex structural systems whose quality generally depends on a number of different, often conflicting, objective functions that cannot be combined into a single design objective. The applicability of multiobjective optimization techniques is studied with reference to simple design problems. Specifically, the parameter optimization of a cantilever beam with a tip mass and a three-degree-of-freedom vibration isolation system and the trajectory optimization of a cantilever beam are considered. The solutions of these multicriteria design problems are attempted using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It has been observed that the game theory approach required the most computational effort, but in all cases it yielded better optimum solutions with a proper balance of the various objective functions.
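
    Of the methods listed, the global criterion method is the simplest to sketch: each objective is normalized by its individual optimum and the summed relative deviations are minimized. The two-objective sizing problem below is a made-up toy, not one of the paper's case studies:

```python
import numpy as np

# Toy cantilever sizing: design variable x is a section scale factor.
# f1 = mass (grows with x), f2 = tip deflection (shrinks like x^-3).
x = np.linspace(0.5, 2.0, 301)
f1, f2 = x**2, 1.0 / x**3
f1_star, f2_star = f1.min(), f2.min()  # individual single-objective optima

# Global criterion method: minimize summed squared relative deviations
# from the single-objective optima (exponent p = 2 here).
gc = ((f1 - f1_star) / f1_star) ** 2 + ((f2 - f2_star) / f2_star) ** 2
x_best = x[np.argmin(gc)]
print(round(x_best, 3))  # a compromise between light and stiff
```

    The other scalarization methods (utility function, goal programming, goal attainment) differ mainly in how `gc` is assembled from the individual objectives.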

  10. Experimental design in chemistry: A tutorial.

    PubMed

    Leardi, Riccardo

    2009-10-12

    In this tutorial the main concepts and applications of experimental design in chemistry are explained. Unfortunately, experimental design is nowadays not as widely known and applied as it should be, and many papers can be found in which the "optimization" of a procedure is performed one variable at a time. The goal of this paper is to show the real advantages, in terms of reduced experimental effort and increased quality of information, that can be obtained by following this approach. To do that, three real examples are shown. Rather than on the mathematical aspects, this paper focuses on the mental attitude required by experimental design. Readers interested in deepening their knowledge of the mathematical and algorithmic aspects can find very good books and tutorials in the references [G.E.P. Box, W.G. Hunter, J.S. Hunter, Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, John Wiley & Sons, New York, 1978; R. Brereton, Chemometrics: Data Analysis for the Laboratory and Chemical Plant, John Wiley & Sons, Chichester, 2003; R. Carlson, J.E. Carlson, Design and Optimization in Organic Synthesis: Second Revised and Enlarged Edition, in: Data Handling in Science and Technology, vol. 24, Elsevier, Amsterdam, 2005; J.A. Cornell, Experiments with Mixtures: Designs, Models and the Analysis of Mixture Data, in: Series in Probability and Statistics, John Wiley & Sons, New York, 1991; R.E. Bruns, I.S. Scarminio, B. de Barros Neto, Statistical Design-Chemometrics, in: Data Handling in Science and Technology, vol. 25, Elsevier, Amsterdam, 2006; D.C. Montgomery, Design and Analysis of Experiments, 7th edition, John Wiley & Sons, Inc., 2009; T. Lundstedt, E. Seifert, L. Abramo, B. Thelin, A. Nyström, J. Pettersen, R. Bergman, Chemolab 42 (1998) 3; Y. Vander Heyden, LC-GC Europe 19 (9) (2006) 469].
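
    The advantage over one-variable-at-a-time (OVAT) "optimization" is easy to demonstrate: a full factorial design estimates interactions that OVAT structurally cannot see. A minimal sketch with an invented response function:

```python
import itertools
import numpy as np

# Hypothetical response with an interaction between factors A and B.
def response(a, b, c):
    return 60 + 5*a + 3*b + 1*c + 4*a*b  # coded levels -1/+1

# Full 2^3 factorial: 8 runs cover every level combination.
runs = np.array(list(itertools.product([-1, 1], repeat=3)))
y = np.array([response(*r) for r in runs])

# Effects via contrasts: mean(y at +1) minus mean(y at -1).
effect_A = y[runs[:, 0] == 1].mean() - y[runs[:, 0] == -1].mean()
effect_AB = (y[runs[:, 0] * runs[:, 1] == 1].mean()
             - y[runs[:, 0] * runs[:, 1] == -1].mean())
print(effect_A, effect_AB)  # -> 10.0 8.0
```

    The AB interaction (8.0) is recovered from the same eight runs that give the main effects; an OVAT scheme varying A with B held fixed would attribute part of it to A and miss the rest entirely.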

  11. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, Layton Carter

    1999-02-01

    This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High-Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the precision-engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, with guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  12. Plant proteomics update (2007-2008): Second-generation proteomic techniques, an appropriate experimental design, and data analysis to fulfill MIAPE standards, increase plant proteome coverage and expand biological knowledge.

    PubMed

    Jorrín-Novo, Jesús V; Maldonado, Ana M; Echevarría-Zomeño, Sira; Valledor, Luis; Castillejo, Mari A; Curto, Miguel; Valero, José; Sghaier, Besma; Donoso, Gabriel; Redondo, Inmaculada

    2009-04-13

    This review is the continuation of three previously published articles [Jorrin JV, Maldonado AM, Castillejo MA. Plant proteome analysis: a 2006 update. Proteomics 2007; 7: 2947-2962; Rossignol M, Peltier JB, Mock HP, Matros A, Maldonado AM, Jorrin JV. Plant proteome analysis: a 2004-2006 update. Proteomics 2006; 6: 5529-5548; Canovas FM, Dumas-Gaudot E, Recorbet G, Jorrin J, Mock HP, Rossignol M. Plant proteome analysis. Proteomics 2004; 4: 285-298] and aims to update the contribution of Proteomics to plant research between 2007 and September 2008 by reviewing most of the papers, which number approximately 250, that appeared in the Plant Proteomics field during that period. Most of the papers published deal with the proteome of Arabidopsis thaliana and rice (Oryza sativa), and focus on profiling organs, tissues, cells or subcellular proteomes, and studying developmental processes and responses to biotic and abiotic stresses using a differential expression strategy. Although the platform based on 2-DE is still the most commonly used, the use of gel-free and second-generation Quantitative Proteomic techniques has increased. Proteomic data are beginning to be validated using complementary -omics or classical biochemical or cellular biology techniques. In addition, appropriate experimental design and statistical analysis are being carried out in accordance with the required Minimal Information about a Proteomic Experiment (MIAPE) standards. As a result, the coverage of the plant cell proteome and the plant biology knowledge is increasing. Compared to human and yeast systems, however, plant biology research has yet to exploit fully the potential of proteomics, in particular its applications to PTMs and Interactomics.

  13. FPGAs in Space Environment and Design Techniques

    NASA Technical Reports Server (NTRS)

    Katz, Richard B.; Day, John H. (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of Field Programmable Gate Arrays (FPGA) in the space environment and design techniques. Details are given on the effects of the space radiation environment, total radiation dose, single event upset, single event latchup, single event transient, antifuse technology and gate rupture, proton upsets and sensitivity, and loss of functionality.

  14. Machine learning techniques and drug design.

    PubMed

    Gertrudes, J C; Maltarollo, V G; Silva, R A; Oliveira, P R; Honório, K M; da Silva, A B F

    2012-01-01

    Interest in the application of machine learning techniques (MLT) as drug design tools has grown over the last decades, because drug design is very complex and requires the use of hybrid techniques. This paper briefly reviews some MLT, such as self-organizing maps, multilayer perceptrons, Bayesian neural networks, counter-propagation neural networks and support vector machines. A comparison between the performance of the described methods and some classical statistical methods (such as partial least squares and multiple linear regression) shows that MLT have significant advantages. Nowadays, the number of studies in medicinal chemistry that employ these techniques has increased considerably, in particular the use of support vector machines. The state of the art and the future trends of MLT applications encompass the use of these techniques to construct more reliable QSAR models. The models obtained from MLT can be used in virtual screening studies as well as filters to develop/discover new chemicals. An important challenge in the drug design field is the prediction of pharmacokinetic and toxicity properties, which can avoid failures in the clinical phases. Therefore, this review provides a critical point of view on the main MLT and shows their potential as a valuable tool in drug design.
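
    None of the paper's models are reproduced here, but the supervised-learning workflow it surveys can be sketched on invented descriptor data; a minimal perceptron stands in for the heavier SVM/neural-network machinery:

```python
import numpy as np

# Toy QSAR-style screen: two hypothetical descriptors per compound
# (e.g. coded lipophilicity and polar surface area) and an active/inactive label.
X = np.array([[1.2, 0.3], [0.9, 0.5], [1.1, 0.4],          # actives
              [-0.8, -0.6], [-1.0, -0.2], [-0.7, -0.5]])   # inactives
y = np.array([1, 1, 1, -1, -1, -1])

# A minimal perceptron: iterate over the data, nudging the separating
# hyperplane whenever a compound is misclassified.
w, b = np.zeros(2), 0.0
for _ in range(20):
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:       # misclassified: update
            w, b = w + yi * xi, b + yi

pred = np.sign(X @ w + b)
print((pred == y).mean())  # -> 1.0 on this linearly separable toy set
```

    In a real virtual-screening pipeline the same fit/predict loop would be replaced by an SVM or neural network and validated on held-out compounds.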

  15. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. 
This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a same
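
    The core loop, scoring candidate experiments by the entropy of their predicted outcomes and running the winner, can be sketched for a one-dimensional toy. The boundary-finding setup below is an invented example, not the thesis's robotic-arm experiment:

```python
import numpy as np

# Locating an unknown boundary t on [0, 1] with noiseless binary probes:
# choosing the probe x whose predicted outcome has maximum entropy
# reproduces bisection, the maximally informative next experiment.
grid = np.linspace(0, 1, 101)           # hypotheses for t
post = np.ones_like(grid) / grid.size   # uniform prior over t

def outcome_entropy(x):
    p1 = post[grid < x].sum()           # P(probe at x reads "past the boundary")
    if p1 in (0.0, 1.0):
        return 0.0
    return -(p1 * np.log2(p1) + (1 - p1) * np.log2(1 - p1))

candidates = np.linspace(0.05, 0.95, 19)
best = max(candidates, key=outcome_entropy)
print(best)  # the maximum-entropy probe sits at the posterior median (x of about 0.5)
```

    After each probe the posterior `post` would be renormalized over the surviving hypotheses and the selection repeated, which is the inference/inquiry cycle the thesis automates.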

  16. Simulation as an Aid to Experimental Design.

    ERIC Educational Resources Information Center

    Frazer, Jack W.; And Others

    1983-01-01

    Discusses simulation program to aid in the design of enzyme kinetic experimentation (includes sample runs). Concentration versus time profiles of any subset or all nine states of reactions can be displayed with/without simulated instrumental noise, allowing the user to estimate the practicality of any proposed experiment given known instrument…

  17. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  18. Evaluation of Advanced Retrieval Techniques in an Experimental Online Catalog.

    ERIC Educational Resources Information Center

    Larson, Ray R.

    1992-01-01

    Discusses subject searching problems in online library catalogs; explains advanced information retrieval (IR) techniques; and describes experiments conducted on a test collection database, CHESHIRE (California Hybrid Extended SMART for Hypertext and Information Retrieval Experimentation), which was created to evaluate IR techniques in online…

  19. An Experimental Study for Effectiveness of Super-Learning Technique at Elementary Level in Pakistan

    ERIC Educational Resources Information Center

    Shafqat, Hussain; Muhammad, Sarwar; Imran, Yousaf; Naemullah; Inamullah

    2010-01-01

    The objective of the study was to experience the effectiveness of super-learning technique of teaching at elementary level. The study was conducted with 8th grade students at a public sector school. Pre-test and post-test control group designs were used. Experimental and control groups were formed randomly, the experimental group (N = 62),…

  20. Systematic design assessment techniques for solar buildings

    NASA Astrophysics Data System (ADS)

    Page, J. K.; Rodgers, G. G.; Souster, C. G.

    1980-02-01

    The paper describes the various approaches developed for the detailed modelling of the relevant climatic input variables for systematic design assessments for solar housing techniques. A report is made of the techniques developed to generate systematic short wave radiation data for vertical and inclined surfaces for different types of weather. The analysis is based on different types of days, such as sunny, average and overcast. Work on the accurate estimation of the magnitude of the associated weather variables affecting heat transfer in the external environment is also reported, covering air temperature, wind speed and long wave radiation exchanges.

  1. A free lunch in linearized experimental design?

    NASA Astrophysics Data System (ADS)

    Coles, D.; Curtis, A.

    2009-12-01

    The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms (Wolpert and Macready, 1997). It is therefore of limited use to report the performance of a particular algorithm with respect to a particular objective function because the results cannot be safely extrapolated to other algorithms or objective functions. We examine the influence of the NFL theorems on linearized statistical experimental design (SED). We are aware of no publication that compares multiple design criteria in combination with multiple design algorithms. We examine four design algorithms in concert with three design objective functions to assess their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent, for example, to the study of transverse isotropy in a variety of disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, is generally independent of the criterion-algorithm pairing. This is promising for linearized SED. While the NFL theorems must generally be true, the criterion-algorithm pairings we investigated are fairly robust to the theorems, indicating that we need not account for interdependency when choosing design algorithms and criteria from the set examined here. However, particular design algorithms do show patterns of performance, irrespective of the design criterion, and from this we establish a rough guideline for choosing from the examined algorithms for other design problems. As a by-product of our study we demonstrate that SED is subject to the principle of diminishing returns. That is, we see that the value of experimental design decreases with survey size, a fact that must be considered when deciding whether or not to design an experiment at all. 
Another outcome

  2. Multi-Variable Analysis and Design Techniques.

    DTIC Science & Technology

    1981-09-01

    ... by A.G.J. MacFarlane; 2. Multivariable Design Techniques Based on Singular Value Generalizations of Classical Control, by J.C. Doyle; 3. Limitations on ... prototypes to complex mathematical representations. All of these assemblages of information or information generators can loosely be termed "models" ... nonlinearities (e.g., control saturation), neglect of high-frequency dynamics. These approximations are well understood and in general their impact ...

  3. Design automation techniques for custom LSI arrays

    NASA Technical Reports Server (NTRS)

    Feller, A.

    1975-01-01

    The standard cell design automation technique is described as an approach for generating random logic PMOS, CMOS or CMOS/SOS custom large scale integration arrays with low initial nonrecurring costs and quick turnaround time or design cycle. The system is composed of predesigned circuit functions or cells and computer programs capable of automatic placement and interconnection of the cells in accordance with an input data net list. The program generates a set of instructions to drive an automatic precision artwork generator. A series of support design automation and simulation programs are described, including programs for verifying correctness of the logic on the arrays, performing dc and dynamic analysis of MOS devices, and generating test sequences.

  4. Irradiation Design for an Experimental Murine Model

    SciTech Connect

    Ballesteros-Zebadua, P.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Celis, M. A.; Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Rubio-Osornio, M. C.; Custodio-Ramirez, V.; Paz, C.

    2010-12-07

    In radiotherapy and stereotactic radiosurgery, small-animal experimental models are frequently used, since many questions about the biological and biochemical effects of ionizing radiation remain unsolved. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6 MV linac. The rodent model is focused on research into the inflammatory effects produced by ionizing radiation in the brain. Comparisons between Pencil Beam and Monte Carlo techniques were used to evaluate the accuracy of the dose calculated with a commercial planning system. Challenges in this murine model are discussed.

  5. Irradiation Design for an Experimental Murine Model

    NASA Astrophysics Data System (ADS)

    Ballesteros-Zebadúa, P.; Lárraga-Gutierrez, J. M.; García-Garduño, O. A.; Rubio-Osornio, M. C.; Custodio-Ramírez, V.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Paz, C.; Celis, M. A.

    2010-12-01

    In radiotherapy and stereotactic radiosurgery, small-animal experimental models are frequently used, since many questions about the biological and biochemical effects of ionizing radiation remain unsolved. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6 MV linac. The rodent model is focused on research into the inflammatory effects produced by ionizing radiation in the brain. Comparisons between Pencil Beam and Monte Carlo techniques were used to evaluate the accuracy of the dose calculated with a commercial planning system. Challenges in this murine model are discussed.

  6. Criteria for the optimal design of experimental tests

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1972-01-01

    This paper unifies some of the basic concepts developed for the problem of finding optimal approximating functions that relate a set of controlled variables to a measurable response. The techniques have the potential to reduce the amount of testing required in experimental investigations. Specifically, two low-order polynomial models are considered as approximations to unknown functional relationships. For each model, optimal means of designing experimental tests are presented which, for a modest number of measurements, yield prediction equations that minimize the error of an estimated response anywhere inside a selected region of experimentation. Examples are provided for both models to illustrate their use. Finally, an analysis of a second-order prediction equation is given to illustrate ways of determining maximum or minimum responses inside the experimentation region.
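
    The second-order analysis in the last sentence amounts to fitting a quadratic prediction equation and solving dy/dx = 0. A one-variable sketch with an invented noise-free response:

```python
import numpy as np

# Second-order prediction equation fitted to a small designed test,
# then analysed for its stationary point (here a maximum, since b2 < 0).
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])   # designed test points
y = 2.0 + 1.0 * x - 3.0 * x**2              # toy measured response

X = np.vander(x, 3, increasing=True)         # columns: 1, x, x^2
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

x_star = -b1 / (2 * b2)                      # stationary point of the fit
print(x_star, b0 + b1 * x_star + b2 * x_star**2)
```

    With several controlled variables the same step becomes solving the gradient system of the fitted quadratic surface, and the sign pattern of the second-order coefficients decides whether the stationary point is a maximum, a minimum, or a saddle.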

  7. Teaching Experimental Design Using an Exercise in Protein Fractionation

    NASA Astrophysics Data System (ADS)

    Loke, J. P.; Hancock, D.; Johnston, J. M.; Dimauro, J.; Denyer, G. S.

    2001-11-01

    This experiment, suitable for introductory biochemistry courses, presents the techniques of protein purification as a problem-solving exercise. Students must identify and purify three proteins from an unknown mixture using the techniques of gel filtration, ion exchange chromatography, UV and visible spectrophotometry, and gel electrophoresis. To aid construction of a strategy, they are given some information about each of the possible proteins: source, function, molecular weight, pI, and UV and visible spectra. From this they must design their own purification protocols and carry out the experimental work. To develop students' computer skills, the experimental results and the logic used in the identification are presented as a short computer-generated report.

  8. Set membership experimental design for biological systems

    PubMed Central

    2012-01-01

    Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. This study shows that our

  9. Set membership experimental design for biological systems.

    PubMed

    Marvel, Skylar W; Williams, Cranos M

    2012-03-21

    Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. The practicability of our approach is illustrated with a case study. This study shows that our approach is able to 1) identify
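
    A minimal flavor of the bounded-error idea (an exponential-decay toy model, not the paper's case study): propagate an interval parameter set to candidate measurement times, then rank candidates by the width of the predicted band, a crude stand-in for the paper's uncertainty-reduction metrics:

```python
import math

# Set-membership sketch: the rate k of x(t) = x0 * exp(-k t) is known
# only to lie in an interval; propagate that set to candidate times.
k_lo, k_hi = 0.5, 1.5   # bounded-error parameter set
x0 = 10.0

def predicted_interval(t):
    # x(t) is monotone decreasing in k, so the image of [k_lo, k_hi] is exact.
    return x0 * math.exp(-k_hi * t), x0 * math.exp(-k_lo * t)

candidates = [0.25, 0.5, 1.0, 2.0, 4.0]
widths = {}
for t in candidates:
    lo, hi = predicted_interval(t)
    widths[t] = hi - lo

t_best = max(widths, key=widths.get)
print(t_best, widths[t_best])
# A measurement where the predicted band is widest can shrink the
# consistent parameter set the most once the datum arrives.
```

    For non-monotone or multi-parameter models the exact image is replaced by interval-arithmetic enclosures, which is where the contractor and interval-analysis machinery cited in the paper comes in.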

  10. Ceramic processing: Experimental design and optimization

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.; Lauben, David N.; Madrid, Philip

    1992-01-01

    The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density, strength, and effects of under and over firing to ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.
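
    Objective (5), orthogonal arrays, can be illustrated directly: the L4 array studies three two-level factors in four runs, and its columns are mutually orthogonal so main effects separate cleanly. This is a generic illustration, not the paper's specific firing experiment:

```python
import numpy as np

# L4 orthogonal array: 3 two-level factors in 4 runs instead of the
# 8 runs a full factorial would need.
L4 = np.array([[-1, -1, -1],
               [-1,  1,  1],
               [ 1, -1,  1],
               [ 1,  1, -1]])

# Orthogonality: every pair of columns has zero dot product, so each
# main effect can be estimated independently of the others.
gram = L4.T @ L4
print(gram)  # -> 4 * identity: columns are mutually orthogonal
```

    The saving grows quickly: larger arrays such as L8 or L9 screen many more factors (e.g. firing temperature, time, slip composition) in a handful of runs, at the cost of confounding interactions with main effects.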

  11. Aeroshell Design Techniques for Aerocapture Entry Vehicles

    NASA Technical Reports Server (NTRS)

    Dyke, R. Eric; Hrinda, Glenn A.

    2004-01-01

    A major goal of NASA s In-Space Propulsion Program is to shorten trip times for scientific planetary missions. To meet this challenge arrival speeds will increase, requiring significant braking for orbit insertion, and thus increased deceleration propellant mass that may exceed launch lift capabilities. A technology called aerocapture has been developed to expand the mission potential of exploratory probes destined for planets with suitable atmospheres. Aerocapture inserts a probe into planetary orbit via a single pass through the atmosphere using the probe s aeroshell drag to reduce velocity. The benefit of an aerocapture maneuver is a large reduction in propellant mass that may result in smaller, less costly missions and reduced mission cruise times. The methodology used to design rigid aerocapture aeroshells will be presented with an emphasis on a new systems tool under development. Current methods for fast, efficient evaluations of structural systems for exploratory vehicles to planets and moons within our solar system have been under development within NASA having limited success. Many systems tools that have been attempted applied structural mass estimation techniques based on historical data and curve fitting techniques that are difficult and cumbersome to apply to new vehicle concepts and missions. The resulting vehicle aeroshell mass may be incorrectly estimated or have high margins included to account for uncertainty. This new tool will reduce the guesswork previously found in conceptual aeroshell mass estimations.

  12. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2012-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect that is gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. An optimal experiment maximizes geophysical information while keeping the cost of the experiment as low as possible. This requires a careful selection of recording parameters, such as source and receiver locations or the range of periods needed to image the target. We are developing a method to design an optimal experiment in the context of detecting and monitoring a CO2 reservoir using controlled-source electromagnetic (CSEM) data. Using an algorithm for a simple one-dimensional (1D) situation, we look for the most suitable locations for sources and receivers and the optimum characteristics of the source to image the subsurface. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus easily be conducted before any geophysical survey. Our algorithm is based on a genetic algorithm, which has proven to be an efficient technique for examining a wide range of possible surveys and selecting the one that gives superior resolution. Each configuration is associated with one value of the objective function that characterizes the quality of that particular design. Here, we describe the method used to optimize an experimental design. Then, we validate this new technique and explore the different issues of experimental design by simulating a CSEM survey with a realistic 1D layered model.
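As a concrete illustration of the survey-design loop this abstract describes, the sketch below evolves a population of candidate 1D CSEM-style designs with a genetic algorithm. The encoding (one source frequency plus four receiver offsets), the objective function, and all parameter ranges are hypothetical stand-ins, not the authors' implementation.

```python
import random

def objective(design):
    # Placeholder quality measure: reward wide receiver spread (a stand-in
    # for resolution) and source frequencies near a hypothetical 1 Hz optimum.
    src_freq, rx_offsets = design
    spread = max(rx_offsets) - min(rx_offsets)
    return spread / (1.0 + abs(src_freq - 1.0))

def random_design():
    freq = random.uniform(0.1, 10.0)                              # source frequency, Hz
    offsets = sorted(random.uniform(0.5, 5.0) for _ in range(4))  # receiver offsets, km
    return (freq, offsets)

def evolve(pop_size=30, generations=50):
    pop = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective, reverse=True)
        parents = pop[: pop_size // 2]        # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])              # crossover: frequency from one, offsets from the other
            if random.random() < 0.2:         # occasional mutation of the frequency
                child = (child[0] * random.uniform(0.8, 1.2), child[1])
            children.append(child)
        pop = parents + children
    return max(pop, key=objective)

best = evolve()
```

Selection keeps the better half of each generation while crossover and mutation generate replacements, mirroring the examine-many-surveys strategy the abstract describes.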

  13. Automatic Molecular Design using Evolutionary Techniques

    NASA Technical Reports Server (NTRS)

    Globus, Al; Lawton, John; Wipke, Todd; Saini, Subhash (Technical Monitor)

    1998-01-01

    Molecular nanotechnology is the precise, three-dimensional control of materials and devices at the atomic scale. An important part of nanotechnology is the design of molecules for specific purposes. This paper describes early results using genetic software techniques to automatically design molecules under the control of a fitness function. The fitness function must be capable of determining which of two arbitrary molecules is better for a specific task. The software begins by generating a population of random molecules. The population is then evolved towards greater fitness by randomly combining parts of the better individuals to create new molecules. These new molecules then replace some of the worst molecules in the population. The unique aspect of our approach is that we apply genetic crossover to molecules represented by graphs, i.e., sets of atoms and the bonds that connect them. We present evidence suggesting that crossover alone, operating on graphs, can evolve any possible molecule given an appropriate fitness function and a population containing both rings and chains. Prior work evolved strings or trees that were subsequently processed to generate molecular graphs. In principle, genetic graph software should be able to evolve other graph representable systems such as circuits, transportation networks, metabolic pathways, computer networks, etc.
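A minimal sketch of crossover applied directly to molecular graphs (atoms as nodes, bonds as edges), in the spirit of the approach described above. Valence rules, atom types, and the fitness function are omitted; the fragment-joining rule and all names are illustrative assumptions.

```python
import random

def cut(mol):
    """Remove one random bond and return the fragment containing its first atom."""
    atoms, bonds = mol
    bond = random.choice(sorted(bonds))
    rest = bonds - {bond}
    # Flood-fill from one endpoint of the removed bond to collect its fragment
    frag, frontier = {bond[0]}, [bond[0]]
    while frontier:
        a = frontier.pop()
        for b1, b2 in rest:
            for x, y in ((b1, b2), (b2, b1)):
                if x == a and y not in frag:
                    frag.add(y)
                    frontier.append(y)
    frag_bonds = {b for b in rest if b[0] in frag and b[1] in frag}
    return frag, frag_bonds, bond[0]

def crossover(mol_a, mol_b):
    """Join one fragment of each parent with a single new bond."""
    fa, ba, site_a = cut(mol_a)
    fb, bb, site_b = cut(mol_b)
    offset = max(fa) + 1                      # relabel so atom ids don't collide
    fb = {a + offset for a in fb}
    bb = {(x + offset, y + offset) for x, y in bb}
    return fa | fb, ba | bb | {(site_a, site_b + offset)}

parent_a = ({0, 1, 2, 3}, {(0, 1), (1, 2), (2, 3)})  # chain of four atoms
parent_b = ({0, 1, 2}, {(0, 1), (1, 2)})             # chain of three atoms
child_atoms, child_bonds = crossover(parent_a, parent_b)
```

Because each parent fragment stays connected and one new bond joins them, the child is always a connected graph; for chain (tree) parents it has exactly one fewer bond than atoms.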

  14. Natural Stream Channel Design Techniques and Review

    EPA Pesticide Factsheets

    Need for a Review Checklist: Stream restoration problems include: design complexity, many different design methodologies, inconsistency in design deliverables, communication difficulties, and many failed projects

  15. Comparison of experimental and computational techniques for plane mixing layers

    NASA Technical Reports Server (NTRS)

    Mehta, R. D.; Bell, J. H.; Inoue, O.; King, L. S.

    1987-01-01

    In this paper, results from two experimental and two computational investigations of plane turbulent mixing layers are presented and compared. The experimental results include flow visualization data using the smoke laser technique and mean flow and turbulence measurements obtained with hot X-wires and a two-component LDV. Reasonably good agreement is found among these techniques, at least for turbulence quantities up to second order. Reynolds-averaged computations are successful at capturing the complete evolution of the mixing layer, including wake effects in the near field and approach to self-preservation in the far field. The two-dimensional vortex method shows excellent qualitative and quantitative agreement with measured data for the forced mixing layer. For the unforced layer, the results seem to indicate that a three-dimensional computation may be necessary.

  16. Experimental demonstration of NG-PONs power budget enhancement techniques

    NASA Astrophysics Data System (ADS)

    Emsia, Ali; Malekizandi, Mohammadreza; Briggmann, Dieter; Le, Quang T.; Djordjevic, Ivan B.; Küppers, Franko

    2013-12-01

    This paper experimentally investigates power budget extension configurations for WDM NG-PONs. Differential Phase Shift Keying (DPSK) and Differential Quadrature Phase Shift Keying (DQPSK) are considered. The budget enhancement techniques are based on semiconductor optical amplifiers (SOAs). The paper thoroughly studies power budget enhancement for the two modulation formats and shows that the proposed configurations comply with current standards such as XG-PON1.

  17. Techniques for Reducing Gun Blast Noise Levels: An Experimental Study

    DTIC Science & Technology

    1981-04-01

    gun muzzle blast noise level were investigated experimentally to determine potential effectiveness and utility for existing major-caliber guns...impact on training and testing operations was to be minimized. Most of the noise reduction techniques that were investigated involve the use of some type ...shock noise level at the earth's surface varies according to a complicated dependence upon projectile trajectory, projectile speed along the trajectory

  18. EXPERIMENTAL DESIGN IN CLINICAL 'OMICS BIOMARKER DISCOVERY.

    PubMed

    Forshed, Jenny

    2017-10-02

    This tutorial highlights some issues in experimental design for clinical 'omics biomarker discovery: how to avoid bias and obtain quantities from biochemical analyses that are as close to the truth as possible, and how to select samples to improve the chances of answering the clinical question at issue. This includes the importance of defining the clinical aim and endpoint, knowing the variability in the results, randomization of samples, sample size, statistical power, and how to avoid confounding factors by including clinical data in the sample selection, i.e., how to avoid unpleasant surprises at the point of statistical analysis. The aim of this tutorial is to help translational clinical and pre-clinical biomarker candidate research, and to improve the validity and potential of future biomarker candidate findings.

  19. Nonlinear potential analysis techniques for supersonic-hypersonic aerodynamic design

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Clever, W. C.

    1984-01-01

    Approximate nonlinear inviscid theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at supersonic and moderate hypersonic speeds were developed. Emphasis was placed on approaches that would be responsive to a conceptual configuration design level of effort. Second-order small-disturbance and full potential theory were utilized to meet this objective. Numerical codes were developed for relatively general three-dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with experimental results for a variety of wing, body, and wing-body shapes.

  20. R/qtlDesign: inbred line cross experimental design

    PubMed Central

    Sen, Śaunak; Satagopan, Jaya M.; Broman, Karl W.; Churchill, Gary A.

    2008-01-01

    An investigator planning a QTL (quantitative trait locus) experiment has to choose which strains to cross, the type of cross, genotyping strategies, and the number of progeny to raise and phenotype. To help make such choices, we have developed an interactive program for power and sample size calculations for QTL experiments, R/qtlDesign. Our software includes support for selective genotyping strategies, variable marker spacing, and tools to optimize information content subject to cost constraints for backcross, intercross, and recombinant inbred lines from two parental strains. We review the impact of experimental design choices on the variance attributable to a segregating locus, the residual error variance, and the effective sample size. We give examples of software usage in real-life settings. The software is available at http://www.biostat.ucsf.edu/sen/software.html. PMID:17347894
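For flavor, here is a back-of-the-envelope version of the power/sample-size question R/qtlDesign answers: how many backcross progeny are needed to detect a locus explaining a given fraction of phenotypic variance. This uses a generic normal-approximation formula with illustrative thresholds, not the package's actual calculations.

```python
import math

def qtl_sample_size(var_explained, alpha_z=3.72, power_z=0.84):
    """Progeny count to detect a locus explaining `var_explained` of the
    phenotypic variance with ~80% power (power_z ~= 0.84). alpha_z defaults
    to a genome-wide-scan-like two-sided threshold; both constants are
    standard power-calculation conventions, assumed here for illustration."""
    p = var_explained
    return math.ceil((alpha_z + power_z) ** 2 * (1 - p) / p)

# A locus explaining 5% of variance needs far more progeny than one explaining 20%:
n_small = qtl_sample_size(0.05)
n_large = qtl_sample_size(0.20)
```

The (1 - p)/p factor is what drives the steep cost of mapping small-effect loci, the same trade-off the review of design choices in the abstract addresses.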

  1. Experimental techniques for the investigation of coupled phenomena in geomaterials

    NASA Astrophysics Data System (ADS)

    Romero, E.

    2010-06-01

    The paper describes different experimental setups and techniques used to investigate coupled stress, fluid (water and air) and temperature effects on geomaterials. Two temperature controlled cells are described: a) a constant volume cell in which thermal pulses can be performed under controlled hydraulic conditions to induce pore pressure build-up during quasi-undrained heating and later dissipation; and b) an axisymmetric triaxial cell with controlled suction and temperature to perform drained heating and cooling paths under partially saturated states. The paper also presents an experimental setup to perform controlled flow-rate gas injection experiments on argillaceous rocks using a high-pressure triaxial cell. This cell is used to study gas migration phenomena and the conditions under which gas breakthrough processes occur. Selected test results are presented, which show the capabilities of the different experimental setups described to capture main behavioural features.

  2. Fourier transform approach in modulation technique of experimental measurements.

    PubMed

    Khazimullin, M V; Lebedev, Yu A

    2010-04-01

    An application of the Fourier transform approach to the modulation technique of experimental studies is considered. This method has obvious advantages compared with the traditional lock-in amplifier technique: a simple experimental setup, quickly available information on all the required harmonics, and high-speed data processing using the fast Fourier transform algorithm. A computationally simple, fast, and accurate Fourier coefficients interpolation (FCI) method has been implemented to obtain useful information from the harmonics of a multimode signal. Our analysis shows that in this case the FCI method has a systematic error (bias) in the estimation of signal parameters, which becomes essential for short data sets. Hence, a new differential Fourier coefficients interpolation (DFCI) method has been suggested, which is less sensitive to the presence of several modes in a signal. The analysis has been confirmed by simulations and by measurements of a quartz wedge birefringence by means of a photoelastic modulator. The obtained bias, noise level, and measuring speed are comparable to, and even better than, those of the lock-in amplifier technique. Moreover, the presented DFCI method is expected to be a promising candidate for use in actively developing imaging systems based on the modulation technique, which require fast digital signal processing of large data sets.
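The core idea, extracting harmonic amplitudes of the modulation frequency from a digitized signal, can be sketched with a plain DFT (the paper's contribution is FFT-based coefficient interpolation for frequencies that fall between bins). The signal and sampling parameters below are synthetic.

```python
import cmath
import math

def harmonic_amplitudes(samples, samples_per_period, n_harmonics=2):
    """One-sided amplitudes of the first harmonics of the modulation frequency."""
    n = len(samples)
    amps = []
    for k in range(1, n_harmonics + 1):
        freq_bin = k * n // samples_per_period   # bin of harmonic k (exact here)
        coeff = sum(s * cmath.exp(-2j * math.pi * freq_bin * i / n)
                    for i, s in enumerate(samples))
        amps.append(2 * abs(coeff) / n)          # |DFT| -> one-sided amplitude
    return amps

# Synthetic two-harmonic signal: amplitudes 0.5 (1st) and 0.2 (2nd)
N, P = 1024, 64   # 1024 samples, 64 samples per modulation period
sig = [0.5 * math.sin(2 * math.pi * i / P) + 0.2 * math.cos(4 * math.pi * i / P)
       for i in range(N)]
a1, a2 = harmonic_amplitudes(sig, P)
```

With an integer number of periods per record the harmonics land exactly on DFT bins; the interpolation methods in the paper handle the realistic off-bin case.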

  3. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect that is gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of a given design via the objective function) and stochastic optimization methods such as genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus easily be conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (the northern Spain CO2 sequestration test site). We show that a small number of well-distributed observations have the potential to resolve the target. This simple test also points out the importance of a well-chosen objective function. Finally, in the context of the CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
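The multi-objective selection step can be illustrated by the standard Pareto-dominance filter a multi-objective genetic algorithm applies each generation. The designs and their scores below are invented for illustration; in the paper each objective would come from linearized inverse theory.

```python
def dominates(a, b):
    """a dominates b if it is at least as good on every objective
    (higher is better) and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scored_designs):
    """Keep only the non-dominated designs."""
    front = []
    for name, scores in scored_designs:
        if not any(dominates(other, scores) for _, other in scored_designs):
            front.append(name)
    return front

# Hypothetical scores per design: (resolution of target layer, negated cost)
designs = [("A", (0.9, -5.0)), ("B", (0.7, -2.0)),
           ("C", (0.6, -4.0)), ("D", (0.9, -3.0))]
front = pareto_front(designs)
```

Here design D dominates A (same resolution, cheaper) and C (better on both), while B survives as the cheap-but-lower-resolution trade-off, so the front is B and D.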

  4. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

    Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem in which there is a single load (L) and two stationary support points (S1 and S2), consisting of four members (E1, E2, E3, and E4) that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow, and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point, and (b) selection rules that associate problem instances with a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules that associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it.
The overall solution

  5. Formulation of aerodynamic prediction techniques for hypersonic configuration design

    NASA Technical Reports Server (NTRS)

    1979-01-01

    An investigation of approximate theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds was performed. Emphasis was placed on approaches that would be responsive to preliminary configuration design level of effort. Supersonic second order potential theory was examined in detail to meet this objective. Shock layer integral techniques were considered as an alternative means of predicting gross aerodynamic characteristics. Several numerical pilot codes were developed for simple three dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the second order computations indicated good agreement with higher order solutions and experimental results for a variety of wing like shapes and values of the hypersonic similarity parameter M delta approaching one.

  6. Experimental Design for Combinatorial and High Throughput Materials Development

    NASA Astrophysics Data System (ADS)

    Cawse, James N.

    2002-12-01

    In the past decade, combinatorial and high throughput experimental methods have revolutionized the pharmaceutical industry, allowing researchers to conduct more experiments in a week than was previously possible in a year. Now high throughput experimentation is rapidly spreading from its origins in the pharmaceutical world to larger industrial research establishments such as GE and DuPont, and even to smaller companies and universities. Consequently, researchers need to know the kinds of problems, desired outcomes, and appropriate patterns for these new strategies. Editor James Cawse's far-reaching study identifies and applies, with specific examples, these important new principles and techniques. Experimental Design for Combinatorial and High Throughput Materials Development progresses from methods that are now standard, such as gradient arrays, to mathematical developments that are breaking new ground. The former will be particularly useful to researchers entering the field, while the latter should inspire and challenge advanced practitioners. The book's contents are contributed by leading researchers in their respective fields. Chapters include: -High Throughput Synthetic Approaches for the Investigation of Inorganic Phase Space -Combinatorial Mapping of Polymer Blends Phase Behavior -Split-Plot Designs -Artificial Neural Networks in Catalyst Development -The Monte Carlo Approach to Library Design and Redesign This book also contains over 200 useful charts and drawings. Industrial chemists, chemical engineers, materials scientists, and physicists working in combinatorial and high throughput chemistry will find James Cawse's study to be an invaluable resource.

  7. Manifold Regularized Experimental Design for Active Learning.

    PubMed

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant labeled data samples for training. Conventional active learning methods aim to label the most informative samples to alleviate the labeling burden on the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective, because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches select the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate, since the classification hyperplane is inaccurate when the training set is small. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel active learning method called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation of the samples selected to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database, and the Corel image database have been carried out to show how MRED outperforms existing methods.

  8. Plant monitoring techniques and second generation designs

    SciTech Connect

    Kindle, C.H.; Shannon, D.W.; Robertus, R.J.; Pierce, D.D.; Sullivan, R.G.

    1985-03-01

    Chemical and instrumental monitoring techniques suitable for geothermal use are described in a manner to relate them to plant operational problems and downtime avoidance. The use of these techniques permits the detection of scaling, the onset of scaling, corrosion loss, current corrosion rates and incipient heat exchanger failure. Conceptual advances are noted which simplify the research techniques to approaches that should be usable even in some low-capital well-head type power plants. 10 refs., 8 figs.

  9. Plant metabolomics: from experimental design to knowledge extraction.

    PubMed

    Rai, Amit; Umashankar, Shivshankar; Swarup, Sanjay

    2013-01-01

    Metabolomics is one of the most recent additions to the functional genomics approaches. It involves the use of analytical chemistry techniques to provide high-density data on metabolic profiles. The data are then analyzed using advanced statistics and databases to extract biological information, thus providing the metabolic phenotype of an organism. The large variety of metabolites produced by plants through their complex metabolic networks, and the dynamic changes of these metabolites in response to various perturbations, can be studied using metabolomics. Here, we describe the basic features of plant metabolic diversity and the analytical methods used to describe this diversity, including experimental workflows from experimental design through sample preparation and hardware and software choices, combined with knowledge extraction methods. Finally, we describe a scenario for using these workflows to identify differential metabolites and their pathways from complex biological samples.

  10. Comparative experimental study of argon plasma and bipolar coagulation techniques.

    PubMed

    Riegel, T; Tirakotai, W; Mennel, H D; Hellwig, D; Sure, U; Bertalanffy, H; Celik, I

    2006-07-01

    Argon plasma coagulation (APC) is based on the principle of ionised argon creating conductive plasma between an activating electrode and tissue surface and is used as an effective alternative coagulation technique in various surgical disciplines. This trial aims to compare thermal injury in rat brain caused by APC and conventional bipolar coagulation technique. A controlled study design with constant power setting and application time was established. Twenty rats were randomised into the APC and bipolar groups. Each group of ten rats had 20 treated lesions. Early and late histopathological changes, as well as maximum extent of the lesion after 48 hours (h) and 12 days were studied in overall 20 lesions. Although the maximum depth of the lesions was different in APC (2.2 mm) and bipolar (1.8 mm) groups after 48 h, this did not achieve statistical significance (p=0.151). The superficially coagulated area was significantly larger after APC compared with the bipolar technique at the 48 h time point (p=0.032). After twelve days there were no differences in penetration depth (p=0.310) or coagulated area (p=0.222). Tissue defects after APC application on rat brains were comparable to conventional bipolar technique in this trial. The results suggest that argon plasma coagulation (APC) is an effective coagulation technique.

  11. The suitability of selected multidisciplinary design and optimization techniques to conceptual aerospace vehicle design

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1992-01-01

    Four methods for preliminary aerospace vehicle design are reviewed. The first three methods (classical optimization, system decomposition, and system sensitivity analysis (SSA)) employ numerical optimization techniques and numerical gradients to feed back changes in the design variables. The optimum solution is determined by stepping through a series of designs toward a final solution. Of these three, SSA is argued to be the most applicable to a large-scale highly coupled vehicle design where an accurate minimum of an objective function is required. With SSA, several tasks can be performed in parallel. The techniques of classical optimization and decomposition can be included in SSA, resulting in a very powerful design method. The Taguchi method is more of a 'smart' parametric design method that analyzes variable trends and interactions over designer specified ranges with a minimum of experimental analysis runs. Its advantages are its relative ease of use, ability to handle discrete variables, and ability to characterize the entire design space with a minimum of analysis runs.
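As an illustration of the Taguchi-style parametric analysis mentioned above, the sketch below reads main effects of three two-level factors off an L4(2^3) orthogonal array, i.e., four analysis runs instead of the full 2^3 = 8. The response values are made-up measurements, not data from the paper.

```python
L4 = [  # rows = runs; columns = level (0/1) of factors A, B, C
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]
response = [10.0, 14.0, 13.0, 17.0]  # hypothetical measured outputs per run

def main_effect(factor):
    """Mean response at level 1 minus mean response at level 0."""
    lo = [r for row, r in zip(L4, response) if row[factor] == 0]
    hi = [r for row, r in zip(L4, response) if row[factor] == 1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [main_effect(f) for f in range(3)]
```

Because every pair of columns in the array contains each level combination equally often, the factor averages are balanced, which is what lets four runs characterize three factors.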

  12. A Short Guide to Experimental Design and Analysis for Engineers

    DTIC Science & Technology

    2014-04-01

    designs including the simple experiment, matched-pairs, repeated-measures and the single group design. The relevant statistical techniques are also discussed to help identify key quantitative methods for data...authors gain a basic understanding of design, measurement and statistical analysis to support military experiments.

  13. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data, whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by the Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute-force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference, and borrows the concept of a rising threshold while a set of experiment samples is maintained. We demonstrate that this algorithm not only selects highly relevant experiments but also is more efficient than brute-force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
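A toy version of the entropy criterion described above: score each candidate experiment by the Shannon entropy of the outcomes predicted by the current set of probable models, then run the highest-entropy (most discriminating) experiment. The models and candidate measurements are invented for illustration; the actual algorithm adds nested sampling of this entropy landscape rather than brute-force scoring.

```python
import math

def shannon_entropy(counts):
    """Entropy (bits) of an outcome histogram."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def best_experiment(models, experiments):
    scores = {}
    for exp in experiments:
        outcomes = {}
        for model in models:
            o = model(exp)                        # outcome predicted by this model
            outcomes[o] = outcomes.get(o, 0) + 1
        scores[exp] = shannon_entropy(outcomes)   # disagreement among models
    return max(scores, key=scores.get), scores

# Three rival models that differ only in a hypothetical response threshold
models = [lambda x, t=t: x > t for t in (1.0, 2.0, 3.0)]
candidates = [0.5, 1.5, 2.5, 3.5]                 # possible measurement locations
chosen, scores = best_experiment(models, candidates)
```

Measurements at 0.5 or 3.5 have zero entropy (all models agree, so nothing is learned), while 1.5 and 2.5 split the model set and are selected.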

  14. Comparison of the experimental aerodynamic characteristics of theoretically and experimentally designed supercritical airfoils

    NASA Technical Reports Server (NTRS)

    Harris, C. D.

    1974-01-01

    A lifting airfoil theoretically designed for shockless supercritical flow utilizing a complex hodograph method has been evaluated in the Langley 8-foot transonic pressure tunnel at design and off-design conditions. The experimental results are presented and compared with those of an experimentally designed supercritical airfoil which were obtained in the same tunnel.

  15. Web Based Learning Support for Experimental Design in Molecular Biology.

    ERIC Educational Resources Information Center

    Wilmsen, Tinri; Bisseling, Ton; Hartog, Rob

    An important learning goal of a molecular biology curriculum is a certain proficiency level in experimental design. Currently students are confronted with experimental approaches in textbooks, in lectures and in the laboratory. However, most students do not reach a satisfactory level of competence in the design of experimental approaches. This…

  16. Implementation of high throughput experimentation techniques for kinetic reaction testing.

    PubMed

    Nagy, Anton J

    2012-02-01

    Successful implementation of high throughput experimentation (HTE) tools has resulted in their increased acceptance as essential tools in chemical, petrochemical, and polymer R&D laboratories. This article provides a number of concrete examples of HTE systems that have been designed and successfully implemented in studies focused on deriving reaction kinetic data. The implementation of HTE tools for performing kinetic studies of both catalytic and non-catalytic systems results in significantly faster acquisition of the high-quality kinetic modeling data required to quantitatively predict the behavior of complex, multistep reactions.

  17. Circular machine design techniques and tools

    SciTech Connect

    Servranckx, R.V.; Brown, K.L.

    1986-04-01

    Some of the basic optics principles involved in the design of circular accelerators such as alternating-gradient synchrotrons, storage and collision rings, and pulse stretcher rings are outlined. Typical problems facing a designer are defined, and the main references and computational tools presently available are reviewed. Two particular classes of problems that typically occur in accelerator design are listed: global-value problems, which affect the control of parameters characteristic of the complete closed circular machine, and local-value problems. Basic mathematical formulae considered useful for a first draft of a design are given. The basic optics building blocks that can be used to formulate an initial machine design are introduced, giving only their elementary properties and their transfer matrices in one transverse plane. Solutions are presented for some first-order and second-order design problems. (LEW)
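The one-transverse-plane transfer matrices mentioned above can be sketched as follows: 2x2 matrices for a drift and a thin-lens quadrupole, composed into a FODO cell and checked against the standard stability criterion |Tr M| < 2. The element lengths and focal strengths are arbitrary example values.

```python
def drift(L):
    """Field-free drift of length L (one transverse plane, (x, x') coordinates)."""
    return [[1.0, L], [0.0, 1.0]]

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (f < 0 -> defocusing)."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cell_matrix(elements):
    """Compose element matrices in beamline order (first element acts first)."""
    m = [[1.0, 0.0], [0.0, 1.0]]
    for e in elements:
        m = matmul(e, m)
    return m

# FODO cell: focusing quad, drift, defocusing quad, drift
f, L = 2.0, 1.0
fodo = cell_matrix([thin_quad(f), drift(L), thin_quad(-f), drift(L)])
trace = fodo[0][0] + fodo[1][1]
stable = abs(trace) < 2.0   # bounded betatron motion over many turns
```

With f = 2 and L = 1 the trace is 2(1 - L^2/(2 f^2)) = 1.75, so the cell is stable, and the determinant stays 1 as required for symplectic transport.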

  18. End-point controller design for an experimental two-link flexible manipulator using convex optimization

    NASA Technical Reports Server (NTRS)

    Oakley, Celia M.; Barratt, Craig H.

    1990-01-01

    Recent results in linear controller design are used to design an end-point controller for an experimental two-link flexible manipulator. A nominal 14-state linear-quadratic-Gaussian (LQG) controller was augmented with a 528-tap finite-impulse-response (FIR) filter designed using convex optimization techniques. The resulting 278-state controller produced improved end-point trajectory tracking and disturbance rejection in simulation and experimentally in real time.

  19. Experimental Methods Using Photogrammetric Techniques for Parachute Canopy Shape Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Downey, James M.; Lunsford, Charles B.; Desabrais, Kenneth J.; Noetscher, Gregory

    2007-01-01

    NASA Langley Research Center in partnership with the U.S. Army Natick Soldier Center has collaborated on the development of a payload instrumentation package to record the physical parameters observed during parachute air drop tests. The instrumentation package records a variety of parameters including canopy shape, suspension line loads, payload 3-axis acceleration, and payload velocity. This report discusses the instrumentation design and development process, as well as the photogrammetric measurement technique used to provide shape measurements. The scaled model tests were conducted in the NASA Glenn Plum Brook Space Propulsion Facility, OH.

  20. Techniques for designing rotorcraft control systems

    NASA Technical Reports Server (NTRS)

    Levine, William S.; Barlow, Jewel

    1993-01-01

    This report summarizes the work that was done on the project from 1 Apr. 1992 to 31 Mar. 1993. The main goal of this research is to develop a practical tool for rotorcraft control system design based on interactive optimization tools (CONSOL-OPTCAD) and classical rotorcraft design considerations (ADOCS). This approach enables the designer to combine engineering intuition and experience with parametric optimization. The combination should make it possible to produce a better design faster than would be possible using either pure optimization or pure intuition and experience. We emphasize that the goal of this project is not to develop an algorithm. It is to develop a tool. We want to keep the human designer in the design process to take advantage of his or her experience and creativity. The role of the computer is to perform the calculation necessary to improve and to display the performance of the nominal design. Briefly, during the first year we have connected CONSOL-OPTCAD, an existing software package for optimizing parameters with respect to multiple performance criteria, to a simplified nonlinear simulation of the UH-60 rotorcraft. We have also created mathematical approximations to the Mil-specs for rotorcraft handling qualities and input them into CONSOL-OPTCAD. Finally, we have developed the additional software necessary to use CONSOL-OPTCAD for the design of rotorcraft controllers.

  1. Improvement on fuzzy controller design techniques

    NASA Technical Reports Server (NTRS)

    Wang, Paul P.

    1993-01-01

    This paper addresses three main issues, which are somewhat interrelated. The first issue deals with the classification, or types, of fuzzy controllers. Careful examination of the fuzzy controllers designed by various engineers reveals distinctive classes of fuzzy controllers. Classification is believed to be helpful from different perspectives. The second issue deals with design according to specifications; experiments related to the tuning of fuzzy controllers to meet a specification will be discussed. A general design procedure, it is hoped, can be outlined to ease the burden of the design engineer. The third issue deals with the simplicity and limitations of rule-based IF-THEN logical statements. The methodology of the fuzzy-constraint network is proposed here as an alternative to present design practice. It is our belief that predicate calculus and first-order logic possess much more expressive power.

  2. Evaluating abdominal oedema during experimental sepsis using an isotope technique.

    PubMed

    Lattuada, Marco; Maripuu, Enn; Segerstad, Carl Hard af; Lundqvist, Hans; Hedenstierna, Göran

    2012-05-01

    Abdominal oedema is common in sepsis. A technique for the study of such oedema may guide the fluid regime of these patients. We modified a double-isotope technique to evaluate abdominal organ oedema and fluid extravasation in 24 healthy or endotoxin-exposed ('septic') piglets. Two different markers were used: red blood cells (RBC) labelled with Technetium-99m ((99m)Tc) and transferrin labelled with Indium-111 ((111)In). Images were acquired on a dual-head gamma camera. Microscopic evaluation of tissue biopsies was performed to compare data with the isotope technique. No (99m)Tc activity was measured in the plasma fraction in blood sampled after labelling. Similarly, after molecular size gel chromatography, (111)In activity was exclusively found in the high molecular weight fraction of the plasma. Extravasation of transferrin, indicating the degree of abdominal oedema, was 4.06 times higher in the LPS group compared to the healthy controls (P<0.0001). Abdominal free fluid, studied in 3 animals, had as high (111)In activity as plasma, but no (99m)Tc activity. Intestinal lymphatic vessel size was larger in LPS (3.7 ± 1.1 μm) compared to control animals (0.6 ± 0.2 μm; P<0.001), and oedema correlated with villus diameter (R(2) = 0.918) and lymphatic diameter (R(2) = 0.758). A correlation between a normalized index of oedema formation (NI) and intra-abdominal pressure (IAP) was also found: NI = 0.46*IAP - 3.3 (R(2) = 0.56). The technique enables almost continuous recording of abdominal oedema formation and may be a valuable tool in experimental research, with the potential to be applied in the clinic. © 2011 The Authors. Clinical Physiology and Functional Imaging © 2011 Scandinavian Society of Clinical Physiology and Nuclear Medicine.
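
    The reported relation NI = 0.46*IAP - 3.3 (R(2) = 0.56) is an ordinary least-squares fit. A minimal sketch of recovering such a fit from paired observations; the data below are synthetic, generated from the reported slope and intercept, and every other number is illustrative:

```python
import numpy as np

# Synthetic paired observations of intra-abdominal pressure (IAP, in
# arbitrary pressure units) and a normalized oedema index (NI); the
# study's fit was NI = 0.46*IAP - 3.3 with R^2 = 0.56.
rng = np.random.default_rng(0)
iap = rng.uniform(5.0, 25.0, size=30)
ni = 0.46 * iap - 3.3 + rng.normal(0.0, 1.5, size=30)

# Ordinary least-squares fit NI = a*IAP + b via the design matrix [IAP, 1].
A = np.column_stack([iap, np.ones_like(iap)])
(a, b), *_ = np.linalg.lstsq(A, ni, rcond=None)

# Coefficient of determination R^2.
resid = ni - (a * iap + b)
r2 = 1.0 - resid.var() / ni.var()
print(f"slope={a:.2f} intercept={b:.2f} R^2={r2:.2f}")
```

    With more noise in the synthetic data the recovered R(2) drops toward the reported 0.56 while the slope and intercept stay near their true values.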

  3. U-stitching splenorraphy technique: experimental and clinical study.

    PubMed

    Tsaroucha, Alexandra K; Pitiakoudis, Michail S; Chanos, Georgios; Chiotis, Anestis S; Argyropoulou, Paraskevi I; Prassopoulos, Panos; Simopoulos, Constantinos E

    2005-04-01

    The aim of the present study was to describe the laboratory development and the subsequent clinical utility of the U-stitching technique for splenorraphy over recent years in a general non-trauma hospital. Patients with splenectomies and patients treated conservatively during the same time period are also presented. In the 15-year period from September 1988 until September 2003, 65 patients were diagnosed with splenic injury following admission to the 2nd Department of Surgery, Democritus University Hospital, after blunt abdominal trauma. During the first 3 years, 14 patients were admitted; one of them was treated conservatively and 13 had splenectomies. Because computed tomography (CT) was not available at that time, these 14 patients form a control group. During the remaining 12-year period, 51 patients (39 male and 12 female; age, 4-82 years; mean, 31.1 years; SD, 19.7 years) were treated conservatively or surgically, either with splenectomy or with splenorraphy. Splenorraphy was performed using the U-stitching technique. This alternative splenorraphy technique was first tested on experimental models at the 2nd Department of Surgery, Democritus University Hospital, and then applied clinically with success. The medical records for these patients were reviewed to extract the data for the present study. Thirty-six patients (70.6% of 51 patients) were treated surgically; of these, 21 (41.2% of 51 patients) had splenectomy and 15 (29.4% of 51 patients) had splenorraphy. Non-operative treatment was initially given to 15 patients (29.4% of 51 patients). Two of them had delayed rupture of the spleen and underwent splenectomy (at 8 and 40 days). The total number of preserved spleens was 28 of 51 (54.9%). None of the patients with conservative treatment or splenorraphy died. One patient with splenectomy died later from overwhelming sepsis. Splenic salvage is now a treatment goal. If the patient is haemodynamically unstable and splenorraphy is possible, the U

  4. Advanced experimental techniques for transonic wind tunnels - Final lecture

    NASA Technical Reports Server (NTRS)

    Kilgore, Robert A.

    1987-01-01

    A philosophy of experimental techniques is presented, suggesting that in order to be successful, one should like what one does, have the right tools, stick to the job, avoid diversions, work hard, interact with people, be informed, keep it simple, be self sufficient, and strive for perfection. Sources of information, such as bibliographies, newsletters, technical reports, and technical contacts and meetings are recommended. It is pointed out that adaptive-wall test sections eliminate or reduce wall interference effects, and magnetic suspension and balance systems eliminate support-interference effects, while the problem of flow quality remains with all wind tunnels. It is predicted that in the future it will be possible to obtain wind tunnel results at the proper Reynolds number, and the effects of flow unsteadiness, wall interference, and support interference will be eliminated or greatly reduced.

  5. Cloud Computing Techniques for Space Mission Design

    NASA Technical Reports Server (NTRS)

    Arrieta, Juan; Senent, Juan

    2014-01-01

    The overarching objective of space mission design is to tackle complex problems producing better results, and faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.

  7. CMOS-array design-automation techniques

    NASA Technical Reports Server (NTRS)

    Feller, A.; Lombardt, T.

    1979-01-01

    This thirty-four-page report discusses the design of a 4,096-bit complementary metal oxide semiconductor (CMOS) read-only memory (ROM). The CMOS ROM is either mask- or laser-programmable. The report is divided into six sections: section one describes the background of ROM chips; section two presents design goals for the chip; section three discusses chip implementation and chip statistics; conclusions and recommendations are given in sections four through six.

  9. Applications of Advanced Experimental Methods to Visual Technology Research Simulator Studies: Supplemental Techniques.

    DTIC Science & Technology

    1981-01-01

    [OCR fragments; recoverable details:] C. W. Simon, Canyon Research Group, Inc., 741 Lakefield Road, Suite B, Westlake Village, California 91361. Final report, May 1978 - March 1980 (NAVTRAEQUIPCEN 78-C-0060-3). Supplemental techniques covered include design augmentation, using Yates' algorithm with screening designs, analyzing residuals, and identifying the experimental conditions; several items deal with topics related to the analysis of such data.

  10. Nonlinear potential analysis techniques for supersonic-hypersonic configuration design

    NASA Technical Reports Server (NTRS)

    Clever, W. C.; Shankar, V.

    1983-01-01

    Approximate nonlinear inviscid theoretical techniques for predicting aerodynamic characteristics and surface pressures of relatively slender vehicles at moderate hypersonic speeds were developed. Emphasis was placed on approaches that would be responsive to a preliminary configuration design level of effort. Second-order small-disturbance and full potential theory were utilized to meet this objective. Numerical pilot codes were developed for relatively general three-dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with higher-order solutions and experimental results for a variety of wing, body, and wing-body shapes for values of the hypersonic similarity parameter M delta approaching one. Computational times of about a minute per case were achieved for practical aircraft arrangements.

  11. Biofilms and mechanics: a review of experimental techniques and findings

    NASA Astrophysics Data System (ADS)

    Gordon, Vernita D.; Davis-Fields, Megan; Kovach, Kristin; Rodesney, Christopher A.

    2017-06-01

    Biofilms are developmentally dynamic communities of sessile microbes that adhere to each other and, often, to other structures in their environment. The cohesive mechanical forces binding microbes to each other confer mechanical and structural stability on the biofilm and give rise to biofilm viscoelasticity. The adhesive mechanical forces binding microbes to other structures in their environment can promote biofilm initiation and mechanosensing that leads to changes in biological activity. Thus, physical mechanics is intrinsic to characteristics that distinguish biofilms from free-swimming or free-floating microbes in liquid culture. However, very little is known about the specifics of what mechanical traits characterize different types of biofilms at different stages of development. Even less is known about how mechanical inputs impact microbial biology and how microbes can adjust their mechanical coupling to, and interaction with, their environment. These knowledge gaps arise, in part, from the challenges associated with experimental measurements of microbial and biofilm biomechanics. Here, we review extant experimental techniques and their most salient findings to date. At the end of this review we indicate areas where significant advances in the state of the art are heading.

  12. A Novel Experimental Technique to Simulate Pillar Burst in Laboratory

    NASA Astrophysics Data System (ADS)

    He, M. C.; Zhao, F.; Cai, M.; Du, S.

    2015-09-01

    Pillar burst is one type of rockburst that occurs in underground mines. Simulating the stress change and obtaining insight into the pillar burst phenomenon under laboratory conditions are essential for studying the rock behavior during pillar burst in situ. To study the failure mechanism, a novel experimental technique was proposed and a series of tests were conducted on granite specimens using a true-triaxial strainburst test system. Acoustic emission (AE) sensors were used to monitor the rock fracturing process. The damage evolution process was investigated using techniques such as observation of macro- and micro-fracture characteristics, AE energy evolution, b value analysis, and fractal dimension analysis of cracks on fragments. The obtained results indicate that stepped loading and unloading simulated the pillar burst phenomenon well. Four deformation stages are identified: the initial stress state, unloading step I, unloading step II, and the final burst. It is observed that AE energy increases sharply at the initial stress state, accumulates slowly at unloading steps I and II, and increases dramatically at peak stress. Meanwhile, the mean b values fluctuate around 3.50 for the first three deformation stages and then decrease to 2.86 at the final stage, indicating the generation of a large number of macro fractures. Before the test, the fractal dimension values are scattered and mainly vary between 1.10 and 1.25, whereas after failure the values concentrate around 1.25-1.35.
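
    The b value analysis mentioned above is commonly performed with the Aki (1965) maximum-likelihood estimator for the Gutenberg-Richter relation log10(N) = a - b*M. A hedged sketch on synthetic AE magnitudes; the estimator is standard, but the data and completeness magnitude below are illustrative, not the authors' procedure:

```python
import numpy as np

# Aki maximum-likelihood b-value estimate from event magnitudes:
# b = log10(e) / (mean(M) - M_min), for events above the completeness
# magnitude M_min.
def b_value(magnitudes, m_min):
    m = np.asarray(magnitudes)
    m = m[m >= m_min]                   # keep events above completeness
    return np.log10(np.e) / (m.mean() - m_min)

# Synthetic magnitudes drawn from the exponential distribution implied
# by a true b-value of 3.5 (the level reported before final failure).
rng = np.random.default_rng(1)
beta = 3.5 / np.log10(np.e)             # Gutenberg-Richter rate in ln units
mags = 1.0 + rng.exponential(1.0 / beta, size=5000)
print(f"estimated b = {b_value(mags, 1.0):.2f}")   # close to 3.5
```

    A drop in the estimate, as in the reported fall from about 3.50 to 2.86, signals a shift toward larger (macro) fracture events.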

  13. Damage tolerant design using collapse techniques

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1982-01-01

    A new approach to the design of structures for improved global damage tolerance is presented. In its undamaged condition the structure is designed subject to strength, displacement, and buckling constraints. In the damaged condition the only constraint is that the structure will not collapse. The collapse load calculation is formulated as a maximization problem and solved by an interior extended penalty function. The design for minimum weight subject to constraints on the undamaged structure and a specified level of the collapse load is a minimization problem which is also solved by a penalty function formulation. Thus the overall problem is a nested, or multilevel, optimization. Examples are presented to demonstrate the difference between the present and more traditional approaches.

  14. Techniques for designing rotorcraft control systems

    NASA Technical Reports Server (NTRS)

    Yudilevitch, Gil; Levine, William S.

    1994-01-01

    Over the last two and a half years we have been demonstrating a new methodology for the design of rotorcraft flight control systems (FCS) to meet handling qualities requirements. This method is based on multicriterion optimization as implemented in the optimization package CONSOL-OPTCAD (C-O). This package has been developed at the Institute for Systems Research (ISR) at the University of Maryland at College Park. This design methodology has been applied to the design of a FCS for the UH-60A helicopter in hover having the ADOCS control structure. The controller parameters have been optimized to meet the ADS-33C specifications. Furthermore, using this approach, an optimal (minimum control energy) controller has been obtained and trade-off studies have been performed.

  15. Evolutionary Technique for Designing Optimized Arrays

    NASA Astrophysics Data System (ADS)

    Villazón, J.; Ibañez, A.

    2011-06-01

    Many ultrasonic inspection applications in industry could benefit from the use of phased array distributions specifically designed for them. Some common design requirements are: to adapt the shape of the array to that of the part to be inspected, to use large apertures for increasing lateral resolution, to find a layout of elements that avoids artifacts produced by lateral and/or grating lobes, and to keep the total number of independent elements (and the number of control channels) as low as possible to reduce the complexity and cost of the inspection system. Recent advances in transducer technology have made it possible to design and build arrays with non-regular layouts of elements. In this paper we propose to use evolutionary algorithms to find layouts of ultrasonic arrays (whether 1D or 2D) that approach a set of specified beampattern characteristics using a low number of elements.

  16. EBTS: DESIGN AND EXPERIMENTAL STUDY.

    SciTech Connect

    Pikin, A.; Alessi, J.; Beebe, E.; Kponou, A.; Prelec, K.; Kuznetsov, G.; Tiunov, M.

    2000-11-06

    Experimental study of the BNL Electron Beam Test Stand (EBTS), which is a prototype of the Relativistic Heavy Ion Collider (RHIC) Electron Beam Ion Source (EBIS), is currently underway. The basic physics and engineering aspects of a high current EBIS implemented in EBTS are outlined and construction of its main systems is presented. Efficient transmission of a 10 A electron beam through the ion trap has been achieved. Experimental results on generation of multiply charged ions with both continuous gas and external ion injection confirm stable operation of the ion trap.

  17. Conceptual design report, CEBAF basic experimental equipment

    SciTech Connect

    1990-04-13

    The Continuous Electron Beam Accelerator Facility (CEBAF) will be dedicated to basic research in Nuclear Physics using electrons and photons as projectiles. The accelerator configuration allows three nearly continuous beams to be delivered simultaneously in three experimental halls, which will be equipped with complementary sets of instruments: Hall A--two high resolution magnetic spectrometers; Hall B--a large acceptance magnetic spectrometer; Hall C--a high-momentum, moderate resolution, magnetic spectrometer and a variety of more dedicated instruments. This report contains a short description of the initial complement of experimental equipment to be installed in each of the three halls.

  18. Skin microbiome surveys are strongly influenced by experimental design

    PubMed Central

    Meisel, Jacquelyn S.; Hannigan, Geoffrey D.; Tyldsley, Amanda S.; SanMiguel, Adam J.; Hodkinson, Brendan P.; Zheng, Qi; Grice, Elizabeth A.

    2016-01-01

    Culture-independent studies to characterize skin microbiota are increasingly common, due in part to affordable and accessible sequencing and analysis platforms. Compared to culture-based techniques, DNA sequencing of the bacterial 16S ribosomal RNA (rRNA) gene or whole metagenome shotgun (WMS) sequencing provide more precise microbial community characterizations. Most widely used protocols were developed to characterize microbiota of other habitats (i.e. gastrointestinal), and have not been systematically compared for their utility in skin microbiome surveys. Here we establish a resource for the cutaneous research community to guide experimental design in characterizing skin microbiota. We compare two widely sequenced regions of the 16S rRNA gene to WMS sequencing for recapitulating skin microbiome community composition, diversity, and genetic functional enrichment. We show that WMS sequencing most accurately recapitulates microbial communities, but sequencing of hypervariable regions 1-3 of the 16S rRNA gene provides highly similar results. Sequencing of hypervariable region 4 poorly captures skin commensal microbiota, especially Propionibacterium. WMS sequencing, which is resource- and cost-intensive, provides evidence of a community’s functional potential; however, metagenome predictions based on 16S rRNA sequence tags closely approximate WMS genetic functional profiles. This work highlights the importance of experimental design for downstream results in skin microbiome surveys. PMID:26829039

  19. Techniques for molecular imaging probe design.

    PubMed

    Reynolds, Fred; Kelly, Kimberly A

    2011-12-01

    Molecular imaging allows clinicians to visualize disease-specific molecules, thereby providing relevant information in the diagnosis and treatment of patients. With advances in genomics and proteomics and underlying mechanisms of disease pathology, the number of targets identified has significantly outpaced the number of developed molecular imaging probes. There has been a concerted effort to bridge this gap with multidisciplinary efforts in chemistry, proteomics, physics, material science, and biology--all essential to progress in molecular imaging probe development. In this review, we discuss target selection, screening techniques, and probe optimization with the aim of developing clinically relevant molecularly targeted imaging agents.

  20. Techniques for Molecular Imaging Probe Design

    PubMed Central

    Reynolds, Fred; Kelly, Kimberly A.

    2011-01-01

    Molecular imaging allows clinicians to visualize disease specific molecules, thereby providing relevant information in the diagnosis and treatment of patients. With advances in genomics and proteomics and underlying mechanisms of disease pathology, the number of targets identified has significantly outpaced the number of developed molecular imaging probes. There has been a concerted effort to bridge this gap with multidisciplinary efforts in chemistry, proteomics, physics, material science, and biology; all essential to progress in molecular imaging probe development. In this review, we will discuss target selection, screening techniques and probe optimization with the aim of developing clinically relevant molecularly targeted imaging agents. PMID:22201532

  1. Experimental Stream Facility: Design and Research

    EPA Science Inventory

    The Experimental Stream Facility (ESF) is a valuable research tool for the U.S. Environmental Protection Agency’s (EPA) Office of Research and Development’s (ORD) laboratories in Cincinnati, Ohio. This brochure describes the ESF, which is one of only a handful of research facilit...

  2. Overview of Passive Solar Design Techniques.

    DTIC Science & Technology

    1982-09-01

    the "market acceptance" of the passive solar designs. In mast cases, a passive system is integrated into the architecture of a building, which...increases discomfort by decreasing the rate of moisture evaporation from the skin. The Bioclimatic Chart developed by V. Olgyay provides a convenient way...outdoors and, therefore, not previously cir- culated through the system. passive solar system: An assembly of natural and architectural components

  3. Advanced Computational Techniques for Power Tube Design.

    DTIC Science & Technology

    1986-07-01

    [OCR fragments; recoverable points:] tooling and fixturing applications were added to the existing computer-aided engineering capabilities; Helix TWT Manufacturing has implemented a tooling and fixturing system; the backbone of the computer network is a Sytek broadband LAN interconnecting terminals; a fast automatic network analyzer (FANA) electrically characterizes the slow-wave helices of traveling-wave tubes (TWTs) for engineering design.

  4. Experimental measurements of the thermal conductivity of ash deposits: Part 1. Measurement technique

    SciTech Connect

    A. L. Robinson; S. G. Buckley; N. Yang; L. L. Baxter

    2000-04-01

    This paper describes a technique developed to make in situ, time-resolved measurements of the effective thermal conductivity of ash deposits formed under conditions that closely replicate those found in the convective pass of a commercial boiler. Since ash deposit thermal conductivity is thought to be strongly dependent on deposit microstructure, the technique is designed to minimize the disturbance of the natural deposit microstructure. Traditional techniques for measuring deposit thermal conductivity generally do not preserve the sample microstructure. Experiments are described that demonstrate the technique, quantify experimental uncertainty, and determine the thermal conductivity of highly porous, unsintered deposits. The average measured conductivity of loose, unsintered deposits is 0.14 ± 0.03 W/(m K), approximately midway between rational theoretical limits for deposit thermal conductivity.
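
    An effective conductivity of this kind follows from Fourier's law for one-dimensional steady conduction through the deposit, k = q*L/ΔT. A minimal sketch; the heat flux, thickness, and temperature drop below are illustrative numbers, not the paper's data:

```python
# Effective thermal conductivity from Fourier's law for 1-D steady
# conduction: k = q * L / dT, in W/(m K).
def effective_conductivity(heat_flux_w_m2, thickness_m, delta_t_k):
    """heat flux in W/m^2, deposit thickness in m, temperature drop in K."""
    return heat_flux_w_m2 * thickness_m / delta_t_k

# Example: 700 W/m^2 through a 2 mm deposit sustaining a 10 K drop.
k = effective_conductivity(700.0, 0.002, 10.0)
print(f"k = {k:.2f} W/(m K)")   # 0.14, the magnitude reported for loose deposits
```

    In the in-situ measurement the difficulty is not this arithmetic but obtaining q, L, and ΔT without disturbing the deposit microstructure.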

  5. Power Analysis Tutorial for Experimental Design Software

    DTIC Science & Technology

    2014-11-01

    [OCR fragments; recoverable points:] set all of the included two-factor interaction terms to "If Possible" (requires a left mouse click under the Estimability heading); once the design is built, left-click the triangle next to Custom Design, choose Advanced Options, then Set Delta for Power, and enter the delta; the guide assumes that the planning process is complete and all that is left is to construct a test run matrix.

  6. A new acceleration technique for the design of fibre gratings.

    PubMed

    Carvalho, J C C; Sousa, M J; Sales Júnior, C S; Costa, J C W A; Francês, C R L; Segatto, M E V

    2006-10-30

    In this paper we propose a novel acceleration technique for the design of fibre gratings based on a genetic algorithm (GA). It is shown that, with an appropriate reformulation of the wavelength sampling scheme, it is possible to design high-quality optical filters with low computational effort. Our results show that the proposed technique can significantly reduce the GA's processing time.
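
    A generic real-coded GA of the kind used for such filter synthesis can be sketched as follows. The misfit function and all parameters below are illustrative stand-ins, not the authors' grating model or their accelerated wavelength-sampling scheme:

```python
import numpy as np

# Evolve a parameter vector (e.g. sampled coupling coefficients) to
# minimize the misfit to a target response. The "response" here is a
# stand-in quadratic misfit, not a grating model.
rng = np.random.default_rng(2)
target = np.array([0.2, 0.8, 0.5, 0.1])

def fitness(pop):
    return -((pop - target) ** 2).sum(axis=1)   # higher is better

pop = rng.uniform(0.0, 1.0, size=(40, target.size))
for _ in range(200):
    # Pairwise tournament selection (with replacement).
    i, j = rng.integers(0, len(pop), (2, len(pop)))
    parents = np.where((fitness(pop[i]) > fitness(pop[j]))[:, None],
                       pop[i], pop[j])
    # Uniform crossover between neighbouring parents, Gaussian mutation.
    mask = rng.random(parents.shape) < 0.5
    children = np.where(mask, parents, np.roll(parents, 1, axis=0))
    children += rng.normal(0.0, 0.02, children.shape)
    pop = children

best = pop[np.argmax(fitness(pop))]
print(np.round(best, 2))   # approaches the target vector
```

    The acceleration the paper reports comes from evaluating the fitness on fewer, better-chosen wavelength samples, which this generic skeleton does not attempt to reproduce.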

  7. Systematic Experimental Designs For Mixed-species Plantings

    Treesearch

    Jeffery C. Goelz

    2001-01-01

    Systematic experimental designs provide splendid demonstration areas for scientists and land managers to observe the effects of a gradient of species composition. Systematic designs are based on large plots where species composition varies gradually. Systematic designs save considerable space and require many fewer seedlings than conventional mixture designs. One basic...

  8. Verification of Experimental Techniques for Flow Surface Determination

    NASA Technical Reports Server (NTRS)

    Lissenden, Cliff J.; Lerch, Bradley A.; Ellis, John R.; Robinson, David N.

    1996-01-01

    The concept of a yield surface is central to the mathematical formulation of a classical plasticity theory. However, at elevated temperatures, material response can be highly time-dependent, which is beyond the realm of classical plasticity. Viscoplastic theories have been developed for just such conditions. In viscoplastic theories, the flow law is given in terms of inelastic strain rate rather than the inelastic strain increment used in time-independent plasticity. Thus, surfaces of constant inelastic strain rate or flow surfaces are to viscoplastic theories what yield surfaces are to classical plasticity. The purpose of the work reported herein was to validate experimental procedures for determining flow surfaces at elevated temperatures. Since experimental procedures for determining yield surfaces in axial/torsional stress space are well established, they were employed -- except inelastic strain rates were used rather than total inelastic strains. In yield-surface determinations, the use of small-offset definitions of yield minimizes the change of material state and allows multiple loadings to be applied to a single specimen. The key to the experiments reported here was precise, decoupled measurement of axial and torsional strain. With this requirement in mind, the performance of a high-temperature multi-axial extensometer was evaluated by comparing its results with strain gauge results at room temperature. Both the extensometer and strain gauges gave nearly identical yield surfaces (both initial and subsequent) for type 316 stainless steel (316 SS). The extensometer also successfully determined flow surfaces for 316 SS at 650 C. Furthermore, to judge the applicability of the technique for composite materials, yield surfaces were determined for unidirectional tungsten/Kanthal (Fe-Cr-Al).
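
    The small-offset definition of yield used in such probing can be illustrated on synthetic data: yield is declared when the inelastic strain (total strain minus stress/E) first exceeds a tiny offset, so the material state is barely disturbed. The bilinear stress-strain curve and all numbers below are hypothetical, not from the experiments:

```python
import numpy as np

# Synthetic bilinear stress-strain response: elastic up to 200 MPa,
# then a shallow hardening slope. E is roughly that of 316 SS.
E = 193e9                                   # Pa
strain = np.linspace(0.0, 0.004, 400)
stress = np.minimum(E * strain,
                    200e6 + 2e9 * (strain - 200e6 / E).clip(0.0))

# Small-offset yield: first point where inelastic strain exceeds the
# offset (10 microstrain here, illustrative of a "small" definition).
offset = 10e-6
inelastic = strain - stress / E
k = np.argmax(inelastic > offset)           # first index past the offset
print(f"offset yield stress ~ {stress[k] / 1e6:.0f} MPa")
```

    With inelastic strain rate substituted for inelastic strain, the same thresholding idea yields the flow surfaces of viscoplastic theories.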

  9. The Implications of "Contamination" for Experimental Design in Education

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.

    2011-01-01

    Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…

  10. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    ERIC Educational Resources Information Center

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gulnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the…

  11. Teaching Experimental Design to Elementary School Pupils in Greece

    ERIC Educational Resources Information Center

    Karampelas, Konstantinos

    2016-01-01

    This research is a study about the possibility to promote experimental design skills to elementary school pupils. Experimental design and the experiment process are foundational elements in current approaches to Science Teaching, as they provide learners with profound understanding about knowledge construction and science inquiry. The research was…

  13. Autism genetics: Methodological issues and experimental design.

    PubMed

    Sacco, Roberto; Lintas, Carla; Persico, Antonio M

    2015-10-01

    Autism is a complex neuropsychiatric disorder of developmental origin, where multiple genetic and environmental factors likely interact resulting in a clinical continuum between "affected" and "unaffected" individuals in the general population. During the last two decades, relevant progress has been made in identifying chromosomal regions and genes in linkage or association with autism, but no single gene has emerged as a major cause of disease in a large number of patients. The purpose of this paper is to discuss specific methodological issues and experimental strategies in autism genetic research, based on fourteen years of experience in patient recruitment and association studies of autism spectrum disorder in Italy.

  14. Experimental Design for Vector Output Systems

    PubMed Central

    Banks, H.T.; Rehm, K.L.

    2013-01-01

    We formulate an optimal design problem for the selection of best states to observe and optimal sampling times for parameter estimation or inverse problems involving complex nonlinear dynamical systems. An iterative algorithm for implementation of the resulting methodology is proposed. Its use and efficacy is illustrated on two applied problems of practical interest: (i) dynamic models of HIV progression and (ii) modeling of the Calvin cycle in plant metabolism and growth. PMID:24563655
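
    Optimal selection of observation states and sampling times in this vein typically maximizes the Fisher information of the design. A minimal sketch for a scalar-parameter toy model; the exponential model and greedy selection below are illustrative, not the paper's HIV or Calvin-cycle systems or its iterative algorithm:

```python
import numpy as np

# Toy model y(t) = exp(-theta * t) with i.i.d. observation noise.
# The sensitivity is dy/dtheta = -t * exp(-theta * t), and the Fisher
# information of a set of sampling times is the sum of its squares,
# so the most informative times are the highest-sensitivity ones.
theta = 1.0
grid = np.linspace(0.0, 5.0, 101)           # candidate sampling times
sens2 = (grid * np.exp(-theta * grid)) ** 2

# Pick the 3 most informative candidate times (scalar parameter, so
# this reduces to taking the largest squared sensitivities).
times = np.sort(grid[np.argsort(sens2)[-3:]])
print(times)                                # clusters near t = 1/theta
```

    For vector parameters the scalar sum becomes a Fisher information matrix and a scalarization such as its determinant (D-optimality) is maximized instead.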

  15. Advanced alloy design technique: High temperature cobalt base superalloy

    NASA Technical Reports Server (NTRS)

    Dreshfield, R. L.; Freche, J. C.; Sandrock, G. D.

    1972-01-01

    An advanced alloy design technique was developed for treating alloys so that they will have extended life in service at high and intermediate temperatures. The process stabilizes the microstructure of the alloy by designing it so that the compound identified with embrittlement is eliminated or minimized. The design process is being used to develop both nickel- and cobalt-base superalloys.

  16. A Robust Adaptive Autonomous Approach to Optimal Experimental Design

    NASA Astrophysics Data System (ADS)

    Gu, Hairong

    Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, are difficult to handle with existing experimental procedures, for two reasons. First, existing procedures require a parametric model to serve as a proxy for the latent data structure or data-generating mechanism at the beginning of an experiment; for the experimental scenarios of concern, however, a sound model is often unavailable before the experiment. Second, such scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data-collection cycle, and existing procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named the robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, performing function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges in the experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement of a parametric model at the beginning of an experiment; design optimization is

  17. Experimental design for functional MRI of scene memory encoding.

    PubMed

    Narayan, Veena M; Kimberg, Daniel Y; Tang, Kathy Z; Detre, John A

    2005-03-01

    The use of functional imaging to identify encoding-related areas in the medial temporal lobe has previously been explored for presurgical evaluation in patients with temporal lobe epilepsy. Optimizing sensitivity in such paradigms is critical for the reliable detection of regions most closely engaged in memory encoding. A variety of experimental designs have been used to detect encoding-related activity, including blocked, sparse event-related, and rapid event-related designs. Although blocked designs are generally more sensitive than event-related designs, design and analysis advantages could potentially overcome this difference. In the present study, we directly contrast different experimental designs in terms of the intensity, extent, and lateralization of activation detected in healthy subjects. Our results suggest that although improved design augments the sensitivity of event-related designs, these benefits are not sufficient to overcome the sensitivity advantages of traditional blocked designs.

  18. Optimal experimental design for diffusion kurtosis imaging.

    PubMed

    Poot, Dirk H J; den Dekker, Arnold J; Achten, Eric; Verhoye, Marleen; Sijbers, Jan

    2010-03-01

    Diffusion kurtosis imaging (DKI) is a new magnetic resonance imaging (MRI) model that describes the non-Gaussian diffusion behavior in tissues. It has recently been shown that DKI parameters, such as the radial or axial kurtosis, are more sensitive to brain physiology changes than the well-known diffusion tensor imaging (DTI) parameters in several white and gray matter structures. In order to estimate either DTI or DKI parameters with maximum precision, the diffusion weighting gradient settings that are applied during the acquisition need to be optimized. Indeed, it has been shown previously that optimizing the set of diffusion weighting gradient settings can have a significant effect on the precision with which DTI parameters can be estimated. In this paper, we focus on the optimization of DKI gradient settings. Commonly, DKI data are acquired using a standard set of diffusion weighting gradients with fixed directions and with regularly spaced gradient strengths. In this paper, we show that such gradient settings are suboptimal with respect to the precision with which DKI parameters can be estimated. Furthermore, the gradient directions and the strengths of the diffusion-weighted MR images are optimized by minimizing the Cramér-Rao lower bound of DKI parameters. The impact of the optimized gradient settings is evaluated on both simulated and experimentally recorded datasets. It is shown that the precision with which the kurtosis parameters can be estimated increases substantially by optimizing the gradient settings.
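
    The Cramér-Rao-based optimization above can be sketched in one dimension: choose the set of diffusion weightings (b-values) that minimizes the trace of the CRLB for the kurtosis signal model. The signal form is the standard DKI b-value dependence, but the nominal parameter values, noise level, and candidate b-grid below are invented for illustration.

```python
import numpy as np
from itertools import combinations

# A-optimal choice of b-values for a toy kurtosis signal model
# S(b) = S0 * exp(-b*D + (1/6) * b^2 * D^2 * K), with S0 known.
# Nominal values (D in mm^2/s, b in s/mm^2) are illustrative.
S0, D, K, sigma = 1.0, 1.0e-3, 1.0, 0.02

def jacobian(bvals):
    """Sensitivities of S(b) with respect to (D, K)."""
    b = np.asarray(bvals, dtype=float)
    S = S0 * np.exp(-b * D + (b**2) * (D**2) * K / 6.0)
    dS_dD = S * (-b + (b**2) * D * K / 3.0)
    dS_dK = S * ((b**2) * (D**2) / 6.0)
    return np.column_stack([dS_dD, dS_dK])

def crlb_trace(bvals):
    """Trace of the Cramer-Rao lower bound for (D, K)."""
    J = jacobian(bvals)
    F = J.T @ J / sigma**2          # Fisher information matrix
    return float(np.trace(np.linalg.inv(F)))

candidates = [0, 500, 1000, 1500, 2000, 2500, 3000]
best = min(combinations(candidates, 3), key=crlb_trace)
print(best, crlb_trace(best))
```

    The exhaustive search over three-point designs shows why regularly spaced b-values are generally suboptimal: the A-optimal set is not equispaced. The paper's actual optimization additionally covers gradient directions and a full parameter vector.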

  19. Classical controller design techniques for fractional order case.

    PubMed

    Yeroglu, Celaleddin; Tan, Nusret

    2011-07-01

    This paper presents some classical controller design techniques for the fractional order case. New robust lag, lag-lead, PI controller design methods for control systems with a fractional order interval transfer function (FOITF) are proposed using classical design methods with the Bode envelopes of the FOITF. These controllers satisfy the robust performance specifications of the fractional order interval plant. In order to design a classical PID controller, an optimization technique based on fractional order reference model is used. PID controller parameters are obtained using the least squares optimization method. Different PID controller parameters that satisfy stability have been obtained for the same plant.

  20. Tabletop Games: Platforms, Experimental Games and Design Recommendations

    NASA Astrophysics Data System (ADS)

    Haller, Michael; Forlines, Clifton; Koeffel, Christina; Leitner, Jakob; Shen, Chia

    While the last decade has seen massive improvements in not only the rendering quality, but also the overall performance of console and desktop video games, these improvements have not necessarily led to a greater population of video game players. In addition to continuing these improvements, the video game industry is also constantly searching for new ways to convert non-players into dedicated gamers. Despite the growing popularity of computer-based video games, people still love to play traditional board games, such as Risk, Monopoly, and Trivial Pursuit. Both video and board games have their strengths and weaknesses, and an intriguing proposition is to merge the two worlds. We believe that a tabletop form-factor provides an ideal interface for digital board games. The design and implementation of tabletop games will be influenced by the hardware platforms, form factors, sensing technologies, as well as input techniques and devices that are available and chosen. This chapter is divided into three major sections. In the first section, we describe the most recent tabletop hardware technologies that have been used by tabletop researchers and practitioners. In the second section, we discuss a set of experimental tabletop games. The third section presents ten evaluation heuristics for tabletop game design.

  1. Experimental investigation of design parameters on dry powder inhaler performance.

    PubMed

    Ngoc, Nguyen Thi Quynh; Chang, Lusi; Jia, Xinli; Lau, Raymond

    2013-11-30

    The study aims to investigate the impact of various design parameters of a dry powder inhaler on the turbulence intensities generated and on the performance of the inhaler. The flow fields and turbulence intensities in the dry powder inhaler are measured using particle image velocimetry (PIV) techniques. In vitro aerosolization and deposition of a blend of budesonide and lactose are measured using an Andersen Cascade Impactor. Design parameters such as inhaler grid hole diameter, grid voidage, and chamber length are considered. The experimental results reveal that the hole diameter on the grid has negligible impact on the turbulence intensity generated in the chamber. On the other hand, hole diameters smaller than a critical size can lead to performance degradation due to excessive particle-grid collisions. An increase in grid voidage can improve the inhaler performance, but the effect diminishes at high grid voidage. An increase in the chamber length can enhance the turbulence intensity generated but also increases powder adhesion on the inhaler wall.

  2. Hierarchical aggregation for information visualization: overview, techniques, and design guidelines.

    PubMed

    Elmqvist, Niklas; Fekete, Jean-Daniel

    2010-01-01

    We present a model for building, visualizing, and interacting with multiscale representations of information visualization techniques using hierarchical aggregation. The motivation for this work is to make visual representations more visually scalable and less cluttered. The model allows for augmenting existing techniques with multiscale functionality, as well as for designing new visualization and interaction techniques that conform to this new class of visual representations. We give some examples of how to use the model for standard information visualization techniques such as scatterplots, parallel coordinates, and node-link diagrams, and discuss existing techniques that are based on hierarchical aggregation. This yields a set of design guidelines for aggregated visualizations. We also present a basic vocabulary of interaction techniques suitable for navigating these multiscale visualizations.
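
    The core idea of hierarchical aggregation can be sketched with a quadtree over scatterplot points: each node stores the centroid and count of the points beneath it, so rendering at a given depth draws one glyph per aggregate instead of one per point. The data, depth limit, and node layout below are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Recursively bin 2-D points into a quadtree; each node keeps the centroid
# and count of the points it covers (the aggregate a renderer would draw).
def aggregate(points, depth, bounds=None):
    pts = np.asarray(points, dtype=float)
    if bounds is None:
        bounds = (pts.min(axis=0), pts.max(axis=0))
    node = {"centroid": pts.mean(axis=0), "count": len(pts), "children": []}
    if depth > 0 and len(pts) > 1:
        lo, hi = bounds
        mid = (lo + hi) / 2.0
        for qx in (0, 1):
            for qy in (0, 1):
                # Half-open bins; the top edge is padded so max points land inside.
                mask = ((pts[:, 0] >= (mid[0] if qx else lo[0])) &
                        (pts[:, 0] <  (hi[0] + 1e-9 if qx else mid[0])) &
                        (pts[:, 1] >= (mid[1] if qy else lo[1])) &
                        (pts[:, 1] <  (hi[1] + 1e-9 if qy else mid[1])))
                if mask.any():
                    sub_lo = np.array([mid[0] if qx else lo[0],
                                       mid[1] if qy else lo[1]])
                    sub_hi = np.array([hi[0] if qx else mid[0],
                                       hi[1] if qy else mid[1]])
                    node["children"].append(
                        aggregate(pts[mask], depth - 1, (sub_lo, sub_hi)))
    return node

rng = np.random.default_rng(0)
tree = aggregate(rng.normal(size=(1000, 2)), depth=2)
print(tree["count"], len(tree["children"]))
```

    Because child counts always sum to the parent's count, drilling down or rolling up between levels never loses or double-counts data, which is what makes interaction across aggregation levels consistent.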

  3. An investigation of radiometer design using digital processing techniques

    NASA Technical Reports Server (NTRS)

    Lawrence, R. W.

    1981-01-01

    The use of digital signal processing techniques in Dicke switching radiometer design was investigated. The general approach was to develop an analytical model of the existing analog radiometer and identify factors which adversely affect its performance. A digital processor was then proposed to verify the feasibility of using digital techniques to minimize these adverse effects and improve the radiometer performance. Preliminary test results comparing the digital and analog processing approaches in radiometer design are presented.

  4. Application of experimental and numerical simulation techniques to microscale devices

    NASA Astrophysics Data System (ADS)

    Somashekar, Vishwanath

    Two areas that have recently become relevant are mixing in micro-scale devices and the manufacturing of functional nanoparticles. MicroPIV experiments were performed on two different mixers: a wide microchannel with surface grooves, in the laminar regime, and a confined impinging jets reactor, in the laminar and turbulent regimes. In the wide microchannel with surface grooves, microPIV data were collected at the interface and the midplane at Reynolds numbers of 0.08, 0.8, and 8. The experiments were performed for three internal angles of the chevrons, namely 135°, 90°, and 45°. The normalized transverse velocity generated in the midplane due to the presence of the grooves is strongest for the internal angle of 135°, and within that case, the normalized transverse velocity is maximum at Reynolds numbers of 0.08 and 0.8. MicroPIV experiments were performed in a confined impinging jets reactor at Reynolds numbers of 200, 1000, and 1500. The data were collected in the midplane, and turbulent statistics were computed. The high-velocity jets impinge along the centerline of the reactor. Upon impinging, part of the fluid turns towards the top wall and the majority turns towards the outlet. This high-velocity impingement creates an unstable region, called the impingement zone, which moves about the centerline, causing the jets to flap back and forth. Spatial correlations were computed to estimate the size of the coherent structures. Large eddy simulation was performed on the CIJR for Reynolds numbers of 1000 and 1500, using OpenFOAM. The Reynolds number is based on the inlet jet hydraulic diameter. Excellent agreement was found between the experimental and simulation data. Turbulent reactive mixing in a rectangular microscale confined impinging-jets reactor (CIJR) was investigated using the pH indicator phenolphthalein for three different jet Reynolds numbers of 25, 1000 and 1500. Laminar

  5. Intracanal placement of calcium hydroxide: a comparison of specially designed paste carrier technique with other techniques

    PubMed Central

    2013-01-01

    Background This study compared the effectiveness of a Specially Designed Paste Carrier technique with the Syringe-Spreader technique and the Syringe-Lentulo spiral technique in the intracanal placement of calcium hydroxide. Methods Three groups, each containing 15 single-rooted human anterior teeth, were prepared using standardized Mtwo rotary instruments to a master apical file size 40 with 0.04 taper. Each group was filled with calcium hydroxide paste using: a syringe and #25 finger spreader (Group 1); a syringe and #4 rotary Lentulo spiral (Group 2); or the Specially Designed Paste Carrier (Group 3). Using pre-filling and post-filling radiographs in buccolingual and mesiodistal planes, the radiodensities at 1 mm, 3 mm, 5 mm, and 7 mm from the apical foramen were analyzed by ANOVA and Bonferroni post hoc tests. Results Overall, the Specially Designed Paste Carrier technique showed a statistically significantly higher mean radiodensity than the two other techniques. No significant difference was detected between the Syringe-Lentulo spiral and the Syringe-Spreader techniques. Conclusion The Specially Designed Paste Carrier technique was more effective than the Syringe-Spreader technique and the Syringe-Lentulo spiral technique in the intracanal placement of calcium hydroxide. PMID:24098931

  6. Process Model Construction and Optimization Using Statistical Experimental Design,

    DTIC Science & Technology

    1988-04-01

    A methodology is presented by Emanuel Sachs and George Prueger for the construction of process models by the combination of physically based mechanistic models with statistical experimental design.

  7. Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus

    ERIC Educational Resources Information Center

    Oktaviyanthi, Rina; Supriani, Yani

    2015-01-01

    The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from the experimental study, with details on the implementation of Microsoft Mathematics in Calculus, students' achievement, and the effects of the use of Microsoft…

  8. Design Issues and Inference in Experimental L2 Research

    ERIC Educational Resources Information Center

    Hudson, Thom; Llosa, Lorena

    2015-01-01

    Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,…

  10. Messages as Experimental Stimuli: Design, Analysis, and Inference.

    ERIC Educational Resources Information Center

    Slater, Michael D.

    The use of messages as experimental stimuli brings with it problems regarding the interpretability and generalizability of findings. Some psychologists and communication researchers have argued that message stimuli must be treated as random effects. A review of the literature examined data regarding experimental designs used in recent…

  11. Optimal multiobjective design of digital filters using spiral optimization technique.

    PubMed

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2013-01-01

    The multiobjective design of digital filters using the spiral optimization technique is considered in this paper. This new optimization tool is a metaheuristic technique inspired by the dynamics of spirals. It is characterized by its robustness, immunity to local optima trapping, relatively fast convergence, and ease of implementation. The objectives of filter design include matching some desired frequency response while maintaining minimum linear phase, hence reducing the time response. The results demonstrate that the proposed problem-solving approach, blended with the use of the spiral optimization technique, produced filters which fulfill the desired characteristics and are of practical use.
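
    The spiral dynamics can be sketched on a deliberately tiny filter-design problem: fit the two coefficients of a linear-phase 3-tap FIR filter to an ideal low-pass magnitude target. The rotate-and-contract update toward the current best point is the common two-dimensional form of spiral optimization; the target response, rates, and iteration counts are illustrative choices, not the paper's multiobjective setup.

```python
import numpy as np

# Fit h = [a, b, a] so that |H(e^jw)| matches an ideal low-pass target.
w = np.linspace(0, np.pi, 64)
target = (w <= np.pi / 2).astype(float)       # ideal low-pass magnitude

def response(a, b):
    # Magnitude response of the symmetric (linear-phase) filter [a, b, a].
    return np.abs(b + 2 * a * np.cos(w))

def cost(x):
    return np.sum((response(x[0], x[1]) - target) ** 2)

rng = np.random.default_rng(1)
points = rng.uniform(-1, 1, size=(20, 2))     # search agents
r, theta = 0.95, np.pi / 4                    # contraction rate, rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

best = min(points, key=cost)
for _ in range(200):
    # Spiral every agent toward the current best: rotate, then contract.
    points = best + (r * (R @ (points - best).T)).T
    candidate = min(points, key=cost)
    if cost(candidate) < cost(best):
        best = candidate
print(best, cost(best))
```

    The rotation keeps agents exploring around the incumbent while the contraction (r < 1) guarantees convergence, which is the source of the robustness-versus-convergence trade-off the abstract mentions.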

  12. Optimum Experimental Design applied to MEMS accelerometer calibration for 9-parameter auto-calibration model.

    PubMed

    Ye, Lin; Su, Steven W

    2015-01-01

    Optimum Experimental Design (OED) is an information gathering technique used to estimate parameters, which aims to minimize the variance of parameter estimation and prediction. In this paper, we further investigate an OED for MEMS accelerometer calibration of the 9-parameter auto-calibration model. Based on a linearized 9-parameter accelerometer model, we show the proposed OED is both G-optimal and rotatable, which are the desired properties for the calibration of wearable sensors for which only simple calibration devices are available. The experimental design is carried out with a newly developed wearable health monitoring device and desired experimental results have been achieved.
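
    A classic static design for accelerometer calibration can be sketched as follows: place the sensor in six orientations (gravity along ±x, ±y, ±z) and recover a linear sensor matrix and bias by least squares. This is a simplified known-orientation fit, not the paper's G-optimal 9-parameter auto-calibration scheme; the true matrix, bias, and noise level are invented for illustration.

```python
import numpy as np

# Linearized model m = M a + b: M absorbs scale and cross-axis terms,
# b is the bias. Values below are illustrative, not the paper's device.
g = 9.81
M_true = np.array([[1.02, 0.01, 0.00],
                   [0.00, 0.98, 0.02],
                   [0.01, 0.00, 1.01]])
b_true = np.array([0.05, -0.03, 0.10])

# Design: gravity aligned with each sensor axis, both signs (6 positions).
A = g * np.array([[ 1, 0, 0], [-1, 0, 0],
                  [ 0, 1, 0], [ 0, -1, 0],
                  [ 0, 0, 1], [ 0, 0, -1]], dtype=float)

rng = np.random.default_rng(3)
Meas = A @ M_true.T + b_true + rng.normal(0, 0.01, size=(6, 3))

# Augment with a ones column so lstsq recovers [M | b] jointly.
X = np.hstack([A, np.ones((6, 1))])
theta, *_ = np.linalg.lstsq(X, Meas, rcond=None)
M_est, b_est = theta[:3].T, theta[3]
print(np.round(M_est, 3), np.round(b_est, 3))
```

    The ±pairs make the bias column orthogonal to the gravity columns, so bias and gain estimates decouple; rotatable designs such as the one in the paper extend this balance to arbitrary orientations.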

  13. A Bayesian experimental design approach to structural health monitoring

    SciTech Connect

    Farrar, Charles; Flynn, Eric; Todd, Michael

    2010-01-01

    Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we will outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.

  14. Extended mapping and characteristics techniques for inverse aerodynamic design

    NASA Technical Reports Server (NTRS)

    Sobieczky, H.; Qian, Y. J.

    1991-01-01

    Some ideas for using hodograph theory, mapping techniques and methods of characteristics to formulate typical aerodynamic design boundary value problems are developed. The inverse method of characteristics is shown to be a fast tool for design of transonic flow elements as well as supersonic flows with given shock waves.

  15. Robust measurement selection for biochemical pathway experimental design.

    PubMed

    Brown, Martin; He, Fei; Yeung, Lam Fat

    2008-01-01

    Given the general lack of quantitative measurement data for pathway modelling and parameter identification, time-series experimental design is particularly important in current systems biology research. This paper mainly investigates the state measurement/observer selection problem when parametric uncertainties are considered. Based on an extension of optimal design criteria, two robust experimental design strategies are investigated: one is a regularisation-based design method, and the other is a Taguchi-based design approach. The two approaches are comparatively studied by applying them to a simplified IkappaBalpha-NF-kappaB signalling pathway system. When large parametric uncertainty is present, and different parametric uncertainties are assumed to be identical in scale, the two methods tend to provide a similar, uniform design result.

  16. Flight control system design factors for applying automated testing techniques

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.; Vernon, Todd H.

    1990-01-01

    Automated validation of flight-critical embedded systems is being done at ARC Dryden Flight Research Facility. The automated testing techniques are being used to perform closed-loop validation of man-rated flight control systems. The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 High Alpha Research Vehicle (HARV) automated test systems are discussed. Operationally applying automated testing techniques has accentuated flight control system features that either help or hinder the application of these techniques. The paper also discusses flight control system features which foster the use of automated testing techniques.

  17. Designing modulators of monoamine transporters using virtual screening techniques

    PubMed Central

    Mortensen, Ole V.; Kortagere, Sandhya

    2015-01-01

    The plasma-membrane monoamine transporters (MATs), including the serotonin (SERT), norepinephrine (NET) and dopamine (DAT) transporters, serve a pivotal role in limiting monoamine-mediated neurotransmission through the reuptake of their respective monoamine neurotransmitters. The transporters are the main target of clinically used psychostimulants and antidepressants. Despite the availability of several potent and selective MAT substrates and inhibitors, the continuing need for therapeutic drugs to treat brain disorders involving aberrant monoamine signaling provides a compelling reason to identify novel ways of targeting and modulating the MATs. Designing novel modulators of MAT function has been limited by the lack of three-dimensional structural information on the individual MATs. However, crystal structures of LeuT, a bacterial homolog of MATs, in a substrate-bound occluded state, a substrate-free outward-open state, and an apo inward-open state, and also with competitive and non-competitive inhibitors, have been determined. In addition, several structures of the Drosophila DAT have been resolved. Together with computational modeling and experimental data gathered over the past decade, these structures have dramatically advanced our understanding of several aspects of SERT, NET, and DAT transporter function, including some of the molecular determinants of ligand interaction at orthosteric substrate and inhibitor binding pockets. In addition, progress has been made in understanding how allosteric modulation of MAT function can be achieved. Here we review the efforts made to date through computational approaches employing structural models of MATs to design small-molecule modulators of the orthosteric and allosteric sites using virtual screening techniques. PMID:26483692

  18. Design techniques for low-voltage analog integrated circuits

    NASA Astrophysics Data System (ADS)

    Rakús, Matej; Stopjaková, Viera; Arbet, Daniel

    2017-08-01

    In this paper, a review and analysis of different design techniques for (ultra) low-voltage integrated circuits (IC) are performed. This analysis shows that the most suitable design methods for low-voltage analog IC design in a standard CMOS process include techniques using bulk-driven MOS transistors, dynamic threshold MOS transistors and MOS transistors operating in weak or moderate inversion regions. The main advantage of such techniques is that there is no need for any modification of standard CMOS structure or process. Basic circuit building blocks like differential amplifiers or current mirrors designed using these approaches are able to operate with the power supply voltage of 600 mV (or even lower), which is the key feature towards integrated systems for modern portable applications.

  19. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…
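
    The two distinct operations the activity teaches, random selection of a sample from a population and random assignment of that sample to groups, can be sketched in a few lines. The roach IDs and group sizes are illustrative.

```python
import random

random.seed(7)
population = [f"roach_{i:02d}" for i in range(30)]

sample = random.sample(population, 10)   # random selection from the population
random.shuffle(sample)                   # random assignment within the sample
treatment, control = sample[:5], sample[5:]
print(treatment)
print(control)
```

    Keeping the two steps as separate calls mirrors the conceptual distinction: selection governs generalizability to the population, while assignment governs causal comparability between groups.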

  20. Fundamentals of experimental design: lessons from beyond the textbook world

    USDA-ARS?s Scientific Manuscript database

    We often think of experimental designs as analogous to recipes in a cookbook. We look for something that we like and frequently return to those that have become our long-standing favorites. We can easily become complacent, favoring the tried-and-true designs (or recipes) over those that contain unkn...

  2. Experimental Design and Multiplexed Modeling Using Titrimetry and Spreadsheets

    NASA Astrophysics Data System (ADS)

    Harrington, Peter De B.; Kolbrich, Erin; Cline, Jennifer

    2002-07-01

    The topics of experimental design and modeling are important for inclusion in the undergraduate curriculum. Many general chemistry and quantitative analysis courses introduce students to spreadsheet programs, such as MS Excel. Students in the laboratory sections of these courses use titrimetry as a quantitative measurement method. Unfortunately, the only model that students may be exposed to in introductory chemistry courses is the working curve that uses the linear model. A novel experiment based on a multiplex model has been devised for titrating several vinegar samples at a time. The multiplex titration can be applied to many other routine determinations. An experimental design model is fit to titrimetric measurements using the MS Excel LINEST function to estimate concentration from each sample. This experiment provides valuable lessons in error analysis, Class A glassware tolerances, experimental simulation, statistics, modeling, and experimental design.
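
    The multiplex idea can be sketched outside the spreadsheet: titrate mixtures of several vinegar samples and recover each sample's acid concentration by linear least squares (the role LINEST plays in the experiment). The volumes, concentrations, and noise level below are made up for illustration.

```python
import numpy as np

c_true = np.array([0.85, 0.70, 0.95])   # mol/L acetic acid (unknown in practice)
C_naoh = 0.50                           # mol/L titrant

# Each row: volume (mL) of samples 1..3 included in that titration mixture.
V = np.array([[10.0,  0.0,  0.0],
              [ 0.0, 10.0,  0.0],
              [ 0.0,  0.0, 10.0],
              [ 5.0,  5.0,  0.0],
              [ 5.0,  0.0,  5.0],
              [ 0.0,  5.0,  5.0]])

# Equivalence volume of titrant: V_eq = (sum_j c_j * V_j) / C_naoh,
# plus simulated burette reading noise.
rng = np.random.default_rng(42)
V_eq = V @ c_true / C_naoh + rng.normal(0, 0.05, size=len(V))

# Design matrix mapping concentrations to equivalence volumes; solve by
# least squares (the numpy analog of Excel's LINEST).
X = V / C_naoh
c_est, *_ = np.linalg.lstsq(X, V_eq, rcond=None)
print(np.round(c_est, 3))
```

    Because each sample appears in several mixtures, every measurement contributes to every concentration estimate, which is the signal-averaging advantage of the multiplex design over titrating each sample separately.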

  3. Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.

    PubMed

    Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P

    2013-01-01

    Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods, and particularly discrete-choice experiments (DCEs), have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health-A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs for the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. In addition, ISPOR members contributed to developing a consensus
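
    One basic step in DCE design construction can be sketched directly: enumerate the full factorial of attribute levels, then form two-alternative choice sets, here with a simple no-overlap heuristic. The attributes, levels, and heuristic are illustrative assumptions, not the task force's recommended procedure.

```python
from itertools import product, combinations

# Hypothetical attributes and levels for a health-care choice experiment.
attributes = {
    "cost":     [10, 25, 50],
    "efficacy": ["low", "high"],
    "visits":   [1, 4],
}

# Full factorial: every combination of attribute levels is a profile.
profiles = [dict(zip(attributes, levels))
            for levels in product(*attributes.values())]
print(len(profiles))   # 3 * 2 * 2 = 12 candidate profiles

# Pair profiles into choice sets, keeping only pairs that differ on every
# attribute (a minimal-overlap heuristic, one of several possible criteria).
choice_sets = [
    (a, b) for a, b in combinations(profiles, 2)
    if all(a[k] != b[k] for k in attributes)
]
print(len(choice_sets))
```

    In practice the full factorial is usually too large to field, so fractional designs are selected from this candidate set against a statistical efficiency criterion (e.g., D-efficiency) tied to the intended choice model.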

  4. Using an Animal Group Vigilance Practical Session to Give Learners a "Heads-Up" to Problems in Experimental Design

    ERIC Educational Resources Information Center

    Rands, Sean A.

    2011-01-01

    The design of experimental ecological fieldwork is difficult to teach to classes, particularly when protocols for data collection are normally carefully controlled by the class organiser. Normally, reinforcement of some of the problems of experimental design, such as the avoidance of pseudoreplication and appropriate sampling techniques, does not occur…

  6. Experimental Methodology in English Teaching and Learning: Method Features, Validity Issues, and Embedded Experimental Design

    ERIC Educational Resources Information Center

    Lee, Jang Ho

    2012-01-01

    Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…

  7. Study/Experimental/Research Design: Much More Than Statistics

    PubMed Central

    Knight, Kenneth L.

    2010-01-01

    Context: The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand. Objective: To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs. Description: The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary. Advantages: Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results. PMID:20064054

  8. Study/experimental/research design: much more than statistics.

    PubMed

    Knight, Kenneth L

    2010-01-01

    The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes "Methods" sections hard to read and understand. To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs. The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary. Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results.

  9. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

A multidisciplinary design optimization procedure has been developed which couples formal multiobjective-based techniques with complex analysis procedures, such as computational fluid dynamics (CFD) codes. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). To account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a problem with multiple constrained objective functions into an unconstrained problem, which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are applied to each objective function during the transformation process, giving the designer the capability to emphasize specific design objectives during the optimization. The demonstration of the procedure uses a CFD code that solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique is used within the optimizer to improve the overall computational efficiency of the procedure and make it suitable for design applications in an industrial setting.
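
    The K-S transformation just described can be sketched in a few lines. The toy objectives, equal weights, and the plain gradient-descent loop below are illustrative assumptions standing in for the paper's CFD-based objectives and BFGS solver:

    ```python
    import math

    RHO = 5.0  # K-S "draw-down" factor; larger values track max(f_i) more tightly

    def ks(objs, x, rho=RHO):
        """Kreisselmeier-Steinhauser envelope of several objective values,
        in the numerically stable form:
        KS = f_max + (1/rho) * ln(sum_i exp(rho * (f_i - f_max)))."""
        vals = [f(x) for f in objs]
        fmax = max(vals)
        return fmax + math.log(sum(math.exp(rho * (v - fmax)) for v in vals)) / rho

    # Two competing objectives with equal weights: pull x toward +1 and toward -1.
    objs = [lambda x: (x - 1.0) ** 2,
            lambda x: (x + 1.0) ** 2]

    # Minimize the unconstrained K-S envelope with naive gradient descent
    # (a quasi-Newton method such as BFGS would be used in practice).
    x, step = 3.0, 0.01
    for _ in range(2000):
        h = 1e-6
        grad = (ks(objs, x + h) - ks(objs, x - h)) / (2 * h)
        x -= step * grad

    print(round(x, 3))  # compromise solution near 0 for equal weights
    ```

    Raising RHO sharpens the envelope toward the worst objective, which is how the designer trades off emphasis among objectives.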

  10. Guided Inquiry in a Biochemistry Laboratory Course Improves Experimental Design Ability

    ERIC Educational Resources Information Center

    Goodey, Nina M.; Talgar, Cigdem P.

    2016-01-01

    Many biochemistry laboratory courses expose students to laboratory techniques through pre-determined experiments in which students follow stepwise protocols provided by the instructor. This approach fails to provide students with sufficient opportunities to practice experimental design and critical thinking. Ten inquiry modules were created for a…

  12. Experimental comparison of manufacturing techniques of toughened and nanoreinforced polyamides

    NASA Astrophysics Data System (ADS)

    Siengchin, S.; Bergmann, C.; Dangtungee, R.

    2011-11-01

    Composites consisting of polyamide-6 (PA-6), nitrile rubber (NBR), and sodium fluorohectorite (FH) or alumina silicate (Sungloss; SG) were produced by different techniques with latex precompounding. Their tensile and thermomechanical properties were determined by using tensile tests and a dynamic-mechanical analysis, performed at various temperatures. The PA-6/NBR composite systems produced by the direct melt compounding outperformed those obtained by using the masterbatch technique with respect to the strength and ductility, but the latter ones had a higher storage modulus.

  13. An experimental investigation of three eigen DF techniques

    NASA Astrophysics Data System (ADS)

    Johnson, Richard L.

    1992-07-01

This paper considers the comparative direction-finding (DF) performance of multiple signal classification (MUSIC), ROOT-MUSIC, and estimation of signal parameters via rotational invariance techniques (ESPRIT). Data were collected from two target transmitters operating simultaneously. The objective of the experiment was to evaluate multipath resolution capability using measurement precision equivalent to that found in modern radio direction-finding systems.
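
    Of the three eigen techniques compared, MUSIC is the easiest to sketch. The array geometry, bearing, and noise level below are illustrative assumptions, not the experimental setup of the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    M, d, snapshots = 8, 0.5, 200   # sensors, spacing (wavelengths), samples
    true_deg = 20.0                 # single target bearing (assumed example)

    def steering(theta_deg):
        """Uniform-linear-array steering vector for a plane wave at theta."""
        k = 2 * np.pi * d * np.sin(np.radians(theta_deg))
        return np.exp(1j * k * np.arange(M))

    # Simulated sensor snapshots: one source plus white noise.
    s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
    X = np.outer(steering(true_deg), s)
    X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

    # MUSIC: eigendecompose the sample covariance, keep the noise subspace.
    R = X @ X.conj().T / snapshots
    w, V = np.linalg.eigh(R)        # eigenvalues in ascending order
    En = V[:, :-1]                  # all but the largest -> noise subspace

    # The pseudospectrum peaks where the steering vector is orthogonal
    # to the noise subspace, i.e. at the source bearing.
    grid = np.arange(-90.0, 90.0, 0.1)
    spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(t)) ** 2 for t in grid]
    est_deg = grid[int(np.argmax(spectrum))]
    print(est_deg)
    ```

    ROOT-MUSIC replaces the grid search with polynomial rooting, and ESPRIT exploits the shift invariance between subarrays instead of a spectral search.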

  14. Experimental Frontal Bone Cranioplasty Using a Cellulose Acetate Filter Technique.

    DTIC Science & Technology

Autogenous marrow-cancellous bone chips were transplanted into 20 canine frontal sinus cavities and the anterior wall surgical defect covered with a...manipulation. This technique did not give a normal frontal contour unless the frontal sinus was completely filled with the autogenous graft at the time of transplant surgery. (Author)

  15. Application of multivariable search techniques to structural design optimization

    NASA Technical Reports Server (NTRS)

    Jones, R. T.; Hague, D. S.

    1972-01-01

    Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
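
    The exterior penalty approach favored in this study can be illustrated with a one-variable toy problem. The objective, constraint, penalty schedule, and the crude gradient-descent search below are illustrative assumptions, not the paper's stiffened-cylinder model:

    ```python
    def f(x):
        """Objective to minimize (stands in for structural weight)."""
        return x

    def g(x):
        """Constraint g(x) <= 0, here requiring x >= 1 (e.g., a stress margin)."""
        return 1.0 - x

    def penalized(x, r):
        """Exterior penalty: violations are penalized, feasible points are not."""
        return f(x) + r * max(0.0, g(x)) ** 2

    def minimize_1d(fun, x0, step, iters=500):
        """Crude fixed-step gradient descent using central differences,
        a stand-in for the search algorithms compared in the paper."""
        x = x0
        for _ in range(iters):
            h = 1e-6
            x -= step * (fun(x + h) - fun(x - h)) / (2.0 * h)
        return x

    x = 0.0
    for r in (1.0, 10.0, 100.0, 1000.0):    # increasing penalty weight
        x = minimize_1d(lambda t, r=r: penalized(t, r), x, step=0.25 / r)
        # each minimizer x = 1 - 1/(2r) approaches the constrained optimum
        # x* = 1 from the infeasible side, the hallmark of exterior penalties

    print(round(x, 4))
    ```

    An interior penalty method would instead approach x* = 1 from the feasible side, which requires a feasible starting point — one reason the authors found the exterior approach better suited to this problem class.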

  16. Microelectronics package design using experimentally-validated modeling and simulation.

    SciTech Connect

    Johnson, Jay Dean; Young, Nathan Paul; Ewsuk, Kevin Gregory

    2010-11-01

    Packaging high power radio frequency integrated circuits (RFICs) in low temperature cofired ceramic (LTCC) presents many challenges. Within the constraints of LTCC fabrication, the design must provide the usual electrical isolation and interconnections required to package the IC, with additional consideration given to RF isolation and thermal management. While iterative design and prototyping is an option for developing RFIC packaging, it would be expensive and most likely unsuccessful due to the complexity of the problem. To facilitate and optimize package design, thermal and mechanical simulations were used to understand and control the critical parameters in LTCC package design. The models were validated through comparisons to experimental results. This paper summarizes an experimentally-validated modeling approach to RFIC package design, and presents some results and key findings.

  17. Characterizing the Experimental Procedure in Science Laboratories: A Preliminary Step towards Students Experimental Design

    ERIC Educational Resources Information Center

    Girault, Isabelle; d'Ham, Cedric; Ney, Muriel; Sanchez, Eric; Wajeman, Claire

    2012-01-01

Many studies have stressed students' lack of understanding of experiments in laboratories. Some researchers suggest that if students design all or part of an entire experiment, as part of an inquiry-based approach, certain difficulties can be overcome. This requires that a procedure be written for experimental design. The aim of this paper is to…

  18. Optimal Multiobjective Design of Digital Filters Using Taguchi Optimization Technique

    NASA Astrophysics Data System (ADS)

    Ouadi, Abderrahmane; Bentarzi, Hamid; Recioui, Abdelmadjid

    2014-01-01

The multiobjective design of digital filters using the powerful Taguchi optimization technique is considered in this paper. This relatively new optimization tool has recently been introduced to the field of engineering and is based on orthogonal arrays. It is characterized by its robustness, immunity to trapping in local optima, relatively fast convergence, and ease of implementation. The objectives of the filter design include matching a desired frequency response while keeping the linear-phase filter length minimal, hence reducing the time response. The results demonstrate that the proposed problem-solving approach, blended with the Taguchi optimization technique, produced filters that fulfill the desired characteristics and are of practical use.
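
    Orthogonal arrays, the core of the Taguchi technique, can be illustrated with the smallest case. The L4 array, toy cost function, and smaller-the-better signal-to-noise ratio below are an illustrative sketch, not the paper's filter-design objective:

    ```python
    import math

    # L4 (2^3) orthogonal array: 4 runs cover 3 two-level factors such that
    # every pair of levels appears equally often (balanced design).
    L4 = [(0, 0, 0),
          (0, 1, 1),
          (1, 0, 1),
          (1, 1, 0)]

    LEVELS = {0: 0.0, 1: 1.0}  # illustrative factor settings

    def cost(a, b, c):
        """Toy 'design error' to minimize; a stand-in for the
        frequency-response matching objective (assumed example)."""
        return (a - 1.0) ** 2 + (b - 0.0) ** 2 + (c - 1.0) ** 2 + 0.1

    runs = [cost(*(LEVELS[lev] for lev in row)) for row in L4]

    # Smaller-the-better signal-to-noise ratio: S/N = -10 log10(y^2).
    sn = [-10.0 * math.log10(y ** 2) for y in runs]

    # Pick, per factor, the level with the higher mean S/N across its runs.
    best = []
    for factor in range(3):
        means = [sum(s for row, s in zip(L4, sn) if row[factor] == lev) / 2
                 for lev in (0, 1)]
        best.append(0 if means[0] > means[1] else 1)

    print(best)  # best level per factor
    ```

    Four runs stand in for all 2^3 = 8 combinations; the balance of the array is what lets per-factor effects be separated from so few experiments.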

  19. Reliability of single sample experimental designs: comfortable effort level.

    PubMed

    Brown, W S; Morris, R J; DeGroot, T; Murry, T

    1998-12-01

This study was designed to ascertain intrasubject variability across multiple recording sessions, which is most often disregarded in reports of group mean data or unavailable because of single-sample experimental designs. Intrasubject variability was assessed within and across several experimental sessions from measures of speaking fundamental frequency, vocal intensity, and reading rate. Three age groups of men and women (young, middle-aged, and elderly) repeated the vowel /a/, read a standard passage, and spoke extemporaneously during each experimental session. Statistical analyses were performed to assess each speaker's variability from his or her own mean, and that which varied consistently for any one speaking-sample type, either within or across days. Results indicated that intrasubject variability was minimal, with approximately 4% of the data exhibiting significant variation across experimental sessions.

  20. Experimental techniques for cross-section measurements. [for electron impacts

    NASA Technical Reports Server (NTRS)

    Trajmar, S.; Register, D. F.

    1984-01-01

    Attention is given to electron collision phenomena which can be studied under single-collision conditions at low and intermediate electron impact energies, ranging from threshold to a few hundred eV, using gas phase molecular targets. Several of the experimental methods discussed were first developed and applied to atoms, but are equally applicable to molecules with minor modifications in the interpretation of the data, due to the greater complexity of molecular systems.

  1. Experimental Validation of Simulations Using Full-field Measurement Techniques

    SciTech Connect

    Hack, Erwin

    2010-05-28

The calibration of dynamic full-field measurement systems by reference materials is discussed, together with their use to validate numerical simulations in structural mechanics. The discussion addresses three challenges faced in these processes: (i) how to calibrate a measuring instrument that provides full-field data, (ii) how to calibrate one that is dynamic, and (iii) how to compare data from simulation and experimentation.

  2. Development and Validation of a Hypersonic Vehicle Design Tool Based On Waverider Design Technique

    NASA Astrophysics Data System (ADS)

    Dasque, Nastassja

Methodologies for a tool capable of assisting design initiatives for practical waverider-based hypersonic vehicles were developed and validated. The design space for vehicle surfaces was formed using an algorithm that coupled directional derivatives with the conservation laws to determine a flow field defined by a set of post-shock streamlines. The design space is used to construct an ideal waverider with a sharp leading edge. A blunting method was developed to modify the ideal shapes to a more practical geometry for real-world application. Empirical and analytical relations were then systematically applied to the resulting geometries to determine local pressure, skin friction, and heat flux. For the ideal portion of the geometry, flat-plate relations for compressible flow were applied. For the blunted portion of the geometry, modified Newtonian theory, Fay-Riddell theory, and the modified Reynolds analogy were applied. The design and analysis methods were validated using analytical solutions as well as empirical and numerical data. The streamline solution for the flow-field generation technique was compared with a Taylor-Maccoll solution and showed very good agreement. The relationship of the local Stanton number and skin-friction coefficient with local Reynolds number along the ideal portion of the body showed good agreement with experimental data. In addition, an automated grid-generation routine was formulated to construct a structured mesh around resulting geometries in preparation for computational fluid dynamics (CFD) analysis. The overall analysis of the waverider body using the tool was then compared to CFD studies. The CFD flow field showed very good agreement with the design space, but while the distribution of the surface properties was close to the CFD results, the agreement was not as strong.
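
    Of the blunt-body relations mentioned, modified Newtonian theory is the simplest to sketch: Cp = Cp_max sin²θ, with θ the local surface inclination to the freestream and Cp_max the stagnation pressure coefficient from the Rayleigh pitot formula. The Mach number below is an illustrative assumption:

    ```python
    import math

    GAMMA = 1.4   # ratio of specific heats for air
    MACH = 10.0   # illustrative freestream Mach number

    def cp_max(mach, g=GAMMA):
        """Stagnation-point pressure coefficient behind a normal shock
        (Rayleigh pitot formula), used as the modified-Newtonian constant."""
        p02_pinf = ((((g + 1) ** 2 * mach ** 2)
                     / (4 * g * mach ** 2 - 2 * (g - 1))) ** (g / (g - 1))
                    * (1 - g + 2 * g * mach ** 2) / (g + 1))
        return 2.0 / (g * mach ** 2) * (p02_pinf - 1.0)

    def cp_modified_newtonian(theta_deg, mach=MACH):
        """Modified Newtonian theory: Cp = Cp_max * sin^2(theta)."""
        return cp_max(mach) * math.sin(math.radians(theta_deg)) ** 2

    print(round(cp_modified_newtonian(90.0), 3))  # stagnation point
    ```

    Classical Newtonian theory uses Cp_max = 2; the modified form replaces it with the exact stagnation value, which improves surface pressures on blunted leading edges at finite Mach number.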

  3. Inverse boundary-layer technique for airfoil design

    NASA Technical Reports Server (NTRS)

    Henderson, M. L.

    1979-01-01

    A description is presented of a technique for the optimization of airfoil pressure distributions using an interactive inverse boundary-layer program. This program allows the user to determine quickly a near-optimum subsonic pressure distribution which meets his requirements for lift, drag, and pitching moment at the desired flow conditions. The method employs an inverse turbulent boundary-layer scheme for definition of the turbulent recovery portion of the pressure distribution. Two levels of pressure-distribution architecture are used - a simple roof top for preliminary studies and a more complex four-region architecture for a more refined design. A technique is employed to avoid the specification of pressure distributions which result in unrealistic airfoils, that is, those with negative thickness. The program allows rapid evaluation of a designed pressure distribution off-design in Reynolds number, transition location, and angle of attack, and will compute an airfoil contour for the designed pressure distribution using linear theory.

  4. Online and offline experimental techniques for polycyclic aromatic hydrocarbons recovery and measurement.

    PubMed

    Comandini, A; Malewicki, T; Brezinsky, K

    2012-03-01

The implementation of techniques aimed at improving engine performance and reducing particulate matter (PM) pollutant emissions is strongly influenced by the limited understanding of the polycyclic aromatic hydrocarbon (PAH) formation chemistry, in combustion devices, that produces the PM emissions. New experimental results which examine the formation of multi-ring compounds are required. The present investigation focuses on two techniques for such an experimental examination by recovery of PAH compounds from a typical combustion-oriented experimental apparatus. The online technique discussed constitutes an optimal, though not always feasible, approach. Nevertheless, a detailed description of a new online sampling system is provided, which can serve as a reference for future applications to different experimental set-ups. In comparison, an offline technique, which is sometimes more experimentally feasible but not necessarily optimal, has been studied in detail for the recovery of a variety of compounds with different properties, including naphthalene, biphenyl, and iodobenzene. The recovery results from both techniques were excellent, with an error in the total carbon balance of around 10% for the online technique and an uncertainty in the measurement of single species of around 7% for the offline technique. Although both techniques proved suitable for measurement of large PAH compounds, the online technique represents the optimal solution in view of the simplicity of the corresponding experimental procedure. On the other hand, the offline technique represents a valuable solution in those cases where the online technique cannot be implemented.

  5. Experimental study of digital image processing techniques for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.

    1976-01-01

The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full-scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4-pixel accuracy and evaluated by change-detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision-processed subscenes. Full-scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest-neighbor and TRW cubic-convolution processed data included change-detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.
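
    Nearest-neighbor resampling, the simpler of the two kernels evaluated, can be sketched directly; cubic convolution replaces the single-pixel copy with a weighted sum over a 4x4 neighborhood. The tiny image below is an illustrative example, not LANDSAT data:

    ```python
    def resample_nearest(img, out_h, out_w):
        """Nearest-neighbor resampling: each output pixel copies the closest
        input pixel. Fast and radiometry-preserving, but blocky; assumes
        out_h, out_w >= 2."""
        in_h, in_w = len(img), len(img[0])
        out = []
        for i in range(out_h):
            src_i = min(in_h - 1, int(round(i * (in_h - 1) / (out_h - 1))))
            row = []
            for j in range(out_w):
                src_j = min(in_w - 1, int(round(j * (in_w - 1) / (out_w - 1))))
                row.append(img[src_i][src_j])
            out.append(row)
        return out

    img = [[0, 10],
           [20, 30]]
    big = resample_nearest(img, 4, 4)
    print(big)  # each input pixel is replicated into a 2x2 block
    ```

    Because no new pixel values are invented, nearest-neighbor output remains suitable for classification; cubic convolution trades that property for smoother imagery.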

  6. Theory and experimental technique for nondestructive evaluation of ceramic composites

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    1990-01-01

    The important ultrasonic scattering mechanisms for SiC and Si3N4 ceramic composites were identified by examining the interaction of ultrasound with individual fibers, pores, and grains. The dominant scattering mechanisms were identified as asymmetric refractive scattering due to porosity gradients in the matrix material, and symmetric diffractive scattering at the fiber-to-matrix interface and at individual pores. The effect of the ultrasonic reflection coefficient and surface roughness in the ultrasonic evaluation was highlighted. A new nonintrusive ultrasonic evaluation technique, angular power spectrum scanning (APSS), was presented that is sensitive to microstructural variations in composites. Preliminary results indicate that APSS will yield information on the composite microstructure that is not available by any other nondestructive technique.

  7. The design of aircraft using the decision support problem technique

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Marinopoulos, Stergios; Jackson, David M.; Shupe, Jon A.

    1988-01-01

    The Decision Support Problem Technique for unified design, manufacturing and maintenance is being developed at the Systems Design Laboratory at the University of Houston. This involves the development of a domain-independent method (and the associated software) that can be used to process domain-dependent information and thereby provide support for human judgment. In a computer assisted environment, this support is provided in the form of optimal solutions to Decision Support Problems.

  8. The Photoshop Smile Design technique (part 1): digital dental photography.

    PubMed

    McLaren, Edward A; Garber, David A; Figueira, Johan

    2013-01-01

    The proliferation of digital photography and imaging devices is enhancing clinicians' ability to visually document patients' intraoral conditions. By understanding the elements of esthetics and learning how to incorporate technology applications into clinical dentistry, clinicians can predictably plan smile design and communicate anticipated results to patients and ceramists alike. This article discusses camera, lens, and flash selection and setup, and how to execute specific types of images using the Adobe Photoshop Smile Design (PSD) technique.

  9. A novel technique for experimental stellate ganglion block in rats.

    PubMed

    Abdi, Salahadin; Yang, Zongqi

    2005-08-01

    A stellate ganglion block (SGB) is routinely performed in a clinical setting for the treatment of sympathetically maintained pain syndromes. However, the cardiovascular effects of SGB have not been well defined. The purpose of the present study was to develop a new technique of SGB in a rat model. Our new technique of SGB is a posterior percutaneous approach and uses the cartilaginous process of the C7 spinous process as a landmark. Twenty-six Sprague-Dawley female rats were divided into six groups. Group I (n = 4) underwent right sided SGB, Group II (n = 5) underwent left-sided SGB, and Group III (n = 5) underwent bilateral SGB using bupivacaine 0.25%. Three additional sham groups (n = 4 in each group) served as controls to each of the three treatment groups. Ipsilateral eyelid droop (ptosis) was observed in all animals that underwent SGB with bupivacaine. Heart rate decreased significantly for up to 45 min after bilateral SGB compared with control groups. However, this value did not change in rats after unilateral SGB. In 9 additional rats, we evaluated the accuracy of SGB by injecting methylene blue to stain the right (n = 3), left (n = 3), and bilateral SGB (n = 3). At autopsy, 11 of 12 SG were stained post-methylene blue injection. We conclude from our study that our new approach, posterior percutaneous SGB is a reliable technique that can be used for further studies. We describe a new technique for stellate ganglion block in rats that may be used in future studies to investigate the role of cervical sympathetic nervous system (especially the stellate ganglion) in regulating sympathetically maintained pain and myocardial function.

  10. Analysis of Quality Design Techniques for Electrostatic Actuators

    NASA Astrophysics Data System (ADS)

    Sadeghian, H.; Doniavi, A.

    2006-04-01

A three-step technique for MEMS quality optimization is presented that exploits the relative merits of its constituent optimization components. An Analog Devices accelerometer was the test device, with microstructure dimensions serving as design parameters [6]. Manufacturing design rules [7] and sensor performance requirements served as design constraints. A static model relating input acceleration to sensed voltage was used, neglecting sensor and signal-conditioning dynamics. The trimming yield of the ADXL50 [6], with sensitivity as the design target, was improved by 37%. We show that the three-step method enables the search for an optimal design in a semi-automatic manner by facilitating user interaction. Our final goal is to enable the generation of MEMS mask layouts while ensuring robust designs with minimum sensitivity to fabrication process variations.

  11. Enzyme-Free Scalable DNA Digital Design Techniques: A Review.

    PubMed

    Konampurath George, Aby; Singh, Harpreet

    2016-12-02

With the recent developments in DNA nanotechnology, DNA has been used as the basic building block for the design of nanostructures, autonomous molecular motors, various devices, and circuits. DNA is considered a possible candidate for replacing silicon in digital circuit design in the near future, especially in implantable medical devices, because of its parallelism, computational power, small size, light weight, and compatibility with bio-signals. Research in DNA digital design is in its early stages, and electrical and computer engineers have yet to be drawn to the field in large numbers. In this paper, we give a brief review of recently developed enzyme-free scalable DNA digital design techniques. With further developments in DNA circuits, it would be possible to design synthetic molecular systems, therapeutic molecular devices, and other molecular-scale devices and instruments. The ultimate aim is to build complex digital designs using DNA strands, which may even be placed inside a human body.

  13. Optimizing Experimental Design for Comparing Models of Brain Function

    PubMed Central

    Daunizeau, Jean; Preuschoff, Kerstin; Friston, Karl; Stephan, Klaas

    2011-01-01

    This article presents the first attempt to formalize the optimization of experimental design with the aim of comparing models of brain function based on neuroimaging data. We demonstrate our approach in the context of Dynamic Causal Modelling (DCM), which relates experimental manipulations to observed network dynamics (via hidden neuronal states) and provides an inference framework for selecting among candidate models. Here, we show how to optimize the sensitivity of model selection by choosing among experimental designs according to their respective model selection accuracy. Using Bayesian decision theory, we (i) derive the Laplace-Chernoff risk for model selection, (ii) disclose its relationship with classical design optimality criteria and (iii) assess its sensitivity to basic modelling assumptions. We then evaluate the approach when identifying brain networks using DCM. Monte-Carlo simulations and empirical analyses of fMRI data from a simple bimanual motor task in humans serve to demonstrate the relationship between network identification and the optimal experimental design. For example, we show that deciding whether there is a feedback connection requires shorter epoch durations, relative to asking whether there is experimentally induced change in a connection that is known to be present. Finally, we discuss limitations and potential extensions of this work. PMID:22125485

  14. Application of optimization techniques to vehicle design: A review

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Magee, C. L.

    1984-01-01

The work done in the last decade or so on the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from the rare mention of the methods in the 1970s to an increased effort in the early 1980s. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle to be most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue remains the creation of quantifiable means of analysis for use in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This limitation of the analysis will continue to be a major limiting factor in the application of optimization to vehicle design.

  15. Experimental Design and Power Calculation for RNA-seq Experiments.

    PubMed

    Wu, Zhijin; Wu, Hao

    2016-01-01

Power calculation is a critical component of RNA-seq experimental design. The flexibility of RNA-seq experiments and the wide dynamic range of transcription they measure make RNA-seq an attractive technology for whole-transcriptome analysis. These features, in addition to the high dimensionality of RNA-seq data, bring complexity to experimental design, making an analytical power calculation no longer realistic. In this chapter we review the major factors that influence the statistical power of detecting differential expression, and give examples of power assessment using the R package PROPER.
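
    The simulation-based flavor of such power assessment can be sketched outside R as well. The Poisson counts with log-normal rate jitter and the z-test on log counts below are simplifying assumptions for illustration; they are not the PROPER package's negative binomial machinery:

    ```python
    import math
    import random

    random.seed(1)

    def poisson(lam):
        """Knuth's algorithm for Poisson-distributed random integers."""
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= threshold:
                return k
            k += 1

    def sample_counts(n, base_mean, disp=0.4):
        """Counts for one gene across n replicates; the log-normal jitter on
        the Poisson rate is a crude stand-in for biological overdispersion."""
        return [poisson(base_mean * math.exp(random.gauss(0.0, disp)))
                for _ in range(n)]

    def simulate_power(n_per_group, mean_ctrl, fold_change, n_sims=400):
        """Monte-Carlo power to detect a fold change for a single gene,
        using a two-sample z-test on log-transformed counts."""
        hits = 0
        for _ in range(n_sims):
            la = [math.log(c + 1) for c in sample_counts(n_per_group, mean_ctrl)]
            lb = [math.log(c + 1) for c in sample_counts(n_per_group,
                                                         mean_ctrl * fold_change)]
            ma, mb = sum(la) / len(la), sum(lb) / len(lb)
            va = sum((v - ma) ** 2 for v in la) / (len(la) - 1)
            vb = sum((v - mb) ** 2 for v in lb) / (len(lb) - 1)
            se = math.sqrt(va / len(la) + vb / len(lb))
            if se > 0 and abs(ma - mb) / se > 1.96:
                hits += 1
        return hits / n_sims

    p_small = simulate_power(3, 100.0, 2.0)    # pilot-sized design
    p_large = simulate_power(10, 100.0, 2.0)   # more replication, more power
    print(p_small, p_large)
    ```

    Repeating such simulations over a grid of sample sizes and effect sizes is the basic recipe; PROPER does this per gene over realistic negative binomial parameters estimated from pilot data.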

  16. Low Cost Gas Turbine Off-Design Prediction Technique

    NASA Astrophysics Data System (ADS)

    Martinjako, Jeremy

This thesis seeks to further explore off-design point operation of gas turbines and to examine the capabilities of GasTurb 12 as a tool for off-design analysis. It is a continuation of previous thesis work which initially explored the capabilities of GasTurb 12. The research is conducted in order to: 1) validate GasTurb 12 and 2) predict off-design performance of the Garrett GTCP85-98D located at the Arizona State University Tempe campus. GasTurb 12 is validated as an off-design point tool by using the program to predict performance of an LM2500+ marine gas turbine. Haglind and Elmegaard (2009) published a paper detailing a second off-design point method, which includes the manufacturer's off-design point data for the LM2500+. GasTurb 12 is used to predict off-design point performance of the LM2500+ and compared to the manufacturer's data. The GasTurb 12 predictions show good correlation. Garrett has published specification data for the GTCP85-98D. This specification data is analyzed to determine the design point and to comment on off-design trends. Arizona State University GTCP85-98D off-design experimental data is evaluated. Trends presented in the data are commented on and explained. The trends match the expected behavior demonstrated in the specification data for the same gas turbine system. It was originally intended that a model of the GTCP85-98D be constructed in GasTurb 12 and used to predict off-design performance, with the prediction compared to collected experimental data. This is not possible because the free version of GasTurb 12 used in this research does not have a module to model a single-spool turboshaft; this module needs to be purchased for such an analysis.

  17. Principles of Experimental Design for Big Data Analysis

    PubMed Central

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2016-01-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis. PMID:28883686
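
    The retrospective designed-sampling idea lends itself to a small sketch. The code below is an illustration of the concept, not an algorithm from the paper: it greedily selects a D-optimal subset of a large dataset for a simple linear model, scoring each candidate row with the matrix determinant lemma det(M + x x^T) = det(M)(1 + x^T M^-1 x).

```python
# Greedy D-optimal retrospective subsampling (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)

def greedy_d_optimal(X, k):
    """Indices of k rows chosen greedily to maximise det(X_s^T X_s)."""
    n, p = X.shape
    chosen = []
    M = 1e-8 * np.eye(p)               # small ridge keeps M invertible
    for _ in range(k):
        Minv = np.linalg.inv(M)
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            # determinant lemma: det(M + x x^T) = det(M) * (1 + x^T Minv x)
            gain = 1.0 + X[i] @ Minv @ X[i]
            if gain > best_gain:
                best, best_gain = i, gain
        chosen.append(best)
        M += np.outer(X[best], X[best])
    return chosen

# "Big" dataset: 500 observations of an intercept plus one covariate.
X = np.column_stack([np.ones(500), rng.normal(size=500)])
idx = greedy_d_optimal(X, 10)
# D-optimality concentrates the design on extreme covariate values.
print(sorted(round(v, 2) for v in np.abs(X[idx, 1])))
```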

  18. Principles of Experimental Design for Big Data Analysis.

    PubMed

    Drovandi, Christopher C; Holmes, Christopher; McGree, James M; Mengersen, Kerrie; Richardson, Sylvia; Ryan, Elizabeth G

    2017-08-01

    Big Datasets are endemic, but are often notoriously difficult to analyse because of their size, heterogeneity and quality. The purpose of this paper is to open a discourse on the potential for modern decision theoretic optimal experimental design methods, which by their very nature have traditionally been applied prospectively, to improve the analysis of Big Data through retrospective designed sampling in order to answer particular questions of interest. By appealing to a range of examples, it is suggested that this perspective on Big Data modelling and analysis has the potential for wide generality and advantageous inferential and computational properties. We highlight current hurdles and open research questions surrounding efficient computational optimisation in using retrospective designs, and in part this paper is a call to the optimisation and experimental design communities to work together in the field of Big Data analysis.

  19. Experimental Test of New Technique to Overcome Spin Depolarizing Resonances

    SciTech Connect

    Raymond, R. S.; Chao, A. W.; Krisch, A. D.; Leonova, M. A.; Morozov, V. S.; Sivers, D. W.; Wong, V. K.; Ganshvili, A.; Gebel, R.; Lehrach, A.; Lorentz, B.; Maier, R.; Prasuhn, D.; Stockhorst, H.; Welsch, D.; Hinterberger, F.; Kondratenko, A. M.

    2009-08-04

    We recently tested a new spin resonance crossing technique, Kondratenko Crossing (KC), by sweeping an rf solenoid's frequency through an rf-induced spin resonance with both the KC and traditional Fast Crossing (FC) patterns. Using both rf-bunched and unbunched 1.85 GeV/c polarized deuterons stored in COSY, we varied the parameters of both crossing patterns. Compared to FC with the same crossing speed, KC reduced the depolarization by measured factors of 4.7 ± 0.3 and 19 (+12/-5) for unbunched and bunched beams, respectively. This clearly showed the large potential benefit of Kondratenko Crossing over Fast Crossing.

  20. Active Flow Control: Instrumentation Automation and Experimental Technique

    NASA Technical Reports Server (NTRS)

    Gimbert, N. Wes

    1995-01-01

    In investigating the potential of a new actuator for use in an active flow control system, several objectives had to be accomplished, the largest of which was the experimental setup. The work was conducted at the NASA Langley 20x28 Shear Flow Control Tunnel. The actuator, named Thunder, is a high-deflection piezo device recently developed at Langley Research Center. This research involved setting up the instrumentation, the lighting, the smoke, and the recording devices. The instrumentation was automated by means of a Power Macintosh running LabVIEW, a graphical instrumentation package developed by National Instruments. Routines were written to allow the tunnel conditions to be determined at a given instant at the push of a button, including tunnel pressures, speed, density, temperature, and viscosity. Other aspects of the experimental equipment included the setup of a CCD video camera with a video frame grabber, monitor, and VCR to capture the motion. A strobe light was used to highlight the smoke that was used to visualize the flow. Additional effort was put into creating a scale drawing of another tunnel on site and a limited literature search in the area of active flow control.

  1. EXPLORATION OF NOVEL RESEARCH DESIGNS AND MEASUREMENT TECHNIQUES.

    ERIC Educational Resources Information Center

    CAMPBELL, DONALD T.

    THE GOAL OF THIS PROJECT WAS AN EXPLORATION OF NOVEL RESEARCH DESIGNS AND MEASUREMENT TECHNIQUES SUITABLE FOR EMPLOYMENT IN MEDIA RESEARCH, EDUCATIONAL RESEARCH, OR IN THE SOCIAL SCIENCES. TWENTY-THREE RESEARCH REPORTS, TOGETHER WITH A COVERING MEMORANDUM, PROVIDED THE SUBSTANCE OF THE TECHNICAL REPORT. THE MEMORANDUM INCLUDED THE RATIONALE FOR THE…

  2. CMOS array design automation techniques. [metal oxide semiconductors

    NASA Technical Reports Server (NTRS)

    Ramondetta, P.; Feller, A.; Noto, R.; Lombardi, T.

    1975-01-01

    A low cost, quick turnaround technique for generating custom metal oxide semiconductor arrays using the standard cell approach was developed, implemented, tested and validated. Basic cell design topology and guidelines are defined based on an extensive analysis that includes circuit, layout, process, array topology and required performance considerations, particularly high circuit speed.

  3. Experimental Techniques for Detecting Correlated Motion of Brownian Particles

    NASA Astrophysics Data System (ADS)

    Franck, Carl; Durand, Richard V.

    1998-03-01

    We, as well as other workers, have noticed using light microscopy that particles undergoing Brownian motion can appear to linger around each other for long periods of time. The question arises as to whether this lingering is a product of interparticle interactions, or is an artifact due to random thermal motion. By way of answering this question, we developed techniques for dealing with the random nature of the particle motion. We present an algorithm to track individual Brownian particles which can handle the disappearance of a particle due to its temporary movement outside of the field of view. We used the tracking data to create randomized particle trajectories which were subsequently compared with the actual trajectories. This backgrounding technique allowed us to extract information about correlated motion in the midst of Brownian noise. We found that the aforementioned apparent correlated motion could have been due to observer misperception (R. V. Durand and C. Franck, Phys. Rev. E 56, 1998 (1997)). Financial support was provided by the National Science Foundation through grant DMR-9320910 and through MRL Central Facilities at the Materials Science Center at Cornell University (DMR-9121654).

  4. An experimental modal testing/identification technique for personal computers

    NASA Technical Reports Server (NTRS)

    Roemer, Michael J.; Schlonski, Steven T.; Mook, D. Joseph

    1990-01-01

    A PC-based system for mode shape identification is evaluated. A time-domain modal identification procedure is utilized to identify the mode shapes of a beam apparatus from discrete time-domain measurements. The apparatus includes a cantilevered aluminum beam, four accelerometers, four low-pass filters, and the computer. The method comprises an identification algorithm, the Eigensystem Realization Algorithm (ERA), and an estimation algorithm, the Minimum Model Error (MME) method. The identification ability of this algorithm is compared with ERA alone, a frequency-response-function technique, and an Euler-Bernoulli beam model. Detection of modal parameters and mode shapes by the PC-based time-domain system is shown to be accurate in an application with an aluminum beam, while mode shapes identified by the frequency-domain technique are not as accurate as predicted. The new method is shown to be significantly less sensitive to noise and poorly excited modes than other leading methods. The results support the use of time-domain identification systems for mode shape prediction.
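
    The ERA step of such a procedure is compact enough to sketch. The code below is a textbook single-input, single-output illustration with synthetic data, not the system described in the record: it stacks impulse-response samples into Hankel matrices, realizes a state matrix by SVD, and recovers the frequency and damping of a simulated 5 Hz mode.

```python
# Minimal SISO Eigensystem Realization Algorithm (illustrative sketch).
import numpy as np

def era_modes(h, dt, order=2):
    """Identify modal frequencies (Hz) and damping ratios from
    impulse-response samples h[0], h[1], ... at spacing dt."""
    n = (len(h) - 1) // 2
    H0 = np.array([[h[1 + i + j] for j in range(n)] for i in range(n)])
    H1 = np.array([[h[2 + i + j] for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(H0)
    Ur, Vr, Si = U[:, :order], Vt[:order].T, np.diag(s[:order] ** -0.5)
    A = Si @ Ur.T @ H1 @ Vr @ Si                # realized discrete state matrix
    poles = np.log(np.linalg.eigvals(A)) / dt   # continuous-time poles
    freqs = np.abs(poles) / (2 * np.pi)
    damping = -poles.real / np.abs(poles)
    return freqs, damping

# Synthetic impulse response of one 5 Hz mode with 2% damping.
fn, zeta, dt = 5.0, 0.02, 0.01
wn = 2 * np.pi * fn
wd = wn * np.sqrt(1 - zeta ** 2)
t = np.arange(201) * dt
h = np.exp(-zeta * wn * t) * np.sin(wd * t)

freqs, damping = era_modes(h, dt)
print(np.round(freqs, 3), np.round(damping, 3))
```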

  5. Super-smooth surface fabrication technique and experimental research.

    PubMed

    Zhang, Linghua; Wang, Junlin; Zhang, Jian

    2012-09-20

    Wheel polishing, a new optical fabrication technique, is proposed for super-smooth surface fabrication of optical components in high-precision optical instruments. The machining mechanism and the removal function contours are investigated in detail. The elastohydrodynamic lubrication theory is adopted to analyze the deformation of the wheel head, the pressure distribution, and the fluid film thickness distribution in the narrow machining zone. The pressure and the shear stress distributions at the interface between the slurry and the sample are numerically simulated. Practical polishing experiments are arranged to analyze the relationship between the wheel-sample distance and the machining rate. It is demonstrated in this paper that the wheel-sample distance directly influences the removal function contours. Moreover, ripples on the wheel surface eventually induce transverse prints on the removal function contours. The surface roughness of fused silica is reduced from an initial 1.267 nm (rms) to less than 0.5 nm (rms). The wheel polishing technique is thus feasible for super-smooth surface fabrication.

  6. Thermal-hydraulic design issues and analysis for the ITER (International Thermonuclear Experimental Reactor) divertor

    SciTech Connect

    Koski, J.A.; Watson, R.D.; Hassanien, A.M.; Goranson, P.L. (Fusion Engineering Design Center); Salmonson, J.C. (Special Projects)

    1990-01-01

    Critical Heat Flux (CHF), also called burnout, is one of the major design limits for water-cooled divertors in tokamaks. Another important design issue is the correct thermal modeling of the divertor plate geometry where heat is applied to only one side of the plate and highly subcooled flow boiling in internal passages is used for heat removal. This paper discusses analytical techniques developed to address these design issues, and the experimental evidence gathered in support of the approach. Typical water-cooled divertor designs for the International Thermonuclear Experimental Reactor (ITER) are analyzed, and design margins estimated. Peaking of the heat flux at the tube-water boundary is shown to be an important issue, and design concerns which could lead to imposing large design safety margins are identified. The use of flow enhancement techniques such as internal twisted tapes and fins is discussed, and some estimates of the gains in the design margin are presented. Finally, unresolved issues and concerns regarding hydraulic design of divertors are summarized, and some experiments which could help the ITER final design process are identified. 23 refs., 10 figs.

  7. Application of hazard assessment techniques in the CISF design process

    SciTech Connect

    Thornton, J.R.; Henry, T.

    1997-10-29

    The Department of Energy has submitted to the NRC staff for review a topical safety analysis report (TSAR) for a Centralized Interim Storage Facility (CISF). The TSAR will be used in licensing the CISF when and if a site is designated. CISF1 design events are identified based on thorough review of design basis events (DBEs) previously identified by dry storage system suppliers and licensees and through the application of hazard assessment techniques. A Preliminary Hazards Assessment (PHA) is performed to identify design events applicable to a Phase 1 non site specific CISF. A PHA is deemed necessary since the Phase 1 CISF is distinguishable from previous dry store applications in several significant operational scope and design basis aspects. In addition to assuring all design events applicable to the Phase 1 CISF are identified, the PHA served as an integral part of the CISF design process by identifying potential important to safety and defense in depth facility design and administrative control features. This paper describes the Phase 1 CISF design event identification process and summarizes significant PHA contributions to the CISF design.

  8. Dynamic Measurement of the J Integral in Ductile Metals: Comparison of Experimental and Numerical Techniques

    DTIC Science & Technology

    1988-08-01

    proven experimental techniques for measuring J under static loading, few proven experimental techniques exist for measurement of the time history of J... Freund [9], who estimate Jd by measuring the transient load-displacement records and by using the quasi-static formula for deeply notched round bars... HY-100 steel, loaded by a projectile, are compared to experimental measurements performed by means of the interferometric strain-displacement gauge

  9. Experimental techniques for in-ring reaction experiments

    NASA Astrophysics Data System (ADS)

    Mutterer, M.; Egelhof, P.; Eremin, V.; Ilieva, S.; Kalantar-Nayestanaki, N.; Kiselev, O.; Kollmus, H.; Kröll, T.; Kuilman, M.; Chung, L. X.; Najafi, M. A.; Popp, U.; Rigollet, C.; Roy, S.; von Schmid, M.; Streicher, B.; Träger, M.; Yue, K.; Zamora, J. C.; the EXL Collaboration

    2015-11-01

    As a first step of the EXL project scheduled for the New Experimental Storage Ring at FAIR a precursor experiment (E105) was performed at the ESR at GSI. For this experiment, an innovative differential pumping concept, originally proposed for the EXL recoil detector ESPA, was successfully applied. The implementation and essential features of this novel technical concept will be discussed, as well as details on the detectors and the infrastructure around the internal gas-jet target. With 56Ni(p, p)56Ni elastic scattering at 400 MeV u-1, a nuclear reaction experiment with stored radioactive beams was realized for the first time. Finally, perspectives for a next-generation EXL-type setup are briefly discussed.

  10. Demonstration of decomposition and optimization in the design of experimental space systems

    NASA Technical Reports Server (NTRS)

    Padula, Sharon; Sandridge, Chris A.; Haftka, Raphael T.; Walsh, Joanne L.

    1989-01-01

    Effective design strategies for a class of systems which may be termed Experimental Space Systems (ESS) are needed. These systems, which include large space antennas and observatories, space platforms, earth satellites and deep space explorers, have special characteristics which make them particularly difficult to design. It is argued here that these same characteristics encourage the use of advanced computer-aided optimization and planning techniques. The broad goal of this research is to develop optimization strategies for the design of ESS. These strategies would account for the possibly conflicting requirements of mission life, safety, scientific payoffs, initial system cost, launch limitations and maintenance costs. The strategies must also preserve the coupling between disciplines or between subsystems. Here, the specific purpose is to describe a computer-aided planning and scheduling technique. This technique provides the designer with a way to map the flow of data between multidisciplinary analyses. The technique is important because it enables the designer to decompose the system design problem into a number of smaller subproblems. The planning and scheduling technique is demonstrated by its application to a specific preliminary design problem.

  11. Design of high speed proprotors using multiobjective optimization techniques

    NASA Technical Reports Server (NTRS)

    Mccarthy, Thomas R.; Chattopadhyay, Aditi

    1993-01-01

    A multidisciplinary optimization procedure is developed for the design of high speed proprotors. The objectives are to simultaneously maximize the propulsive efficiency in high speed cruise without sacrificing the rotor figure of merit in hover. Since the problem involves multiple design objectives, multiobjective function formulation techniques are used. A detailed two-celled isotropic box beam is used to model the load carrying member within the rotor blade. Constraints are imposed on rotor blade aeroelastic stability in cruise, the first natural frequency in hover and total blade weight. Both aerodynamic and structural design variables are used. The results obtained using both techniques are compared to the reference rotor and show significant aerodynamic performance improvements without sacrificing dynamic and aeroelastic stability requirements.

  12. A technique for determining solar irradiance deficits. [photovoltaic arrays design

    NASA Technical Reports Server (NTRS)

    Gonzalez, C. C.; Ross, R. G., Jr.

    1982-01-01

    An analytic technique which determines the variation of solar irradiance from long term averages is presented. The technique involves computer-assisted data reduction techniques, and was designed to improve system reliability by determining the amount of storage capability required to supplement a baseline system. Variations in time intervals of up to 60 days can be determined, and 10 years of data collection are reviewed. The technique involves first calculating average monthly irradiance values, then examining the average irradiance deviation over time intervals. The calculation procedure is clarified by determining solar energy level probabilities and the long term solar energy deviation (achieved by repeatedly integrating actual irradiance figures). It is found that a 15% increase in collector area and the addition of energy storage or backup are essential contributions to achieving cost-effectiveness. In addition, one to seven no-sun day storage capacities are required to accommodate weather caused deficits.
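
    The deficit calculation outlined above reduces to a running shortfall against the long-term mean. In the sketch below the daily irradiance values are invented for illustration; the worst cumulative deficit, expressed in average days, corresponds to the "no-sun day" storage capacity the abstract refers to.

```python
# Sizing storage from irradiance deficits (illustrative data).
daily = [6.2, 5.8, 1.1, 0.4, 0.9, 5.5, 6.0, 6.3, 2.0, 6.1]  # kWh/m^2/day
mean = sum(daily) / len(daily)

worst = deficit = 0.0
for d in daily:
    deficit = max(0.0, deficit + (mean - d))  # shortfall vs. the average
    worst = max(worst, deficit)

print(f"long-term mean: {mean:.2f} kWh/m^2/day")
print(f"worst cumulative deficit: {worst:.2f} kWh/m^2")
print(f"no-sun days of storage needed: {worst / mean:.1f}")
```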

  13. Experimental and Computational Techniques in Soft Condensed Matter Physics

    NASA Astrophysics Data System (ADS)

    Olafsen, Jeffrey

    2010-09-01

    1. Microscopy of soft materials Eric R. Weeks; 2. Computational methods to study jammed systems Carl F. Schreck and Corey S. O'Hern; 3. Soft random solids: particulate gels, compressed emulsions and hybrid materials Anthony D. Dinsmore; 4. Langmuir monolayers Michael Dennin; 5. Computer modeling of granular rheology Leonardo E. Silbert; 6. Rheological and microrheological measurements of soft condensed matter John R. de Bruyn and Felix K. Oppong; 7. Particle-based measurement techniques for soft matter Nicholas T. Ouellette; 8. Cellular automata models of granular flow G. William Baxter; 9. Photoelastic materials Brian Utter; 10. Image acquisition and analysis in soft condensed matter Jeffrey S. Olafsen; 11. Structure and patterns in bacterial colonies Nicholas C. Darnton.

  14. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…
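
    The two-level factorial analysis such a session teaches can be sketched in a few lines. The factor names and brightness responses below are hypothetical, since the record does not list the actual ones; each main effect is the average response change when its factor moves from the low (-1) to the high (+1) level.

```python
# Two-level full factorial design with main-effect estimation
# (hypothetical factors and responses for illustration).
from itertools import product

factors = ["luminol_conc", "oxidant_conc", "catalyst_conc"]  # assumed names
design = list(product([-1, 1], repeat=len(factors)))         # 2^3 = 8 runs

# Hypothetical brightness measurements, one per run, in design order.
response = [12, 15, 20, 26, 11, 14, 22, 25]

for k, name in enumerate(factors):
    effect = sum(r * run[k] for run, r in zip(design, response)) \
             / (len(design) / 2)
    print(f"{name:>13s} main effect: {effect:+.2f}")
```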

  15. Model Selection in Systems Biology Depends on Experimental Design

    PubMed Central

    Silk, Daniel; Kirk, Paul D. W.; Barnes, Chris P.; Toni, Tina; Stumpf, Michael P. H.

    2014-01-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis. PMID:24922483

  17. Model-Based Optimal Experimental Design for Complex Physical Systems

    DTIC Science & Technology

    2015-12-03

    Sponsoring/monitoring agency: Jean-Luc Cambier, Program Officer, Computational Mathematics, AFOSR/RTA, 875 N... computational tools have been inadequate. Our goal has been to develop new mathematical formulations, estimation approaches, and approximation strategies... previous suboptimal approaches. Subject terms: computational mathematics; optimal experimental design; uncertainty quantification; Bayesian inference

  18. Experimental design for single point diamond turning of silicon optics

    SciTech Connect

    Krulewich, D.A.

    1996-06-16

    The goal of these experiments is to determine optimum cutting factors for the machining of silicon optics. This report describes experimental design, a systematic method of selecting optimal settings for a limited set of experiments, and its use in the silicon-optics turning experiments. 1 fig., 11 tabs.

  19. Single-Subject Experimental Design for Evidence-Based Practice

    ERIC Educational Resources Information Center

    Byiers, Breanne J.; Reichle, Joe; Symons, Frank J.

    2012-01-01

    Purpose: Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication sciences and disorders. The purpose of this article is to review the strategies and tactics of SSEDs and their application in speech-language pathology research. Method: The authors…

  20. Model selection in systems biology depends on experimental design.

    PubMed

    Silk, Daniel; Kirk, Paul D W; Barnes, Chris P; Toni, Tina; Stumpf, Michael P H

    2014-06-01

    Experimental design attempts to maximise the information available for modelling tasks. An optimal experiment allows the inferred models or parameters to be chosen with the highest expected degree of confidence. If the true system is faithfully reproduced by one of the models, the merit of this approach is clear - we simply wish to identify it and the true parameters with the most certainty. However, in the more realistic situation where all models are incorrect or incomplete, the interpretation of model selection outcomes and the role of experimental design needs to be examined more carefully. Using a novel experimental design and model selection framework for stochastic state-space models, we perform high-throughput in-silico analyses on families of gene regulatory cascade models, to show that the selected model can depend on the experiment performed. We observe that experimental design thus makes confidence a criterion for model choice, but that this does not necessarily correlate with a model's predictive power or correctness. Finally, in the special case of linear ordinary differential equation (ODE) models, we explore how wrong a model has to be before it influences the conclusions of a model selection analysis.

  1. Bands to Books: Connecting Literature to Experimental Design

    ERIC Educational Resources Information Center

    Bintz, William P.; Moore, Sara Delano

    2004-01-01

    This article describes an interdisciplinary unit of study on the inquiry process and experimental design that seamlessly integrates math, science, and reading using a rubber band cannon. This unit was conducted over an eight-day period in two sixth-grade classes (one math and one science with each class consisting of approximately 27 students and…

  2. Using Propensity Score Methods to Approximate Factorial Experimental Designs

    ERIC Educational Resources Information Center

    Dong, Nianbo

    2011-01-01

    The purpose of this study is through Monte Carlo simulation to compare several propensity score methods in approximating factorial experimental design and identify best approaches in reducing bias and mean square error of parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…

  3. A Realistic Experimental Design and Statistical Analysis Project

    ERIC Educational Resources Information Center

    Muske, Kenneth R.; Myers, John A.

    2007-01-01

    A realistic applied chemical engineering experimental design and statistical analysis project is documented in this article. This project has been implemented as part of the professional development and applied statistics courses at Villanova University over the past five years. The novel aspects of this project are that the students are given a…

  7. Experimental design to evaluate directed adaptive mutation in Mammalian cells.

    PubMed

    Bordonaro, Michael; Chiaro, Christopher R; May, Tobias

    2014-12-09

    We describe the experimental design for a methodological approach to determine whether directed adaptive mutation occurs in mammalian cells. Identification of directed adaptive mutation would have profound practical significance for a wide variety of biomedical problems, including disease development and resistance to treatment. In adaptive mutation, the genetic or epigenetic change is not random; instead, the presence and type of selection influences the frequency and character of the mutation event. Adaptive mutation can contribute to the evolution of microbial pathogenesis, cancer, and drug resistance, and may become a focus of novel therapeutic interventions. Our experimental approach was designed to distinguish between 3 types of mutation: (1) random mutations that are independent of selective pressure, (2) undirected adaptive mutations that arise when selective pressure induces a general increase in the mutation rate, and (3) directed adaptive mutations that arise when selective pressure induces targeted mutations that specifically influence the adaptive response. The purpose of this report is to introduce an experimental design and describe limited pilot experiment data (not to describe a complete set of experiments); hence, it is an early report. An experimental design based on immortalization of mouse embryonic fibroblast cells is presented that links clonal cell growth to reversal of an inactivating polyadenylation site mutation. Thus, cells exhibit growth only in the presence of both the countermutation and an inducing agent (doxycycline). The type and frequency of mutation in the presence or absence of doxycycline will be evaluated. Additional experimental approaches would determine whether the cells exhibit a generalized increase in mutation rate and/or whether the cells show altered expression of error-prone DNA polymerases or of mismatch repair proteins. We performed the initial stages of characterizing our system and have limited preliminary data

  8. Experimental Design to Evaluate Directed Adaptive Mutation in Mammalian Cells

    PubMed Central

    Bordonaro, Michael; Chiaro, Christopher R; May, Tobias

    2014-01-01

    Background: We describe the experimental design for a methodological approach to determine whether directed adaptive mutation occurs in mammalian cells. Identification of directed adaptive mutation would have profound practical significance for a wide variety of biomedical problems, including disease development and resistance to treatment. In adaptive mutation, the genetic or epigenetic change is not random; instead, the presence and type of selection influences the frequency and character of the mutation event. Adaptive mutation can contribute to the evolution of microbial pathogenesis, cancer, and drug resistance, and may become a focus of novel therapeutic interventions. Objective: Our experimental approach was designed to distinguish between 3 types of mutation: (1) random mutations that are independent of selective pressure, (2) undirected adaptive mutations that arise when selective pressure induces a general increase in the mutation rate, and (3) directed adaptive mutations that arise when selective pressure induces targeted mutations that specifically influence the adaptive response. The purpose of this report is to introduce an experimental design and describe limited pilot experiment data (not to describe a complete set of experiments); hence, it is an early report. Methods: An experimental design based on immortalization of mouse embryonic fibroblast cells is presented that links clonal cell growth to reversal of an inactivating polyadenylation site mutation. Thus, cells exhibit growth only in the presence of both the countermutation and an inducing agent (doxycycline). The type and frequency of mutation in the presence or absence of doxycycline will be evaluated. Additional experimental approaches would determine whether the cells exhibit a generalized increase in mutation rate and/or whether the cells show altered expression of error-prone DNA polymerases or of mismatch repair proteins. Results: We performed the initial stages of characterizing our system

  9. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
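
    The main-effects arithmetic behind an orthogonal-array analysis of this kind is compact enough to sketch directly. The following Python fragment uses a hypothetical L4 array and made-up response values (not data from the article) to show how three two-level factors are screened with only four trials:

```python
import numpy as np

# L4 orthogonal array: 3 two-level factors (A, B, C) in 4 trials,
# instead of the 2^3 = 8 trials a full factorial would require.
L4 = np.array([
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
])

# Hypothetical measured responses for the 4 trials.
y = np.array([12.0, 15.0, 14.0, 19.0])

# Main effect of each factor: mean response at level 1 minus level 0.
effects = [y[L4[:, f] == 1].mean() - y[L4[:, f] == 0].mean()
           for f in range(L4.shape[1])]
print(effects)  # the factor with the largest |effect| dominates
```

    Note that this balance is exactly what makes the analysis easy to teach, and also why factor interactions are confounded with main effects.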

  11. Time-Dependent Reversible-Irreversible Deformation Threshold Determined Explicitly by Experimental Technique

    NASA Technical Reports Server (NTRS)

    Castelli, Michael G.; Arnold, Steven M.

    2000-01-01

    Structural materials for the design of advanced aeropropulsion components are usually subject to loading under elevated temperatures, where a material's viscosity (resistance to flow) is greatly reduced in comparison to its viscosity under low-temperature conditions. As a result, the propensity for the material to exhibit time-dependent deformation is significantly enhanced, even when loading is limited to a quasi-linear stress-strain regime in an effort to avoid permanent (irreversible) nonlinear deformation. An understanding and assessment of such time-dependent effects in the context of combined reversible and irreversible deformation is critical to the development of constitutive models that can accurately predict the general hereditary behavior of material deformation. To this end, researchers at the NASA Glenn Research Center at Lewis Field developed a unique experimental technique that identifies the existence of, and explicitly determines, a threshold stress k, below which the time-dependent material deformation is wholly reversible and above which irreversible deformation is incurred. This technique is unique in the sense that it allows, for the first time, an objective, explicit, experimental measurement of k. The underlying concept for the experiment is based on the assumption that the material's time-dependent reversible response is invariable, even in the presence of irreversible deformation.

  12. Experimental pressure solution creep of quartz by indenter technique

    NASA Astrophysics Data System (ADS)

    Gratier, J.; Guiguet, R.; Renard, F.; Jenatton, L.

    2006-12-01

    The principle of the experiment is to measure the displacement rate of an indenter that dissolves a mineral under stress, in order to establish creep laws. A stainless steel cylindrical indenter (200 microns diameter) mounted under a free-moving piston is put in contact with a crystal of quartz in the presence of its saturated solution. A dead weight placed on the piston sets the stress. The device is maintained within a pressure vessel for several weeks or months at constant temperature and fluid pressure. The depths of the dissolution holes are measured at the end of the experiments. Various experimental protocols have been used, differing in (i) the quartz (synthetic or natural), (ii) the nature of the solution (NaOH N, H2O, dry), (iii) the way the solid/solution/solid contact is filled, and (iv) the relation between the stress and the optical axis of the quartz. Results are shown as displacement-rate versus stress relations for the 4 configurations, always with the same temperature (350°C), solution (NaOH N), and fluid pressure (200 MPa), and with durations of several weeks or months. When using a dry contact or water, no significant hole could be seen. Short durations (days) never allowed a measurable hole to develop. The results show a large scattering of displacement rates for the same stress values, even for the same protocol. From observations under the microscope, two explanations are possible: either a strong effect of the roughening of the dissolution interface, which evolves with time and seems to play a crucial role in the displacement-rate versus stress relation, or some effect of temporary undersaturation during the experiment due to experimental perturbations. The results also show a large overlap between the displacement rates obtained with the 4 protocols. Plotting all the results on the same log-log diagram shows a displacement-rate versus stress relation that fits a power law with a stress exponent of 1.75. Due to the relatively high stress values this is not
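
    The final fitting step described above, extracting a stress exponent from a log-log plot, can be illustrated with a short sketch. The data below are synthetic, generated with an assumed exponent of 1.75 and log-space scatter mimicking the reported spread; none of the values are the experimental measurements:

```python
import numpy as np

# Synthetic displacement-rate data (hypothetical values) following a
# power law rate = A * stress^n with n = 1.75.
rng = np.random.default_rng(0)
stress = np.array([20., 50., 100., 150., 200.])      # MPa
true_n, true_A = 1.75, 1e-6
rate = true_A * stress**true_n * np.exp(rng.normal(0, 0.3, stress.size))

# A power law is a straight line in log-log space, so the stress
# exponent n is the slope of a linear least-squares fit.
n_fit, logA_fit = np.polyfit(np.log(stress), np.log(rate), 1)
print(round(n_fit, 2))
```

    The same slope-of-the-log-log-line procedure recovers the exponent regardless of the prefactor, which is why scattered rates from the 4 protocols can still be pooled onto one diagram.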

  13. Advanced Computational and Experimental Techniques for Nacelle Liner Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Jones, Michael G.; Brown, Martha C.; Nark, Douglas

    2009-01-01

    The Curved Duct Test Rig (CDTR) has been developed to investigate sound propagation through a duct of size comparable to the aft bypass duct of typical aircraft engines. The axial dimension of the bypass duct is often curved and this geometric characteristic is captured in the CDTR. The semiannular bypass duct is simulated by a rectangular test section in which the height corresponds to the circumferential dimension and the width corresponds to the radial dimension. The liner samples are perforate over honeycomb core and are installed on the side walls of the test section. The top and bottom surfaces of the test section are acoustically rigid to simulate a hard wall bifurcation or pylon. A unique feature of the CDTR is the control system that generates sound incident on the liner test section in specific modes. Uniform air flow, at ambient temperature and flow speed Mach 0.275, is introduced through the duct. Experiments to investigate configuration effects such as curvature along the flow path on the acoustic performance of a sample liner are performed in the CDTR and reported in this paper. Combinations of treated and acoustically rigid side walls are investigated. The scattering of modes of the incident wave, both by the curvature and by the asymmetry of wall treatment, is demonstrated in the experimental results. The effect that mode scattering has on total acoustic effectiveness of the liner treatment is also shown. Comparisons of measured liner attenuation with numerical results predicted by an analytic model based on the parabolic approximation to the convected Helmholtz equation are reported. The spectra of attenuation produced by the analytic model are similar to experimental results for both walls treated, straight and curved flow path, with plane wave and higher order modes incident. 
The numerical model is used to define the optimized resistance and reactance of a liner that significantly improves liner attenuation in the frequency range 1900-2400 Hz. A

  14. The experimental technique of the G^0 measurement.

    NASA Astrophysics Data System (ADS)

    Roche, Julie

    2001-10-01

    The G^0 experiment(JLab experiment E00-006, D.H. Beck, spokesperson.) will measure the parity-violating asymmetries in elastic electron-nucleon scattering. The experiment will be performed in Hall C at Jefferson Lab using a dedicated apparatus. In order to achieve the required statistical accuracy of the measurements, a super-conducting toroidal spectrometer, with azimuthally symmetric angular acceptance, and an associated cryogenic target have been constructed. The Focal Plane Detectors are arranged in 8 arrays of 16 arc-shaped scintillator pairs providing a fast signal compatible with the high rates of this counting experiment. For the forward angle measurement, custom built electronics will provide a time-of-flight measurement discriminating elastic recoil protons from pions and inelastic protons. For the backward angle measurements, additional scintillators and Cerenkov counters will provide separation of the elastic electrons from inelastic electrons and pions. An overview of the experimental apparatus and method for the G^0 measurement will be presented.

  15. Experimental apparatus and sample preparation techniques for directional solidification

    NASA Astrophysics Data System (ADS)

    Utter, B.; Ragnarsson, R.; Bodenschatz, E.

    2005-01-01

    We describe a directional solidification stage which allows the controlled solidification of transparent organic alloys. We present two variations of the experiment. In one, large aspect ratio sample cells can be rotated with respect to the temperature gradient between runs, allowing full 360° control over in-plane sample orientation. In the other, thin-walled capillaries are pulled through an oil-filled channel which is optimized for high speed solidification studies (V ≈ 5 mm/s). The use of large aspect ratio cells (≈11 cm diameter, rotatable) and long capillary cells (≈38 cm) allows solidification for significant times even at rapid solidification rates. We describe in detail material purification, cell construction, and vacuum filling procedures which allow high quality sample preparation completely under inert atmosphere. Succinonitrile is purified using a sublimation apparatus and samples are filled directly from the sublimation chamber. Vacuum-filling epoxied cells produces long-lasting degassed samples. The techniques presented are also suitable for similar materials such as liquid crystals, CBr4, and pivalic acid. Additional features of the experiment include a linear stepper motor and a linear temperature gradient (3–150 K/cm).

  16. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial.

    PubMed

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-08-01

    This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several different key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to the mathematical optimization tool and wish to apply it to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and giving an overview of some of its techniques that could be helpful in design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used in the design. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques, and experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes.
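
    As a flavor of the kind of formulation such a tutorial covers, a toy WSN duty-cycling problem can be posed as a linear program. The numbers and coverage pattern below are illustrative only; this specific example is not from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Toy WSN duty-cycling LP: choose activation fractions x_i in [0, 1]
# for 3 sensors so that each of 2 targets receives total coverage >= 1,
# while minimizing total energy cost.
energy = [3.0, 2.0, 4.0]            # per-sensor energy cost (assumed)
cover = np.array([[1, 1, 0],        # target 1 is seen by sensors 1, 2
                  [0, 1, 1]])       # target 2 is seen by sensors 2, 3

# linprog expects A_ub @ x <= b_ub, so negate the >= coverage rows.
res = linprog(c=energy, A_ub=-cover, b_ub=[-1, -1],
              bounds=[(0, 1)] * 3, method="highs")
print(res.x)   # optimal: fully activate the cheap sensor covering both
```

    In practice a coverage problem like this is often an integer program; the LP relaxation shown here is the standard first step the optimization-theory part of such a tutorial would introduce.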

  17. Optimization Techniques for Design Problems in Selected Areas in WSNs: A Tutorial

    PubMed Central

    Ibrahim, Ahmed; Alfa, Attahiru

    2017-01-01

    This paper is intended to serve as an overview of, and mostly a tutorial to illustrate, the optimization techniques used in several different key design aspects that have been considered in the literature of wireless sensor networks (WSNs). It targets researchers who are new to the mathematical optimization tool and wish to apply it to WSN design problems. We hence divide the paper into two main parts. One part is dedicated to introducing optimization theory and giving an overview of some of its techniques that could be helpful in design problems in WSNs. In the second part, we present a number of design aspects that we came across in the WSN literature in which mathematical optimization methods have been used in the design. For each design aspect, a key paper is selected, and for each we explain the formulation techniques and the solution methods implemented. We also provide in-depth analyses and assessments of the problem formulations, the corresponding solution techniques, and experimental procedures in some of these papers. The analyses and assessments, which are provided in the form of comments, are meant to reflect the points that we believe should be taken into account when using optimization as a tool for design purposes. PMID:28763039

  18. EXPERIMENTAL DESIGN OF A FLUID-CONTROLLED HOT GAS VALVE

    DTIC Science & Technology

    Effort is described toward development of a hot gas jet reaction valve utilizing boundary layer techniques to control a high-pressure, high-temperature gas stream. The result has been the successful design of a hot gas valve in a reaction control system utilizing fluid-controlled bi-stable

  19. System identification and controller design using experimental frequency response data

    NASA Technical Reports Server (NTRS)

    Irwin, R. Dennis

    1990-01-01

    Recent findings from modeling and controller design for the NASA-Marshall Single Structure Control Facility have raised questions regarding the ability of modern control design techniques and modern modeling techniques to deal effectively with the stringent modeling and control design requirements of Large Space Structure Control. A brief and general discussion is presented of the results of studies into the modeling and control issues performed under sponsorship of the NASA/ASEE Summer Faculty Fellowship Program. Several issues are addressed. The first is a study of a modeling technique based on least squares identification of individual transfer functions from measured frequency response data. The second is a study of multiobjective optimization techniques applied to the modeling, or system identification, problem. The third issue is a study into the question of whether multiobjective optimization approaches can be effectively used for control system design using only frequency response data, thereby bypassing the difficult modeling problem. The last issue studied involves the resolution of seeming discrepancies between predicted and measured control computer time delays in the Single Structure Control Facility.
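
    The first issue, least-squares identification of a transfer function from measured frequency-response data, can be sketched in a few lines. The fragment below applies a Levy-style linearization to synthetic data; the first-order model and its parameter values are assumptions for illustration, not the facility's dynamics:

```python
import numpy as np

# Fit H(s) = b0 / (s + a0) to frequency-response samples. Synthetic
# "measurements" are generated from b0 = 2, a0 = 5.
w = np.linspace(0.1, 50, 200)
s = 1j * w
H = 2.0 / (s + 5.0)

# Multiply through by the denominator: b0 - a0*H = s*H,
# which is linear in the unknowns (b0, a0).
A = np.column_stack([np.ones_like(s), -H])
rhs = s * H
# Stack real and imaginary parts so lstsq operates on real numbers.
A_ri = np.vstack([A.real, A.imag])
rhs_ri = np.concatenate([rhs.real, rhs.imag])
(b0, a0), *_ = np.linalg.lstsq(A_ri, rhs_ri, rcond=None)
print(b0, a0)   # recovers 2.0 and 5.0
```

    With noisy data the same linearization biases the fit toward high frequencies, which is one reason weighting and multiobjective formulations, as studied above, become attractive.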

  20. Active flutter suppression - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1991-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind-tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in flutter dynamic pressure and flutter frequency in the mathematical model. The flutter suppression controller was also successfully operated in combination with a roll maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  1. Computational design and experimental validation of new thermal barrier systems

    SciTech Connect

    Guo, Shengmin

    2015-03-31

    The focus of this project is on the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist the Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on the computational simulation method development. We have performed ab initio density functional theory (DFT) and molecular dynamics simulations on screening the top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validations, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  3. Experimental brain hyperthermia: techniques for heat delivery and thermometry.

    PubMed

    Ryan, T P; Hoopes, P J; Taylor, J H; Strohbehn, J W; Roberts, D W; Douple, E B; Coughlin, C T

    1991-04-01

    An experimental canine brain model was developed to assess the effects of hyperthermia for a range of time and temperature endpoints, delivered within a specified distance of an interstitial microwave antenna in normal brain. The target temperature location was defined radially at 5.0 or 7.5 mm from the microwave source at the longitudinal location of maximum heating along the antenna in the left cerebral cortex. Temperatures were measured with fiberoptic probes in a coronal plane at this location in an orthogonal catheter at 1.0 mm intervals. Six antennas were evaluated, including dipole, modified dipole, and four shorted helical antennas with coil lengths from 0.5 to 3.9 cm. Antenna performance evaluated in tissue equivalent phantom by adjusting frequency at a fixed insertion depth of 7.8 cm or adjusting insertion depth at 915 MHz showed dipoles to be much more sensitive to insertion depth and frequency change than helical antennas. Specific absorption rate (SAR) was measured in a brain/skull phantom and isoSAR contours were plotted. In vivo temperature studies were also used to evaluate antenna performance in large and small canine brain tissues. A helical antenna with a 2.0 cm coil length driven at 915 MHz was chosen for the beagle experiments because of tip heating characteristics, well-localized heating along the coil length, and heating pattern appropriate to the smaller beagle cranial vault. Verification of lesion dimensions in 3-D was obtained by orthogonal MRI scans and histology to document the desired heat effect, which was to obtain an imagable lesion with well-defined blood-brain-barrier breakdown and necrotic zones. The desired lesion size was between 1.5 to 2.5 cm diameter radially, in the coronal plane with the greatest diameter.
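
    Microwave heating experiments of this kind commonly estimate the specific absorption rate from the initial temperature slope after power-on, SAR = c_p · dT/dt. A minimal sketch with illustrative values (not measurements from this study):

```python
# SAR from the initial linear temperature rise at power-on.
# All numbers below are illustrative assumptions.
c_p = 3630.0          # specific heat of brain tissue, J/(kg*K), approx.
dT = 0.6              # temperature rise in K ...
dt = 30.0             # ... over the first 30 s of heating
sar = c_p * dT / dt   # W/kg
print(sar)
```

    The initial-slope method assumes conduction and perfusion losses are negligible over the short interval dt, which is why the fiberoptic probes must be sampled densely right at power-on.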

  4. Stem cell clonality -- theoretical concepts, experimental techniques, and clinical challenges.

    PubMed

    Glauche, Ingmar; Bystrykh, Leonid; Eaves, Connie; Roeder, Ingo

    2013-04-01

    Here we report highlights of discussions and results presented at an International Workshop on Concepts and Models of Stem Cell Organization held on July 16th and 17th, 2012 in Dresden, Germany. The goal of the workshop was to undertake a systematic survey of state-of-the-art methods and results of clonality studies of tissue regeneration and maintenance with a particular emphasis on the hematopoietic system. The meeting was the 6th in a series of similar conceptual workshops, termed StemCellMathLab, all of which have had the general objective of using an interdisciplinary approach to discuss specific aspects of stem cell biology. The StemCellMathLab 2012, which was jointly organized by the Institute for Medical Informatics and Biometry, Medical Faculty Carl Gustav Carus, Dresden University of Technology and the Institute for Medical Informatics, Statistics and Epidemiology, Medical Faculty, University of Leipzig, brought together 32 scientists from 8 countries, with scientific backgrounds in medicine, cell biology, virology, physics, computer sciences, bioinformatics and mathematics. The workshop focused on the following questions: (1) How heterogeneous are stem cells and their progeny? and (2) What are the characteristic differences in the clonal dynamics between physiological and pathophysiological situations? In discussing these questions, particular emphasis was placed on (a) the methods for quantifying clones and their dynamics in experimental and clinical settings and (b) general concepts and models for their description. In this workshop summary we start with an introduction to the current state of clonality research and a proposal for clearly defined terminology. Major topics of discussion include clonal heterogeneity in unperturbed tissues, clonal dynamics due to physiological and pathophysiological pressures and conceptual and technical issues of clone quantification. We conclude that an interactive cross-disciplinary approach to research in this

  5. Translocations of amphibians: Proven management method or experimental technique

    USGS Publications Warehouse

    Seigel, Richard A.; Dodd, C. Kenneth

    2002-01-01

    In an otherwise excellent review of metapopulation dynamics in amphibians, Marsh and Trenham (2001) make the following provocative statements (emphasis added): If isolation effects occur primarily in highly disturbed habitats, species translocations may be necessary to promote local and regional population persistence. Because most amphibians lack parental care, they are prime candidates for egg and larval translocations. Indeed, translocations have already proven successful for several species of amphibians. Where populations are severely isolated, translocations into extinct subpopulations may be the best strategy to promote regional population persistence. We take issue with these statements for a number of reasons. First, the authors fail to cite much of the relevant literature on species translocations in general and for amphibians in particular. Second, to those unfamiliar with current research in amphibian conservation biology, these comments might suggest that translocations are a proven management method. This is not the case, at least in most instances where translocations have been evaluated for an appropriate period of time. Finally, the authors fail to point out some of the negative aspects of species translocation as a management method. We realize that Marsh and Trenham's paper was not concerned primarily with translocations. However, because Marsh and Trenham (2001) made specific recommendations for conservation planners and managers (many of whom are not herpetologists or may not be familiar with the pertinent literature on amphibians), we believe that it is essential to point out that not all amphibian biologists are as comfortable with translocations as these authors appear to be. We especially urge caution about advocating potentially unproven techniques without a thorough review of available options.

  6. Analytical and experimental performance of optimal controller designs for a supersonic inlet

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Lehtinen, B.; Geyser, L. C.; Batterton, P. G.

    1973-01-01

    The techniques of modern optimal control theory were applied to the design of a control system for a supersonic inlet. The inlet control problem was approached as a linear stochastic optimal control problem using as the performance index the expected frequency of unstarts. The details of the formulation of the stochastic inlet control problem are presented. The computational procedures required to obtain optimal controller designs are discussed, and the analytically predicted performance of controllers designed for several different inlet conditions is tabulated. The experimental implementation of the optimal control laws is described, and the experimental results obtained in a supersonic wind tunnel are presented. The control laws were implemented with analog and digital computers. Comparisons are made between the experimental and analytically predicted performance results. Comparisons are also made between the results obtained with continuous analog computer controllers and discrete digital computer versions.
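
    A modern equivalent of such a linear-quadratic optimal design can be sketched with an off-the-shelf Riccati solver. The two-state model below is a hypothetical stand-in, not the actual inlet dynamics:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy 2-state LQR design (hypothetical dynamics and weights).
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])     # penalize state excursions (e.g. shock position)
R = np.array([[1.0]])        # penalize control effort

# Solve the continuous-time algebraic Riccati equation and form the
# optimal state-feedback gain K = R^-1 B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The closed-loop matrix A - B K should have all eigenvalues in the
# left half plane.
eigs = np.linalg.eigvals(A - B @ K)
print(eigs.real)
```

    The stochastic formulation in the paper additionally shapes the weights around an unstart-frequency criterion; the Riccati machinery underneath is the same.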

  7. Development of a complex experimental system for controlled ecological life support technique

    NASA Astrophysics Data System (ADS)

    Guo, S.; Tang, Y.; Zhu, J.; Wang, X.; Feng, H.; Ai, W.; Qin, L.; Deng, Y.

    A complex experimental system for controlled ecological life support technique can be used as a test platform for plant-man integrated experiments and material close-loop experiments of the controlled ecological life support system CELSS Based on lots of plan investigation plan design and drawing design the system was built through the steps of processing installation and joined debugging The system contains a volume of about 40 0m 3 its interior atmospheric parameters such as temperature relative humidity oxygen concentration carbon dioxide concentration total pressure lighting intensity photoperiod water content in the growing-matrix and ethylene concentration are all monitored and controlled automatically and effectively Its growing system consists of two rows of racks along its left-and-right sides separately and each of which holds two up-and-down layers eight growing beds hold a total area of about 8 4m 2 and their vertical distance can be adjusted automatically and independently lighting sources consist of both red and blue light-emitting diodes Successful development of the test platform will necessarily create an essential condition for next large-scale integrated study of controlled ecological life support technique

  8. New head gradient coil design and construction techniques.

    PubMed

    Handler, William B; Harris, Chad T; Scholl, Timothy J; Parker, Dennis L; Goodrich, K Craig; Dalrymple, Brian; Van Sass, Frank; Chronik, Blaine A

    2014-05-01

    To design and build a head insert gradient coil to use in conjunction with body gradients for superior imaging. The use of the boundary element method to solve for a gradient coil wire pattern on an arbitrary surface allowed us to incorporate engineering changes into the electromagnetic design of a gradient coil directly. Improved wire pattern design was combined with robust manufacturing techniques and novel cooling methods. The finished coil had an efficiency of 0.15 mT/m/A in all three axes and allowed the imaging region to extend across the entire head and upper part of the neck. The ability to adapt an electromagnetic design to necessary changes from an engineering perspective leads to superior coil performance. Copyright © 2013 Wiley Periodicals, Inc.

  9. New head gradient coil design and construction techniques

    PubMed Central

    Handler, William B; Harris, Chad T; Scholl, Timothy J; Parker, Dennis L; Goodrich, K Craig; Dalrymple, Brian; Van Sass, Frank; Chronik, Blaine A

    2013-01-01

    Purpose To design and build a head insert gradient coil to use in conjunction with body gradients for superior imaging. Materials and Methods The use of the Boundary Element Method to solve for a gradient coil wire pattern on an arbitrary surface has allowed us to incorporate engineering changes into the electromagnetic design of a gradient coil directly. Improved wire pattern design has been combined with robust manufacturing techniques and novel cooling methods. Results The finished coil had an efficiency of 0.15 mT/m/A in all three axes and allowed the imaging region to extend across the entire head and upper part of the neck. Conclusion The ability to adapt your electromagnetic design to necessary changes from an engineering perspective leads to superior coil performance. PMID:24123485

  10. Alveolar Ridge Split Technique Using Piezosurgery with Specially Designed Tips

    PubMed Central

    Moro, Alessandro; Foresta, Enrico; Falchi, Marco; De Angelis, Paolo; D'Amato, Giuseppe; Pelo, Sandro

    2017-01-01

    The treatment of patients with atrophic ridge who need prosthetic rehabilitation is a common problem in oral and maxillofacial surgery. Among the various techniques introduced for the expansion of alveolar ridges with a horizontal bone deficit is the alveolar ridge split technique. The aim of this article is to give a description of some new tips that have been specifically designed for the treatment of atrophic ridges with transversal bone deficit. A two-step piezosurgical split technique is also described, based on specific osteotomies of the vestibular cortex and the use of a mandibular ramus graft as interpositional graft. A total of 15 patients were treated with the proposed new tips by our department. All the expanded areas were successful in providing an adequate width and height to insert implants according to the prosthetic plan and the proposed tips allowed obtaining the most from the alveolar ridge split technique and piezosurgery. These tips have made alveolar ridge split technique simple, safe, and effective for the treatment of horizontal and vertical bone defects. Furthermore the proposed piezosurgical split technique allows obtaining horizontal and vertical bone augmentation. PMID:28246596

  11. Alveolar Ridge Split Technique Using Piezosurgery with Specially Designed Tips.

    PubMed

    Moro, Alessandro; Gasparini, Giulio; Foresta, Enrico; Saponaro, Gianmarco; Falchi, Marco; Cardarelli, Lorenzo; De Angelis, Paolo; Forcione, Mario; Garagiola, Umberto; D'Amato, Giuseppe; Pelo, Sandro

    2017-01-01

    The treatment of patients with atrophic ridge who need prosthetic rehabilitation is a common problem in oral and maxillofacial surgery. Among the various techniques introduced for the expansion of alveolar ridges with a horizontal bone deficit is the alveolar ridge split technique. The aim of this article is to give a description of some new tips that have been specifically designed for the treatment of atrophic ridges with transversal bone deficit. A two-step piezosurgical split technique is also described, based on specific osteotomies of the vestibular cortex and the use of a mandibular ramus graft as interpositional graft. A total of 15 patients were treated with the proposed new tips by our department. All the expanded areas were successful in providing an adequate width and height to insert implants according to the prosthetic plan and the proposed tips allowed obtaining the most from the alveolar ridge split technique and piezosurgery. These tips have made alveolar ridge split technique simple, safe, and effective for the treatment of horizontal and vertical bone defects. Furthermore the proposed piezosurgical split technique allows obtaining horizontal and vertical bone augmentation.

  12. Experimental techniques for evaluating steady-state jet engine performance in an altitude facility

    NASA Technical Reports Server (NTRS)

    Smith, J. M.; Young, C. Y.; Antl, R. J.

    1971-01-01

    Jet engine calibration tests were conducted in an altitude facility using a contoured bellmouth inlet duct, four fixed-area water-cooled exhaust nozzles, and an accurately calibrated thrust measuring system. Accurate determination of the airflow measuring station flow coefficient, the flow and thrust coefficients of the exhaust nozzles, and the experimental and theoretical terms in the nozzle gross thrust equation were some of the objectives of the tests. A primary objective was to develop a technique to determine gross thrust for the turbojet engine used in this test that could also be used for future engine and nozzle evaluation tests. The probable error in airflow measurement was found to be approximately 0.6 percent at the bellmouth throat design Mach number of 0.6. The probable error in nozzle gross thrust measurement was approximately 0.6 percent at the load cell full-scale reading.

  13. Design, data analysis and sampling techniques for clinical research.

    PubMed

    Suresh, Karthik; Thomas, Sanjeev V; Suresh, Geetha

    2011-10-01

    Statistical analysis is an essential technique that enables a medical research practitioner to draw meaningful inferences from their data. Improper study design or data analysis may yield insufficient or improper results and conclusions. Converting a medical problem into a statistical hypothesis with appropriate methodological and logical design, and then back-translating the statistical results into relevant medical knowledge, is a real challenge. This article explains various sampling methods that can be appropriately used in medical research in different scenarios and challenges.

  14. Design and experimental results for the S809 airfoil

    SciTech Connect

    Somers, D M

    1997-01-01

    A 21-percent-thick, laminar-flow airfoil, the S809, for horizontal-axis wind-turbine applications, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  15. Design and experimental results for the S805 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    An airfoil for horizontal-axis wind-turbine applications, the S805, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  16. Development of Experimental Setup of Metal Rapid Prototyping Machine using Selective Laser Sintering Technique

    NASA Astrophysics Data System (ADS)

    Patil, S. N.; Mulay, A. V.; Ahuja, B. B.

    2016-08-01

    Unlike traditional manufacturing processes, additive manufacturing, as rapid prototyping, allows designers to produce parts that were previously considered too complex to make economically. A shift is taking place from plastic prototypes to fully functional metallic parts made by direct deposition of metallic powders, as the produced parts can be used directly for the desired purpose. This work is directed towards the development of an experimental setup of a metal rapid prototyping machine using selective laser sintering, and studies the various parameters which play an important role in metal rapid prototyping using the SLS technique. The machine structure is mainly divided into three main categories, namely (1) Z-movement of bed and table, (2) X-Y movement arrangement for LASER movements, and (3) feeder mechanism. Z-movement of the bed is controlled using a lead screw, bevel gear pair, and stepper motor, which maintains the accuracy of the layer thickness. X-Y movements are controlled using timing belts and stepper motors for precise movement of the LASER source. A feeder mechanism is then developed to control the uniformity of the metal powder layer thickness. Simultaneously, a study is carried out for selection of material. Various types of metal powders can be used for metal RP: a single metal powder, a mixture of two metal powders, or a combination of metal and polymer powder. The conclusion leads to the use of a mixture of two metal powders to minimize problems such as the balling effect and porosity. The developed system can be validated by conducting various experiments on the manufactured part to check mechanical and metallurgical properties. After studying the results of these experiments, various process parameters such as LASER properties (power, speed, etc.) and material properties (grain size, structure, etc.) will be optimized. This work is mainly focused on the design and development of a cost-effective experimental setup for metal rapid prototyping using the SLS technique.

  17. Unique considerations in the design and experimental evaluation of tailored wings with elastically produced chordwise camber

    NASA Technical Reports Server (NTRS)

    Rehfield, Lawrence W.; Zischka, Peter J.; Fentress, Michael L.; Chang, Stephen

    1992-01-01

    Some of the unique considerations that are associated with the design and experimental evaluation of chordwise deformable wing structures are addressed. Since chordwise elastic camber deformations are desired and must be free to develop, traditional rib concepts and experimental methodology cannot be used. New rib design concepts are presented and discussed. An experimental methodology based upon the use of a flexible sling support and load application system has been created and utilized to evaluate a model box beam experimentally. Experimental data correlate extremely well with design analysis predictions based upon a beam model for the global properties of camber compliance and spanwise bending compliance. Local strain measurements exhibit trends in agreement with intuition and theory but depart slightly from theoretical perfection based upon beam-like behavior alone. It is conjectured that some additional refinement of experimental technique is needed to explain or eliminate these (minor) departures from asymmetric behavior of upper and lower box cover strains. Overall, a solid basis for the design of box structures based upon the bending method of elastic camber production has been confirmed by the experiments.

  18. Photographic-assisted prosthetic design technique for the anterior teeth.

    PubMed

    Zaccaria, Massimiliano; Squadrito, Nino

    2015-01-01

    The aim of this article is to propose a standardized protocol for treating all inesthetic anterior maxillary situations using a well-planned clinical and photographic technique. As inesthetic aspects should be treated as a pathology, instruments to make a diagnosis are necessary. The prosthetic design to resolve inesthetic aspects, with respect to function, should be considered a therapy, and, as such, instruments to make a prognosis are necessary. A prospective study was conducted to compare the involvement of patients with regard to the alterations to be made, initially with only a graphic esthetic previsualization, and later with an intraoral functional and esthetic previsualization. Significantly different results were shown for the two techniques. The instruments and steps necessary for the intraoral functional and esthetic previsualization technique are explained in detail in this article.

  19. Cost Modeling Techniques for Design Maturity. [of reentry vehicles

    NASA Technical Reports Server (NTRS)

    Ruhland, E. W.

    1974-01-01

    Cost modeling techniques and factors which either add to or subtract from these estimates are examined. The most important factors for increasing costs are interfacing subsystems, subsystem design and software maturity. Cost decrease depends on hardware, software, and support equipment availability. A cost modeling analysis for reentry shield and aerodynamic decelerator subsystems of a reentry vehicle is presented. Integration problems for the subsystems are also discussed.

  20. ITER (International Thermonuclear Experimental Reactor) reactor building design study

    SciTech Connect

    Thomson, S.L.; Blevins, J.D.; Delisle, M.W.; Canadian Fusion Fuels Technology Project, Mississauga, ON )

    1989-01-01

    The International Thermonuclear Experimental Reactor (ITER) is at the midpoint of a two-year conceptual design. The ITER reactor building is a reinforced concrete structure that houses the tokamak and associated equipment and systems and forms a barrier between the tokamak and the external environment. It provides radiation shielding and controls the release of radioactive materials to the environment during both routine operations and accidents. The building protects the tokamak from external events, such as earthquakes or aircraft strikes. The reactor building requirements have been developed from the component designs and the preliminary safety analysis. The equipment requirements, tritium confinement, and biological shielding have been studied. The building design in progress requires continuous interaction with the component and system designs and with the safety analysis. 8 figs.

  1. Designing the Balloon Experimental Twin Telescope for Infrared Interferometry

    NASA Technical Reports Server (NTRS)

    Rinehart, Stephen

    2011-01-01

    While infrared astronomy has revolutionized our understanding of galaxies, stars, and planets, further progress on major questions is stymied by the inescapable fact that the spatial resolution of single-aperture telescopes degrades at long wavelengths. The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII) is an 8-meter boom interferometer to operate in the FIR (30-90 micron) on a high altitude balloon. The long baseline will provide unprecedented angular resolution (approx. 5") in this band. In order for BETTII to be successful, the gondola must be designed carefully to provide a high level of stability with optics designed to send a collimated beam into the cryogenic instrument. We present results from the first 5 months of design effort for BETTII. Over this short period of time, we have made significant progress and are on track to complete the design of BETTII during this year.

  2. TIBER: Tokamak Ignition/Burn Experimental Research. Final design report

    SciTech Connect

    Henning, C.D.; Logan, B.G.; Barr, W.L.; Bulmer, R.H.; Doggett, J.N.; Johnson, B.M.; Lee, J.D.; Hoard, R.W.; Miller, J.R.; Slack, D.S.

    1985-11-01

    The Tokamak Ignition/Burn Experimental Research (TIBER) device is the smallest superconducting tokamak designed to date. In the design, plasma shaping is used to achieve a high plasma beta. Neutron shielding is minimized to achieve the desired small device size, but the superconducting magnets must be shielded sufficiently to reduce the neutron heat load and the gamma-ray dose to various components of the device. Specifications of the plasma-shaping coils, the shielding, cooling requirements, and heating modes are given. 61 refs., 92 figs., 30 tabs. (WRF)

  3. Optimal active vibration absorber - Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1993-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  4. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors/actuators. The active vibration absorber is a second-order dynamic system which is designed to suppress any unwanted structural vibration. This can be designed with minimum knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  5. Refinement of experimental design and conduct in laboratory animal research.

    PubMed

    Bailoo, Jeremy D; Reichlin, Thomas S; Würbel, Hanno

    2014-01-01

    The scientific literature of laboratory animal research is replete with papers reporting poor reproducibility of results as well as failure to translate results to clinical trials in humans. This may stem in part from poor experimental design and conduct of animal experiments. Despite widespread recognition of these problems and implementation of guidelines to attenuate them, a review of the literature suggests that experimental design and conduct of laboratory animal research are still in need of refinement. This paper will review and discuss possible sources of biases, highlight advantages and limitations of strategies proposed to alleviate them, and provide a conceptual framework for improving the reproducibility of laboratory animal research.

  6. Component analyses using single-subject experimental designs: a review.

    PubMed

    Ward-Horner, John; Sturmey, Peter

    2010-01-01

    A component analysis is a systematic assessment of 2 or more independent variables or components that comprise a treatment package. Component analyses are important for the analysis of behavior; however, previous research provides only cursory descriptions of the topic. Therefore, in this review the definition of component analysis is discussed, and a notation system for evaluating the experimental designs of component analyses is described. Thirty articles that included a component analysis were identified via a literature search. The majority of the studies successfully identified a necessary component; however, most of these studies did not evaluate the sufficiency of the necessary component. The notation system may be helpful in developing experimental designs that best suit the purpose of studies aimed at conducting component analyses of treatment packages.

  7. Absolute paleointensity: theory, experimental design and the current database

    NASA Astrophysics Data System (ADS)

    Tauxe, L.; Bowles, J. A.; Gee, J. S.; Genevey, A.; Selkin, P. A.; Yu, Y.

    2006-12-01

    Giuseppe Folgheraiter suggested over a century ago that baked materials could in principle be used to study variations of the Earth's magnetic field intensity in the past although he foresaw great difficulties in establishing the reliability of such data. Over the last decade, enormous progress has been made in laying the theoretical foundations for such studies. In response to a better theoretical understanding has come improvements in experimental design including better tests for the underlying assumptions of linearity, reciprocity, cooling rate and anisotropy. These advances have been accompanied by an explosion of papers presenting new data concerning variations in paleointensity through time. In this talk we will explore the theoretical requirements for absolute paleointensity experiments and review the most robust experimental designs. We will also assess the available data and discuss what information should be made available in databases to help evaluate data reliability.

  8. Wireless Body Area Network (WBAN) design techniques and performance evaluation.

    PubMed

    Khan, Jamil Yusuf; Yuce, Mehmet R; Bulger, Garrick; Harding, Benjamin

    2012-06-01

    In recent years interest in the application of Wireless Body Area Networks (WBANs) for patient monitoring applications has grown significantly. A WBAN can be used to develop patient monitoring systems which offer flexibility to medical staff and mobility to patients. Patient monitoring could involve a range of activities, including data collection from various body sensors for storage and diagnosis, transmitting data to remote medical databases, and controlling medical appliances. Also, WBANs could operate in an interconnected mode to enable remote patient monitoring using telehealth/e-health applications. A WBAN can also be used to monitor athletes' performance and assist them in training activities. For such applications it is very important that a WBAN collects and transmits data reliably, and in a timely manner, to a monitoring entity. In order to address these issues, this paper presents WBAN design techniques for medical applications. We examine WBAN design issues with particular emphasis on the design of MAC protocols and the power consumption profiles of WBANs. Some simulation results are presented to further illustrate the performance of various WBAN design techniques.

  9. A Course in Heterogeneous Catalysis: Principles, Practice, and Modern Experimental Techniques.

    ERIC Educational Resources Information Center

    Wolf, Eduardo E.

    1981-01-01

    Outlines a multidisciplinary course which comprises fundamental, practical, and experimental aspects of heterogeneous catalysis. The course structure is a combination of lectures and demonstrations dealing with the use of spectroscopic techniques for surface analysis. (SK)

  11. Experimental Design and Evaluation of Bounded Rationality Using Dimensional Analysis

    DTIC Science & Technology

    1989-05-01

    Keywords: dimensional analysis; experimental design; cognitive workload.

  12. Computer aid molecular design based on meta-heuristics techniques

    NASA Astrophysics Data System (ADS)

    Rusu, T.; Bulacovschi, V.

    One of the challenges in modern chemistry is the design of new molecules with desired properties. The traditional approach to this problem is usually an expensive and time-consuming iterative process, with the scientist or engineer hypothesizing a compound, synthesizing the material, testing for the desired properties, and redesigning the candidate if the desired properties are not met. In recent years, many scientists have reached the conclusion that artificial intelligence methods can improve and facilitate the design of new macromolecules with desired properties. One of the challenges in computer-aided macromolecular design is to avoid local minima. Our paper presents the use of meta-heuristic techniques that can solve this problem.
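
    As a rough illustration of the kind of meta-heuristic the abstract invokes, the sketch below applies simulated annealing, one common such technique, to a toy multimodal objective; the objective function, cooling schedule, and all constants are illustrative assumptions, not taken from the paper.

    ```python
    import math
    import random

    # Toy sketch of one meta-heuristic (simulated annealing); the objective,
    # cooling schedule, and constants are illustrative assumptions.

    def score(x):
        """Toy 'property mismatch' with many local minima; global minimum at x = 0."""
        return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

    def anneal(x0, steps=20000, t0=10.0, seed=1):
        """Minimize score() by random local moves with temperature-gated uphill steps."""
        rng = random.Random(seed)
        x, best = x0, x0
        for k in range(steps):
            t = t0 * (1.0 - k / steps) + 1e-9        # linear cooling toward zero
            cand = x + rng.gauss(0.0, 0.5)           # local perturbation of the design
            delta = score(cand) - score(x)
            # Accept improvements always; accept worse moves with Boltzmann
            # probability, which lets the search climb out of local minima.
            if delta < 0 or rng.random() < math.exp(-delta / t):
                x = cand
            if score(x) < score(best):
                best = x
        return best

    best = anneal(4.5)  # start in a poor local basin far from the optimum
    ```

    A greedy hill-climber started at the same point would stall in the nearest local basin; the temperature-gated acceptance rule is what distinguishes the meta-heuristic.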

  13. Optimal experimental designs for dose-response studies with continuous endpoints.

    PubMed

    Holland-Letz, Tim; Kopp-Schneider, Annette

    2015-11-01

    In most areas of clinical and preclinical research, the required sample size determines the costs and effort for any project, and thus, optimizing sample size is of primary importance. An experimental design of dose-response studies is determined by the number and choice of dose levels as well as the allocation of sample size to each level. The experimental design of toxicological studies tends to be motivated by convention. Statistical optimal design theory, however, allows the setting of experimental conditions (dose levels, measurement times, etc.) in a way which minimizes the number of required measurements and subjects to obtain the desired precision of the results. While the general theory is well established, the mathematical complexity of the problem so far prevents widespread use of these techniques in practical studies. The paper explains the concepts of statistical optimal design theory with a minimum of mathematical terminology and uses these concepts to generate concrete usable D-optimal experimental designs for dose-response studies on the basis of three common dose-response functions in toxicology: log-logistic, log-normal and Weibull functions with four parameters each. The resulting designs usually require control plus only three dose levels and are quite intuitively plausible. The optimal designs are compared to traditional designs such as the typical setup of cytotoxicity studies for 96-well plates. As the optimal design depends on prior estimates of the dose-response function parameters, it is shown what loss of efficiency occurs if the parameters for design determination are misspecified, and how Bayes optimal designs can improve the situation.
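
    A rough numerical sketch of the D-optimality idea described above (the parameter values, dose grids, and function names are my own assumptions, not the paper's): a candidate design is scored by the determinant of the Fisher information matrix of a four-parameter log-logistic dose-response model.

    ```python
    import numpy as np

    def ll4(x, b, c, d, e):
        """Four-parameter log-logistic curve: slope b, asymptotes c and d, ED50 e."""
        return c + (d - c) / (1.0 + (x / e) ** b)

    def gradient(x, theta, h=1e-6):
        """Central-difference gradient of the model w.r.t. its parameters at dose x."""
        theta = np.asarray(theta, dtype=float)
        g = np.zeros(theta.size)
        for i in range(theta.size):
            tp, tm = theta.copy(), theta.copy()
            tp[i] += h
            tm[i] -= h
            g[i] = (ll4(x, *tp) - ll4(x, *tm)) / (2.0 * h)
        return g

    def d_criterion(doses, theta):
        """det of the normalized information matrix for equal allocation per dose."""
        M = sum(np.outer(g, g) for g in (gradient(x, theta) for x in doses))
        return float(np.linalg.det(M / len(doses)))

    theta = [2.0, 0.0, 1.0, 10.0]               # assumed prior estimates (b, c, d, e)
    conventional = np.linspace(1.0, 100.0, 8)   # equally spaced doses, by convention
    compact = np.array([1e-6, 5.0, 10.0, 25.0]) # control plus three dose levels
    ```

    Between two candidate designs of equal total sample size, the one with the larger criterion value estimates the parameters more precisely; as the abstract notes, the resulting optimum depends on the assumed prior parameter values, which is why misspecification costs efficiency.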

  14. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    PubMed Central

    Deane, Thomas; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non–expert-like thinking in students and to evaluate the success of teaching strategies that target conceptual changes. We used BEDCI to diagnose non–expert-like student thinking in experimental design at the pre- and posttest stage in five courses (total n = 580 students) at a large research university in western Canada. Calculated difficulty and discrimination metrics indicated that BEDCI questions are able to effectively capture learning changes at the undergraduate level. A high correlation (r = 0.84) between responses by students in similar courses and at the same stage of their academic career also suggests that the test is reliable. Students showed significant positive learning changes by the posttest stage, but some non–expert-like responses were widespread and persistent. BEDCI is a reliable and valid diagnostic tool that can be used in a variety of life sciences disciplines. PMID:25185236

  15. Development of the Biological Experimental Design Concept Inventory (BEDCI).

    PubMed

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the success of teaching strategies that target conceptual changes. We used BEDCI to diagnose non-expert-like student thinking in experimental design at the pre- and posttest stage in five courses (total n = 580 students) at a large research university in western Canada. Calculated difficulty and discrimination metrics indicated that BEDCI questions are able to effectively capture learning changes at the undergraduate level. A high correlation (r = 0.84) between responses by students in similar courses and at the same stage of their academic career also suggests that the test is reliable. Students showed significant positive learning changes by the posttest stage, but some non-expert-like responses were widespread and persistent. BEDCI is a reliable and valid diagnostic tool that can be used in a variety of life sciences disciplines.

  16. Quiet Clean Short-Haul Experimental Engine (QCSEE). Preliminary analyses and design report, volume 2

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental and flight propulsion systems are presented. The following areas are discussed: engine core and low pressure turbine design; bearings and seals design; controls and accessories design; nacelle aerodynamic design; nacelle mechanical design; weight; and aircraft systems design.

  17. Contact angle hysteresis on polymer substrates established with various experimental techniques, its interpretation, and quantitative characterization.

    PubMed

    Bormashenko, Edward; Bormashenko, Yelena; Whyman, Gene; Pogreb, Roman; Musin, Albina; Jager, Rachel; Barkay, Zahava

    2008-04-15

    The effect of contact angle hysteresis (CAH) was studied on various polymer substrates with traditional and new experimental techniques. The new experimental technique presented in the article is based on slow deformation of the droplet; thus, CAH is studied at constant drop volume, in contrast to existing techniques in which the drop volume is changed during the measurement. The energy of hysteresis was calculated in the framework of the improved Extrand approach. The advancing contact angle established with the new technique is in good agreement with that measured with the needle-syringe method. The receding angles measured with the three experimental techniques demonstrated a very significant discrepancy. The force pinning the triple line, responsible for hysteresis, was calculated.

  18. Experimental design and desirability function approach for development of novel anticancer nanocarrier delivery systems.

    PubMed

    Rafati, H; Mirzajani, F

    2011-01-01

    The therapeutic effects of anticancer drugs would improve greatly if problems with low water solubility and toxic adverse reactions could be solved. In this work, a full factorial experimental design was used to develop a polymeric nanoparticulate delivery system as an alternative technique for anticancer drug delivery. Nanoparticles containing tamoxifen citrate were prepared and characterized using an O/W emulsification-solvent evaporation technique and different analytical methods. Scanning electron microscopy (SEM), particle size analysis, and high-pressure liquid chromatography (HPLC) were used for characterization of the nanoparticles. Nanoparticle characteristics, including size, size distribution, drug loading, and encapsulation efficiency, were optimized by means of a full factorial experimental design over the influence of four different independent variables and a desirability function using Design-Expert software. The resulting tamoxifen-loaded nanoparticles showed the best response with particle sizes less than 200 nm, improved encapsulation efficiency of more than 80%, and optimum loading of above 30%. The overall results demonstrate that the desirability function in experimental design is a beneficial approach to nanoparticle drug delivery design.
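
    A minimal sketch of how a desirability function combines several responses into one optimization target (the thresholds and the single hypothetical run below are my own illustration, not the study data): each response is mapped onto a 0-1 desirability, and the geometric mean gives one overall score to maximize across the factorial runs.

    ```python
    import numpy as np

    def d_smaller_is_better(y, target, upper):
        """Desirability of a response to minimize: 1 at the target, 0 beyond upper."""
        return float(np.clip((upper - y) / (upper - target), 0.0, 1.0))

    def d_larger_is_better(y, lower, target):
        """Desirability of a response to maximize: 0 below lower, 1 at the target."""
        return float(np.clip((y - lower) / (target - lower), 0.0, 1.0))

    def overall_desirability(ds):
        """Geometric mean; a single zero rejects the run outright."""
        ds = np.asarray(ds, dtype=float)
        return float(np.prod(ds) ** (1.0 / ds.size))

    # One hypothetical factorial run: 180 nm particles, 85% encapsulation,
    # 32% loading (thresholds loosely echo the targets quoted in the abstract).
    d_size = d_smaller_is_better(180.0, target=100.0, upper=200.0)
    d_encap = d_larger_is_better(85.0, lower=50.0, target=90.0)
    d_load = d_larger_is_better(32.0, lower=10.0, target=40.0)
    D = overall_desirability([d_size, d_encap, d_load])
    ```

    The geometric mean (rather than an arithmetic one) is the usual choice because any run that fails one requirement outright scores zero overall, no matter how good the other responses are.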

  19. Techniques for Conducting Effective Concept Design and Design-to-Cost Trade Studies

    NASA Technical Reports Server (NTRS)

    Di Pietro, David A.

    2015-01-01

    Concept design plays a central role in project success as its product effectively locks the majority of system life cycle cost. Such extraordinary leverage presents a business case for conducting concept design in a credible fashion, particularly for first-of-a-kind systems that advance the state of the art and that have high design uncertainty. A key challenge, however, is to know when credible design convergence has been achieved in such systems. Using a space system example, this paper characterizes the level of convergence needed for concept design in the context of technical and programmatic resource margins available in preliminary design and highlights the importance of design and cost evaluation learning curves in determining credible convergence. It also provides techniques for selecting trade study cases that promote objective concept evaluation, help reveal unknowns, and expedite convergence within the trade space and conveys general practices for conducting effective concept design-to-cost studies.

  1. Design and experimental validation of looped-tube thermoacoustic engine

    NASA Astrophysics Data System (ADS)

    Abduljalil, Abdulrahman S.; Yu, Zhibin; Jaworski, Artur J.

    2011-10-01

    The aim of this paper is to present the design and experimental validation process for a thermoacoustic looped-tube engine. The design procedure consists of numerical modelling of the system using DELTA EC tool, Design Environment for Low-amplitude ThermoAcoustic Energy Conversion, in particular the effects of mean pressure and regenerator configuration on the pressure amplitude and acoustic power generated. This is followed by the construction of a practical engine system equipped with a ceramic regenerator — a substrate used in automotive catalytic converters with fine square channels. The preliminary testing results are obtained and compared with the simulations in detail. The measurement results agree very well on the qualitative level and are reasonably close in the quantitative sense.

  2. Single-Subject Experimental Design for Evidence-Based Practice

    PubMed Central

    Byiers, Breanne J.; Reichle, Joe; Symons, Frank J.

    2014-01-01

    Purpose Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication sciences and disorders. The purpose of this article is to review the strategies and tactics of SSEDs and their application in speech-language pathology research. Method The authors discuss the requirements of each design, followed by advantages and disadvantages. The logic and methods for evaluating effects in SSED are reviewed as well as contemporary issues regarding data analysis with SSED data sets. Examples of challenges in executing SSEDs are included. Specific exemplars of how SSEDs have been used in speech-language pathology research are provided throughout. Conclusion SSED studies provide a flexible alternative to traditional group designs in the development and identification of evidence-based practice in the field of communication sciences and disorders. PMID:23071200

  3. Design and experimental study of a novel giant magnetostrictive actuator

    NASA Astrophysics Data System (ADS)

    Xue, Guangming; Zhang, Peilin; He, Zhongbo; Li, Dongwei; Huang, Yingjie; Xie, Wenqiang

    2016-12-01

    Giant magnetostrictive actuators have been widely used in precision-driving applications for their excellent performance. However, in driving a switching valve, especially the ball valve in an electronically controlled injector, the actuator cannot exhibit its full performance owing to limits in output displacement and response speed. A novel giant magnetostrictive actuator, which can reach its maximum displacement when driven without a bias magnetic field, is designed in this paper. Simultaneously, elongation of the giant magnetostrictive material is converted into shortening of the actuator's axial dimension with the help of a T-shaped output rod. Furthermore, to shorten the response time, a driving voltage with a high opening level and a low holding level is designed. Response time and output displacement are studied experimentally with the help of a measuring system. The measured results show that the designed driving voltage improves the response speed of the actuator displacement quite effectively, and that the actuator can output various steady-state displacements to achieve a wider range of driving effects.

  4. Amplified energy harvester from footsteps: design, modeling, and experimental analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ya; Chen, Wusi; Guzman, Plinio; Zuo, Lei

    2014-04-01

    This paper presents the design, modeling, and experimental analysis of an amplified footstep energy harvester. With the unique design of the amplified piezoelectric stack harvester, the kinetic energy generated by footsteps can be effectively captured and converted into usable DC power that could potentially power many electric devices, such as smart phones, sensors, and monitoring cameras. This doormat-like energy harvester can be used in crowded places such as train stations, malls, concerts, and airport escalator, elevator, and stair entrances, or anywhere large groups of people walk. The harvested energy provides an alternative renewable green power source to offset power drawn from grids, which run on highly polluting and global-warming-inducing fossil fuels. In this paper, two modeling approaches are compared to calculate power output. The first method starts from the single-degree-of-freedom (SDOF) constitutive equations and then applies a correction factor to the resulting electromechanically coupled equations of motion. The second approach derives the coupled equations of motion from Hamilton's principle and the constitutive equations, and then formulates them with the finite element method (FEM). Experimental testing results are presented to validate the modeling approaches. Simulation results from both approaches agree very well with the experimental results, with percentage errors of 2.09% for FEM and 4.31% for SDOF.
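
    The SDOF modeling route described above can be sketched numerically. The snippet below integrates a base-excited single-degree-of-freedom harvester coupled to a resistive load; all parameter values are hypothetical placeholders, not the calibrated values from the paper:

```python
import numpy as np

# Minimal electromechanically coupled SDOF harvester sketch (hypothetical
# parameters, not the paper's calibrated model). A base acceleration drives
# the mass; a piezo element with coupling theta feeds a load resistor R.
m, c, k = 0.05, 0.2, 2.0e3          # mass [kg], damping [N s/m], stiffness [N/m]
theta, Cp, R = 1e-3, 1e-7, 1e5      # coupling [N/V], capacitance [F], load [ohm]

dt, T = 1e-5, 0.5                   # time step and duration [s]
x = v = V = 0.0                     # displacement, velocity, piezo voltage
energy = 0.0
for i in range(int(T / dt)):
    t = i * dt
    a_base = 5.0 * np.sin(2 * np.pi * 30 * t)   # 30 Hz base acceleration
    # Coupled equations: m x'' + c x' + k x + theta V = -m a_base
    #                    Cp V' + V / R = theta x'
    acc = (-m * a_base - c * v - k * x - theta * V) / m
    Vdot = (theta * v - V / R) / Cp
    v += acc * dt                   # semi-implicit Euler: velocity first,
    x += v * dt                     # then position with the updated velocity
    V += Vdot * dt
    energy += (V * V / R) * dt      # energy dissipated (harvested) in R
print(f"harvested energy over {T} s ~ {energy:.2e} J")
```

    A corrected SDOF model of this kind is cheap enough to sweep over load resistance or excitation frequency before committing to a full FEM run.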

  5. Design of vibration isolation systems using multiobjective optimization techniques

    NASA Technical Reports Server (NTRS)

    Rao, S. S.

    1984-01-01

    The design of vibration isolation systems is considered using multicriteria optimization techniques. The integrated values of the square of the force transmitted to the main mass and the square of the relative displacement between the main mass and the base are taken as the performance indices. The design of a three-degrees-of-freedom isolation system with an exponentially decaying type of base disturbance is considered for illustration. Numerical results are obtained using the global criterion, utility function, bounded objective, lexicographic, goal programming, goal attainment, and game theory methods. The game theory approach is found to be superior, yielding a better optimum solution with a proper balance among the various objective functions.
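
    As an illustration of how one of the listed scalarization methods works, the sketch below applies the utility-function (weighted-sum) method to a toy two-objective isolator trade-off; the surrogate objectives are invented for illustration and are not the paper's 3-DOF formulation:

```python
import numpy as np

# Toy trade-off between a transmitted-force index f1 and a relative-
# displacement index f2, both hypothetical functions of a damping ratio z.
def f1(z):
    return 1.0 + 4.0 * z**2          # transmitted-force index grows with z

def f2(z):
    return 1.0 / (0.1 + z)           # relative-displacement index shrinks with z

# Utility-function (weighted-sum) method: minimize w1*f1 + w2*f2 over a grid.
def weighted_sum_optimum(w1, w2, grid=np.linspace(0.05, 1.0, 200)):
    u = w1 * f1(grid) + w2 * f2(grid)
    return grid[np.argmin(u)]

z_star = weighted_sum_optimum(0.5, 0.5)
print("compromise damping ratio:", round(float(z_star), 3))
```

    Rerunning with different weight pairs traces out the Pareto front; the goal-attainment and game-theory methods differ mainly in how they replace the fixed weights with goals or bargaining between objectives.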

  6. LeRC rail accelerators - Test designs and diagnostic techniques

    NASA Technical Reports Server (NTRS)

    Zana, L. M.; Kerslake, W. R.; Sturman, J. C.; Wang, S. Y.; Terdan, F. F.

    1984-01-01

    The feasibility of using rail accelerators for various in-space and to-space propulsion applications was investigated. A 1 meter, 24 sq mm bore accelerator was designed with the goal of demonstrating projectile velocities of 15 km/sec using a peak current of 200 kA. A second rail accelerator, 1 meter long with a 156.25 sq mm bore, was designed with clear polycarbonate sidewalls to permit visual observation of the plasma arc. A study of available diagnostic techniques and their application to the rail accelerator is presented. Specific topics of discussion include the use of interferometry and spectroscopy to examine the plasma armature as well as the use of optical sensors to measure rail displacement during acceleration. Standard diagnostics such as current and voltage measurements are also discussed. Previously announced in STAR as N83-35053

  7. Use of advanced modeling techniques to optimize thermal packaging designs.

    PubMed

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed

  8. Design considerations and construction techniques for successive alkalinity producing systems

    SciTech Connect

    Skovran, G.A.; Clouser, C.R.

    1998-12-31

    Successive Alkalinity Producing Systems (SAPS) have been utilized for several years for the passive treatment of acid mine drainage. The SAPS technology is an effective method for inducing alkalinity to neutralize acid mine water and promote the precipitation of contaminating metals. Several design considerations and construction techniques are important for proper system function and longevity. This paper discusses SAPS design, water collection and introduction to the SAPS, SAPS hydraulics, construction, operation and maintenance, and safety, and finds that these factors were critical to obtaining maximum alkalinity at several SAPS treatment sites in southwestern Pennsylvania. Incorporating these factors into future SAPS will aid effective treatment, reduce maintenance costs, and maximize the long-term effectiveness of successive alkalinity producing systems.

  9. Spent Fuel Transportation Package Performance Study - Experimental Design Challenges

    SciTech Connect

    Snyder, A. M.; Murphy, A. J.; Sprung, J. L.; Ammerman, D. J.; Lopez, C.

    2003-02-25

    Numerous studies of spent nuclear fuel transportation accident risks that considered shipping container design and performance have been performed since the late seventies. Based in part on these studies, the United States Nuclear Regulatory Commission (NRC) has concluded that the level of protection provided by spent nuclear fuel transportation package designs under accident conditions is adequate. [1] Furthermore, actual spent nuclear fuel transport experience showcases a safety record that is exceptional and unparalleled when compared to other hazardous materials shipments. There has never been a known or suspected release of the radioactive contents from an NRC-certified spent nuclear fuel cask as a result of a transportation accident. In 1999 the NRC initiated the Package Performance Study to demonstrate the performance of spent fuel and spent fuel packages during severe transportation accidents. The NRC is not studying or testing its current regulations, as the rigorous regulatory accident conditions specified in 10 CFR Part 71 are adequate to ensure safe packaging and use. As part of this study, the NRC currently plans to use detailed modeling followed by experimental testing to increase public confidence in the safety of spent nuclear fuel shipments. One aspect of this confirmatory research study is the commitment to solicit and consider public comment during the scoping and experimental design planning phases of the research.

  10. Theoretical and experimental analysis of modern zoom lens design

    NASA Astrophysics Data System (ADS)

    Wang, Xiangyang; Liu, Weilin

    2017-02-01

    Stability of aberrations and image correction must be maintained throughout the zooming process of a zoom lens system. Our work presents a detailed theoretical and experimental analysis of zoom optical systems with multiple moving groups. We propose methods to determine the basic parameters of such an optical system: the focal lengths of each element of the objective lens and their mutual axial separations. Two different image-stability equations and a cam-curve design method are introduced to calculate these basic parameters. This type of optical system is widespread in practice, mainly in photographic lenses and surveying instruments (theodolites, leveling instruments, etc.). Furthermore, a detailed analysis of the aberration properties of such optical systems is performed, and methods for measuring the focal lengths of individual elements and their mutual distances without disassembling the investigated optical system are presented. Finally, based on the theoretical and experimental analysis, a zoom optical system with an effective focal length of 27-220 mm has been designed; the first element of the system is fixed, and the other groups move during zooming to provide a continuously variable effective focal length (EFL). Using the optimization capabilities of the optical design software CODE V, we evaluate imaging quality metrics such as the modulation transfer function (MTF).

  11. Design and experimental results for the S814 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    A 24-percent-thick airfoil, the S814, for the root region of a horizontal-axis wind-turbine blade has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of high maximum lift, insensitive to roughness, and low profile drag have been achieved. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results show good agreement with the exception of maximum lift which is overpredicted. Comparisons with other airfoils illustrate the higher maximum lift and the lower profile drag of the S814 airfoil, thus confirming the achievement of the objectives.

  12. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2014-04-01

    This project (10/01/2010-9/30/2014), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. In this project, the focus is to develop and implement a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments.

  13. Technological issues and experimental design of gene association studies.

    PubMed

    Distefano, Johanna K; Taverna, Darin M

    2011-01-01

    Genome-wide association studies (GWAS), in which thousands of single-nucleotide polymorphisms (SNPs) spanning the genome are genotyped in individuals who are phenotypically well characterized, currently represent the most popular strategy for identifying gene regions associated with common diseases and related quantitative traits. Improvements in technology and throughput capability, development of powerful statistical tools, and more widespread acceptance of pooling-based genotyping approaches have led to greater utilization of GWAS in human genetics research. However, important considerations for optimal experimental design, including selection of the most appropriate genotyping platform, can enhance the utility of the approach even further. This chapter reviews experimental and technological issues that may affect the success of GWAS findings and proposes strategies for developing the most comprehensive, logical, and cost-effective approaches for genotyping given the population of interest.

  14. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2012-10-01

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop and implement a novel molecular dynamics method to improve the efficiency of simulation of novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed Durability Test Rig.

  15. Acting like a physicist: Student approach study to experimental design

    NASA Astrophysics Data System (ADS)

    Karelina, Anna; Etkina, Eugenia

    2007-12-01

    National studies of science education have unanimously concluded that preparing our students for the demands of the 21st century workplace is one of the major goals. This paper describes a study of student activities in introductory college physics labs, which were designed to help students acquire abilities that are valuable in the workplace. In these labs [called Investigative Science Learning Environment (ISLE) labs], students design their own experiments. Our previous studies have shown that students in these labs acquire scientific abilities such as the ability to design an experiment to solve a problem, the ability to collect and analyze data, the ability to evaluate assumptions and uncertainties, and the ability to communicate. These studies mostly concentrated on analyzing students’ writing, evaluated by specially designed scientific ability rubrics. Recently, we started to study whether the ISLE labs make students not only write like scientists but also engage in discussions and act like scientists while doing the labs. For example, do students plan an experiment, validate assumptions, evaluate results, and revise the experiment if necessary? A brief report of some of our findings that came from monitoring students’ activity during ISLE and nondesign labs was presented in the Physics Education Research Conference Proceedings. We found differences in student behavior and discussions that indicated that ISLE labs do in fact encourage a scientistlike approach to experimental design and promote high-quality discussions. This paper presents a full description of the study.

  16. OPTIMIZATION OF EXPERIMENTAL DESIGNS BY INCORPORATING NIF FACILITY IMPACTS

    SciTech Connect

    Eder, D C; Whitman, P K; Koniges, A E; Anderson, R W; Wang, P; Gunney, B T; Parham, T G; Koerner, J G; Dixit, S N; . Suratwala, T I; Blue, B E; Hansen, J F; Tobin, M T; Robey, H F; Spaeth, M L; MacGowan, B J

    2005-08-31

    For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDSs) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm-thick borosilicate disposable debris shields (DDSs) blocks the majority of debris and shrapnel fragments from reaching the relatively expensive MDSs. However, DDSs cannot stop large, fast-moving fragments. We have experimentally demonstrated one shrapnel mitigation technique, showing that it is possible to direct fast-moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDSs. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to set the allowed level of debris and shrapnel generation for all NIF experimental campaigns.

  17. Optimization of formulation variables of benzocaine liposomes using experimental design.

    PubMed

    Mura, Paola; Capasso, Gaetano; Maestrelli, Francesca; Furlanetto, Sandra

    2008-01-01

    This study aimed to optimize, by means of a multivariate experimental design strategy, a liposomal formulation for topical delivery of the local anaesthetic agent benzocaine. The formulation variables for the vesicle lipid phase were the use of potassium glycyrrhizinate (KG) as an alternative to cholesterol and the addition of a cationic (stearylamine) or anionic (dicethylphosphate) surfactant (qualitative factors); the percentage of ethanol and the total volume of the hydration phase (quantitative factors) were the variables for the hydrophilic phase. The combined influence of these factors on the considered responses (encapsulation efficiency (EE%) and percentage of drug permeated at 180 min (P%)) was evaluated by means of a D-optimal design strategy. Graphic analysis of the effects indicated that maximizing the selected responses required opposite levels of the considered factors: for example, KG and stearylamine were better for increasing EE%, and cholesterol and dicethylphosphate for increasing P%. In the second step, a Doehlert design, applied for the response-surface study of the quantitative factors, revealed a negative interaction between the percentage of ethanol and the volume of the hydration phase and allowed prediction of the best formulation for maximizing the drug permeation rate. Experimental P% data for the optimized formulation fell inside the confidence interval (P < 0.05) calculated around the predicted value of the response. This proved the suitability of the proposed approach for optimizing the composition of liposomal formulations and predicting the effects of formulation variables on the considered experimental response. Moreover, the optimized formulation enabled a significant improvement (P < 0.05) of the drug's anaesthetic effect with respect to the starting reference liposomal formulation, demonstrating its better therapeutic effectiveness.
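
    The D-optimal selection step can be illustrated in miniature: the sketch below exhaustively scores 6-run subsets of a 3 x 3 candidate grid for a two-factor quadratic model by the determinant of the information matrix. The factor interpretation is hypothetical and does not reproduce the study's actual design:

```python
import numpy as np
from itertools import combinations

# Toy D-optimal run selection. Model: quadratic in two coded quantitative
# factors x1, x2 in {-1, 0, 1} (think "% ethanol" and "hydration volume",
# purely as hypothetical labels). D-criterion: maximize det(X'X).
def model_row(x1, x2):
    return [1.0, x1, x2, x1 * x2, x1**2, x2**2]

candidates = [(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)]  # 3x3 grid

best_det, best_set = -1.0, None
for subset in combinations(candidates, 6):   # exhaustive: C(9,6) = 84 subsets
    X = np.array([model_row(*p) for p in subset])
    d = np.linalg.det(X.T @ X)               # information-matrix determinant
    if d > best_det:
        best_det, best_set = d, subset

print("D-optimal 6-run design:", best_set, "det =", round(best_det, 2))
```

    Real D-optimal software uses exchange algorithms rather than exhaustive search, but the criterion being maximized is the same determinant.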

  18. Design optimum frac jobs using virtual intelligence techniques

    NASA Astrophysics Data System (ADS)

    Mohaghegh, Shahab; Popa, Andrei; Ameri, Sam

    2000-10-01

    Designing optimal frac jobs is a complex and time-consuming process. It usually involves the use of a two- or three-dimensional computer model. For the computer models to perform as intended, a wealth of input data is required. The input data includes wellbore configuration and reservoir characteristics such as porosity, permeability, stress, and thickness profiles of the pay layers as well as the overburden layers. Other essential information required for the design process includes fracturing fluid type and volume, proppant type and volume, injection rate, proppant concentration, and the frac job schedule. Some parameters, such as fluid and proppant types, have discrete possible choices. Other parameters, such as fluid and proppant volume, assume values from within a range of minimum and maximum values. A potential frac design for a particular pay zone is a combination of all of these parameters. Finding the optimum combination is not a trivial process. It usually requires an experienced engineer and a considerable amount of time to tune the parameters in order to achieve the desired outcome. This paper introduces a new methodology that integrates two virtual intelligence techniques, namely artificial neural networks and genetic algorithms, to automate and simplify the optimum frac job design process. This methodology requires little input from the engineer beyond the reservoir characterization and wellbore configuration. The software tool developed from this methodology uses the reservoir characteristics and an optimization criterion indicated by the engineer, for example a certain propped frac length, and provides the details of the optimum frac design that will satisfy the specified criterion. An ensemble of neural networks is trained to mimic the two- or three-dimensional frac simulator. Once successfully trained, these networks are capable of providing instantaneous results in response to any set of input parameters. These
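
    The coupling of a surrogate model with a genetic algorithm can be sketched as follows; here a simple analytic function stands in for the trained neural-network ensemble, and the design variables, target length, and units are invented for illustration:

```python
import random

# GA-over-surrogate sketch. A real application would query the trained
# neural-network ensemble; here surrogate_frac_length() is a hypothetical
# stand-in. Design vector: (fluid_volume, proppant_volume, rate) in [0, 1].
random.seed(1)

def surrogate_frac_length(x):            # hypothetical surrogate model
    fluid, prop, rate = x
    return 300 * fluid ** 0.5 * prop ** 0.3 * (1 - 0.4 * (rate - 0.6) ** 2)

TARGET = 250.0                            # engineer's criterion: propped length [ft]
def fitness(x):
    return -abs(surrogate_frac_length(x) - TARGET)

def evolve(pop_size=40, gens=60, pm=0.2):
    pop = [[random.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            a, b = random.sample(pop, 2)              # binary tournament
            p1 = max(a, b, key=fitness)
            a, b = random.sample(pop, 2)
            p2 = max(a, b, key=fitness)
            child = [(g1 + g2) / 2 for g1, g2 in zip(p1, p2)]  # crossover
            if random.random() < pm:                  # Gaussian mutation
                i = random.randrange(3)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print("best coded design:", [round(g, 2) for g in best],
      "-> predicted length", round(surrogate_frac_length(best), 1), "ft")
```

    Because the surrogate answers in microseconds rather than the minutes a frac simulator needs, the GA can afford thousands of evaluations per design session.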

  19. Structural design and fabrication techniques of composite unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Hunt, Daniel Stephen

    Popularity of unmanned aerial vehicles has grown substantially in recent years, both in the private sector and for government functions. This growth can be attributed largely to the increased performance of the technology that controls these vehicles, as well as the decreasing cost and size of this technology. What is sometimes forgotten, though, is that research and advancement of the airframes themselves are equally as important as what is done with them. With current computer-aided design programs, the limits of design optimization can be pushed further than ever before, resulting in lighter and faster airframes that can achieve longer endurances, higher altitudes, and more complex missions. However, realization of a paper design is still limited by the physical restrictions of the real world and the structural constraints associated with it. The purpose of this paper is to not only step through current design and manufacturing processes of composite UAVs at Oklahoma State University, but also to focus on composite spars, utilizing and relating both calculated and empirical data. Most of the experience gained for this thesis came from the Cessna Longitude project. The Longitude is a 1/8-scale flying demonstrator that Oklahoma State University constructed for Cessna. For the project, Cessna required dynamic flight data for their design process in order to make their 2017 release date. Oklahoma State University was privileged to assist Cessna with the mission of supporting the validation of the design of their largest business jet to date. This paper will detail the steps of the fabrication process used in construction of the Longitude, as well as several other projects, beginning with structural design, machining, molding, and skin layup, and ending with final assembly. Also, attention will be paid specifically to spar design and testing in an effort to ease the design phase. This document is intended to act not only as a further development of current

  20. Considerations in Writing About Single-Case Experimental Design Studies.

    PubMed

    Skolasky, Richard L

    2016-12-01

    Single-case experimental design (SCED) studies are particularly useful for examining the processes and outcomes of psychological and behavioral studies. Accurate reporting of SCED studies is critical in explaining the study to the reader and allowing replication. This paper outlines important elements that authors should cover when reporting the results of a SCED study. Authors should provide details on the participant, independent and dependent variables under examination, materials and procedures, and data analysis. Particular emphasis should be placed on justifying the assumptions made and explaining how violations of these assumptions may alter the results of the SCED study.

  1. Experimental design methodology: the scientific tool for performance evaluation

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz A.

    1990-09-01

    The rapid growth of signal and image processing technology over the last several decades has created a need for means of evaluating and comparing the numerous algorithms and systems that have been or are being developed. Performance evaluation, in the past, has been mostly ad hoc and unsystematic. In this paper we present a systematic, step-by-step approach to the scientific evaluation of signal and image processing algorithms and systems. This approach is based on the methodology of experimental design. We illustrate the method with an example from the field of automatic object recognition.
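
    A minimal instance of the methodology, comparing algorithms with a one-factor analysis of variance on replicated (here synthetic) performance scores, might look like this:

```python
import numpy as np

# One-factor ANOVA sketch in the spirit of the paper: compare k recognition
# algorithms over replicated trials. The scores are synthetic; a real study
# would use measured performance data from a designed experiment.
rng = np.random.default_rng(0)
scores = {                         # detection-rate samples per algorithm
    "A": rng.normal(0.80, 0.05, 10),
    "B": rng.normal(0.85, 0.05, 10),
    "C": rng.normal(0.70, 0.05, 10),
}

groups = list(scores.values())
k = len(groups)
n = sum(len(g) for g in groups)
grand = np.concatenate(groups).mean()

ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
F = (ss_between / (k - 1)) / (ss_within / (n - k))
print(f"F({k - 1}, {n - k}) = {F:.2f}")   # large F => algorithms differ
```

    Extending this to factorial layouts lets one separate the effect of the algorithm from the effects of scene type, sensor, or noise level in the same designed experiment.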

  2. On the proper study design applicable to experimental balneology

    NASA Astrophysics Data System (ADS)

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles trying to clarify the mode of action of medicinal waters have been published to date. Almost all studies apply the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm would be highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A suitable design for experimental mechanistic studies is proposed.

  3. Designing artificial enzymes from scratch: Experimental study and mesoscale simulation

    NASA Astrophysics Data System (ADS)

    Komarov, Pavel V.; Zaborina, Olga E.; Klimova, Tamara P.; Lozinsky, Vladimir I.; Khalatur, Pavel G.; Khokhlov, Alexey R.

    2016-09-01

    We present a new concept for designing biomimetic analogs of enzymatic proteins; these analogs are based on synthetic protein-like copolymers. α-Chymotrypsin is used as a prototype of the artificial catalyst. Our experimental study shows that in the course of free radical copolymerization of hydrophobic and hydrophilic monomers, the target globular nanostructures of a "core-shell" morphology appear in a selective solvent. Using a mesoscale computer simulation, we show that the protein-like globules can have a large number of catalytic centers located at the hydrophobic core/hydrophilic shell interface.

  4. On the proper study design applicable to experimental balneology.

    PubMed

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles trying to clarify the mode of action of medicinal waters have been published to date. Almost all studies apply the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm would be highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A suitable design for experimental mechanistic studies is proposed.

  5. Optimal experimental design to position transducers in ultrasound breast imaging

    NASA Astrophysics Data System (ADS)

    Korta Martiartu, Naiara; Boehm, Christian; Vinard, Nicolas; Jovanović Balic, Ivana; Fichtner, Andreas

    2017-03-01

    We present methods to optimize the setup of a 3D ultrasound tomography scanner for breast cancer detection. This approach provides a systematic and quantitative tool to evaluate different designs and to optimize the configuration with respect to predefined design parameters. We consider both time-of-flight inversion using straight rays and time-domain waveform inversion governed by the acoustic wave equation for imaging the sound speed. To compare different designs, we measure their quality by extracting properties from the Hessian operator of the time-of-flight or waveform differences defined in the inverse problem, i.e., the second derivatives with respect to the sound speed. Spatial uncertainties and resolution can be related to the eigenvalues of the Hessian, which provide a good indication of the information contained in the data acquired with a given design. However, the complete spectrum is often prohibitively expensive to compute, so suitable approximations have to be developed and analyzed. We use the trace of the Hessian operator as the design criterion; it is equivalent to the sum of all eigenvalues and requires less computational effort. In addition, we suggest taking advantage of spatial symmetry to extrapolate the 3D experimental design from a set of 2D configurations. To maximize the quality criterion, we use a genetic algorithm to explore the space of possible design configurations. Numerical results show that the proposed strategies are capable of improving an initial configuration with uniformly distributed transducers, clustering them around regions with poor illumination and improving the ray coverage of the domain of interest.
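
    The trace-of-the-Hessian criterion can be demonstrated on a toy straight-ray geometry. Below, the Gauss-Newton approximation H = J^T J is built from ray-path lengths on a coarse pixel grid, and trace(H) is compared for a uniform versus a clustered transducer layout; the geometry and sizes are illustrative only, far smaller than a real scanner:

```python
import numpy as np

# Toy straight-ray time-of-flight design criterion. J[i, j] ~ path length
# of ray i in pixel j, H = J^T J is the Gauss-Newton Hessian, and trace(H)
# (the sum of its eigenvalues) is a cheap A-optimality-style quality score.
N = 16                                   # pixels per side of a [-1,1]^2 grid
def ray_row(p, q, steps=400):
    row = np.zeros(N * N)
    pts = p + np.linspace(0, 1, steps)[:, None] * (q - p)
    ds = np.linalg.norm(q - p) / steps   # arc length per sample point
    ij = np.clip(((pts + 1) / 2 * N).astype(int), 0, N - 1)
    for i, j in ij:                      # accumulate path length per pixel
        row[i * N + j] += ds
    return row

def design_trace(angles):
    tx = np.c_[np.cos(angles), np.sin(angles)]   # transducers on unit circle
    J = np.array([ray_row(tx[a], tx[b])
                  for a in range(len(tx)) for b in range(a + 1, len(tx))])
    return np.trace(J.T @ J)             # equals the sum of squared entries of J

uniform   = np.linspace(0, 2 * np.pi, 16, endpoint=False)
clustered = np.linspace(0, np.pi / 2, 16)          # all in one quadrant
print("uniform:", round(design_trace(uniform), 2),
      "clustered:", round(design_trace(clustered), 2))
```

    The uniform ring scores higher because its long cross-domain chords contribute more squared sensitivity than the short chords of the clustered layout, which is the behavior the genetic-algorithm search in the paper exploits when repositioning transducers.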

  6. Design of vibration compensation interferometer for Experimental Advanced Superconducting Tokamak

    NASA Astrophysics Data System (ADS)

    Yang, Y.; Li, G. S.; Liu, H. Q.; Jie, Y. X.; Ding, W. X.; Brower, D. L.; Zhu, X.; Wang, Z. X.; Zeng, L.; Zou, Z. Y.; Wei, X. C.; Lan, T.

    2014-11-01

    A vibration compensation interferometer (wavelength 0.532 μm) has been designed and tested for the Experimental Advanced Superconducting Tokamak (EAST). It is designed as a sub-system of the EAST far-infrared (wavelength 432.5 μm) polarimeter/interferometer system. Two acousto-optic modulators are applied to produce the 1 MHz intermediate frequency. The path length drift of the system stayed below 2 wavelengths over a 10 min test, demonstrating the system's stability. The system sensitivity was tested by applying a periodic vibration source to one mirror in the system; the measured vibration matches the source period. The system is expected to be installed on EAST by the end of 2014.

  7. Experimental design and quality assurance: in situ fluorescence instrumentation

    USGS Publications Warehouse

    Conmy, Robyn N.; Del Castillo, Carlos E.; Downing, Bryan D.; Chen, Robert F.

    2014-01-01

    Both instrument design and capabilities of fluorescence spectroscopy have greatly advanced over the last several decades. Advancements include solid-state excitation sources, integration of fiber optic technology, highly sensitive multichannel detectors, rapid-scan monochromators, sensitive spectral correction techniques, and improved data manipulation software (Christian et al., 1981; Lochmuller and Saavedra, 1986; Cabniss and Shuman, 1987; Lakowicz, 2006; Hudson et al., 2007). The cumulative effect of these improvements has pushed the limits and expanded the application of fluorescence techniques to numerous scientific research fields. One of the more powerful advancements is the ability to obtain in situ fluorescence measurements of natural waters (Moore, 1994). The development of submersible fluorescence instruments has been made possible by component miniaturization and power reduction, including advances in light source technologies (light-emitting diodes, xenon lamps, ultraviolet [UV] lasers) and the compatible integration of new optical instruments with various sampling platforms (Twardowski et al., 2005 and references therein). The development of robust field sensors skirts the need for cumbersome and/or time-consuming filtration techniques and the potential artifacts associated with sample storage, and improves on coarse sampling designs by increasing spatiotemporal resolution (Chen, 1999; Robinson and Glenn, 1999). The ability to obtain rapid, high-quality, highly sensitive measurements over steep gradients has revolutionized investigations of dissolved organic matter (DOM) optical properties, thereby enabling researchers to address novel biogeochemical questions regarding colored or chromophoric DOM (CDOM). This chapter is dedicated to the origin, design, calibration, and use of in situ field fluorometers.
It will serve as a review of considerations to be accounted for during the operation of fluorescence field sensors and call attention to areas of concern when making

  8. Combined application of mixture experimental design and artificial neural networks in the solid dispersion development.

    PubMed

    Medarević, Djordje P; Kleinebudde, Peter; Djuriš, Jelena; Djurić, Zorica; Ibrić, Svetlana

    2016-01-01

    This study demonstrates, for the first time, the combined application of mixture experimental design and artificial neural networks (ANNs) in the development of solid dispersions (SDs). Ternary carbamazepine-Soluplus®-poloxamer 188 SDs were prepared by the solvent casting method to improve the carbamazepine dissolution rate. The influence of the composition of the prepared SDs on the carbamazepine dissolution rate was evaluated using a d-optimal mixture experimental design and multilayer perceptron ANNs. Physicochemical characterization proved the presence of the most stable carbamazepine polymorph III within the SD matrix. Ternary carbamazepine-Soluplus®-poloxamer 188 SDs significantly improved the carbamazepine dissolution rate compared to the pure drug. Models developed by ANNs and by the mixture experimental design described well the relationship between the proportions of SD components and the percentage of carbamazepine released after 10 (Q10) and 20 (Q20) min, with the ANN model exhibiting better predictability on the test data set. The proportions of carbamazepine and poloxamer 188 exhibited the highest influence on the carbamazepine release rate. The highest release rate was observed for SDs with the lowest proportions of carbamazepine and the highest proportions of poloxamer 188. ANNs and mixture experimental design can be used as powerful data modeling tools in the systematic development of SDs. Taking into account the advantages and disadvantages of both techniques, their combined application should be encouraged.
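
    The mixture-design side of such a workflow can be sketched with a Scheffé quadratic model fitted by least squares. The design points and the synthetic Q10 response below are invented for illustration; they are not the study's carbamazepine data, and the ANN half of the workflow is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 10-run mixture design: proportions of drug, Soluplus and
# poloxamer 188 (each row sums to 1, the defining mixture constraint).
X = np.array([
    [0.6, 0.2, 0.2], [0.2, 0.6, 0.2], [0.2, 0.2, 0.6],
    [0.4, 0.4, 0.2], [0.4, 0.2, 0.4], [0.2, 0.4, 0.4],
    [1/3, 1/3, 1/3], [0.5, 0.3, 0.2], [0.3, 0.5, 0.2], [0.3, 0.2, 0.5],
])

# Synthetic stand-in for Q10 (% released after 10 min): release improves
# with poloxamer 188 and worsens with drug load, plus small noise.
y = 20 + 60 * X[:, 2] - 10 * X[:, 0] + 15 * X[:, 1] * X[:, 2] \
    + rng.normal(0, 0.5, len(X))

def scheffe_quadratic(X):
    """Scheffe quadratic mixture model: pure-blend terms x_i plus binary
    blending terms x_i x_j; no intercept, since the x_i sum to one."""
    x1, x2, x3 = X.T
    return np.c_[x1, x2, x3, x1 * x2, x1 * x3, x2 * x3]

F = scheffe_quadratic(X)
beta, *_ = np.linalg.lstsq(F, y, rcond=None)
pred = F @ beta
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(r2)
```

    Because the synthetic response lies within the Scheffé model class, the fit recovers it almost exactly; real dissolution data would show larger lack-of-fit, which is where an ANN can add value.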

  9. Design and experimental test of an optical vortex coronagraph

    NASA Astrophysics Data System (ADS)

    Liu, Cheng-Chao; Ren, De-Qing; Zhu, Yong-Tian; Dou, Jiang-Pei

    2017-05-01

    Using an optical vortex coronagraph (OVC) is one of the most promising techniques for directly imaging exoplanets because of its small inner working angle and high throughput. This paper presents the design and laboratory demonstration of an OVC based on liquid crystal polymers (LCPs) at 633 nm and 1520 nm. The OVC delivers good performance in laboratory tests, achieving a contrast of 10⁻⁶ at an angular distance of 3λ/D, which could enable imaging of young giant exoplanets in combination with extreme adaptive optics.

  10. Surface laser marking optimization using an experimental design approach

    NASA Astrophysics Data System (ADS)

    Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.

    2017-04-01

    Laser surface marking is performed on a titanium substrate using a pulsed frequency-doubled Nd:YAG laser (λ = 532 nm, τpulse = 5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (output), using Design of Experiments (DOE) methods: the Taguchi methodology and response surface methodology (RSM). A design is first created using the MINITAB program, and the laser marking process is then performed according to the planned design. The response variables, surface roughness and surface reflectance, were measured for each sample and incorporated into the design matrix. The results are then analyzed, and the RSM model is developed and verified for predicting the process output for a given set of process parameter values. The analysis shows that the laser beam scanning speed is the most influential operating factor, followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.
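
    The RSM step described above (fit a second-order model to the designed runs, then locate the stationary point of the fitted surface) can be sketched as follows. The central composite design, the synthetic response and its built-in optimum are assumptions for illustration, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical central composite design in two coded factors:
# x1 = beam scanning speed, x2 = pumping intensity (coded units).
a = np.sqrt(2.0)
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-a, 0], [a, 0], [0, -a], [0, a],
              [0, 0], [0, 0], [0, 0]], dtype=float)
x1, x2 = X.T

# Synthetic response (e.g. a marking-quality index) with a built-in
# optimum at coded point (0.5, -0.25), plus measurement noise.
y = 3 - 2 * (x1 - 0.5) ** 2 - 1.5 * (x2 + 0.25) ** 2 \
    + rng.normal(0, 0.05, len(X))

# Second-order RSM model: b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2
F = np.c_[np.ones(len(X)), x1, x2, x1 ** 2, x2 ** 2, x1 * x2]
b, *_ = np.linalg.lstsq(F, y, rcond=None)

# Stationary point: solve  [2 b11, b12; b12, 2 b22] s = -[b1, b2]
B = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = np.array([b[1], b[2]])
stationary = np.linalg.solve(B, -g)
print(stationary)
```

    With low noise the fitted stationary point lands close to the true optimum; on real data one would also check whether the stationary point is a maximum, minimum, or saddle via the eigenvalues of B.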

  11. Logical Experimental Design and Execution in the Biomedical Sciences.

    PubMed

    Holder, Daniel J; Marino, Michael J

    2017-03-17

    Lack of reproducibility has been highlighted as a significant problem in biomedical research. The present unit is devoted to describing ways to help ensure that research findings can be replicated by others, with a focus on the design and execution of laboratory experiments. Essential components for this include clearly defining the question being asked, using available information or information from pilot studies to aid in the design of the experiment, and choosing manipulations under a logical framework based on Mill's "methods of knowing" to build confidence in putative causal links. Final experimental design requires systematic attention to detail, including the choice of controls, sample selection, blinding to avoid bias, and the use of power analysis to determine the sample size. Execution of the experiment is done with care to ensure that the independent variables are controlled and the measurements of the dependent variables are accurate. While there are always differences among laboratories with respect to technical expertise, equipment, and suppliers, execution of the steps itemized in this unit will ensure well-designed and well-executed experiments to answer any question in biomedical research. © 2017 by John Wiley & Sons, Inc.
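
    The unit's point about power analysis determining sample size can be made concrete. The sketch below uses the standard normal-approximation formula for a two-sample comparison of means; the chosen effect sizes and the 80% power default are illustrative conventions, not values from the record.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means: n = 2 * ((z_{1-alpha/2} + z_power) / d)^2,
    where d is the standardized effect size (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_power = z.inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)

# Medium (d = 0.5) and large (d = 0.8) effects at 80% power, alpha = 0.05
print(n_per_group(0.5), n_per_group(0.8))  # → 63 25
```

    The exact t-test requirement is slightly larger (e.g. 64 rather than 63 per group for d = 0.5), but the normal approximation conveys the key trade-off: halving the detectable effect size quadruples the required sample.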

  12. A fundamental experimental approach for optimal design of speed bumps.

    PubMed

    Lav, A Hakan; Bilgin, Ertugrul; Lav, A Hilmi

    2017-06-02

    Speed bumps and humps are utilized as means of calming traffic and controlling vehicular speed. Needless to say, bumps and humps of large length and width force drivers to reduce their driving speeds significantly so as to avoid large vertical accelerations of the vehicle. This experimental study was therefore conducted with the aim of determining a speed bump design that performs optimally in leading drivers to reduce the speed of their vehicles to safe levels. The investigation starts from the following question: "What is the optimal design of a speed bump that will - at the same time - reduce the velocity of an incoming vehicle significantly, to a speed at which the resulting vertical acceleration does not jeopardize road safety?" The experiment was designed to study the dependent variables and collect data in order to propose an optimal speed bump design. To achieve this, a 1:6-scale model was created to simulate the interaction between a car wheel and a speed bump. During the course of the experiment, a wheel was accelerated down an inclined plane onto a horizontal plane of motion, where it was allowed to collide with a speed bump. The speed of the wheel and the vertical acceleration at the speed bump were captured by means of a Vernier Motion Detector. Copyright © 2017 Elsevier Ltd. All rights reserved.

  13. Finite element analysis and experimental verification of multilayered tissue characterization using the thermal technique.

    PubMed

    Kharalkar, Nachiket M; Valvano, Jonathan W

    2006-01-01

    The objective of this research is to develop noninvasive techniques to determine thermal properties of layered biologic structures based on measurements from the surface. The self-heated thermistor technique is evaluated both numerically and experimentally. The finite element analyses, which confirm the experimental results, are used to study the temperature profiles occurring in the thermistor-tissue system. An in vitro tissue model was constructed by placing Teflon of varying thickness between the biologic tissue and the self-heated thermistor. The experiments were performed using two different-sized thermistors on six tissue samples. A self-heated thermistor was used to determine the thermal conductivity of tissue covered by a thin layer of Teflon. The results from experimental data clearly indicate that this technique can penetrate below the thin layers of Teflon and thus is sensitive to the thermal properties of the underlying tissue. The factors which may introduce error in the experimental data are (i) poor thermal/physical contact between the thermistor probe and tissue sample, and (ii) water loss from tissue during the course of experimentation. The finite element analysis was used to simulate the experimental conditions and to calculate the transient temperature profile generated by the thermistor bead. The results of the finite element analysis are in accordance with the experimental data.

  14. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X. A.

    2011-12-01

    Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion of how to acquire suitable datasets. This can be done through the design of an optimal experiment. An optimal experiment maximizes geophysical information while keeping the cost of the experiment as low as possible. This requires a careful selection of recording parameters such as source and receiver locations or the range of periods needed to image the target. We are developing a method to design an optimal experiment in the context of detecting and monitoring a CO2 reservoir using controlled-source electromagnetic (CSEM) data. Using an algorithm for a simple one-dimensional (1D) situation, we look for the most suitable locations for source and receivers and the optimum characteristics of the source to image the subsurface. One main advantage of this kind of technique for designing an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our algorithm is based on a genetic algorithm, which has proven to be an efficient technique to examine a wide range of possible surveys and select the one that gives superior resolution. Each particular design needs to be quantified. Different quantities have been used to estimate the "goodness" of a model, most of them being sensitive to the eigenvalues of the corresponding inversion problem. Here we show a comparison of results obtained using different objective functions. Then, we simulate a CSEM survey with a realistic 1D structure and discuss the optimum recording parameters determined by our method.
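
    The eigenvalue-based objective functions this record mentions can be sketched by comparing two candidate receiver layouts, with a toy exponential sensitivity kernel standing in for real CSEM physics (the offsets, depths, and kernel are all hypothetical assumptions):

```python
import numpy as np

def jacobian(offsets, depths=(200.0, 600.0)):
    """Toy sensitivity matrix: a receiver at offset r senses the layer at
    depth z with weight exp(-z / r) -- a crude stand-in for CSEM physics."""
    r = np.asarray(offsets, dtype=float)[:, None]
    z = np.asarray(depths, dtype=float)[None, :]
    return np.exp(-z / r)

def fisher(offsets):
    J = jacobian(offsets)
    return J.T @ J          # Gauss-Newton approximation of the Hessian

def d_optimality(offsets):
    return np.linalg.slogdet(fisher(offsets))[1]      # log-determinant

def a_optimality(offsets):
    return np.linalg.eigvalsh(fisher(offsets)).sum()  # trace via eigenvalues

clustered = [300.0, 320.0, 340.0, 360.0]   # receivers bunched together
spread = [300.0, 700.0, 1100.0, 1500.0]    # receivers spanning a wide range

print(d_optimality(clustered), d_optimality(spread))
print(a_optimality(clustered), a_optimality(spread))
```

    Both criteria favor the spread layout: clustered receivers yield nearly collinear sensitivity columns (a near-singular inverse problem), while a wide range of offsets separates the layers' signatures. A genetic algorithm like the one in the abstract would search over such layouts using one of these scores as its fitness.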

  15. Cutting the wires: modularization of cellular networks for experimental design.

    PubMed

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-07

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  16. Cutting the Wires: Modularization of Cellular Networks for Experimental Design

    PubMed Central

    Lang, Moritz; Summers, Sean; Stelling, Jörg

    2014-01-01

    Understanding naturally evolved cellular networks requires the consecutive identification and revision of the interactions between relevant molecular species. In this process, initially often simplified and incomplete networks are extended by integrating new reactions or whole subnetworks to increase consistency between model predictions and new measurement data. However, increased consistency with experimental data alone is not sufficient to show the existence of biomolecular interactions, because the interplay of different potential extensions might lead to overall similar dynamics. Here, we present a graph-based modularization approach to facilitate the design of experiments targeted at independently validating the existence of several potential network extensions. Our method is based on selecting the outputs to measure during an experiment, such that each potential network extension becomes virtually insulated from all others during data analysis. Each output defines a module that only depends on one hypothetical network extension, and all other outputs act as virtual inputs to achieve insulation. Given appropriate experimental time-series measurements of the outputs, our modules can be analyzed, simulated, and compared to the experimental data separately. Our approach exemplifies the close relationship between structural systems identification and modularization, an interplay that promises development of related approaches in the future. PMID:24411264

  17. Experimental Vertical Stability Studies for ITER Performance and Design Guidance

    SciTech Connect

    Humphreys, D A; Casper, T A; Eidietis, N; Ferrera, M; Gates, D A; Hutchinson, I H; Jackson, G L; Kolemen, E; Leuer, J A; Lister, J; LoDestro, L L; Meyer, W H; Pearlstein, L D; Sartori, F; Walker, M L; Welander, A S; Wolfe, S M

    2008-10-13

    Operating experimental devices have provided key inputs to the design process for ITER axisymmetric control. In particular, experiments have quantified controllability and robustness requirements in the presence of realistic noise and disturbance environments, which are difficult or impossible to characterize with modeling and simulation alone. This kind of information is particularly critical for ITER vertical control, which poses some of the highest demands on poloidal field system performance, since the consequences of loss of vertical control can be very severe. The present work describes results of multi-machine studies performed under a joint ITPA experiment on fundamental vertical control performance and controllability limits. We present experimental results from Alcator C-Mod, DIII-D, NSTX, TCV, and JET, along with analysis of these data to provide vertical control performance guidance to ITER. Useful metrics to quantify this control performance include the stability margin and maximum controllable vertical displacement. Theoretical analysis of the maximum controllable vertical displacement suggests effective approaches to improving performance in terms of this metric, with implications for ITER design modifications. Typical levels of noise in the vertical position measurement which can challenge the vertical control loop are assessed and analyzed.

  18. Prediction uncertainty and optimal experimental design for learning dynamical systems.

    PubMed

    Letham, Benjamin; Letham, Portia A; Rudin, Cynthia; Browne, Edward P

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
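
    The prediction-deviation idea (models that all fit the observed data well yet disagree on a future prediction) can be sketched on a toy decay model. Everything here, the exponential model, the time points, the noise level, and the fit tolerance, is a hypothetical stand-in for the paper's viral-infection model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy dynamical model y(t) = exp(-k t); noisy observations at early times.
t_obs = np.array([0.0, 0.5, 1.0])
t_new = 4.0                              # prediction time of interest
k_true = 1.0
y_obs = np.exp(-k_true * t_obs) + rng.normal(0, 0.02, t_obs.size)

def sse(k):
    """Sum of squared residuals of the model with rate k."""
    return ((np.exp(-k * t_obs) - y_obs) ** 2).sum()

# All models fitting the data "well enough" (within a slack of the best fit)
ks = np.linspace(0.5, 1.5, 2001)
fits = np.array([sse(k) for k in ks])
good = ks[fits <= fits.min() + 0.001]

# Prediction deviation: spread of predictions among well-fitting models
pred = np.exp(-good * t_new)
prediction_deviation = pred.max() - pred.min()
print(len(good), prediction_deviation)
```

    Here the deviation is found by a grid search over one parameter; the paper instead solves an optimization problem over pairs of models. Adding a hypothetical measurement at a later time would shrink the well-fitting set and hence the deviation, which is exactly the experiment-selection logic the abstract describes.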

  19. Prediction uncertainty and optimal experimental design for learning dynamical systems

    NASA Astrophysics Data System (ADS)

    Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.

  20. Experimental study of elementary collection efficiency of aerosols by spray: Design of the experimental device

    SciTech Connect

    Ducret, D.; Vendel, J.; Garrec, S.L.

    1995-02-01

    The safety of a nuclear power plant containment building, in which pressure and temperature could increase because of an overheating-reactor accident, can be ensured by spraying water drops. The spray reduces the pressure and temperature levels by condensation of steam on the cold water drops. The most stringent thermodynamic conditions are a pressure of 5.10{sup 5} Pa (due to steam emission) and a temperature of 413 K. Beyond its energy dissipation function, the spray leads to the washout of fission-product particles emitted into the reactor building atmosphere. The present study is part of a large program devoted to the evaluation of realistic washout rates. The aim of this work is to develop experiments to determine the collection efficiency of aerosols by a single drop. To do this, the experimental device has to be designed to meet fundamental criteria: thermodynamic conditions have to be representative of the post-accident atmosphere; thermodynamic equilibrium has to be attained between the water drops and the gaseous phase; thermophoretic, diffusiophoretic and mechanical effects have to be studied independently; and operating conditions have to be homogeneous and constant during each experiment. This paper presents the design of the experimental device. In practice, the consequences of each of these criteria on the design, and the necessity of being representative of the real conditions, will be described.

  1. Experimental Design for the INL Sample Collection Operational Test

    SciTech Connect

    Amidan, Brett G.; Piepel, Gregory F.; Matzke, Brett D.; Filliben, James J.; Jones, Barbara

    2007-12-13

    This document describes the test events and numbers of samples comprising the experimental design that was developed for the contamination, decontamination, and sampling of a building at the Idaho National Laboratory (INL). This study is referred to as the INL Sample Collection Operational Test. Specific objectives were developed to guide the construction of the experimental design. The main objective is to assess the relative abilities of judgmental and probabilistic sampling strategies to detect contamination in individual rooms or on a whole floor of the INL building. A second objective is to assess the use of probabilistic and Bayesian (judgmental + probabilistic) sampling strategies to make clearance statements of the form “X% confidence that at least Y% of a room (or floor of the building) is not contaminated.” The experimental design described in this report includes five test events. The test events (i) vary the floor of the building on which the contaminant will be released, (ii) provide for varying or adjusting the concentration of contaminant released to obtain the ideal concentration gradient across a floor of the building, and (iii) investigate overt as well as covert release of contaminants. The ideal contaminant gradient would have high concentrations of contaminant in rooms near the release point, with concentrations decreasing to zero in rooms at the opposite end of the building floor. For each of the five test events, the specified floor of the INL building will be contaminated with BG, a stand-in for Bacillus anthracis. The BG contaminant will be disseminated from a point-release device located in the room specified in the experimental design for each test event. Then judgmental and probabilistic samples will be collected according to the pre-specified sampling plan. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples will be selected in sufficient numbers to provide desired confidence
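
    The clearance statement quoted above has a simple sampling-size consequence that can be sketched as follows. This is a generic rule-of-thumb calculation, not the study's actual design computation; it assumes simple random sampling and that all collected samples come back clean.

```python
import math

def clearance_samples(confidence, clean_fraction):
    """Smallest n such that n all-clean random samples support the claim
    'with confidence C, at least a fraction Y of the area is clean':
    if more than (1 - Y) of the area were contaminated, all n samples
    coming back clean would have probability below Y^n, so require
    Y^n <= 1 - C, i.e. n >= ln(1 - C) / ln(Y)."""
    return math.ceil(math.log(1 - confidence) / math.log(clean_fraction))

# "95% confidence that at least 99% of the floor is not contaminated"
print(clearance_samples(0.95, 0.99))  # → 299
print(clearance_samples(0.95, 0.95))  # → 59
```

    The steep growth of n as Y approaches 100% is why Bayesian (judgmental + probabilistic) strategies, which fold in prior information, are attractive for reducing sampling burden.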

  2. Analysis techniques for the design of thermoplastic bumpers

    SciTech Connect

    Nimmer, R.P.; Bailey, O.A.; Paro, T.W.

    1987-01-01

    Increasingly, thermoplastic resins are being applied to automotive components which require structural performance. The work reported in this paper summarizes an ongoing effort to develop efficient mechanical technology for application in the design of thermoplastic bumpers. The technology development has included identification of material properties, investigation of basic component behavior, and finally the development of an automated system of analysis. A basic question often posed with regard to the analysis of structural components made of thermoplastics is whether the appropriate material properties are available and whether available analysis procedures can be applied accurately. This question was addressed through a program of fundamental material characterization, followed by structural component analysis. The analysis was then compared to test results from a parallel experimental program.

  3. Silicone Rubber Superstrate Loaded Patch Antenna Design Using Slotting Technique

    NASA Astrophysics Data System (ADS)

    Kaur, Bhupinder; Saini, Garima; Saini, Ashish

    2016-09-01

    To protect an antenna from external environmental conditions, it should be covered with a stable, non-reactive, highly durable and weather-resistive material that is insensitive to the changing external environment. Hence, in this paper silicone rubber is proposed as a superstrate layer for a patch antenna to protect it. The electrical properties of a silicone rubber sealant are determined experimentally, and the effect of using it as a superstrate on a coaxially fed microstrip patch antenna designed with the transmission-line model is observed. The overall performance is degraded slightly after the use of the superstrate. To further improve the performance of the superstrate-loaded antenna, patch slots and ground defects are proposed. The proposed design achieves a wide bandwidth of 790 MHz (13.59 %), a gain of 7.12 dB, a VSWR of 1.12 and an efficiency of 83.02 %.

  4. Experimental design in phylogenetics: testing predictions from expected information.

    PubMed

    San Mauro, Diego; Gower, David J; Cotton, James A; Zardoya, Rafael; Wilkinson, Mark; Massingham, Tim

    2012-07-01

    Taxon and character sampling are central to phylogenetic experimental design; yet, we lack general rules. Goldman introduced a method to construct efficient sampling designs in phylogenetics, based on the calculation of expected Fisher information given a probabilistic model of sequence evolution. The considerable potential of this approach remains largely unexplored. In an earlier study, we applied Goldman's method to a problem in the phylogenetics of caecilian amphibians and made an a priori evaluation and testable predictions of which taxon additions would increase information about a particular weakly supported branch of the caecilian phylogeny by the greatest amount. We have now gathered mitogenomic and rag1 sequences (some newly determined for this study) from additional caecilian species and studied how information (both expected and observed) and bootstrap support vary as each new taxon is individually added to our previous data set. This provides the first empirical test of specific predictions made using Goldman's method for phylogenetic experimental design. Our results empirically validate the top 3 (more intuitive) taxon-addition predictions made in our previous study, but only the information results unambiguously validate the 4th (less intuitive) prediction. This highlights a complex relationship between information and support, reflecting that each measures different things: information is related to the ability to estimate branch length accurately, and support to the ability to estimate the tree topology accurately. Thus, an increase in information may be correlated with, but does not necessitate, an increase in support. Our results also provide the first empirical validation of the widely held intuition that additional taxa that join the tree proximal to poorly supported internal branches are more informative and enhance support more than additional taxa that join the tree more distally.
Our work supports the view that adding more data for a single (well
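
    Goldman's approach scores candidate designs by expected Fisher information. A minimal sketch of the idea for a single branch, using the Jukes-Cantor substitution model as a simplifying assumption (the study itself used richer models on mitogenomic and rag1 data):

```python
import math

def jc69_fisher_info(t):
    """Per-site expected Fisher information about a branch length t under
    Jukes-Cantor: the chance of observing a mismatch across the branch is
    p(t) = 3/4 * (1 - exp(-4t/3)), and a Bernoulli observation with
    parameter p(t) carries information p'(t)^2 / (p * (1 - p))."""
    p = 0.75 * (1.0 - math.exp(-4.0 * t / 3.0))
    dp = math.exp(-4.0 * t / 3.0)           # derivative of p w.r.t. t
    return dp * dp / (p * (1.0 - p))

# Short branches are far more informative per site than long ones.
print(jc69_fisher_info(0.1), jc69_fisher_info(1.0))
```

    The decay of information with branch length is one intuition behind the abstract's finding that taxa joining the tree proximal to a poorly supported branch help more than distal additions: they break long, information-poor branches into shorter, information-rich ones.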

  5. An Experimental Evaluation of the Effectiveness of Selected Techniques and Resources on Instruction in Vocational Agriculture.

    ERIC Educational Resources Information Center

    Kahler, Alan A.

    The study was designed to test new instructional techniques in vocational agriculture, determine their effectiveness on student achievement, and compare individual and group instructional techniques. Forty-eight Iowa high school vocational agriculture programs with enrollments of 35 students or more were randomly selected for testing the…

  6. Improved Titanium Billet Inspection Sensitivity through Optimized Phased Array Design, Part I: Design Technique, Modeling and Simulation

    SciTech Connect

    Lupien, Vincent; Hassan, Waled

    2006-03-06

    Reductions in the beam diameter and pulse duration of focused ultrasound for titanium inspections are believed to result in a signal-to-noise ratio improvement for embedded defect detection. It has been inferred from this result that detection limits could be extended to smaller defects through a larger diameter, higher frequency transducer resulting in a reduced beamwidth and pulse duration. Using Continuum Probe Designer{sup TM} (Pat. Pending), a transducer array was developed for full coverage inspection of 8 inch titanium billets. The main challenge in realizing a large aperture phased array transducer for billet inspection is ensuring that the number of elements remains within the budget allotted by the driving electronics. The optimization technique implemented by Continuum Probe Designer{sup TM} yields an array with twice the aperture but the same number of elements as existing phased arrays for the same application. The unequal area element design was successfully manufactured and validated both numerically and experimentally. Part I of this two-part series presents the design, simulation and modeling steps, while Part II presents the experimental validation and comparative study to multizone.

  7. Laser induced deflection technique for absolute thin film absorption measurement: optimized concepts and experimental results

    SciTech Connect

    Muehlig, Christian; Kufert, Siegfried; Bublitz, Simon; Speck, Uwe

    2011-03-20

    Using experimental results and numerical simulations, two measuring concepts of the laser induced deflection (LID) technique are introduced and optimized for absolute thin film absorption measurements from deep ultraviolet to IR wavelengths. For transparent optical coatings, a particular probe beam deflection direction allows the absorption measurement with virtually no influence of the substrate absorption, yielding improved accuracy compared to the common techniques of separating bulk and coating absorption. For high-reflection coatings, where substrate absorption contributions are negligible, a different probe beam deflection is chosen to achieve a better signal-to-noise ratio. Various experimental results for the two different measurement concepts are presented.

  8. Design, experimentation, and modeling of a novel continuous biodrying process

    NASA Astrophysics Data System (ADS)

    Navaee-Ardeh, Shahram

    Massive production of sludge in the pulp and paper industry has made effective sludge management an increasingly critical issue for the industry, due to high landfill and transportation costs and complex regulatory frameworks for options such as sludge landspreading and composting. Sludge dewatering challenges are exacerbated at many mills due to improved in-plant fiber recovery coupled with increased production of secondary sludge, leading to a mixed sludge with a high proportion of biological matter which is difficult to dewater. In this thesis, a novel continuous biodrying reactor was designed and developed for drying pulp and paper mixed sludge to an economic dry solids level, so that the dried sludge can be economically and safely combusted in a biomass boiler for energy recovery. In all experimental runs the economic dry solids level was achieved, proving the process successful. In the biodrying process, in addition to the forced aeration, the drying rates are enhanced by biological heat generated through the microbial activity of mesophilic and thermophilic microorganisms naturally present in the porous matrix of mixed sludge. This makes the biodrying process more attractive than conventional drying techniques because the reactor is self-heating. The reactor is divided into four nominal compartments and the mixed sludge dries as it moves downward in the reactor. The residence times were 4-8 days, which is 2-3 times shorter than the residence times achieved in a batch biodrying reactor previously studied by our research group for mixed sludge drying. A process variable analysis was performed to determine the key variable(s) in the continuous biodrying reactor. Several variables were investigated, namely: type of biomass feed, pH of biomass, nutrition level (C/N ratio), residence times, recycle ratio of biodried sludge, and outlet relative humidity profile along the reactor height.
The key variables that were identified in the continuous

  9. Interplanetary mission design techniques for flagship-class missions

    NASA Astrophysics Data System (ADS)

    Kloster, Kevin W.

    Trajectory design, given the current level of propulsive technology, requires knowledge of orbital mechanics, computational resources, extensive use of tools such as gravity assists and V-infinity leveraging, as well as insight and finesse. Designing missions that deliver a capable science package to a celestial body of interest, and that are robust and affordable, is a difficult task. Techniques are presented here that assist the mission designer in constructing trajectories for flagship-class missions in the outer Solar System. These techniques are applied in this work to spacecraft that are currently in flight or in the planning stages. By escaping the Saturnian system, the Cassini spacecraft can reach other destinations in the Solar System while satisfying planetary quarantine. The patched-conic method was used to search for trajectories that depart Saturn via gravity assist at Titan. Trajectories were found that fly by Jupiter to reach Uranus or Neptune, capture at Jupiter or Neptune, escape the Solar System, fly by Uranus during its 2049 equinox, or encounter Centaurs. A "grand tour," which visits Jupiter, Uranus, and Neptune, departs Saturn in 2014. New tools were built to search for encounters with Centaurs, small Solar System bodies between the orbits of Jupiter and Neptune, and to minimize the ΔV needed to target these encounters. Cassini could reach Chiron, the first-discovered Centaur, 10.5 years after a 2022 Saturn departure. For a Europa Orbiter mission, the strategy for designing Jovian system tours that include Io flybys differs significantly from schemes developed for previous versions of the mission. If the closest approach distance of the incoming hyperbola at Jupiter is below the orbit of Io, an Io gravity assist gives the greatest energy pump-down for the least decrease in perijove radius. Using Io to help capture the spacecraft can increase the savings in Jupiter orbit insertion ΔV over a Ganymede-aided capture. 
The tour design is

  10. Protein design algorithms predict viable resistance to an experimental antifolate.

    PubMed

    Reeve, Stephanie M; Gainza, Pablo; Frey, Kathleen M; Georgiev, Ivelin; Donald, Bruce R; Anderson, Amy C

    2015-01-20

    Methods to accurately predict potential drug target mutations in response to early-stage leads could drive the design of more resilient first generation drug candidates. In this study, a structure-based protein design algorithm (K* in the OSPREY suite) was used to prospectively identify single-nucleotide polymorphisms that confer resistance to an experimental inhibitor effective against dihydrofolate reductase (DHFR) from Staphylococcus aureus. Four of the top-ranked mutations in DHFR were found to be catalytically competent and resistant to the inhibitor. Selection of resistant bacteria in vitro reveals that two of the predicted mutations arise in the background of a compensatory mutation. Using enzyme kinetics, microbiology, and crystal structures of the complexes, we determined the fitness of the mutant enzymes and strains, the structural basis of resistance, and the compensatory relationship of the mutations. To our knowledge, this work illustrates the first application of protein design algorithms to prospectively predict viable resistance mutations that arise in bacteria under antibiotic pressure.

  11. Comparing simulated emission from molecular clouds using experimental design

    SciTech Connect

    Yeremi, Miayan; Flynn, Mallory; Loeppky, Jason; Rosolowsky, Erik; Offner, Stella

    2014-03-10

    We propose a new approach to comparing simulated observations that enables us to determine the significance of the underlying physical effects. We utilize the methodology of experimental design, a subfield of statistical analysis, to establish a framework for comparing simulated position-position-velocity data cubes to each other. We propose three similarity metrics based on methods described in the literature: principal component analysis, the spectral correlation function, and the Cramer multi-variate two-sample similarity statistic. Using these metrics, we intercompare a suite of mock observational data of molecular clouds generated from magnetohydrodynamic simulations with varying physical conditions. Using this framework, we show that all three metrics are sensitive to changing Mach number and temperature in the simulation sets, but cannot detect changes in magnetic field strength and initial velocity spectrum. We highlight the shortcomings of one-factor-at-a-time designs commonly used in astrophysics and propose fractional factorial designs as a means to rigorously examine the effects of changing physical properties while minimizing the investment of computational resources.
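The fractional factorial designs advocated above, as an alternative to one-factor-at-a-time sweeps, can be illustrated with a short sketch. The following Python snippet (the factor names and the D = ABC generator are illustrative, not taken from the study) builds a two-level 2^(4-1) design that screens four factors in eight runs instead of sixteen:

```python
from itertools import product

def fractional_factorial(base_factors, generators):
    """Build a two-level fractional factorial design.

    base_factors: names of factors varied in a full factorial.
    generators: dict mapping each extra factor to the tuple of base
    factors whose product defines its column (its aliasing generator).
    Levels are coded -1/+1.
    """
    runs = []
    for combo in product((-1, 1), repeat=len(base_factors)):
        run = dict(zip(base_factors, combo))
        for extra, parents in generators.items():
            level = 1
            for p in parents:
                level *= run[p]  # confounded column: product of parent levels
            run[extra] = level
        runs.append(run)
    return runs

# Hypothetical 2^(4-1) screening design: 4 factors in 8 runs (generator D = ABC)
design = fractional_factorial(["A", "B", "C"], {"D": ("A", "B", "C")})
```

Each simulation (or experiment) is then run once per row, and main effects are estimated by contrasting the +1 and -1 halves of each column.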

  12. Experimental design schemes for learning Boolean network models

    PubMed Central

    Atias, Nir; Gershenzon, Michal; Labazin, Katia; Sharan, Roded

    2014-01-01

    Motivation: A holy grail of biological research is a working model of the cell. Current modeling frameworks, especially in the protein–protein interaction domain, are mostly topological in nature, calling for stronger and more expressive network models. One promising alternative is logic-based or Boolean network modeling, which was successfully applied to model signaling regulatory circuits in human. Learning such models requires observing the system under a sufficient number of different conditions. To date, the amount of measured data is the main bottleneck in learning informative Boolean models, underscoring the need for efficient experimental design strategies. Results: We developed novel design approaches that greedily select an experiment to be performed so as to maximize the difference or the entropy in the results it induces with respect to current best-fit models. Unique to our maximum difference approach is the ability to account for all (possibly exponentially many) Boolean models displaying high fit to the available data. We applied both approaches to simulated and real data from the EGFR and IL1 signaling systems in human. We demonstrate the utility of the developed strategies in substantially improving on a random selection approach. Our design schemes highlight the redundancy in these datasets, leading to up to 11-fold savings in the number of experiments to be performed. Availability and implementation: Source code will be made available upon acceptance of the manuscript. Contact: roded@post.tau.ac.il PMID:25161232
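A minimal sketch of the greedy, entropy-maximizing selection described above (the toy models and candidate experiments are hypothetical; the paper's method additionally enumerates all Boolean models with high fit to the data):

```python
import math
from collections import Counter

def outcome_entropy(models, experiment):
    """Shannon entropy (bits) of the outcomes the candidate models predict."""
    counts = Counter(m(experiment) for m in models)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def pick_next_experiment(models, experiments):
    """Greedily choose the experiment whose predicted outcome is most
    uncertain across the current best-fit models."""
    return max(experiments, key=lambda e: outcome_entropy(models, e))

# Toy example: three candidate "models" that disagree only on input 1,
# so measuring experiment 1 is the most informative choice.
models = [lambda x: x > 0, lambda x: x > 1, lambda x: x > 0]
best = pick_next_experiment(models, [0, 1, 2])
```

After the chosen experiment is performed, models inconsistent with its outcome are discarded and the selection repeats on the surviving set.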

  13. Experimental analysis of viscous and material damping in microstructures through the interferometric microscopy technique with climatic chamber

    NASA Astrophysics Data System (ADS)

    De Pasquale, Giorgio

    2013-09-01

    This study describes an experimental analysis of energy dissipation due to damping sources in microstructures and micro-electromechanical systems (MEMS) components using interferometric microscopy techniques. Viscous damping caused by the surrounding air (squeeze film damping) and material damping are measured using variable geometrical parameters of samples and under different environmental conditions. The equipment included a self-made climatic chamber which was used to modify the surrounding air pressure. Results show the relationship between damping coefficients and sample geometry caused by variation in airflow resistance and the relationship between quality factor and air pressure. The experimental results will provide a useful data source for validating analytic models and calibrating simulations. A thorough discussion about interferometry applied to experimental mechanics of MEMS will also contribute to the reduction of the knowledge gap between specialists in optical methods and microsystem designers.

  14. Design and construction of an experimental pervious paved parking area to harvest reusable rainwater.

    PubMed

    Gomez-Ullate, E; Novo, A V; Bayon, J R; Hernandez, Jorge R; Castro-Fresno, Daniel

    2011-01-01

    Pervious pavements are sustainable urban drainage systems already known as rainwater infiltration techniques which reduce runoff formation and diffuse pollution in cities. The present research is focused on the design and construction of an experimental parking area composed of 45 pervious pavement parking bays. Every pervious pavement was experimentally designed to store rainwater and to measure the level and quality of the stored water over time. Six different pervious surfaces are combined with four different geotextiles in order to test which materials best preserve the quality of the stored rainwater over time under the specific weather conditions of northern Spain. The aim of this research was to obtain pervious pavements that simultaneously provide a useful urban service and harvest rainwater of sufficient quality for non-potable uses.

  15. BOLD-based Techniques for Quantifying Brain Hemodynamic and Metabolic Properties – Theoretical Models and Experimental Approaches

    PubMed Central

    Yablonskiy, Dmitriy A.; Sukstanskii, Alexander L.; He, Xiang

    2012-01-01

    Quantitative evaluation of brain hemodynamics and metabolism, particularly the relationship between brain function and oxygen utilization, is important for understanding normal human brain operation as well as the pathophysiology of neurological disorders. It can also be of great importance for evaluation of hypoxia within tumors of the brain and other organs. The fundamental discovery by Ogawa and co-workers of the BOLD (Blood Oxygenation Level Dependent) contrast opened the possibility of using this effect to study brain hemodynamic and metabolic properties by means of MRI measurements. Such measurements require developing theoretical models connecting the MRI signal to brain structure and functioning, and designing experimental techniques allowing MR measurements of the salient features of those models. In our review we discuss several such theoretical models and experimental methods for quantifying brain hemodynamic and metabolic properties. Our review aims mostly at methods for measuring the oxygen extraction fraction, OEF, based on measuring blood oxygenation level. Combining measurement of OEF with measurement of CBF allows evaluation of oxygen consumption, CMRO2. We first consider in detail the magnetic properties of blood – magnetic susceptibility, MR relaxation and theoretical models of the intravascular contribution to the MR signal under different experimental conditions. Then, we describe a “through-space” effect – the influence of inhomogeneous magnetic fields, created in the extravascular space by intravascular deoxygenated blood, on MR signal formation. Further, we describe several experimental techniques taking advantage of these theoretical models. Some of these techniques – MR susceptometry and T2-based quantification of OEF – utilize the intravascular MR signal. Another technique – qBOLD – evaluates OEF by making use of through-space effects. In this review we targeted both scientists just entering the MR field and more experienced MR researchers.

  16. Design of OFDM radar pulses using genetic algorithm based techniques

    NASA Astrophysics Data System (ADS)

    Lellouch, Gabriel; Mishra, Amit Kumar; Inggs, Michael

    2016-08-01

    The merit of evolutionary algorithms (EA) for solving convex optimization problems is widely acknowledged. In this paper, a genetic algorithm (GA) optimization based waveform design framework is used to improve the features of radar pulses relying on the orthogonal frequency division multiplexing (OFDM) structure. Our optimization techniques focus on finding optimal phase code sequences for the OFDM signal. Several optimality criteria are used, since we consider two different radar processing solutions which call for either single- or multiple-objective optimization. When tackling single-objective minimization of the so-called peak-to-mean envelope power ratio (PMEPR), we compare our findings with existing methods and emphasize the merit of our approach. In the scope of the two-objective optimization, we first address PMEPR and the peak-to-sidelobe level ratio (PSLR) and show that our approach based on the non-dominated sorting genetic algorithm-II (NSGA-II) provides design solutions with noticeable improvements over random sets of phase codes. We then look at another case of interest where the objective functions are two measures of the sidelobe level, namely PSLR and the integrated-sidelobe level ratio (ISLR), and propose to modify the NSGA-II to include a constraint on the PMEPR instead. In the last part, we illustrate via a case study how our encoding solution makes it possible to minimize the single-objective PMEPR while enabling a target detection enhancement strategy, when the SNR metric would be chosen for the detection framework.
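The single-objective PMEPR case can be illustrated with a toy sketch: a plain elitist GA with Gaussian mutation rather than the NSGA-II machinery of the paper, and with arbitrary parameter choices (subcarrier count, population size, mutation width are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def pmepr(phases, oversample=4):
    """Peak-to-mean envelope power ratio of an OFDM pulse whose
    subcarriers carry the given phase codes."""
    n = len(phases)
    spectrum = np.zeros(oversample * n, dtype=complex)
    spectrum[:n] = np.exp(1j * phases)          # phase-coded subcarriers
    envelope = np.abs(np.fft.ifft(spectrum)) ** 2
    return envelope.max() / envelope.mean()

def ga_minimize_pmepr(n_sub=16, pop=40, gens=60, sigma=0.3):
    """Toy elitist GA: keep the best half, refill with mutated copies."""
    population = rng.uniform(0, 2 * np.pi, (pop, n_sub))
    for _ in range(gens):
        order = np.argsort([pmepr(p) for p in population])
        parents = population[order[: pop // 2]]          # elitist selection
        children = parents + rng.normal(0, sigma, parents.shape)  # mutation
        population = np.vstack([parents, children])
    scores = [pmepr(p) for p in population]
    return population[int(np.argmin(scores))]

best = ga_minimize_pmepr()
```

For reference, identical phases give the worst case, PMEPR equal to the number of subcarriers; the GA quickly pushes well below that.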

  17. Experimental Aspects of In-Plane Displacement Measurement Using a Moire Fringe Technique.

    DTIC Science & Technology

    1986-06-01

    AD-A174 048: Experimental Aspects of In-Plane Displacement Measurement Using a Moire Fringe Technique (U), by J. D'Cruz, B.L. Lawrie, K.C..., Aeronautical Research Labs, Melbourne (Australia), Technical Memorandum 440.

  18. Experimental characterization of 3D localization techniques for particle-tracking and super-resolution microscopy.

    PubMed

    Mlodzianoski, Michael J; Juette, Manuel F; Beane, Glen L; Bewersdorf, Joerg

    2009-05-11

    Three-dimensional (3D) particle localization at the nanometer scale plays a central role in 3D particle tracking and 3D localization-based super-resolution microscopy. Here we introduce a localization algorithm that is independent of theoretical models and therefore generally applicable to a large number of experimental realizations. Applying this algorithm and a convertible experimental setup we compare the performance of the two major 3D techniques based on astigmatic distortions and on multiplane detection. In both methods we obtain experimental 3D localization accuracies in agreement with theoretical predictions and characterize the depth dependence of the localization accuracy in detail.

  19. ICE decoupling technique for RF coil array designs

    PubMed Central

    Li, Ye; Xie, Zhentian; Pang, Yong; Vigneron, Daniel; Zhang, Xiaoliang

    2011-01-01

    Purpose: Parallel magnetic resonance imaging (MRI) requires an array of RF coil elements with different sensitivity distributions and with minimal electromagnetic coupling. The goal of this project was to develop a new method based on induced current compensation or elimination (ICE) for improved coil element decoupling and to investigate its performance in phantom MR images. Methods: An electromagnetic decoupling method based on induced current compensation or elimination for nonoverlapping RF coil arrays was developed with the design criteria of high efficiency, easy implementation, and no physical connection to RF array elements. An eigenvalue/eigenvector approach was employed to analyze the decoupling mechanism and condition. A two-channel microstrip array and an eight-channel coil array were built to test the performance of the method. Following workbench tests, MR imaging experiments were performed on a 7T MR scanner. Results: The bench tests showed that both arrays achieved sufficient decoupling with a S21 less than −25 dB among the coil elements at 298 MHz. The MR phantom images demonstrated well-defined sensitivity distributions from each coil element and the unique decoupling capability of the proposed ICE decoupling technique. B1 distributions of the individual elements were also measured and calculated. Conclusions: The theoretical analysis and experiments demonstrated the feasibility of the decoupling method for high field RF coil array designs without overlapping or direct physical connections between coil elements, which provide more flexibility for coil array design and optimization. The method offers a new approach to address the RF array decoupling issue, which is a major challenge in implementing parallel imaging. PMID:21859008

  20. Validation of Design and Analysis Techniques of Tailored Composite Structures

    NASA Technical Reports Server (NTRS)

    Jegley, Dawn C. (Technical Monitor); Wijayratne, Dulnath D.

    2004-01-01

    Aeroelasticity is the relationship between the elasticity of an aircraft structure and its aerodynamics. This relationship can cause instabilities such as flutter in a wing. Engineers have long studied aeroelasticity to ensure such instabilities do not become a problem within normal operating conditions. In recent decades structural tailoring has been used to take advantage of aeroelasticity. It is possible to tailor an aircraft structure to respond favorably to multiple different flight regimes such as takeoff, landing, cruise, 2-g pull up, etc. Structures can be designed so that these responses provide an aerodynamic advantage. This research investigates the ability to design and analyze tailored structures made from filamentary composites. Specifically, the accuracy of tailored composite analysis must be verified if this design technique is to become feasible. To pursue this idea, a validation experiment has been performed on a small-scale filamentary composite wing box. The box is tailored such that its cover panels induce a global bend-twist coupling under an applied load. Two types of analysis were chosen for the experiment. The first is a closed-form analysis based on a theoretical model of a single-cell tailored box beam and the second is a finite element analysis. The predicted results are compared with the measured data to validate the analyses. The comparison of results shows that the finite element analysis is capable of predicting displacements and strains to within 10% on the small-scale structure. The closed-form code is consistently able to predict the wing box bending to within 25% of the measured value. This error is expected due to simplifying assumptions in the closed-form analysis. Differences between the closed-form code representation and the wing box specimen caused large errors in the twist prediction. The closed-form analysis prediction of twist has not been validated from this test.

  1. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    PubMed Central

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis). Overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support among those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design, and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874

  2. A rationally designed CD4 analogue inhibits experimental allergic encephalomyelitis

    NASA Astrophysics Data System (ADS)

    Jameson, Bradford A.; McDonnell, James M.; Marini, Joseph C.; Korngold, Robert

    1994-04-01

    Experimental allergic encephalomyelitis (EAE) is an acute inflammatory autoimmune disease of the central nervous system that can be elicited in rodents and is the major animal model for the study of multiple sclerosis (MS) [1,2]. The pathogenesis of both EAE and MS directly involves the CD4+ helper T-cell subset [3-5]. Anti-CD4 monoclonal antibodies inhibit the development of EAE in rodents [6-9], and are currently being used in human clinical trials for MS. We report here that similar therapeutic effects can be achieved in mice using a small (rationally designed) synthetic analogue of the CD4 protein surface. It greatly inhibits both clinical incidence and severity of EAE with a single injection, but does so without depletion of the CD4+ subset and without the inherent immunogenicity of an antibody. Furthermore, this analogue is capable of exerting its effects on disease even after the onset of symptoms.

  3. An experimental design method leading to chemical Turing patterns.

    PubMed

    Horváth, Judit; Szalai, István; De Kepper, Patrick

    2009-05-08

    Chemical reaction-diffusion patterns often serve as prototypes for pattern formation in living systems, but only two isothermal single-phase reaction systems have produced sustained stationary reaction-diffusion patterns so far. We designed an experimental method to search for additional systems on the basis of three steps: (i) generate spatial bistability by operating autoactivated reactions in open spatial reactors; (ii) use an independent negative-feedback species to produce spatiotemporal oscillations; and (iii) induce a space-scale separation of the activatory and inhibitory processes with a low-mobility complexing agent. We successfully applied this method to a hydrogen-ion autoactivated reaction, the thiourea-iodate-sulfite (TuIS) reaction, and noticeably produced stationary hexagonal arrays of spots and parallel stripes of pH patterns attributed to a Turing bifurcation. This method could be extended to biochemical reactions.

  4. Effect and interaction study of acetamiprid photodegradation using experimental design.

    PubMed

    Tassalit, Djilali; Chekir, Nadia; Benhabiles, Ouassila; Mouzaoui, Oussama; Mahidine, Sarah; Merzouk, Nachida Kasbadji; Bentahar, Fatiha; Khalil, Abbas

    2016-10-01

    An experimental design methodology was applied, using the MODDE 6.0 software, to study acetamiprid photodegradation as a function of the operating parameters, such as the initial concentration of acetamiprid, the concentration and type of catalyst used, and the initial pH of the medium. The results showed the importance of the pollutant concentration effect on the acetamiprid degradation rate. On the other hand, the amount and type of catalyst used have a considerable influence on the elimination kinetics of this pollutant. The degradation of acetamiprid as an environmental pesticide pollutant via UV irradiation in the presence of titanium dioxide was assessed and optimized using response surface methodology with a D-optimal design. The acetamiprid degradation ratio was found to be sensitive to the different studied factors. The maximum degradation under the optimum operating conditions was determined to be 99% after 300 min of UV irradiation.
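Response surface methodology of the kind used here fits a second-order polynomial model to the measured responses. A minimal sketch, with synthetic data standing in for the photodegradation measurements (the coefficients and factor ranges are entirely hypothetical):

```python
import numpy as np

def quadratic_design_matrix(X):
    """Expand factor settings into a full second-order (response surface)
    model: intercept, linear, two-factor interaction, and squared terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                                # linear
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]  # interactions
    cols += [X[:, i] ** 2 for i in range(k)]                           # curvature
    return np.column_stack(cols)

# Synthetic, noiseless responses from a known quadratic surface
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (30, 2))
y = 3.0 + 1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + 1.0 * X[:, 0] ** 2
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
```

The fitted surface is then maximized (analytically or numerically) over the factor region to locate the optimum operating conditions.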

  5. Experimental comparison of different oscillation-based test techniques in an analog block

    NASA Astrophysics Data System (ADS)

    Suenaga, Kay; Picos, Rodrigo; Bota, Sebastia; Roca, Miquel; Garcia-Moreno, Eugeni

    2005-06-01

    This paper experimentally analyses the capabilities of an Oscillation-Based Test technique for diagnosis purposes. To evaluate the feasibility of this test strategy, the technique is applied to an Operational Transconductance Amplifier with fault injection capabilities. The application of this methodology has low impact on circuit performance. Voltage and current magnitudes have been considered as test observables. The effects of catastrophic and parametric defects (bridges, opens and shorts) are analyzed in this work. Results show that, by the right choice of test observable, this technique provides high fault coverage even in the presence of process variations.

  6. Multidisciplinary Design Techniques Applied to Conceptual Aerospace Vehicle Design. Ph.D. Thesis Final Technical Report

    NASA Technical Reports Server (NTRS)

    Olds, John Robert; Walberg, Gerald D.

    1993-01-01

    Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design of experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium-sized payloads into low earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of applying Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. Near-optimum minimum dry weight solutions are
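Among the parametric tools mentioned, the central composite design is simple to construct: a two-level factorial cube augmented with axial and center points. A sketch in coded units (the rotatable alpha is a standard default, not necessarily the choice made in this work):

```python
from itertools import product

def central_composite(k, alpha=None, n_center=1):
    """Central composite design for k factors in coded units: a 2^k
    factorial cube, 2k axial (star) points at distance alpha, and
    center points. alpha defaults to the rotatable value (2^k)^(1/4)."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    cube = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    star = []
    for i in range(k):
        for a in (-alpha, alpha):
            point = [0.0] * k
            point[i] = a          # axial point: one factor at +/- alpha
            star.append(point)
    center = [[0.0] * k for _ in range(n_center)]
    return cube + star + center

design = central_composite(2)     # 4 cube + 4 star + 1 center = 9 runs
```

The axial points supply the curvature information that a plain two-level factorial lacks, which is what makes the subsequent response-surface fit possible.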

  7. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This proposal will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop novel molecular dynamics methods to improve the efficiency of simulating novel thermal barrier coating (TBC) materials; we will perform high-performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new TBC systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  8. A method of fast, sequential experimental design for linearized geophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Coles, Darrell A.; Morgan, Frank Dale

    2009-07-01

    An algorithm for linear(ized) experimental design is developed for a determinant-based design objective function. This objective function is common in design theory and is used to design experiments that minimize the model entropy, a measure of posterior model uncertainty. Of primary significance in design problems is computational expediency. Several earlier papers have focused attention on posing design objective functions and opted to use global search methods for finding the critical points of these functions, but these algorithms are too slow to be practical. The proposed technique is distinguished primarily for its computational efficiency, which derives partly from a greedy optimization approach, termed sequential design. Computational efficiency is further enhanced through formulae for updating determinants and matrix inverses without need for direct calculation. The design approach is orders of magnitude faster than a genetic algorithm applied to the same design problem. However, greedy optimization often trades global optimality for increased computational speed; the ramifications of this tradeoff are discussed. The design methodology is demonstrated on a simple, single-borehole DC electrical resistivity problem. Designed surveys are compared with random and standard surveys, both with and without prior information. All surveys were compared with respect to a "relative quality" measure, the post-inversion model per cent rms error. The issue of design for inherently ill-posed inverse problems is considered and an approach for circumventing such problems is proposed. The design algorithm is also applied in an adaptive manner, with excellent results suggesting that smart, compact experiments can be designed in real time.
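The core of such a sequential scheme - rank-one determinant gains via the matrix determinant lemma and Sherman-Morrison inverse updates, so nothing is refactored from scratch - can be sketched as follows (the ridge seeding and the random candidate matrix are illustrative assumptions, not the paper's resistivity-survey setup):

```python
import numpy as np

def greedy_d_optimal(G, n_pick, ridge=1e-6):
    """Greedy (sequential) D-optimal design.

    Rows of G are the sensitivity vectors of candidate observations.
    Each step picks the row maximizing
        det(A + g g^T) = det(A) * (1 + g^T A^{-1} g)   (matrix determinant lemma)
    and updates A^{-1} with the Sherman-Morrison formula.
    """
    m = G.shape[1]
    A_inv = np.eye(m) / ridge          # inverse of the ridge-seeded information matrix
    chosen, remaining = [], set(range(len(G)))
    for _ in range(n_pick):
        # determinant gain of candidate i is proportional to g_i^T A^{-1} g_i
        best = max(remaining, key=lambda i: G[i] @ A_inv @ G[i])
        g = G[best]
        Ag = A_inv @ g
        A_inv = A_inv - np.outer(Ag, Ag) / (1.0 + g @ Ag)   # Sherman-Morrison
        chosen.append(best)
        remaining.remove(best)
    return chosen, A_inv

rng = np.random.default_rng(1)
G = rng.normal(size=(50, 4))           # 50 candidate observations, 4 model parameters
picked, info_inv = greedy_d_optimal(G, 6)
```

Each step costs only matrix-vector products, which is the source of the speedup over global search on the same objective.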

  9. Design and experimental evaluation of flexible manipulator control algorithms

    SciTech Connect

    Kwon, D.S.; Hwang, D.H.; Babcock, S.M.; Kress, R.L.; Lew, J.Y.; Evans, M.S.

    1995-04-01

    Within the Environmental Restoration and Waste Management Program of the US Department of Energy, the remediation of single-shell radioactive waste storage tanks is one of the areas that challenge state-of-the-art equipment and methods. The use of long-reach manipulators is being seriously considered for this task. Because of high payload capacity and high length-to-cross-section ratio requirements, these long-reach manipulator systems are expected to use hydraulic actuators and to exhibit significant structural flexibility. The controller has been designed to compensate for the hydraulic actuator dynamics by using a load-compensated velocity feedforward loop and to increase the bandwidth by using an inner pressure feedback loop. Shaping filter techniques have been applied as feedforward controllers to avoid structural vibrations during operation. Various types of shaping filter methods have been investigated. Among them, a new approach, referred to as a "feedforward simulation filter," that uses embedded simulation is presented.

  10. Quasi-experimental designs in practice-based research settings: design and implementation considerations.

    PubMed

    Handley, Margaret A; Schillinger, Dean; Shiboski, Stephen

    2011-01-01

    Although randomized controlled trials are often a gold standard for determining intervention effects, in the area of practice-based research (PBR) there are many situations in which individual randomization is not possible. Alternative approaches to evaluating interventions have received increased attention, particularly those that retain elements of randomization such that they can be considered "controlled" trials. Methodological design elements and practical implementation considerations for two quasi-experimental design approaches with considerable promise in PBR settings, the stepped-wedge design and a wait-list cross-over variant of it, are presented along with a case study from a recent PBR intervention for patients with diabetes. PBR-relevant design features include: creation of a cohort over time that collects control data but allows all participants (clusters or patients) to receive the intervention; staggered introduction of clusters; multiple data collection points; and one-way cross-over into the intervention arm. Practical considerations include randomization versus stratification, training run-in phases, and an extended time period for overall study completion. Several design features of practice-based research studies can be adapted to local circumstances yet retain elements that improve methodological rigor. Studies that utilize these methods, such as the stepped-wedge design and the wait-list cross-over design, can increase the evidence base for controlled studies conducted within the complex environment of PBR.
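The staggered one-way cross-over structure described in this abstract is easy to make concrete. The following Python sketch (a hypothetical helper, not from the paper) builds a cluster-by-period allocation matrix for a stepped-wedge design:

```python
import numpy as np

def stepped_wedge_schedule(n_clusters, n_steps, rng=None):
    """Build a stepped-wedge allocation matrix (clusters x periods).

    Every cluster starts in control (0) and crosses over exactly once to
    the intervention (1) at a randomly assigned step; with n_steps
    crossover points plus a baseline period there are n_steps+1 periods.
    """
    rng = np.random.default_rng(rng)
    # assign clusters to crossover steps as evenly as possible, at random
    steps = np.repeat(np.arange(1, n_steps + 1),
                      int(np.ceil(n_clusters / n_steps)))[:n_clusters]
    rng.shuffle(steps)
    periods = np.arange(n_steps + 1)
    # cluster is in the intervention arm once its step has been reached
    return (periods[None, :] >= steps[:, None]).astype(int)
```

The first column is all control (baseline data) and the last all intervention, so every cluster contributes both control and intervention observations, which is the design feature the abstract highlights.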

  11. The Use of Techniques of Sensory Evaluation as a Framework for Teaching Experimental Methods.

    ERIC Educational Resources Information Center

    Bennett, R.; Hamilton, M.

    1981-01-01

    Describes sensory assessment techniques and conditions for their satisfactory performance, including how they can provide open-ended exercises and advantages as relatively inexpensive and simple methods of teaching experimentation. Experiments described focus on diffusion of salt into potatoes after being cooked in boiled salted water. (Author/JN)

  13. Recent Progress in χ(3)-Related Optical Process Experimental Technique. Raman Lasing

    NASA Technical Reports Server (NTRS)

    Matsko, A. B.; Savchenkov, Anatoliy A.; Strekalov, Dmitry; Maleki, Lute

    2006-01-01

    We describe theoretically and verify experimentally a simple technique for analyzing the conversion efficiency and threshold of all-resonant intracavity Raman lasers. The method is based on the dependence of the ring-down time of the pump cavity mode on the energy accumulated in the cavity.

  14. Quiet Clean Short-Haul Experimental Engine (QSCEE). Preliminary analyses and design report, volume 1

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental propulsion systems to be built and tested in the 'quiet, clean, short-haul experimental engine' program are presented. The flight propulsion systems are also presented. The following areas are discussed: acoustic design; emissions control; engine cycle and performance; fan aerodynamic design; variable-pitch actuation systems; fan rotor mechanical design; fan frame mechanical design; and reduction gear design.

  15. Estimating Intervention Effects across Different Types of Single-Subject Experimental Designs: Empirical Illustration

    ERIC Educational Resources Information Center

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S. Natasha; Van den Noortgate, Wim

    2015-01-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs…

  17. Experimental Charging Behavior of Orion UltraFlex Array Designs

    NASA Technical Reports Server (NTRS)

    Golofaro, Joel T.; Vayner, Boris V.; Hillard, Grover B.

    2010-01-01

    The present ground-based investigations give the first definitive look at the charging behavior of Orion UltraFlex arrays in both the low Earth orbit (LEO) and geosynchronous (GEO) environments. Note that the LEO charging environment also applies to the International Space Station (ISS). The GEO charging environment includes the bounding case for all lunar mission environments. The UltraFlex photovoltaic array technology is targeted to become the sole power system for life support and on-orbit power for the manned Orion Crew Exploration Vehicle (CEV). The purpose of the experimental tests is to gain an understanding of the complex charging behavior, to answer some of the basic performance and survivability issues, and to ascertain whether a single UltraFlex array design will be able to cope with the projected worst-case LEO and GEO charging environments. Stage 1 LEO plasma testing revealed that all four arrays successfully passed arc threshold bias tests down to -240 V. Stage 2 GEO electron gun charging tests revealed that only the front side area of indium tin oxide coated array designs successfully passed the arc frequency tests.

  18. Experimental design considerations in microbiota/inflammation studies

    PubMed Central

    Moore, Robert J; Stanley, Dragana

    2016-01-01

    There is now convincing evidence that many inflammatory diseases are precipitated, or at least exacerbated, by unfavourable interactions of the host with the resident microbiota. The role of gut microbiota in the genesis and progression of diseases such as inflammatory bowel disease, obesity, metabolic syndrome and diabetes have been studied both in human and in animal, mainly rodent, models of disease. The intrinsic variation in microbiota composition, both within one host over time and within a group of similarly treated hosts, presents particular challenges in experimental design. This review highlights factors that need to be taken into consideration when designing animal trials to investigate the gastrointestinal tract microbiota in the context of inflammation studies. These include the origin and history of the animals, the husbandry of the animals before and during experiments, details of sampling, sample processing, sequence data acquisition and bioinformatic analysis. Because of the intrinsic variability in microbiota composition, it is likely that the number of animals required to allow meaningful statistical comparisons across groups will be higher than researchers have generally used for purely immune-based analyses. PMID:27525065
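As a rough illustration of the review's closing point about group sizes, a textbook normal-approximation sample-size formula shows how between-animal variability inflates the number of animals needed. The function and parameter values below are illustrative, not taken from the review:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for comparing a
    continuous outcome (e.g. a diversity index) between two groups:
        n = 2 * (z_{1-alpha/2} + z_{power})^2 * (sigma / delta)^2
    Doubling the between-animal standard deviation (sigma) quadruples
    the animals needed to detect the same difference (delta).
    """
    z = NormalDist().inv_cdf
    n = 2.0 * (z(1.0 - alpha / 2.0) + z(power)) ** 2 * (sigma / delta) ** 2
    return math.ceil(n)
```

With `sigma == delta` this gives the familiar 16 animals per group; with twice the variability it jumps to 63, which is the qualitative point the review makes about microbiota studies.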

  19. Design and experimental verification of an improved magnetostrictive energy harvester

    NASA Astrophysics Data System (ADS)

    Germer, M.; Marschner, U.; Flatau, A. B.

    2017-04-01

    This paper summarizes and extends the modeling state of the art of magnetostrictive energy harvesters with a focus on the pick-up coil design. The harvester is a one-side-clamped galfenol unimorph loaded with two brass pieces, each containing a permanent magnet to create a biased magnetic field. Measurements on different pick-up coils were conducted and compared with results from an analytic model. Resistance, mass and inductance were formulated and verified by measurements. Both the length for a constant number of turns and the number of turns for a constant coil length were also modeled and varied. The results confirm that the output voltage depends on the coil length for a constant number of turns and is higher for smaller coils. In contrast to the case of a uniform magnetic field, the maximal output voltage is obtained when the coil is placed not directly at but near the fixation. Two effects explain this behavior: due to the permanent magnet next to the fixation, the magnetic force is higher and orients the magnetic domains more strongly; and the clamping locally increases the stress, which also forces the magnetic domains to orient. For that reason the material is stiffer and the strain therefore smaller. The tradeoff between a higher induced voltage in the coil and an increasing inductance and resistance for every additional turn is presented together with an experimental validation of the models. Based on the results, guidelines are given for designing an optimal coil that maximizes the output power for a given unimorph.
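The turn-count tradeoff described in this abstract can be illustrated with a simple lumped-element model: emf and resistance grow linearly with the number of turns while inductance grows quadratically, so power delivered to a fixed load peaks at a finite coil size. All parameter values below are hypothetical, not the paper's measured ones:

```python
import numpy as np

def load_power(n_turns, emf_per_turn, r_per_turn, l_coeff, omega, r_load):
    """Hypothetical lumped model of the pick-up-coil tradeoff: emf scales
    with N, coil resistance with N, inductance with N^2, so the power
    delivered to a fixed resistive load peaks at a finite number of turns.
    """
    emf = n_turns * emf_per_turn
    r_coil = n_turns * r_per_turn
    x_coil = omega * l_coeff * n_turns ** 2   # inductive reactance
    return emf ** 2 * r_load / ((r_coil + r_load) ** 2 + x_coil ** 2)

# sweep the number of turns with illustrative parameter values
N = np.arange(1, 5001)
P = load_power(N, emf_per_turn=1e-3, r_per_turn=0.05,
               l_coeff=1e-7, omega=2 * np.pi * 100, r_load=50.0)
n_opt = int(N[np.argmax(P)])   # interior optimum: extra turns stop paying off
```

Without the quadratic inductance term the power would increase monotonically with turns; including it reproduces the finite optimum the paper models.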

  20. Numerical and experimental design of coaxial shallow geothermal energy systems

    NASA Astrophysics Data System (ADS)

    Raghavan, Niranjan

    Geothermal energy has emerged as one of the front runners in the energy race because of its performance efficiency, abundance and production competitiveness. Today, geothermal energy is used in many regions of the world as a sustainable solution for decreasing dependence on fossil fuels and reducing health hazards. However, projects related to geothermal energy have not received their deserved recognition due to a lack of computational tools associated with them and economic misconceptions related to their installation and functioning. This research focuses on numerical and experimental system design analysis of vertical shallow geothermal energy systems. The driving force is the temperature difference between a finite depth beneath the earth's surface and the surface itself, which stimulates a continuous exchange of thermal energy from the sub-surface to the surface (a geothermal gradient is set up). This heat gradient is captured by the circulating refrigerant, thus tapping the geothermal energy from shallow depths. Traditionally, U-bend systems, which consist of two one-inch pipes with a U-bend connector at the bottom, have been widely used in geothermal applications. Alternative systems include coaxial pipes (pipe-in-pipe), which are the main focus of this research. Studies have shown that coaxial pipes have significantly higher thermal performance characteristics than U-bend pipes, with comparable production and installation costs. This makes them a viable design upgrade to traditional piping systems. Analytical and numerical heat transfer analysis of the coaxial system is carried out with the help of the ABAQUS software. It is tested by varying independent parameters such as materials, soil conditions and the effect of thermal contact conductance on heat transfer characteristics. With the above information, this research aims at formulating a preliminary theoretical design setup for an experimental study to quantify and compare the heat transfer characteristics of U-bend and coaxial…

  1. Computational design of an experimental laser-powered thruster

    NASA Technical Reports Server (NTRS)

    Jeng, San-Mou; Litchford, Ronald; Keefer, Dennis

    1988-01-01

    An extensive numerical experiment, using the developed computer code, was conducted to design an optimized laser-sustained hydrogen plasma thruster. The plasma was sustained using a 30 kW CO2 laser beam operated at 10.6 micrometers and focused inside the thruster. The adopted physical model considers two-dimensional compressible Navier-Stokes equations coupled with the laser power absorption process, geometric ray tracing for the laser beam, and the local thermodynamic equilibrium (LTE) assumption for the plasma thermophysical and optical properties. A pressure-based Navier-Stokes solver using body-fitted coordinates was used to calculate the laser-supported rocket flow, which consists of both recirculating and transonic flow regions. The computer code was used to study the behavior of laser-sustained plasmas within a pipe over a wide range of forced convection and optical arrangements before it was applied to the thruster design, and these theoretical calculations agree well with existing experimental results. Several thrusters with different throat sizes operated at 150 and 300 kPa chamber pressure were evaluated in the numerical experiment. It is found that the thruster performance (vacuum specific impulse) is highly dependent on the operating conditions, and that an adequately designed laser-supported thruster can have a specific impulse of around 1500 s. The heat loading on the walls of the calculated thrusters was also estimated and is comparable to the heat loading on conventional chemical rockets. It was also found that the specific impulse of the calculated thrusters can be reduced by 200 s due to the finite chemical reaction rate.

  2. Design, Evaluation and Experimental Effort Toward Development of a High Strain Composite Wing for Navy Aircraft

    NASA Technical Reports Server (NTRS)

    Bruno, Joseph; Libeskind, Mark

    1990-01-01

    This design development effort addressed significant technical issues concerning the use and benefits of high-strain composite wing structures (ε_ult = 6000 micro-in/in) for future Navy aircraft. These issues were concerned primarily with the structural integrity and durability of the innovative design concepts and manufacturing techniques, which permitted a 50 percent increase in design ultimate strain level (while maintaining the same fiber/resin system), as well as damage tolerance and survivability requirements. An extensive test effort, consisting of a progressive series of coupon and major element tests, was an integral part of this development effort and culminated in the design, fabrication and test of a major full-scale wing box component. The successful completion of the tests demonstrated the structural integrity, durability and benefits of the design. Low-energy impact testing followed by fatigue cycling verified the damage tolerance concepts incorporated within the structure. Finally, live-fire ballistic testing confirmed the survivability of the design. The potential benefits of combining newer/emerging composite materials with new or previously developed high-strain wing designs to maximize structural efficiency and reduce fabrication costs were the subject of a subsequent preliminary design and experimental evaluation effort.

  3. Optimizing Two-level Supersaturated Designs using Swarm Intelligence Techniques

    PubMed Central

    Phoa, Frederick Kin Hing; Chen, Ray-Bing; Wang, Weichung; Wong, Weng Kee

    2016-01-01

    Supersaturated designs (SSDs) are often used to reduce the number of experimental runs in screening experiments with a large number of factors. As more factors are used in the study, the search for an optimal SSD becomes increasingly challenging because of the large number of feasible selections of factor-level settings. This paper tackles this discrete optimization problem via an algorithm based on swarm intelligence. Using the commonly used E(s2) criterion as an illustrative example, we propose an algorithm to find E(s2)-optimal SSDs by showing that they attain the theoretical lower bounds in Bulutoglu and Cheng (2004) and Bulutoglu (2007). We show that our algorithm consistently produces SSDs that are at least as efficient as those from the traditional CP exchange method in terms of computational effort and frequency of finding the E(s2)-optimal SSD, and that it also has good potential for finding D3-, D4- and D5-optimal SSDs. PMID:27103752
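For reference, the E(s2) criterion this abstract optimizes is straightforward to evaluate for a given two-level design matrix; a minimal Python sketch (the evaluation only, not the authors' swarm-intelligence search):

```python
import numpy as np

def e_s2(X):
    """E(s^2) criterion for a two-level (+/-1) supersaturated design X
    (n runs x m factors): the average of the squared off-diagonal entries
    of S = X^T X. Because S is symmetric, averaging over both triangles
    equals the usual sum over i<j of s_ij^2 divided by C(m,2).
    Smaller is better; 0 means a fully orthogonal design.
    """
    m = X.shape[1]
    S = X.T @ X
    off = S[~np.eye(m, dtype=bool)]   # all off-diagonal entries
    return float(np.mean(off ** 2))
```

A swarm or exchange search would call this as the fitness function and stop when the known lower bound is attained.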

  4. Game Design Narrative for Learning: Appropriating Adventure Game Design Narrative Devices and Techniques for the Design of Interactive Learning Environments

    ERIC Educational Resources Information Center

    Dickey, Michele D.

    2006-01-01

    The purpose of this conceptual analysis is to investigate how contemporary video and computer games might inform instructional design by looking at how narrative devices and techniques support problem solving within complex, multimodal environments. Specifically, this analysis presents a brief overview of game genres and the role of narrative in…

  5. A New Tour Design Technique to Enable an Enceladus Orbiter

    NASA Astrophysics Data System (ADS)

    Strange, N.; Campagnola, S.; Russell, R.

    2009-12-01

    As a result of discoveries made by the Cassini spacecraft, Saturn's moon Enceladus has emerged as a high science-value target for a future orbiter mission. [1] However, past studies of an Enceladus orbiter mission [2] found that entering Enceladus orbit either requires a prohibitively large orbit insertion ΔV (> 3.5 km/s) or a prohibitively long flight time. In order to reach Enceladus with a reasonable flight time and ΔV budget, a new tour design method has been developed that uses gravity assists of the low-mass moons Rhea, Dione, and Tethys combined with v-infinity leveraging maneuvers. This new method can achieve Enceladus orbit with a combined leveraging and insertion ΔV of ~1 km/s and a 2.5-year Saturn tour. Among the many challenges in designing a trajectory for an Enceladus mission, the two most prominent arise because Enceladus is a low-mass moon (its GM is only ~7 km^3/s^2) deep within Saturn's gravity well (its orbit is at 4 Saturn radii). Designing a ΔV-efficient rendezvous with Enceladus is the first challenge, while the second involves finding a stable orbit which can achieve the desired science measurements. A paper by Russell and Lara [3] has recently addressed the second problem, and a paper this past August by Strange, Campagnola, and Russell [4] has addressed the first. The method developed to solve the first problem, the leveraging tour, and the science possibilities of this trajectory will be the subject of this presentation. Using the new methods in [4], a leveraging tour with Titan, Rhea, Dione, and Tethys can reach Enceladus orbit with less than half of the ΔV of a direct Titan-Enceladus transfer. Starting from the TSSM Saturn arrival conditions [5], with a chemical bi-prop system, this new tour design technique could place ~2800 kg into Enceladus orbit, compared to ~1100 kg from a direct Titan-Enceladus transfer. Moreover, the 2.5-year leveraging tour provides many low-speed, high science-value flybys of Rhea, Dione, and Tethys. This exciting…
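The quoted >3.5 km/s insertion cost for a direct Titan-Enceladus transfer can be roughly reproduced from the vis-viva equation. The sketch below uses assumed values for Saturn's GM and the moons' orbital radii, and ignores the moons' own gravity, eccentricities and inclinations:

```python
import math

GM_SATURN = 3.7931e7      # km^3/s^2, Saturn's gravitational parameter (assumed)
R_ENCELADUS = 2.3802e5    # km, Enceladus orbit radius (assumed)
R_TITAN = 1.22187e6       # km, Titan orbit radius (assumed)

def visviva(gm, r, a):
    """Orbital speed at radius r on an orbit of semi-major axis a."""
    return math.sqrt(gm * (2.0 / r - 1.0 / a))

# circular speed at Enceladus vs. periapsis speed of a Titan-to-Enceladus
# transfer ellipse: the difference approximates the direct insertion dV
v_circ = visviva(GM_SATURN, R_ENCELADUS, R_ENCELADUS)
a_transfer = 0.5 * (R_ENCELADUS + R_TITAN)
v_peri = visviva(GM_SATURN, R_ENCELADUS, a_transfer)
dv_insertion = v_peri - v_circ   # consistent with the ">3.5 km/s" quoted
```

This simple two-body estimate lands near 3.7 km/s, which is why the leveraging tour's ~1 km/s combined budget is such a large improvement.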

  6. Regression-based techniques for statistical decision making in single-case designs.

    PubMed

    Manolov, Rumen; Arnau, Jaume; Solanas, Antonio; Bono, Roser

    2010-11-01

    The present study evaluates the performance of four methods for estimating regression coefficients used to make statistical decisions about intervention effectiveness in single-case designs. Ordinary least squares estimation is compared to two correction techniques dealing with general trend and to a procedure that eliminates autocorrelation whenever it is present. Type I error rates and statistical power are studied for experimental conditions defined by the presence or absence of treatment effect (change in level or in slope), general trend, and serial dependence. The results show that empirical Type I error rates do not approach the nominal ones in the presence of autocorrelation or general trend when ordinary and generalized least squares are applied. The techniques controlling trend show lower false alarm rates but prove to be insufficiently sensitive to existing treatment effects. Consequently, the use of the statistical significance of the regression coefficients for detecting treatment effects is not recommended for short data series.
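The regression setup such studies apply to AB-phase data can be sketched as an OLS fit with level-change and slope-change terms; as the abstract notes, plain OLS ignores the autocorrelation that produces the inflated Type I error rates. A minimal Python sketch with hypothetical helper names:

```python
import numpy as np

def ab_effect_ols(y, n_a):
    """OLS estimates for a single-case AB design: regress the series on
    time, a phase dummy (level change), and phase-centred time within the
    B phase (slope change). Returns (intercept, trend, level change,
    slope change). Plain OLS: no correction for serial dependence.
    """
    n = len(y)
    t = np.arange(n)
    phase = (t >= n_a).astype(float)       # 0 in phase A, 1 in phase B
    slope_b = phase * (t - n_a)            # extra slope within phase B
    X = np.column_stack([np.ones(n), t, phase, slope_b])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

The statistical decision the abstract evaluates is whether the level-change and slope-change coefficients differ significantly from zero, which is unreliable for short autocorrelated series.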

  7. Two point microstructure sensitive design and experimental verification

    NASA Astrophysics Data System (ADS)

    Gao, Xiang

    Rectangular models of material microstructure are described by their 1- and 2-point (spatial) correlation statistics of placement of local state. It is illustrated that generalized 2-point Hashin-Shtrikman bounds for elastic stiffness can be obtained that are linear in components of the correlation statistics. The concept of an eigen-microstructure within the microstructure hull is introduced. A method is developed for generating a sequence of archetypes of eigen-microstructure, from the 2-point correlation statistics of local state, assuming that the 1-point statistics are stationary. The method is illustrated by a case study. Extension of the first-order theory of microstructure design to considerations of morphological texture is addressed. It is shown that the correlation functions can be expressed in terms of an intermediate construct, called the texture function; the correlation functions have quadratic dependence in the texture functions. A complete (finite) texture hull is readily constructed for the texture functions in Fourier space, and is found to be a convex polytope. Eigen-texture functions occupy its corner (extreme) points. This gives rise to (combined) properties closures, from which second-order microstructure design can proceed. This is demonstrated in a brief case study. Experimental methods are introduced for obtaining two-point microstructure pair correlation functions in polycrystalline material. A particular tessellation of the fundamental zone of Euler angle space is described; individual orientations of the data set are binned into discrete tesserae. Elementary relationships between the two-point pair correlation functions and the grain size distribution and coherence length are explored. The one- and two-point distributions of orientation were recovered for three textures (as-received stainless steel, as-received copper and copper with cube texture). Elastic bounds for these textures are calculated including one-point bounds and
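Two-point correlation statistics of the kind this work builds on are commonly estimated via FFT-based autocorrelation for a periodic two-phase microstructure; a minimal Python sketch (a standard estimator given as an assumed illustration, not necessarily the thesis' procedure):

```python
import numpy as np

def two_point_correlation(micro):
    """Two-point autocorrelation S2(r) of a periodic binary (0/1)
    microstructure via FFT: the probability that two points separated by
    lag vector r both fall in phase 1. S2 at zero lag equals the phase-1
    volume fraction (the 1-point statistic).
    """
    f = np.fft.fftn(micro)
    s2 = np.fft.ifftn(f * np.conj(f)).real / micro.size
    return s2
```

For multi-phase polycrystals the same computation is applied per pair of local states (e.g. orientation bins), which yields the pair correlation functions used to evaluate the Hashin-Shtrikman-type bounds.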

  8. Shedding light on the puzzle of drug-membrane interactions: Experimental techniques and molecular dynamics simulations.

    PubMed

    Lopes, Daniela; Jakobtorweihen, Sven; Nunes, Cláudia; Sarmento, Bruno; Reis, Salette

    2017-01-01

    Lipid membranes work as barriers, which leads to inevitable drug-membrane interactions in vivo. These interactions affect the pharmacokinetic properties of drugs, such as their diffusion, transport, distribution, and accumulation inside the membrane. Furthermore, these interactions also affect their pharmacodynamic properties with respect to both therapeutic and toxic effects. Experimental membrane models have been used to perform in vitro assessments of the effects of drugs on the biophysical properties of membranes by employing different experimental techniques. In in silico studies, molecular dynamics simulations have been used to provide new insights at an atomistic level, which enables the study of properties that are difficult or even impossible to measure experimentally. Each model and technique has its advantages and disadvantages. Hence, combining different models and techniques is necessary for a more reliable study. In this review, the theoretical backgrounds of these (in vitro and in silico) approaches are presented, followed by a discussion of the pharmacokinetic and pharmacodynamic properties of drugs that are related to their interactions with membranes. All approaches are discussed in parallel to provide a better connection between experimental and simulation studies. Finally, an overview of the molecular dynamics simulation studies used for drug-membrane interactions is provided. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    SciTech Connect

    Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E

    2011-03-01

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept’s core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data which are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment which will provide the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or to allow savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself because, with improved data, the reactor concept can be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. The representativity of the experiment…

  10. Design review of the Brazilian Experimental Solar Telescope

    NASA Astrophysics Data System (ADS)

    Dal Lago, A.; Vieira, L. E. A.; Albuquerque, B.; Castilho, B.; Guarnieri, F. L.; Cardoso, F. R.; Guerrero, G.; Rodríguez, J. M.; Santos, J.; Costa, J. E. R.; Palacios, J.; da Silva, L.; Alves, L. R.; Costa, L. L.; Sampaio, M.; Dias Silveira, M. V.; Domingues, M. O.; Rockenbach, M.; Aquino, M. C. O.; Soares, M. C. R.; Barbosa, M. J.; Mendes, O., Jr.; Jauer, P. R.; Branco, R.; Dallaqua, R.; Stekel, T. R. C.; Pinto, T. S. N.; Menconi, V. E.; Souza, V. M. C. E. S.; Gonzalez, W.; Rigozo, N.

    2015-12-01

    Brazil's National Institute for Space Research (INPE), in collaboration with the Engineering School of Lorena/University of São Paulo (EEL/USP), the Federal University of Minas Gerais (UFMG), and Brazil's National Laboratory for Astrophysics (LNA), is developing a solar vector magnetograph and visible-light imager to study solar processes through observations of the solar surface magnetic field. The Brazilian Experimental Solar Telescope is designed to obtain full-disk magnetic field and line-of-sight velocity observations in the photosphere. Here we discuss the system requirements and the first design review of the instrument. The instrument is composed of a Ritchey-Chrétien telescope with a 500 mm aperture and 4000 mm focal length. LCD polarization modulators will be employed for the polarization analysis and a tuning Fabry-Perot filter for the wavelength scanning near the Fe II 630.25 nm line. Two large field-of-view, high-resolution 5.5 megapixel sCMOS cameras will be employed as sensors. Additionally, we describe the project management and system engineering approaches employed in this project. As the magnetic field anchored at the solar surface produces most of the structures and energetic events in the upper solar atmosphere and significantly influences the heliosphere, the development of this instrument plays an important role in advancing scientific knowledge in this field. In particular, the Brazilian space weather program will benefit most from the development of this technology. We expect that this project will be the starting point for establishing a strong research program on solar physics in Brazil. Our main aim is to progressively acquire the know-how to build state-of-the-art solar vector magnetographs and visible-light imagers for space-based platforms.

  11. Conceptual design of fusion experimental reactor (FER/ITER)

    NASA Astrophysics Data System (ADS)

    Kimura, Haruyuki; Saigusa, Mikio; Saitoh, Yasushi

    1991-06-01

    The conceptual design of the ion cyclotron wave (ICW) system for the FER and the Japanese contribution to the conceptual design of the International Thermonuclear Experimental Reactor (ITER) ICW system are presented. The frequency range of the FER ICW system is 50-85 MHz, which covers 2ω_cT heating, current drive by transit time magnetic pumping (TTMP) and 2ω_cD heating. Physics analyses show that the FER and ITER ICW systems are suitable for central ion heating and burn control. The launching systems of the FER ICW system and the ITER high-frequency ICW system are characterized by an in-port plug and a ridged-waveguide-fed 5x4 phased loop array. The merits of these systems are that (1) a ceramic support is not necessary inside the cryostat and (2) remote maintenance of the front-end part of the launcher is relatively easy. The overall structure of the launching system is consistent with radiation shielding, cooling, pumping, tritium safety and remote maintenance. The launcher has an injection capability of 20 MW in the frequency range of 50-85 MHz with a separatrix-antenna distance of 15 cm and the steep scrape-off density profile of H-mode. The shape of the ridged waveguide is optimized with a finite element method to provide the desired frequency range and power handling capability. Matching between the current strap and the ridged waveguide is satisfactory. Thermal analysis of the Faraday shield shows that a high-electric-conductivity, low-Z material such as beryllium should be chosen for the protection tile of the Faraday shield. A thick Faraday shield is necessary to tolerate the electromagnetic forces during disruptions. R&D needs for the ITER/FER ICW systems are identified, and gains from JT-60/JT-60U ICRF experiments and operations are indicated in connection with them.

  12. Experimental demonstration of a damage detection technique for nonlinear hysteretic structures

    NASA Astrophysics Data System (ADS)

    Yang, Jann N.; Xia, Ye; Loh, Chin-Hsiung

    2011-04-01

Many civil and mechanical engineering structures exhibit nonlinear hysteretic behavior when subjected to dynamic loads, such as earthquakes. The modeling and identification of nonlinear hysteretic systems with stiffness and strength degradation is a practical but challenging problem encountered in the engineering field. A recently developed technique, referred to as the adaptive quadratic sum-square error with unknown inputs (AQSSE-UI), is capable of identifying time-dependent parameters of nonlinear hysteretic structures. In this paper, the AQSSE-UI technique is applied to the parametric identification of nonlinear hysteretic reinforced concrete structures with stiffness and strength degradation, and the performance of the AQSSE technique is demonstrated with experimental test data. A 1/3-scale 2-story RC frame was tested on the shake table at NCREE, Taiwan, and subjected to different levels of ground excitation in consecutive runs. The structure is first considered as an equivalent linear model with time-varying stiffness parameters, and the degradation of the stiffness parameters is tracked using the AQSSE-UI technique. The same RC frame is then considered as a nonlinear hysteretic model with inelastic hinges following the generalized Bouc-Wen model, and the time-varying nonlinear parameters are again identified using the AQSSE-UI technique. Experimental results demonstrate that the AQSSE technique is quite effective at tracking (i) the stiffness degradation of linear structures and (ii) nonlinear hysteretic parameters with stiffness and strength degradation.
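The generalized Bouc-Wen model mentioned above describes hysteresis through an internal restoring variable driven by the displacement rate. A minimal integration sketch is given below; the parameter values and sinusoidal displacement are illustrative, not taken from the paper.

```python
# Minimal Euler-integration sketch of a Bouc-Wen hysteresis law:
# dz/dt = dx/dt * (A - (beta*sgn(dx*z) + gamma) * |z|**n).
# Parameter values (A, beta, gamma, n) are illustrative only.
import math

def bouc_wen_response(A=1.0, beta=0.5, gamma=0.5, n=1.0,
                      dt=1e-3, t_end=20.0):
    """Integrate the hysteretic variable z for a sinusoidal displacement x."""
    t, x, z = 0.0, 0.0, 0.0
    history = []
    while t < t_end:
        x_new = math.sin(t + dt)
        dx = x_new - x
        sgn = 1.0 if dx * z >= 0 else -1.0
        dz = dx * (A - (beta * sgn + gamma) * abs(z) ** n)
        z += dz
        x = x_new
        t += dt
        history.append((x, z))
    return history

loop = bouc_wen_response()
z_max = max(abs(z) for _, z in loop)
# For A=1, beta=gamma=0.5, n=1 the restoring variable saturates below
# A/(beta+gamma) = 1, tracing the familiar hysteresis loop in (x, z).
```

In identification work such as the paper's, A, beta, gamma and n become the unknowns to be estimated from measured response data.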

  13. Problem Solving Techniques for the Design of Algorithms.

    ERIC Educational Resources Information Center

    Kant, Elaine; Newell, Allen

    1984-01-01

    Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…

  15. Experimental generation of longitudinally-modulated electron beams using an emittance exchange technique

    SciTech Connect

    Sun, Y.-E; Piot, P.; Johnson, A.; Lumpkin, A.; Maxwell, T.; Ruan, J.; Thurman-Keup, R.; /FERMILAB

    2010-08-01

    We report our experimental demonstration of longitudinal phase space modulation using a transverse-to-longitudinal emittance exchange technique. The experiment is carried out at the A0 photoinjector at Fermi National Accelerator Lab. A vertical multi-slit plate is inserted into the beamline prior to the emittance exchange, thus introducing beam horizontal profile modulation. After the emittance exchange, the longitudinal phase space coordinates (energy and time structures) of the beam are modulated accordingly. This is a clear demonstration of the transverse-to-longitudinal phase space exchange. In this paper, we present our experimental results on the measurement of energy profile as well as numerical simulations of the experiment.

  16. Plackett-Burman experimental design to facilitate syntactic foam development

    DOE PAGES

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; ...

    2015-09-14

The use of an eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.

  17. Plackett-Burman experimental design to facilitate syntactic foam development

    SciTech Connect

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; Cordes, Nikolaus L.; Welch, Cynthia F.; Torres, Joseph A.; Goodwin, Lynne A.; Pacheco, Robin M.; Sandoval, Cynthia W.

    2015-09-14

    The use of an eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.
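The eight-run Plackett–Burman design referred to above can be built from a standard cyclic construction: seven rows are cyclic shifts of the generator (+ + + − + − −) and an eighth row is all minuses. The sketch below shows the construction; assignment of factors to columns is arbitrary.

```python
# Sketch of the standard 8-run Plackett-Burman construction: the first
# seven rows are cyclic shifts of the generator (+ + + - + - -) and the
# eighth row is all -1. Each of the 7 columns screens one factor's
# main effect in only 8 runs.

GENERATOR = [1, 1, 1, -1, 1, -1, -1]

def plackett_burman_8():
    rows = [[GENERATOR[(i + j) % 7] for j in range(7)] for i in range(7)]
    rows.append([-1] * 7)
    return rows

design = plackett_burman_8()

def column(j):
    return [row[j] for row in design]

# Every column is balanced (four +1, four -1) and every pair of columns
# is orthogonal, so main effects are estimated independently.
```

This run economy (7 factors in 8 runs) is exactly what makes the method attractive for screening studies like the foam work above.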

  18. Sparsely sampling the sky: a Bayesian experimental design approach

    NASA Astrophysics Data System (ADS)

    Paykari, P.; Jaffe, A. H.

    2013-08-01

The next generation of galaxy surveys will observe millions of galaxies over large volumes of the Universe. These surveys are expensive both in time and cost, raising questions regarding the optimal investment of this time and money. In this work, we investigate criteria for selecting amongst observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. By making use of the principles of Bayesian experimental design, we investigate the advantages and disadvantages of sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45 per cent. Conversely, investing the same amount of time as the original DES to observe a sparser but larger area of sky, we can in fact constrain the parameters with errors reduced by 28 per cent.

  19. Optimal experimental design with the sigma point method.

    PubMed

    Schenkendorf, R; Kremling, A; Mangold, M

    2009-01-01

Using mathematical models for a quantitative description of dynamical systems requires the identification of uncertain parameters by minimising the difference between simulation and measurement. Owing to measurement noise, the estimated parameters possess an uncertainty expressed by their variances. To obtain highly predictive models, very precise parameters are needed. Optimal experimental design (OED), as a numerical optimisation method, is used to reduce the parameter uncertainty by minimising the parameter variances iteratively. A frequently applied method to define a cost function for OED is based on the inverse of the Fisher information matrix. This traditional method has at least two shortcomings for models that are nonlinear in their parameters: (i) it gives only a lower bound of the parameter variances and (ii) the bias of the estimator is neglected. Here, the authors show that by applying the sigma point (SP) method a better approximation of characteristic values of the parameter statistics can be obtained, which directly benefits OED. An additional advantage of the SP method is that it can also be used to investigate the influence of the parameter uncertainties on the simulation results. The SP method is demonstrated for the example of a widely used biological model.
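The sigma-point idea can be illustrated in one dimension: moments of a nonlinear transform of a Gaussian variable are approximated from a few deterministic sample points rather than a linearisation. The scaling κ = 2 and the quadratic test function below are illustrative choices, not taken from the paper.

```python
# One-dimensional sigma-point (unscented transform) sketch: approximate the
# mean and variance of f(theta) for theta ~ N(m, P) from three deterministic
# sample points. kappa = 2 matches Gaussian kurtosis in 1-D.
import math

def sigma_point_moments(f, m, P, kappa=2.0):
    n = 1  # dimension
    c = math.sqrt((n + kappa) * P)
    points = [m, m + c, m - c]
    weights = [kappa / (n + kappa), 0.5 / (n + kappa), 0.5 / (n + kappa)]
    values = [f(p) for p in points]
    mean = sum(w * v for w, v in zip(weights, values))
    var = sum(w * (v - mean) ** 2 for w, v in zip(weights, values))
    return mean, var

# For f(x) = x**2 with x ~ N(0, 1) the exact moments are mean 1, variance 2.
# The sigma-point estimate reproduces both, whereas a first-order
# linearisation around the mean would predict mean 0 (a biased estimate).
mean, var = sigma_point_moments(lambda x: x * x, 0.0, 1.0)
```

This captures the paper's point: for parameter-nonlinear models the SP approximation tracks the bias and spread that a Fisher-matrix linearisation misses.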

  20. Experimental Designs for Testing Differences in Survival Among Salmonid Populations.

    SciTech Connect

    Hoffman, Annette; Busack, Craig; Knudsen, Craig

    1994-11-01

The Yakima Fisheries Project (YFP) is a supplementation plan for enhancing salmon runs in the Yakima River basin. It is presumed that inadequate spawning and rearing habitat are limiting factors for the population abundance of spring chinook salmon (Oncorhynchus tshawytscha). Therefore, the supplementation effort for spring chinook salmon is focused on introducing hatchery-raised smolts into the basin to compensate for the lack of spawning habitat. However, based on empirical evidence in the Yakima basin, hatchery-reared salmon have survived poorly compared to wild salmon. Therefore, the YFP has proposed to alter the optimal conventional treatment (OCT), the state-of-the-art hatchery rearing method, to a new innovative treatment (NIT). The NIT is intended to produce hatchery fish that mimic wild fish and thereby enhance their survival over that of OCT fish. A limited application of the NIT (LNIT) has also been proposed to reduce the cost of applying the new treatment while retaining the benefits of increased survival. This research was conducted to test whether the uncertainty of the experimental design was within the limits specified by the Planning Status Report (PSR).

  1. Validation of a buffet meal design in an experimental restaurant.

    PubMed

    Allirot, Xavier; Saulais, Laure; Disse, Emmanuel; Roth, Hubert; Cazal, Camille; Laville, Martine

    2012-06-01

We assessed the reproducibility of intakes and meal mechanics parameters (cumulative energy intake (CEI), number of bites, bite rate, mean energy content per bite) during a buffet meal designed in a natural setting, and their sensitivity to food deprivation. Fourteen men were invited to three lunch sessions in an experimental restaurant. Subjects ate their regular breakfast before sessions A and B and skipped breakfast before session FAST. The same ad libitum buffet was offered each time. Energy intakes and meal mechanics were assessed by food weighing and video recording. Intrasubject reproducibility was evaluated by determining intraclass correlation coefficients (ICC). Mixed models were used to assess the effects of the sessions on CEI. We found good reproducibility between A and B for total energy (ICC=0.82), carbohydrate (ICC=0.83), lipid (ICC=0.81) and protein intake (ICC=0.79) and for the meal mechanics parameters. Total energy, lipid and carbohydrate intakes were higher in FAST than in A and B. CEI was found to be sensitive to differences in hunger level, while the other meal mechanics parameters were stable between sessions. In conclusion, a buffet meal in a normal eating environment is a valid tool for assessing the effects of interventions on intakes. Copyright © 2012 Elsevier Ltd. All rights reserved.
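The reproducibility analysis above rests on the intraclass correlation coefficient. A minimal one-way ANOVA version, ICC(1,1), can be sketched as follows; the exact ICC form used by the authors is not stated in the abstract, and the toy intake data are illustrative.

```python
# One-way ANOVA intraclass correlation ICC(1,1) sketch:
# ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB and MSW are the
# between- and within-subject mean squares and k is sessions per subject.
# Toy data: 3 subjects x 2 sessions with small within-subject variation.

def icc_oneway(data):
    n = len(data)            # number of subjects
    k = len(data[0])         # sessions per subject
    grand = sum(sum(row) for row in data) / (n * k)
    means = [sum(row) / k for row in data]
    ssb = k * sum((m - grand) ** 2 for m in means)
    ssw = sum((x - m) ** 2 for row, m in zip(data, means) for x in row)
    msb = ssb / (n - 1)
    msw = ssw / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Highly reproducible intakes (sessions nearly identical per subject)
# give an ICC close to 1, as in the study's 0.79-0.83 range.
icc = icc_oneway([[10.1, 9.9], [20.2, 19.8], [30.1, 29.9]])
```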

  2. Experimental verification of pulse-probing technique for improving phase coherence grating lobe suppression.

    PubMed

    Torbatian, Zahra; Adamson, Rob; Brown, Jeremy A

    2013-07-01

Fabrication of high-frequency phased-array ultrasound transducers is challenging because of the small element-to-element pitch required to avoid large grating lobes appearing in the field of view. Phase coherence imaging (PCI) was recently proposed as a highly effective technique to suppress grating lobes in large-pitch arrays for synthetic aperture beamforming. Our previous work proposed and theoretically validated a technique called pulse probing for improving grating lobe suppression when transmit beamforming is used with PCI. The present work reports the experimental verification of the proposed technique, in which the data were collected using a high-frequency ultrasound system and processed offline. The data were collected with a 50-MHz, 256-element, 1.26 λ-pitch linear array, of which only the central 64 elements were used as the full aperture while the beam was steered to various angles. By sending a defocused pulse, the PCI weighting factors could be calculated and subsequently applied to conventional transmit-receive beamforming. The experimental two-way radiation patterns showed that the grating lobe level was suppressed by approximately 40 dB using the proposed technique, consistent with theory. The suppression of overlapping grating lobes in reconstructed phased-array images of multiple wire phantoms in a water bath and tissue phantoms further validated the effectiveness of the proposed technique. The application of pulse probing along with PCI should simplify the fabrication of large-pitch phased arrays at high frequencies.

  3. Analytical and experimental evaluation of techniques for the fabrication of thermoplastic hologram storage devices

    NASA Technical Reports Server (NTRS)

    Rogers, J. W.

    1975-01-01

The results of an experimental investigation of recording information on thermoplastic are given, including a description of a typical fabrication configuration, the recording sequence, and the samples examined. There are basically three configurations that can be used for recording information on thermoplastic. The most popular technique uses a corona, which furnishes free charge; the necessary energy for deformation is derived from a charge layer atop the thermoplastic. The other two techniques simply use a dc potential in place of the corona for deformation energy.

  4. Solar Ion Sputter Deposition in the Lunar Regolith: Experimental Simulation Using Focused-Ion Beam Techniques

    NASA Technical Reports Server (NTRS)

    Christoffersen, R.; Rahman, Z.; Keller, L. P.

    2012-01-01

As regions of the lunar regolith undergo space weathering, their component grains develop compositionally and microstructurally complex outer coatings or "rims" ranging in thickness from a few tens to a few hundreds of nanometers. Rims on grains in the finest size fractions (e.g., <20 μm) of mature lunar regoliths contain optically active concentrations of nm-size metallic Fe spherules, or "nanophase Fe⁰", that redden and attenuate optical reflectance spectral features important in lunar remote sensing. Understanding the mechanisms of rim formation is therefore a key part of connecting the drivers of mineralogical and chemical change in the lunar regolith with how lunar terrains are observed to become space weathered from a remotely sensed point of view. As interpreted from analytical transmission electron microscope (TEM) studies, rims are produced by varying relative contributions from: 1) direct solar ion irradiation effects that amorphize or otherwise modify the outer surface of the original host grain, and 2) nanoscale, layer-like deposition of extrinsic material processed from the surrounding soil. This extrinsic, deposited material is the dominant physical host for nanophase Fe⁰ in the rims. An important lingering uncertainty is whether this deposited material condensed from regolith components locally vaporized in micrometeorite or larger impacts, or whether it formed as solar wind ions sputtered exposed soil and the sputtered material re-deposited on less exposed areas. Deciding which of these mechanisms is dominant, or possibly exclusive, has been hampered because there is an insufficient library of chemical and microstructural "fingerprints" to distinguish deposits produced by the two processes. Experimental sputter deposition and characterization studies relevant to rim formation have particularly lagged since the early post-Apollo experiments of Hapke and others, especially with regard to the application of TEM-based characterization techniques.

  5. Experimental design and Bayesian networks for enhancement of delta-endotoxin production by Bacillus thuringiensis.

    PubMed

    Ennouri, Karim; Ayed, Rayda Ben; Hassen, Hanen Ben; Mazzarello, Maura; Ottaviani, Ennio

    2015-12-01

Bacillus thuringiensis (Bt) is a Gram-positive bacterium. The entomopathogenic activity of Bt is related to the existence of the crystal consisting of protoxins, also called delta-endotoxins. In order to optimize and explain the production of delta-endotoxins by Bacillus thuringiensis kurstaki, we studied seven medium components: soybean meal, starch, KH₂PO₄, K₂HPO₄, FeSO₄, MnSO₄, and MgSO₄, and their relationships with the concentration of delta-endotoxins, using an experimental design (Plackett-Burman design) and Bayesian network modelling. The effects of the ingredients of the culture medium on delta-endotoxin production were estimated. The developed model showed that different medium components are important for the Bacillus thuringiensis fermentation. The most important factors influencing the production of delta-endotoxins are FeSO₄, K₂HPO₄, starch and soybean meal. Soybean meal, K₂HPO₄, KH₂PO₄ and starch showed a positive effect on delta-endotoxin production, whereas FeSO₄ and MnSO₄ had the opposite effect. The developed model, based on Bayesian techniques, can automatically learn models emerging from the data to serve in the prediction of delta-endotoxin concentrations. The model constructed in the present study implies that experimental design (Plackett-Burman design) combined with the Bayesian network method can be used to identify the variables affecting delta-endotoxin variation.

  6. Which is better for optimizing the biosorption process of lead - central composite design or the Taguchi technique?

    PubMed

    Azari, Ali; Mesdaghinia, Alireza; Ghanizadeh, Ghader; Masoumbeigi, Hossein; Pirsaheb, Meghdad; Ghafari, Hamid Reza; Khosravi, Touba; Sharafi, Kiomars

    2016-09-01

The aim of this study is to evaluate central composite design (CCD) and the Taguchi technique for the adsorption process. Contact time, initial concentration, and pH were selected as the variables, and the removal efficiency of Pb was chosen as the response. Face-centered CCD and the L9 orthogonal array were used for the experimental design. The results indicated that, at optimum conditions, the removal efficiency of Pb was 80%. The value of R² was greater than 0.95 for both the CCD and Taguchi techniques, which revealed that both techniques were suitable and in agreement with each other. Moreover, the analysis of variance (Prob > F < 0.05) showed an appropriate fit of the designated model to the experimental results. The ability to rank the contributing variables by their percentage contribution to the response quantity (Pb removal) makes the Taguchi model an appropriate method for examining the effectiveness of different factors. pH was evaluated as the most influential input factor, contributing 66.2% of Pb removal. The Taguchi technique was additionally confirmed by the three-dimensional contour plots of CCD. Consequently, the Taguchi method, with nine experimental runs and easy interaction plots, is an appropriate substitute for CCD for several chemical engineering applications.
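The L9 orthogonal array used above can be generated from arithmetic modulo 3; the construction below is a standard one (columns a, b, a+b, a+2b over GF(3)), and the mapping of factors to columns is an arbitrary illustrative choice.

```python
# Sketch of a Taguchi L9 orthogonal array: 9 runs, four 3-level factors.
# Columns are (a, b, (a+b) mod 3, (a+2b) mod 3); every pair of columns
# contains each of the nine level combinations exactly once, which is the
# orthogonality property that lets main effects be separated.
from itertools import product, combinations

def l9_array():
    return [(a, b, (a + b) % 3, (a + 2 * b) % 3)
            for a, b in product(range(3), repeat=2)]

runs = l9_array()

def is_orthogonal(runs):
    for j, k in combinations(range(4), 2):
        pairs = {(row[j], row[k]) for row in runs}
        if len(pairs) != 9:          # all 3x3 combinations must appear
            return False
    return True
```

Nine runs for four 3-level factors (versus 81 for a full factorial) is the run economy the abstract credits to the Taguchi approach.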

  7. Columbus meteoroid/debris protection study - Experimental simulation techniques and results

    NASA Astrophysics Data System (ADS)

    Schneider, E.; Kitta, K.; Stilp, A.; Lambert, M.; Reimerdes, H. G.

    1992-08-01

The methods and measurement techniques used in experimental simulations of micrometeoroid and space debris impacts on ESA's laboratory module Columbus are described. Experiments were carried out at the two-stage light-gas-gun acceleration facilities of the Ernst-Mach Institute. Results are presented for simulations of normal impacts on bumper systems, oblique impacts on dual bumper systems, impacts into cooled targets, impacts into pressurized targets, and planar impacts of low-density projectiles.

  8. An Investigation of Experimental Techniques for Obtaining Particulate Behavior in Metallized Solid Propellant Combustion

    DTIC Science & Technology

    1982-07-01

experimental techniques have been used: a. High speed cinematography for the observation of strand burners within a combustion bomb and a 2-D slab ... can help improve the latter. Other disadvantages of high speed cinematography include the image resolutions due to the optics employed and the film ... Translation of the detector was done manually by means of a micrometer. The glass beads were suspended in distilled water contained in a home

  9. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

Optimizing an experimental design is a compromise between maximizing the information we obtain about the target and limiting the cost of the experiment, under a wide range of constraints. We present a statistical algorithm for experiment design that combines linearized inverse theory and a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model we use represents a simple but realistic scenario in the context of CO2 sequestration, which motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on the target. In order to improve these results, we show how the combination of two OFs using a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments by exploring the influence of noise, specific site characteristics and its potential for reservoir monitoring.

  10. Experimental design for the evaluation of struvite sedimentation obtained from an ammonium concentrated wastewater.

    PubMed

    Castro, Samuel Rodrigues; Araújo, Mahira Adna Cota; Lange, Liséte Celina

    2013-01-01

Chemical precipitation of struvite as a technique for ammonium nitrogen (NH(4)-N) removal from concentrated wastewater has been shown to be an attractive alternative due to its high effectiveness, reaction rate, simplicity, environmental sustainability and, especially, the application potential of the generated solids for the fertilizer industry. The technique of experimental design was used to identify and evaluate the optimum conditions of the chemical precipitation reaction in a struvite sedimentation study. The preliminary tests were performed using synthetic effluent with a concentration of 500.0 mg N L(-1). The stoichiometric ratio Mg:NH(4):PO(4) of 1.5:1.0:1.25 and pH 8.5 were taken to be the optimum conditions, under which NH(4)-N removal of 98.6% was achieved with only a 10-min reaction time. This condition was used to evaluate struvite sedimentation from synthetic wastewaters, to check the optimum conditions obtained from the experimental design at different initial concentrations, 1,000 and 2,000 mg N L(-1). The results were typical of good zonal sedimentation and can be used in the scale-up of the system.
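For illustration, the reagent doses implied by the optimum molar ratio Mg:NH4:PO4 = 1.5:1.0:1.25 can be computed directly from the nitrogen concentration. The choice of magnesium and phosphate salts below is an assumption for the sketch, not stated in the abstract.

```python
# Illustrative dose calculation for the optimum molar ratio
# Mg:NH4:PO4 = 1.5 : 1.0 : 1.25 reported above. The salts chosen
# (MgCl2.6H2O and Na2HPO4) are assumptions made for this sketch.

N_MOLAR_MASS = 14.007          # g/mol, nitrogen (NH4-N is expressed as N)
MGCL2_6H2O = 203.30            # g/mol
NA2HPO4 = 141.96               # g/mol

def reagent_doses(nh4_n_mg_per_l, mg_ratio=1.5, po4_ratio=1.25):
    """Return per-litre reagent masses for a given NH4-N concentration."""
    n_mmol = nh4_n_mg_per_l / N_MOLAR_MASS      # mmol N per litre
    mg_mmol = mg_ratio * n_mmol
    po4_mmol = po4_ratio * n_mmol
    return {
        "N_mmol": n_mmol,
        "MgCl2_6H2O_mg": mg_mmol * MGCL2_6H2O,
        "Na2HPO4_mg": po4_mmol * NA2HPO4,
    }

doses = reagent_doses(500.0)   # the 500 mg N/L synthetic effluent case
```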

  11. Optimization of Experimental Design for Estimating Groundwater Pumping Using Model Reduction

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Cheng, W.; Yeh, W. W.

    2012-12-01

An optimal experimental design algorithm is developed to choose locations for a network of observation wells for estimating unknown groundwater pumping rates in a confined aquifer. The design problem can be expressed as an optimization problem that employs a maximal information criterion to choose among competing designs subject to specified design constraints. Because of the combinatorial search required, given a realistic, large-scale groundwater model the dimensionality of the optimal design problem becomes very large, and it can be difficult if not impossible to solve using mathematical programming techniques such as integer programming or the simplex method with relaxation. Global search techniques, such as Genetic Algorithms (GAs), can be used to solve this type of combinatorial optimization problem; however, because a GA requires an inordinately large number of calls to the groundwater model, this approach may still be infeasible for finding the optimal design in a realistic groundwater model. Proper Orthogonal Decomposition (POD) is therefore applied to the groundwater model to reduce the model space and thereby reduce the computational burden of solving the optimization problem. Results for a one-dimensional test case show identical results among GA, integer programming, and an exhaustive search, demonstrating that GA is a valid method for a global optimum search and has potential for solving large-scale optimal design problems. Additionally, the algorithm using GA with POD model reduction is several orders of magnitude faster than an algorithm employing GA without POD model reduction, in terms of the time required to find the optimal solution. The proposed methodology is being applied to a large-scale, real-world groundwater problem.
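A toy version of this combinatorial design problem is sketched below, with the "maximal information criterion" taken to be D-optimality (maximising det(JᵀJ) over candidate observation subsets). At this tiny scale exhaustive search suffices, which is precisely what becomes infeasible at the scale where the paper turns to GA and POD. The sensitivity matrix J is made up for illustration.

```python
# Toy D-optimal selection of observation locations: choose the subset of
# candidate rows of a sensitivity matrix J that maximises det(J_s^T J_s).
# Exhaustive search works at this scale; realistic groundwater models need
# global search (GA) plus model reduction (POD). J below is illustrative.
from itertools import combinations

J = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]

def gram_det(rows):
    # Two-parameter case: the Gram matrix G = J_s^T J_s is 2x2.
    g00 = sum(r[0] * r[0] for r in rows)
    g01 = sum(r[0] * r[1] for r in rows)
    g11 = sum(r[1] * r[1] for r in rows)
    return g00 * g11 - g01 * g01

def best_design(J, k):
    """Exhaustively find the k-row subset with maximal information."""
    return max(combinations(range(len(J)), k),
               key=lambda idx: gram_det([J[i] for i in idx]))

best = best_design(J, 2)
# The winning pair is the two rows with the largest, most independent
# sensitivities to the two unknown parameters.
```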

  12. Design and sparing techniques to meet specified performance life

    NASA Technical Reports Server (NTRS)

    Holstead, A. J., Jr.

    1969-01-01

The specified-performance-life technique starts with a general description of what is wanted, defines the operational needs in block-diagram form, and then defines the functional systems required. The technique is similar to a truncated reliability model, but the calculation is simplified by using a Poisson-distribution approach to failure probability.

  13. Experimental sensitivity analysis of a linearly stable thermoacoustic system via a pulsed forcing technique

    NASA Astrophysics Data System (ADS)

    Jamieson, Nicholas P.; Juniper, Matthew P.

    2017-09-01

In this paper, we present the results of an experimental sensitivity analysis on a vertical electrically heated Rijke tube. We examine the shift in linear decay rates and frequencies of thermoacoustic oscillations, with and without control devices. To measure the decay rate, we wait for the system to reach a steady state and then excite it with an acoustic pulse from a loudspeaker. We identify the range of amplitudes over which the amplitude decays exponentially with time. In this range, the rate of change of the amplitude is linearly proportional to the amplitude, and we calculate the constant of proportionality, the linear decay rate, which can be compared with model predictions. The aim of this work is (i) to improve the experimental techniques implemented by Rigas et al. (J Fluid Mech 787, 2016) and Jamieson et al. (Int J Spray Combust Dyn, 2016), using a technique inspired by Mejia et al. (Combust Flame 169:287-296, 2016), and (ii) to provide experimental data for future comparison with adjoint-based sensitivity analysis. Our experimental setup is automated, and we can obtain thousands of decay rates in one-twelfth of the time required by our previous method.
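Within the exponential regime the abstract describes, the decay rate falls out of a linear fit of log-amplitude against time. A sketch on synthetic data follows; the decay rate, amplitude and sampling values are made up, not the paper's measurements.

```python
# Sketch of linear decay-rate extraction: in the exponential regime the
# envelope obeys a(t) = a0 * exp(-sigma * t), so log a(t) is linear in t
# and the least-squares slope gives -sigma. Data below are synthetic.
import math

def fit_decay_rate(times, amplitudes):
    logs = [math.log(a) for a in amplitudes]
    n = len(times)
    t_mean = sum(times) / n
    l_mean = sum(logs) / n
    slope = (sum((t - t_mean) * (l - l_mean) for t, l in zip(times, logs))
             / sum((t - t_mean) ** 2 for t in times))
    return -slope   # linear decay rate sigma

# Synthetic pulse decay: sigma = 3.2 s^-1, sampled at 100 Hz for 1 s.
times = [i / 100.0 for i in range(100)]
amps = [0.8 * math.exp(-3.2 * t) for t in times]
sigma = fit_decay_rate(times, amps)
```

In practice the fit would be restricted to the amplitude window where the decay is verified to be exponential, as the authors describe.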

  14. Experimental study of liquid level gauge for liquid hydrogen using Helmholtz resonance technique

    NASA Astrophysics Data System (ADS)

    Nakano, Akihiro; Nishizu, Takahisa

    2016-07-01

The Helmholtz resonance technique was applied to a liquid level gauge for liquid hydrogen to confirm the applicability of the technique in the cryogenic industrial field. A specially designed liquid level gauge with a Helmholtz resonator driven by a small loudspeaker was installed in a glass cryostat. A swept-frequency signal was supplied to the loudspeaker, and the acoustic response was detected by measuring the electrical impedance of the loudspeaker's voice coil. The penetration depth obtained from the Helmholtz resonance frequency was compared with the true value read from a scale. In principle, the Helmholtz resonance technique can be used with liquid hydrogen; however, certain problems remain for practical applications. The applicability of the Helmholtz resonance technique to liquid hydrogen is discussed in this study.
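The physics behind such a gauge is the Helmholtz resonance relation f = (c/2π)·√(A/(V·L)): as liquid rises into the resonator the gas volume V shrinks and f rises, so the level can be inferred from the measured frequency. A sketch follows; the geometry and the sound speed in the vapour are made-up illustrative values, not the paper's.

```python
# Helmholtz-resonance level sketch: f = (c / (2*pi)) * sqrt(A / (V * L)),
# so the gas volume V (hence the liquid level) follows from the measured
# resonance frequency. All geometry and the vapour sound speed below are
# illustrative assumptions, not values from the paper.
import math

SOUND_SPEED = 355.0     # m/s, vapour above the liquid (illustrative)
NECK_AREA = 2.0e-5      # m^2, resonator neck cross-section
NECK_LENGTH = 0.02      # m, effective neck length (incl. end corrections)
CAVITY_AREA = 1.0e-3    # m^2, cavity cross-section
CAVITY_HEIGHT = 0.10    # m

def resonance_frequency(liquid_level):
    """Resonance frequency (Hz) for a given liquid level (m) in the cavity."""
    volume = CAVITY_AREA * (CAVITY_HEIGHT - liquid_level)
    return (SOUND_SPEED / (2 * math.pi)) * math.sqrt(
        NECK_AREA / (volume * NECK_LENGTH))

def level_from_frequency(f):
    """Invert the relation: recover the liquid level from a measured f."""
    volume = NECK_AREA * SOUND_SPEED ** 2 / (NECK_LENGTH * (2 * math.pi * f) ** 2)
    return CAVITY_HEIGHT - volume / CAVITY_AREA
```

Because f increases monotonically with level, a single swept-frequency measurement of the resonance suffices to read off the level.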

  15. Comparison of Quadrapolar™ radiofrequency lesions produced by standard versus modified technique: an experimental model.

    PubMed

    Safakish, Ramin

    2017-01-01

Lower back pain (LBP) is a global public health issue and is associated with substantial financial costs and loss of quality of life. Published statistics on the causes of back pain vary; the following estimate is the closest for our patient population: sacroiliac (SI) joint pain is responsible for LBP in 18%-30% of individuals with LBP. Quadrapolar™ radiofrequency ablation, which ablates the nerves of the SI joint using heat, is a commonly used treatment for SI joint pain. However, the standard Quadrapolar radiofrequency procedure is not always effective at ablating all the sensory nerves that cause the pain in the SI joint. One of its major limitations is that it produces small lesions, ~4 mm in diameter; smaller lesions increase the likelihood of failure to ablate all nociceptive input. In this study, we compare the standard Quadrapolar radiofrequency ablation technique to a modified Quadrapolar ablation technique that has produced improved patient outcomes in our clinic. The methodologies of the two techniques are compared, and results are presented from an experimental model comparing the lesion sizes produced by the two techniques. Taken together, the findings from this study suggest that the modified Quadrapolar technique provides longer-lasting relief for back pain caused by SI joint dysfunction. A randomized controlled clinical trial is the next step required to quantify the difference in symptom relief and quality of life produced by the two techniques.

  16. Experimental Design on Laminated Veneer Lumber Fiber Composite: Surface Enhancement

    NASA Astrophysics Data System (ADS)

    Meekum, U.; Mingmongkol, Y.

    2010-06-01

    Thick laminated veneer lumber (LVL) fibre-reinforced composites were constructed from perpendicularly alternated layers of peeled rubberwood. Glass woven fabric was laid between the layers, and native golden teak veneers were used as faces. An in-house epoxy formulation was employed as the wood adhesive. The hand lay-up laminate was cured at 150° C for 45 min, and the cut specimens were post-cured at 80° C for at least 5 hours. A 2k factorial design of experiments (DOE) was used to evaluate the parameters. Three parameters were analysed: silane content in the epoxy formulation (A), smoke treatment of the rubberwood surface (B), and anti-termite application on the wood surface (C). Both low and high levels were further subcategorised into two sub-levels. Flexural properties were the main response obtained. ANOVA analysis with a Pareto chart was employed, and the main effects plot was also examined. The results showed that the interaction between silane quantity and anti-termite treatment had a significant negative effect at the high level (AC+); conversely, the interaction between silane and smoke treatment had a significant positive effect at the high level (AB+). According to this work, the optimal settings to improve surface adhesion, and hence enhance flexural properties, were a high level of silane (15% by weight), a high level of smoked wood layers (8 out of 14 layers), and wood without anti-termite treatment. Further tests also revealed that the LVL composite had properties superior to those of solid wood but was slightly inferior in flexibility. The screw-withdrawal strength of the LVL was higher than that of solid wood, and it showed better resistance to moisture and termite attack than rubberwood.
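    The 2k factorial analysis above reduces to contrasts on coded ±1 columns: a main effect is the mean response at the high level minus the mean at the low level, and interactions use elementwise products of the coded columns. A minimal sketch; factor labels follow the abstract, but the response values are invented for illustration:

```python
import itertools
import statistics

# 2^3 full factorial in coded (-1/+1) units. Factor labels follow the abstract:
# A = silane content, B = smoke treatment, C = anti-termite application.
runs = list(itertools.product([-1, 1], repeat=3))  # (A, B, C) per run
responses = [62, 70, 65, 80, 60, 66, 61, 68]       # hypothetical flexural values

def effect(column):
    """Effect estimate: mean response at +1 minus mean response at -1."""
    hi = [y for x, y in zip(column, responses) if x == 1]
    lo = [y for x, y in zip(column, responses) if x == -1]
    return statistics.mean(hi) - statistics.mean(lo)

# Factor columns and interaction columns (products of coded levels).
A, B, C = (list(col) for col in zip(*runs))
AB = [a * b for a, b in zip(A, B)]
AC = [a * c for a, c in zip(A, C)]

effects = {"A": effect(A), "B": effect(B), "C": effect(C),
           "AB": effect(AB), "AC": effect(AC)}
```

    Ranking the absolute effect sizes is exactly what the Pareto chart in the abstract visualizes.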

  17. Design and experimental study on FBG hoop-strain sensor in pipeline monitoring

    NASA Astrophysics Data System (ADS)

    Ren, Liang; Jia, Zi-guang; Li, Hong-nan; Song, Gangbing

    2014-01-01

    Pipeline monitoring is an important task for the economic and safe operation of pipelines as well as for loss prevention and environmental protection. The circumferential strain is of particular significance in pipeline integrity monitoring. In this paper, an indirect pipeline corrosion monitoring method based on circumferential strain measurement is first proposed, with the main objective of designing a circumferential strain measuring device. Exploiting the unique advantages of optical fiber sensing, an FBG hoop-strain sensor was designed and encapsulated; its enhanced sensitivity mechanism for circumferential strain measurement and its manufacturing technique are detailed. An experimental study of the developed FBG hoop-strain sensor was conducted on a PVC model pipeline to investigate its characteristics, including reliability, together with some tentative dynamic tests. Results of the model tests show that the FBG hoop-strain sensor demonstrates good performance in circumferential strain measurement and can be considered a practical device for pipeline health monitoring.
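    An FBG reports strain through the shift of its Bragg wavelength, Δλ/λ ≈ (1 − p_e)·ε, with p_e the effective photoelastic coefficient (≈0.22 for silica fiber). A hedged sketch of the conversion, plus a thin-walled-pipe estimate relating hoop strain to internal pressure; all parameter values are hypothetical and the pressure formula is a uniaxial approximation that ignores Poisson coupling:

```python
def fbg_strain(wavelength_shift_nm, bragg_wavelength_nm, photoelastic_coeff=0.22):
    """Convert a Bragg-wavelength shift to axial strain:
    strain = d(lambda) / (lambda_B * (1 - p_e))."""
    return wavelength_shift_nm / (bragg_wavelength_nm * (1.0 - photoelastic_coeff))

def hoop_strain_to_pressure(hoop_strain, radius_m, wall_thickness_m, youngs_modulus_pa):
    """Thin-walled pipe: hoop stress = p*r/t, so p ~ E * strain * t / r."""
    return youngs_modulus_pa * hoop_strain * wall_thickness_m / radius_m
```

    Temperature cross-sensitivity is deliberately omitted here; a real gauge would compensate with a second, strain-isolated FBG.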

  18. Experimental comparison between speckle and grating-based imaging technique using synchrotron radiation X-rays.

    PubMed

    Kashyap, Yogesh; Wang, Hongchang; Sawhney, Kawal

    2016-08-08

    X-ray phase-contrast and dark-field imaging techniques provide important and complementary information that is inaccessible to conventional absorption-contrast imaging. Both grating-based imaging (GBI) and speckle-based imaging (SBI) are able to retrieve multi-modal images using synchrotron as well as lab-based sources. However, no systematic comparison has been made between the two techniques so far. We present an experimental comparison between the GBI and SBI techniques using a synchrotron radiation X-ray source. Apart from its simpler experimental setup, we find that SBI does not suffer from the issue of phase unwrapping, which can often be problematic for GBI. In addition, SBI is superior to GBI in that two orthogonal differential phase gradients can be extracted simultaneously from a one-dimensional scan. GBI, in turn, has less stringent requirements for detector pixel size and transverse coherence length when a second or third grating is used. This study provides a reference for choosing the most suitable technique for diverse imaging applications at synchrotron facilities.

  19. Optimization of model parameters and experimental designs with the Optimal Experimental Design Toolbox (v1.0) exemplified by sedimentation in salt marshes

    NASA Astrophysics Data System (ADS)

    Reimer, J.; Schuerch, M.; Slawig, T.

    2015-03-01

    The geosciences are a highly suitable field of application for optimizing model parameters and experimental designs, not least because large amounts of data are collected. In this paper, the weighted least squares estimator for optimizing model parameters is presented together with its asymptotic properties. A popular approach to optimizing experimental designs, called local optimal experimental design, is described together with a lesser-known approach that takes into account the potential nonlinearity of the model parameters. These two approaches have been combined with two methods to solve their underlying discrete optimization problem. All presented methods were implemented in an open-source MATLAB toolbox called the Optimal Experimental Design Toolbox, whose structure and application are described. In numerical experiments, the model parameters and experimental design were optimized using this toolbox. Two existing models for sediment concentration in seawater and sediment accretion on salt marshes of different complexity served as application examples. The advantages and disadvantages of the approaches were compared based on these models. Thanks to optimized experimental designs, the parameters of these models could be determined very accurately with significantly fewer measurements than with unoptimized experimental designs. The chosen optimization approach played a minor role in the accuracy; therefore, the approach with the least computational effort is recommended.
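    Local (linearized) D-optimal design maximizes det(XᵀX) of the model's sensitivity matrix, so that parameter confidence regions shrink for a fixed number of measurements. A minimal greedy sketch of the idea (this is not the toolbox's own algorithm; the quadratic model and candidate grid are invented for illustration):

```python
import numpy as np

def greedy_d_optimal(candidates, n_select):
    """Greedily pick rows of the candidate sensitivity matrix that maximize
    det(X^T X), the D-criterion for a linear model y = X @ theta."""
    chosen, remaining = [], list(range(len(candidates)))
    for _ in range(n_select):
        best_i, best_det = None, -1.0
        for i in remaining:
            X = candidates[chosen + [i]]
            # Tiny ridge keeps the determinant well-defined before X has full rank.
            d = np.linalg.det(X.T @ X + 1e-12 * np.eye(X.shape[1]))
            if d > best_det:
                best_i, best_det = i, d
        chosen.append(best_i)
        remaining.remove(best_i)
    return chosen

# Hypothetical setting: choose 3 measurement times for a quadratic model
# y = theta0 + theta1*t + theta2*t^2 from a grid of 21 candidate times.
t = np.linspace(-1, 1, 21)
X_cand = np.column_stack([np.ones_like(t), t, t * t])
picked = sorted(float(t[i]) for i in greedy_d_optimal(X_cand, 3))
```

    For this quadratic the greedy choice concentrates the measurements at the two endpoints and the center, matching the classical D-optimal design on an interval.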

  20. City Connects: Building an Argument for Effects on Student Achievement with a Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Walsh, Mary; Raczek, Anastasia; Sibley, Erin; Lee-St. John, Terrence; An, Chen; Akbayin, Bercem; Dearing, Eric; Foley, Claire

    2015-01-01

    While randomized experimental designs are the gold standard in education research concerned with causal inference, non-experimental designs are ubiquitous. For researchers who work with non-experimental data and are no less concerned with causal inference, the major problem is potential omitted-variable bias. In this presentation, the authors…

  1. Design of a digital compression technique for shuttle television

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Fultz, G.

    1976-01-01

    The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return-link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 MBPS and requires less than 60 watts of secondary power. For forward-link communications, a facsimile coding technique is recommended which provides high-resolution slow-scan television on a 144 KBPS channel. The onboard decoder requires about 19 watts of secondary power.
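    DPCM achieves compression by transmitting quantized prediction errors instead of raw pixel values, with encoder and decoder tracking the same reconstruction. A toy sketch of a nonadaptive predictor-plus-uniform-quantizer loop (not the flight design; the predictor choice and step size are illustrative only):

```python
def dpcm_encode(image, step=8):
    """Nonadaptive DPCM sketch: predict each pixel from its reconstructed left
    neighbor (first column: the pixel above; first pixel: mid-gray), then
    quantize the prediction error with a uniform quantizer of the given step.
    Returns the transmitted codes and the decoder-side reconstruction."""
    h, w = len(image), len(image[0])
    codes = [[0] * w for _ in range(h)]
    recon = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if c > 0:
                pred = recon[r][c - 1]
            elif r > 0:
                pred = recon[r - 1][c]
            else:
                pred = 128
            q = round((image[r][c] - pred) / step)   # quantized error symbol
            codes[r][c] = q
            recon[r][c] = max(0, min(255, pred + q * step))
    return codes, recon
```

    Because the predictor runs on the reconstructed values, the decoder can rebuild the identical image from the codes alone, and the error per pixel stays within about half a quantizer step.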

  2. Simulation verification techniques study: Simulation self test hardware design and techniques report

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The final results of the hardware verification task are presented. The basic objectives of the various subtasks are reviewed, along with the ground rules under which the overall task was conducted and which shaped the approach taken in deriving techniques for hardware self-test. The results of the first subtask, the definition of simulation hardware, are presented. The hardware definition is based primarily on a brief review of the simulator configurations anticipated for the shuttle training program. The results of the survey of current self-test techniques are presented. The data sources considered in the search for current techniques are reviewed, and the results of the survey are presented in terms of the specific types of tests of interest for training-simulator applications: readiness tests, fault-isolation tests, and incipient-fault detection techniques. The most applicable techniques were structured into software flows that are then referenced in discussions of techniques for specific subsystems.

  3. Is the linear modeling technique good enough for optimal form design? A comparison of quantitative analysis models.

    PubMed

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process.

  4. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    PubMed Central

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961
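    Quantification theory type I, the linear technique both versions of this record favor, is essentially multiple regression on dummy-coded categorical form elements: each category's coefficient scores that level's contribution to the product image. A minimal sketch with invented Kansei-style data (factor names, levels, and scores are all made up):

```python
import numpy as np

# Hypothetical Kansei-style data: each PDA sample is a combination of
# categorical form elements (body shape: 3 levels, screen style: 2 levels);
# the response is a mean semantic-differential score for one image word.
shapes = [0, 1, 2, 0, 1, 2, 0, 1]
screens = [0, 0, 0, 1, 1, 1, 1, 0]
scores = np.array([3.1, 4.2, 2.8, 3.9, 4.8, 3.5, 3.6, 4.0])

def dummy(levels, n_levels):
    """One-hot encode a categorical factor, dropping level 0 as the baseline."""
    return [[1.0 if x == k else 0.0 for k in range(1, n_levels)] for x in levels]

# QTTI reduces to least squares on an intercept plus dummy columns.
X = np.hstack([np.ones((len(scores), 1)),
               np.array(dummy(shapes, 3)),
               np.array(dummy(screens, 2))])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)

# Category scores (partial coefficients) rank each level's contribution to the
# product image; the optimal form takes the best-scoring level of each factor.
shape_scores = [0.0, coef[1], coef[2]]
best_shape = max(range(3), key=lambda k: shape_scores[k])
```

    Reading off the highest-scoring level per factor is how the optimal form combination is assembled in this family of methods.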

  5. Experimental research of the synthetic jet generator designs based on actuation of diaphragm with piezoelectric actuator

    NASA Astrophysics Data System (ADS)

    Rimasauskiene, R.; Matejka, M.; Ostachowicz, W.; Kurowski, M.; Malinowski, P.; Wandowski, T.; Rimasauskas, M.

    2015-01-01

    Experimental analyses of four in-house developed synthetic jet generator designs are presented in this paper. The main task of this work was to find the most appropriate design of the synthetic jet generator. Dynamic characteristics of the synthetic jet generator's diaphragm with piezoelectric material were measured using non-contact measuring equipment, a Polytec® PSV 400 laser vibrometer. Temperatures of the piezoelectric diaphragms working at resonance frequency were measured with a Fiber Bragg Grating (FBG) sensor. Experimental analysis of the synthetic jet generator's amplitude-frequency characteristics was performed using CTA (hot-wire anemometry) measuring techniques. A piezoelectric diaphragm 27 mm in diameter was excited by a sinusoidal voltage signal and fixed tightly inside the chamber of the synthetic jet generator. The number of synthetic jet generator orifices (1 or 3) and the cavity volume (cavity height varying from 0.5 mm to 1.5 mm) were changed. The highest synthetic jet velocity, 25 m/s, was obtained with the generator having a 0.5 mm cavity and one orifice (piezoelectric diaphragm resonance frequency 2.8 kHz). It can be concluded that this type of design is preferred in order to obtain the peak velocity of the synthetic jet.

  6. Design Techniques for Power-Aware Combinational Logic SER Mitigation

    NASA Astrophysics Data System (ADS)

    Mahatme, Nihaar N.

    approaches are invariably saddled with overheads in terms of area or speed and, more importantly, power. Thus, the cost of protecting combinational logic through power-hungry mitigation approaches can disrupt the power budget significantly. There is therefore a strong need to develop techniques that provide both power minimization and combinational logic soft-error mitigation. This dissertation advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft-error-resilient designs. Circuit as well as architectural approaches are employed to achieve this objective, and the advantages of cross-layer optimization for power and soft-error reliability are emphasized.

  7. Modeling NIF Experimental Designs with Adaptive Mesh Refinement and Lagrangian Hydrodynamics

    SciTech Connect

    Koniges, A E; Anderson, R W; Wang, P; Gunney, B N; Becker, R; Eder, D C; MacGowan, B J

    2005-08-31

    Incorporation of adaptive mesh refinement (AMR) into Lagrangian hydrodynamics algorithms allows for the creation of a highly powerful simulation tool effective for complex target designs with three-dimensional structure. We are developing an advanced modeling tool that includes AMR and traditional arbitrary Lagrangian-Eulerian (ALE) techniques. Our goal is the accurate prediction of vaporization, disintegration and fragmentation in National Ignition Facility (NIF) experimental target elements. Although our focus is on minimizing the generation of shrapnel in target designs and protecting the optics, the general techniques are applicable to modern advanced targets that include three-dimensional effects such as those associated with capsule fill tubes. Several essential computations in ordinary radiation hydrodynamics need to be redesigned in order to allow for AMR to work well with ALE, including algorithms associated with radiation transport. Additionally, for our goal of predicting fragmentation, we include elastic/plastic flow into our computations. We discuss the integration of these effects into a new ALE-AMR simulation code. Applications of this newly developed modeling tool as well as traditional ALE simulations in two and three dimensions are applied to NIF early-light target designs.

  8. Aberration Theory - A Spectrum Of Design Techniques For The Perplexed

    NASA Astrophysics Data System (ADS)

    Shafer, David

    1986-10-01

    The early medieval scholar Maimonides wrote a famous book called "Guide for the Perplexed", which explained various thorny philosophical and religious questions for the benefit of the puzzled novice. I wish I had had such a person to guide me when I first started a career in lens design. There the novice is often struck by how much of an "art" this endeavor is. The best bet, for a beginner with no experience, should be to turn to optical aberration theory - which, in principle, should explain much of what goes into designing an optical system. Unfortunately, this subject is usually presented in the form of proofs and derivations, with little time spent on the practical implications of aberration theory. Furthermore, a new generation of lens designers, who grew up with the computer, often consider aberration theory as an unnecessary relic from the past. My career, by contrast, is based on the conviction that using the results of aberration theory is the only intelligent way to design optical systems. Computers are an invaluable aide, but we must, ultimately, bite the bullet and think. Along these lines, I have given several papers over the last few years which deal directly with the philosophy of lens design; the kind of guides for the perplexed that I wished I had had from the start. These papers include: "Lens design on a desert island - A simple method of optical design", "A modular method of optical design", "Optical design with air lenses", "Optical design with 'phantom' aspherics", "Optical design methods: your head as a personal computer", "Aberration theory and the meaning of life", and a paper at Innsbruck - "Some interesting correspondences in aberration theory". In all cases, the emphasis is on using your head to think, and the computer to help you out with the numerical work and the "fine-tuning" of a design. To hope that the computer will do the thinking for you is folly. Solutions gained by this route rarely equal the results of an experienced and

  9. Application of Soft Computing Techniques to Experimental Space Plasma Turbulence Observations - Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Bates, I.; Lawton, A.; Breikin, T.; Dunlop, M.

    Space Systems Group, University of Sheffield, UK; Automatic Control and Systems Engineering, University of Sheffield, UK; Imperial College, London, UK. A Genetic Algorithm (GA) approach is presented to solve a problem in turbulent space plasma system modelling, in the form of Generalised Frequency Response Functions (GFRFs), using in-situ multi-satellite magnetic field measurements of the plasma turbulence. Soft computing techniques have been used for many years in industry for nonlinear system identification. These techniques approach the problem of understanding a system, e.g. a chemical plant or a jet engine, by selecting a model structure and fitting the parameters of the chosen model using measured inputs and outputs of the system; the fitted model can then be used to determine physical characteristics of the system. GAs are one such technique, providing essentially a series of solutions that evolve so as to improve the model. Experimental space plasma turbulence studies have benefited from these system identification techniques: multi-point satellite observations provide input and output measurements of the turbulent plasma system. In previous work it was found natural to fit parameters to GFRFs, which derive from the Volterra series and lead to quantitative measurements of linear wave-field growth and higher-order wave-wave interactions; those techniques were applied using a Least Squares (LS) parameter fit. Results using GAs are compared with results obtained from the LS approach.
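    A GA fits model parameters by evolving a population of candidates scored on prediction error over the measured input/output data. A toy stand-in for the idea (fitting two polynomial response coefficients rather than actual GFRFs; population size, operators, and all numbers are invented):

```python
import random

random.seed(1)

# Toy stand-in for GFRF parameter fitting: recover the linear and quadratic
# response coefficients (a, b) of y = a*u + b*u^2 from input/output data.
TRUE_A, TRUE_B = 2.0, -0.5
inputs = [i / 10 for i in range(-20, 21)]
outputs = [TRUE_A * u + TRUE_B * u * u for u in inputs]

def fitness(ind):
    """Negative sum of squared prediction errors (higher is better)."""
    a, b = ind
    return -sum((y - (a * u + b * u * u)) ** 2 for u, y in zip(inputs, outputs))

def evolve(pop_size=40, generations=60, sigma=0.2):
    """Truncation-selection GA with elitism, uniform crossover and
    Gaussian mutation."""
    pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 4]              # elites survive unchanged
        children = []
        while len(children) < pop_size - len(parents):
            p, q = random.sample(parents, 2)
            child = tuple(random.choice(genes) + random.gauss(0, sigma)
                          for genes in zip(p, q))  # crossover + mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

a_fit, b_fit = evolve()
```

    Unlike the LS fit, nothing here requires the model to be linear in its parameters, which is the usual argument for GAs in system identification.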

  10. Utilizing Project Management Techniques in the Design of Instructional Materials.

    ERIC Educational Resources Information Center

    Murphy, Charles

    1994-01-01

    Discussion of instructional design in large organizations highlights a project management approach. Topics addressed include the role of the instructional designer; project team selection; role of the team members; role of the project manager; focusing on what employees need to know; types of project teams; and monitoring time and responsibility.…

  11. Artificial Warming of Arctic Meadow under Pollution Stress: Experimental design

    NASA Astrophysics Data System (ADS)

    Moni, Christophe; Silvennoinen, Hanna; Fjelldal, Erling; Brenden, Marius; Kimball, Bruce; Rasse, Daniel

    2014-05-01

    Boreal and arctic terrestrial ecosystems are central to the climate change debate, notably because future warming is expected to be disproportionate compared to world averages. Likewise, greenhouse gas (GHG) release from terrestrial ecosystems exposed to climate warming is expected to be largest in the arctic. Arctic agriculture, in the form of cultivated grasslands, is a unique and economically relevant feature of Northern Norway (e.g. Finnmark Province). In Eastern Finnmark, these agro-ecosystems are under the additional stressor of heavy metal and sulfur pollution generated by the metal smelters of NW Russia. Warming and its interaction with heavy metal dynamics will influence meadow productivity, species composition and GHG emissions, as mediated by responses of soil microbial communities. Adaptation and mitigation measures will be needed. Biochar application, which immobilizes heavy metals, is a promising adaptation method to promote a positive growth response in arctic meadows exposed to a warming climate. In the MeadoWarm project we conduct an ecosystem warming experiment combined with biochar adaptation treatments in the heavy-metal-polluted meadows of Eastern Finnmark. In summary, the general objective of this study is twofold: 1) to determine the response of arctic agricultural ecosystems under environmental stress to increased temperatures, in terms of plant growth, soil organisms and GHG emissions, and 2) to determine whether biochar application can serve as a positive adaptation (plant growth) and mitigation (GHG emission) strategy for these ecosystems under warming conditions. Here, we present the experimental site and the designed open-field warming facility. The selected site is an arctic meadow located at the Svanhovd Research Station less than 10 km west of the Russian mining city of Nikel. A split-plot design with 5 replicates per treatment is used to test the effect of biochar amendment and a 3°C warming on the arctic meadow.
Ten circular

  12. Artificial tektites: an experimental technique for capturing the shapes of spinning drops

    NASA Astrophysics Data System (ADS)

    Baldwin, Kyle A.; Butler, Samuel L.; Hill, Richard J. A.

    2015-01-01

    Determining the shapes of a rotating liquid droplet bound by surface tension is an archetypal problem in the study of the equilibrium shapes of a spinning and charged droplet, a problem that unites models of the stability of the atomic nucleus with the shapes of astronomical-scale, gravitationally-bound masses. The shapes of highly deformed droplets and their stability must be calculated numerically. Although the accuracy of such models has increased with the use of progressively more sophisticated computational techniques and increases in computing power, direct experimental verification is still lacking. Here we present an experimental technique for making wax models of these shapes using diamagnetic levitation. The wax models resemble splash-form tektites, glassy stones formed from molten rock ejected from asteroid impacts. Many tektites have elongated or `dumb-bell' shapes due to their rotation mid-flight before solidification, just as we observe here. Measurements of the dimensions of our wax `artificial tektites' show good agreement with equilibrium shapes calculated by our numerical model, and with previous models. These wax models provide the first direct experimental validation for numerical models of the equilibrium shapes of spinning droplets, of importance to fundamental physics and also to studies of tektite formation.

  13. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    PubMed

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism, but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks for Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and a non-linear approach (S-criterion). Both approaches generate almost the same input mixture; however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further increased when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization makes it possible to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position-one-labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi
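    Multi-objective assessment of tracer mixtures reduces to keeping the Pareto-efficient trade-offs between estimation quality and cost. A sketch with invented (D-criterion, cost) numbers; the mixture labels echo the abstract, but the values are not from the study:

```python
def pareto_front(designs):
    """Keep tracer mixtures that are not dominated: a design dominates another
    if it is at least as good on both objectives (higher D-criterion value,
    lower cost) and strictly better on at least one."""
    front = []
    for q, c, name in designs:
        dominated = any(q2 >= q and c2 <= c and (q2 > q or c2 < c)
                        for q2, c2, _ in designs)
        if not dominated:
            front.append(name)
    return front

# Hypothetical (D-criterion, cost in $) pairs; labels echo the abstract but
# the numbers are invented for illustration.
candidates = [
    (0.90, 300, "100% 1,2-13C2 glucose"),
    (0.95, 420, "1,2-13C2 glucose + U-13C glutamine"),
    (0.95, 300, "1,2-13C2 glucose + 1-13C glutamine"),
    (0.60, 150, "U-13C glucose only"),
]
front = pareto_front(candidates)
```

    In this toy instance the equally accurate but cheaper glutamine mixture survives while the costlier one is dominated, mirroring the compromise the abstract describes.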

  14. NEW TECHNIQUE FOR OBESITY SURGERY: INTERNAL GASTRIC PLICATION TECHNIQUE USING INTRAGASTRIC SINGLE-PORT (IGS-IGP) IN EXPERIMENTAL MODEL.

    PubMed

    Müller, Verena; Fikatas, Panagiotis; Gül, Safak; Noesser, Maximilian; Fuehrer, Kirsten; Sauer, Igor; Pratschke, Johann; Zorron, Ricardo

    2017-01-01

    Bariatric surgery is currently the most effective method to ameliorate co-morbidities in morbidly obese patients with BMI over 35 kg/m2. Endoscopic techniques have been developed to treat patients with mild obesity and ameliorate comorbidities, but endoscopic skills are needed, besides the costs of the devices. To report a new technique for internal gastric plication using an intragastric single-port device in an experimental swine model. Twenty experiments using fresh pig cadaver stomachs in a laparoscopic trainer were performed. The procedure was performed as follows in ten pigs: 1) volume measurement; 2) insufflation of the stomach with CO2; 3) extroversion of the stomach through the simulator and installation of the single-port device (Gelpoint Applied Mini) through a gastrotomy close to the pylorus; 4) performance of four intragastric handsewn 4-point sutures with Prolene 2-0, from the gastric fundus to the antrum; 5) measurement of the residual volume. Sleeve gastrectomy was also performed in a further ten pigs, and pre- and post-procedure gastric volumes were measured. The internal gastric plication technique was performed successfully in the ten swine experiments. The mean procedure time was 27±4 min. It produced a mean gastric volume reduction of 51%, versus a mean of 90% for sleeve gastrectomy in this swine model. The internal gastric plication technique using an intragastric single-port device required few skills to perform, had low operative time and achieved a good reduction (51%) of gastric volume in an in vitro experimental model.

  15. A new strategy in drug design of Chinese medicine: theory, method and techniques.

    PubMed

    Yang, Hong-Jun; Shen, Dan; Xu, Hai-Yu; Lu, Peng

    2012-11-01

    The research and development (R&D) process of Chinese medicine, which is notably based on clinical application, is significantly different from that of chemical and biological medicines, which proceed from laboratory research to the clinic. Compound prescription is another distinctive characteristic. Therefore, according to the different R&D theories of Chinese and Western medicine, we put forward a new strategy for drug design in Chinese medicine which focuses on the "combination-activity relationship (CAR)", taking prescription discovery, component identification and formula optimization as three key points to identify drugs of high efficacy and low toxicity. The method of drug design of Chinese medicine includes: new prescription discovery based on clinical data and literature information, component identification based on computational and experimental research, and formula optimization based on system modeling. This paper puts forward the concept, research framework and techniques of drug design of Chinese medicine, which embody the R&D model of Chinese medicine, hoping to support the drug design of Chinese medicine theoretically and technologically.

  16. Design of a 3D Navigation Technique Supporting VR Interaction

    NASA Astrophysics Data System (ADS)

    Boudoin, Pierre; Otmane, Samir; Mallem, Malik

    2008-06-01

    Multimodality is a powerful paradigm for increasing the realism and ease of interaction in Virtual Environments (VEs). In particular, the search for new metaphors and techniques for 3D interaction adapted to the navigation task is an important stage in the realization of future 3D interaction systems that support multimodality, in order to increase efficiency and usability. In this paper we propose a new multimodal 3D interaction model called Fly Over, especially devoted to the navigation task. We present a qualitative comparison between Fly Over and a classical navigation technique called gaze-directed steering. Results from a preliminary evaluation on the IBISC semi-immersive Virtual Reality/Augmented Reality EVR@ platform show that Fly Over is a user-friendly and efficient navigation technique.

  17. Analytical and experimental studies of the helical magnetohydrodynamic thruster design

    SciTech Connect

    Gilbert, J.B. II; Lin, T.F.

    1994-12-31

    This paper describes the results of analytical and experimental studies of a helical magnetohydrodynamic (MHD) seawater thruster using an 8-Tesla (T) solenoid magnet. The application of this work is in marine vehicle propulsion. Analytical models are developed to predict the performance of the helical MHD thruster in a closed-loop condition. The analytical results are compared with experimental data, and good agreement is obtained.

  18. Orbital maneuvering vehicle thermal design and analysis techniques

    NASA Astrophysics Data System (ADS)

    Chapter, J.

    1986-06-01

    This paper describes the OMV thermal design that is required to maintain components within temperature limits for all mission phases. A key element of the OMV thermal design is a motorized thermal shade assembly that replaces the more conventional variable-conductance heat pipes or louvers. The thermal shade assembly covers the equipment-module radiator areas and, based upon radiator temperature input to the onboard computer, opens and closes the shade, varying the effective radiator area. Thermal analysis results for design verification are presented. Selected thermal analysis methods, including several unique subroutines, are discussed. A representation of enclosure Script F equations, in matrix form, is also included. Personal computer application to the development of the OMV thermal design is summarized.
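    The shade logic described amounts to a hysteresis (deadband) controller on radiator temperature: open to expose more radiator area when hot, close to retain heat when cold, and hold state in between to avoid chatter. An illustrative sketch with made-up setpoints (not the OMV flight values):

```python
def shade_command(radiator_temp_c, shade_open, t_open=10.0, t_close=0.0):
    """Hysteretic thermal-shade logic (setpoints are illustrative only):
    open above t_open to increase effective radiator area, close below
    t_close to retain heat, and hold the current state in the deadband."""
    if radiator_temp_c > t_open:
        return True     # command shade open
    if radiator_temp_c < t_close:
        return False    # command shade closed
    return shade_open   # deadband: keep current state
```

    The deadband between the two setpoints is what prevents the motor from cycling on small temperature fluctuations.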

  19. Cryogenic refractor design techniques. [for Infrared Astronomy Satellite

    NASA Technical Reports Server (NTRS)

    Darnell, R. J.

    1985-01-01

    The Infrared Astronomical Satellite (IRAS) was designed to operate at 2 K over the spectral range of 8 to 120 micrometers. The focal plane is approximately 2 by 3 inches in size and contains 62 individual field-stop apertures, each with its own field lens, one or more filters, and a detector. The design of the lenses involved a number of difficulties and challenges not usually encountered in optical design. The operating temperature is assumed during the design phase, which requires reliable information on dN/dT (index coefficient) for the materials. The optics and all supporting structures are then expanded to room temperature, which requires expansion-coefficient data on the various materials and meticulous attention to detail. The small size and dense packaging, as well as the high precision required, further contributed to the magnitude of the task.

  20. Dataflow Integration and Simulation Techniques for DSP System Design Tools

    DTIC Science & Technology

    2007-01-01

    Front-matter excerpt. Figures include: the ported SAR system in Ptolemy II; SAR simulation results in Ptolemy II and the Autocoding Toolset; and the DIF-to-C software synthesis framework. From the text: "...LabVIEW from National Instruments, and Ptolemy II from U.C. Berkeley, to name a few. In model-based design methodologies, design representations in terms

  1. Study of an experimental technique for application to structural dynamic problems

    NASA Technical Reports Server (NTRS)

    Snell, R. F.

    1973-01-01

    An experimental program was conducted to determine the feasibility of using subscale plastic models to determine the response of full-scale aerospace structural components to impulsive, pyrotechnic loadings. A monocoque cylinder was impulsively loaded around the circumference of one end, causing a compressive stress wave to propagate in the axial direction. The resulting structural responses of two configurations of the cylinder (with and without a cutout) were recorded by photoelasticity, strain gages, and accelerometers. A maximum dynamic stress concentration was photoelastically determined and the accelerations calculated from strain-gage data were in good agreement with those recorded by accelerometers. It is concluded that reliable, quantitative structural response data can be obtained by the experimental techniques described in this report.

  2. Experimental investigation of liquid films in gravity-driven flows with a simple visualization technique

    NASA Astrophysics Data System (ADS)

    Njifenju, A. Kevin; Bico, José; Andrès, Emmanuelle; Jenffer, P.; Fermigier, Marc

    2013-05-01

    A visualization technique based on light absorption is used to monitor the thickness profile of a liquid film flowing on an inclined plane with high spatial and temporal resolution. Surface waves are observed for a certain range of experimental parameters, as expected from the classical stability analysis of Benjamin (J Fluid Mech 2:554-574, 1957). The liquid films are found to be systematically thicker than predicted by Nusselt (Z Ver Dtsch Ing 60:541, 1916) for ideal viscous flows. We interpret this increase in thickness as a consequence of the propagation of waves on the films. The wave dynamics are in qualitative agreement with the asymptotic development of Anshus (Ind Eng Chem Fundam 11:502-508, 1972). Although the wavelength distribution is rather broad, space-time analysis indicates a well-defined phase velocity. Plotting the wave velocity against a corrected Reynolds number allows the experimental data to be superimposed onto a single master curve.

  3. Experimental verification of a computational technique for determining ground reactions in human bipedal stance.

    PubMed

    Audu, Musa L; Kirsch, Robert F; Triolo, Ronald J

    2007-01-01

    We have developed a three-dimensional (3D) biomechanical model of human standing that enables us to study the mechanisms of posture and balance simultaneously in various directions in space. Since the two feet are on the ground, the system forms a kinematically closed chain, which has redundancy problems that cannot be resolved using the laws of mechanics alone. We have developed a computational (optimization) technique that avoids the problems of the closed-chain formulation, giving users of such models the ability to predict joint moments and, potentially, muscle activations using more sophisticated musculoskeletal models. This paper describes the experimental verification of the computational technique, which estimates the ground reaction vector acting on an unconstrained foot while the other foot is attached to the ground, thus allowing human bipedal standing to be analyzed as an open-chain system. The computational approach was verified in terms of its ability to predict lower-extremity joint moments derived from inverse dynamic simulations performed on data acquired from four able-bodied volunteers standing in various postures on force platforms. Sensitivity analyses performed with model simulations indicated which ground reaction force (GRF) and center of pressure (COP) components were most critical for providing better estimates of the joint moments. Overall, the joint moments predicted by the optimization approach are strongly correlated with the joint moments computed using the experimentally measured GRF and COP (0.78 ≤ r² ≤ 0.99, median 0.96), with a best fit that was not statistically different from a straight line with unity slope (experimental = computational) for the postures of the four subjects examined. These results indicate that this model-based technique can be relied upon to produce reasonable and consistent estimates of the joint moments from the predicted GRF and COP for most standing postures.

  4. Study Design Rigor in Animal-Experimental Research Published in Anesthesia Journals.

    PubMed

    Hoerauf, Janine M; Moss, Angela F; Fernandez-Bustamante, Ana; Bartels, Karsten

    2017-02-08

    Lack of reproducibility of preclinical studies has been identified as an impediment to the translation of basic mechanistic research into effective clinical therapies. Indeed, the National Institutes of Health has revised its grant application process to require more rigorous study design, including sample size calculations, blinding procedures, and randomization steps. We hypothesized that the reporting of such metrics of study design rigor has increased over time for animal-experimental research published in anesthesia journals. PubMed was searched for animal-experimental studies published in 2005, 2010, and 2015 in primarily English-language anesthesia journals. A total of 1466 publications were graded on the reporting of sample size estimation, randomization, and blinding. The Cochran-Armitage test was used to assess linear trends over time for the primary outcome of whether or not a metric was reported. Interrater agreement for each of the 3 metrics (power, randomization, and blinding) was assessed using the weighted κ coefficient in a 10% random sample of articles rerated by a second investigator blinded to the ratings of the first. Reporting of all 3 metrics of experimental design rigor increased over time (2005 to 2010 to 2015): for power analysis, from 5% (27/516) to 12% (59/485) to 17% (77/465); for randomization, from 41% (213/516) to 50% (243/485) to 54% (253/465); and for blinding, from 26% (135/516) to 38% (186/485) to 47% (217/465). The weighted κ coefficients and their 98.3% confidence intervals indicate almost perfect agreement between the 2 raters beyond that which occurs by chance alone (power, 0.93 [0.85, 1.0]; randomization, 0.91 [0.85, 0.98]; blinding, 0.90 [0.84, 0.96]). Our hypothesis that reported metrics of rigor in animal-experimental studies in anesthesia journals have increased during the past decade was confirmed. More consistent reporting, or explicit justification for absence
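
    The trend analysis described above can be reproduced from the reported counts. Below is a minimal sketch of the Cochran-Armitage test for linear trend, applied to the reported power-analysis proportions (27/516, 59/485, 77/465); the equally spaced scores for the three publication years are an assumption of this sketch, not a detail stated in the abstract:

```python
import math

def cochran_armitage_z(events, totals, scores=(0, 1, 2)):
    """Cochran-Armitage test for linear trend in proportions.

    Returns the trend z-statistic; large positive values indicate an
    increasing trend across the ordered groups.
    """
    N = sum(totals)
    p_bar = sum(events) / N
    num = sum(s * (r - n * p_bar) for s, r, n in zip(scores, events, totals))
    s_sq = sum(n * s * s for s, n in zip(scores, totals))
    s_sum = sum(n * s for s, n in zip(scores, totals))
    var = p_bar * (1 - p_bar) * (s_sq - s_sum ** 2 / N)
    return num / math.sqrt(var)

# Reported power-analysis counts: 27/516 (2005), 59/485 (2010), 77/465 (2015)
z = cochran_armitage_z(events=[27, 59, 77], totals=[516, 485, 465])
print(round(z, 2))  # strongly positive: reporting increased over time
```

A z-statistic this large corresponds to a very small p-value, consistent with the paper's conclusion of a significant upward trend.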

  5. Experimental Technique and Assessment for Measuring the Convective Heat Transfer Coefficient from Natural Ice Accretions

    NASA Technical Reports Server (NTRS)

    Masiulaniec, K. Cyril; Vanfossen, G. James, Jr.; Dewitt, Kenneth J.; Dukhan, Nihad

    1995-01-01

    A technique was developed to cast frozen ice shapes that had been grown on a metal surface. This technique was applied to a series of ice shapes grown in the NASA Lewis Icing Research Tunnel on flat plates. Nine flat plates, 18 inches square, were obtained, from which aluminum castings were made that gave good ice-shape characterizations. Test strips taken from these plates were outfitted with heat flux gages such that, when placed in a dry wind tunnel, they can be used to experimentally map the convective heat transfer coefficient in the direction of flow from the roughened surfaces. The effects of both parallel and accelerating flow on the heat transfer coefficient will be studied. The smooth-plate model verification baseline data, as well as one ice-roughened test case, are presented.

  6. Dynamic Light Scattering Microscopy. A Novel Optical Technique to Image Submicroscopic Motions. II: Experimental Applications

    PubMed Central

    Dzakpasu, Rhonda; Axelrod, Daniel

    2004-01-01

    An experimental verification of an optical microscope technique for creating spatial map images of dynamically scattered light fluctuation decay rates is presented. The dynamic light scattering microscopy technique is demonstrated on polystyrene beads and living macrophage cells. With a slow progressive-scan charge-coupled device camera employed in a streak-like mode, rapid intensity fluctuations with timescales on the order of milliseconds can be recorded from these samples. From such streak images, the autocorrelation function of the fluctuations can be computed at each location in the sample. The characteristic decay times of the autocorrelation functions report the rates of motion of the scattering centers. These rates show reasonable agreement with theoretically expected values for known samples with a good signal/noise ratio. The rates can be used to construct an image-like spatial map of the rapidity of submicroscopic motions of scattering centers. PMID:15298931
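
    The core computation described here, estimating a characteristic decay time from the autocorrelation of an intensity trace, can be sketched on synthetic data. The AR(1) signal model and all parameters below are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic intensity trace: AR(1) process with known correlation time tau.
tau_true = 50.0                       # correlation time, in samples
a = np.exp(-1.0 / tau_true)
n = 200_000
noise = rng.standard_normal(n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + noise[i]

# Normalized autocorrelation function at lags 0..max_lag-1.
x = x - x.mean()
var = np.mean(x * x)
max_lag = 200
acf = np.array([1.0] + [np.mean(x[:-k] * x[k:]) / var for k in range(1, max_lag)])

# Characteristic decay time: first lag where the ACF drops below 1/e.
tau_est = int(np.argmax(acf < 1.0 / np.e))
print(tau_est)  # close to tau_true = 50
```

In the microscopy technique, this per-pixel decay time is what populates the spatial map of submicroscopic motion rates.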

  7. New experimental method for lidar overlap factor using a CCD side-scatter technique.

    PubMed

    Wang, Zhenzhu; Tao, Zongming; Liu, Dong; Wu, Decheng; Xie, Chenbo; Wang, Yingjian

    2015-04-15

    In theory, the lidar overlap factor can be derived from the difference between the particle backscatter coefficient retrieved from the lidar elastic signal without overlap correction and the actual particle backscatter coefficient obtained by other measurement techniques. The side-scatter technique using a CCD camera has been shown to be a powerful tool for detecting the particle backscatter coefficient in the near-ground layer at night. A new experimental approach to determining the overlap factor for a vertically pointing lidar is presented in this study, applicable to Mie lidars. The effect of the overlap factor on the Mie lidar signal is corrected by an iteration algorithm combining the particle backscatter coefficient retrieved with the CCD side-scatter method and the Fernald method. The method has been successfully applied to Mie lidar measurements during a routine campaign, and comparison of experimental results under different atmospheric conditions demonstrates that it is effective in practice.

  8. Comparison of Quadrapolar™ radiofrequency lesions produced by standard versus modified technique: an experimental model

    PubMed Central

    Safakish, Ramin

    2017-01-01

    Lower back pain (LBP) is a global public health issue and is associated with substantial financial costs and loss of quality of life. Over the years, the literature has provided varying statistics regarding the causes of back pain; the following statistic is the closest estimate for our patient population: the sacroiliac (SI) joint is responsible for LBP in 18%–30% of individuals with LBP. Quadrapolar™ radiofrequency ablation, which ablates the nerves of the SI joint using heat, is a commonly used treatment for SI joint pain. However, the standard Quadrapolar radiofrequency procedure is not always effective at ablating all the sensory nerves that cause the pain in the SI joint. One of its major limitations is that it produces small lesions, ~4 mm in diameter; smaller lesions increase the likelihood of failure to ablate all nociceptive input. In this study, we compare the standard Quadrapolar radiofrequency ablation technique to a modified Quadrapolar ablation technique that has produced improved patient outcomes in our clinic. The methodologies of the two techniques are compared, and we present results from an experimental model comparing the lesion sizes produced by the two techniques. Taken together, the findings from this study suggest that the modified Quadrapolar technique provides longer-lasting relief for back pain caused by SI joint dysfunction. A randomized controlled clinical trial is the next step required to quantify the difference in symptom relief and quality of life produced by the two techniques. PMID:28652802

  9. A new experimental device to evaluate eye ulcers using a multispectral electrical impedance technique

    NASA Astrophysics Data System (ADS)

    Bellotti, Mariela I.; Bast, Walter; Berra, Alejandro; Bonetto, Fabián J.

    2011-07-01

    We present a novel experimental technique to detect eye ulcers in animals using a multispectral electrical impedance technique. We expect that this technique will be useful in dry eye syndrome. We used a sensor that is basically a platinum (Pt) microelectrode electrically insulated by glass from a cylindrical stainless-steel counter-electrode. This sensor was applied to the naked eye of New Zealand rabbits (2.0-3.5 kg in weight). Half of the eyes were normal (control); to the remainder we applied a few drops of 20% (v/v) alcohol to produce an ulcer in the eye. Using a multispectral electrical impedance system we measured ulcerated and control eyes and observed significant differences between normal and pathological samples. We also investigated the effects of different applied pressures and the natural degradation of initially normal eyes as a function of time. We believe that this technique could be sufficiently sensitive and repeatable to help diagnose ocular surface diseases such as dry eye syndrome.

  10. Simulated and experimental technique optimization of dual-energy radiography: abdominal imaging applications

    NASA Astrophysics Data System (ADS)

    Sabol, John M.; Wheeldon, Samuel J.; Jabri, Kadri N.

    2006-03-01

    With growing clinical acceptance of dual-energy chest radiography, there is increased interest in the application of dual-energy techniques to other clinical areas. This paper describes the creation and experimental validation of a poly-energetic signal-propagation model for technique optimization of new dual-energy clinical applications. The model is verified using phantom experiments simulating typical abdominal radiographic applications such as Intravenous Urography (IVU) and the detection of pelvic and sacral bone lesions or kidney stones in the presence of bowel gas. The model is composed of a spectral signal-propagation component and an image-processing component. The spectral propagation component accepts detector specifications, X-ray spectra, phantom, and imaging geometry as inputs, and outputs the detected signal and estimated noise. The image-processing module performs dual-energy logarithmic subtraction and returns figures of merit such as contrast and contrast-to-noise ratio (CNR), which are evaluated in conjunction with Monte Carlo calculations of dose. Phantoms assembled from acrylic, aluminum, and iodinated contrast-agent-filled tubes were imaged using a range of kVp and dose levels. Simulated and experimental results were compared by dose, clinical suitability, and system limitations in order to yield technique recommendations that optimize one or more figures of merit. The model accurately describes phantom images obtained in a low-scatter environment. For the visualization of iodinated vessels in the abdomen and the detection of pelvic bone lesions, both simulated and experimental results indicate that dual-energy techniques recommended by the model yield significant improvements in CNR without significant increases in patient dose as compared to conventional techniques. For example, the CNR of iodinated vessels can be doubled using two-thirds of the dose of a standard exam.
Alternatively, in addition to a standard dose image, the clinician can
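
    The weighted logarithmic subtraction at the heart of dual-energy imaging can be sketched as follows. The attenuation coefficients and phantom geometry below are hypothetical illustrative values, not those of the study:

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm); illustrative only.
mu_t_lo, mu_t_hi = 0.25, 0.20   # soft tissue at low / high kVp
mu_b_lo, mu_b_hi = 0.60, 0.35   # bone at low / high kVp

# 1-D phantom: 20 cm of tissue everywhere, with a 1 cm bone insert in the middle.
t_tissue = np.full(100, 20.0)
t_bone = np.zeros(100)
t_bone[40:60] = 1.0

I0 = 1e6  # unattenuated detector signal
I_lo = I0 * np.exp(-mu_t_lo * t_tissue - mu_b_lo * t_bone)
I_hi = I0 * np.exp(-mu_t_hi * t_tissue - mu_b_hi * t_bone)

# Line integrals, then weighted log subtraction with w chosen to cancel tissue.
L_lo = -np.log(I_lo / I0)
L_hi = -np.log(I_hi / I0)
w = mu_t_lo / mu_t_hi
bone_image = L_lo - w * L_hi    # ~0 in tissue-only pixels, positive over bone

print(bone_image[0], bone_image[50])
```

With noise added to the two acquisitions, the CNR of `bone_image` becomes the figure of merit that the paper's model optimizes over kVp and dose.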

  11. Critical Zone Experimental Design to Assess Soil Processes and Function

    NASA Astrophysics Data System (ADS)

    Banwart, Steve

    2010-05-01

    The experimental design studies soil processes across the temporal evolution of the soil profile, from its formation on bare bedrock, through managed use as productive land, to its degradation under longstanding pressures from intensive land use. To understand this conceptual life cycle of soil, we have selected four European field sites as Critical Zone Observatories. These will provide data sets of soil parameters, processes, and functions to be incorporated into the mathematical models. The field sites are: 1) the BigLink field station, located in the chronosequence of the Damma Glacier forefield in alpine Switzerland and established to study the initial stages of soil development on bedrock; 2) the Lysina Catchment in the Czech Republic, representative of productive soils managed for intensive forestry; 3) the Fuchsenbigl Field Station in Austria, an agricultural research site representative of productive soils managed as arable land; and 4) the Koiliaris Catchment in Crete, Greece, which represents degraded Mediterranean-region soils, heavily impacted by centuries of intensive grazing and farming and under severe risk of desertification.

  12. SSSFD manipulator engineering using statistical experiment design techniques

    NASA Technical Reports Server (NTRS)

    Barnes, John

    1991-01-01

    The Satellite Servicer System Flight Demonstration (SSSFD) program is a series of Shuttle flights designed to verify major on-orbit satellite servicing capabilities, such as rendezvous and docking of free flyers, Orbital Replacement Unit (ORU) exchange, and fluid transfer. A major part of this system is the manipulator system that will perform the ORU exchange. The manipulator must possess adequate toolplate dexterity to maneuver a variety of EVA-type tools into position to interface with ORU fasteners, connectors, latches, and handles on the satellite, and to move workpieces and ORUs through 6-degree-of-freedom (dof) space from the Target Vehicle (TV) to the Support Module (SM) and back. Two cost-efficient tools were combined to perform a study of robot manipulator design parameters: graphical computer simulation and Taguchi design-of-experiment methods. Using a graphics platform, an off-the-shelf robot simulation software package, and an experiment designed with Taguchi's approach, the sensitivities of various manipulator kinematic design parameters to performance characteristics are determined at minimal cost.
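
    A Taguchi-style study of the kind described, an orthogonal array of simulation runs scored with a signal-to-noise ratio, can be sketched as follows; the L9 array is the standard one, but the dexterity scores are hypothetical placeholders for simulation outputs:

```python
import math

# Standard L9(3^4) orthogonal array: 9 runs, up to 4 three-level factors.
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

def sn_larger_is_better(ys):
    """Taguchi signal-to-noise ratio when larger responses are better."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# Hypothetical dexterity scores from 9 simulated manipulator runs.
scores = [8.2, 7.9, 6.5, 9.1, 7.0, 8.8, 6.9, 8.4, 7.6]
sn = [sn_larger_is_better([y]) for y in scores]

# Main effect of factor 0: mean S/N at each of its three levels.
effect = [sum(s for row, s in zip(L9, sn) if row[0] == lvl) / 3 for lvl in range(3)]
print(effect)  # pick the level with the highest mean S/N
```

Because the array is orthogonal, the main effect of each factor can be estimated from only 9 runs instead of the 81 a full factorial would need.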

  13. Laboratory prototype of cochlear implant: design and techniques.

    PubMed

    Ali, Hussnain; Ahmad, Talha J; Ajaz, Asim; Khan, Shoab A

    2009-01-01

    This paper presents a design overview of a low-cost cochlear implant prototype developed from commercial off-the-shelf components. The design scope includes a speech processing module implemented on a commercial digital signal processor, a transcutaneous data and power transceiver built from a single pair of inductive coils, and stimulator circuitry for cochlear stimulation. Different speech processing strategies, such as CIS, SMSP, and F0/F1, have been implemented and tested using a novel, indigenously developed speech processing research module that evaluates the performance of speech processing strategies in software, hardware, and practical scenarios. A design overview, simulations, and practical results of an optimized inductive link using a Class E power amplifier are presented. The link was designed at a carrier frequency of 2.5 MHz for 100 mW output power. The receiver logic and stimulator circuitry were implemented using a PIC microcontroller and off-the-shelf electronic components. Results indicate 40% link efficiency at a 128 kbps data transfer rate. This low-cost prototype can be used for cochlear implant research in laboratories.
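
    The envelope-extraction step common to CIS-style strategies can be sketched for a single synthetic channel; the sample rate, band, modulation, and pulse rate below are illustrative assumptions, not the prototype's actual parameters:

```python
import numpy as np

fs = 16_000                # sample rate, Hz (assumed)
t = np.arange(fs) / fs     # 1 s of signal

# One CIS channel, sketched: a band-limited signal is rectified and low-pass
# filtered, and the resulting envelope modulates a fixed-rate pulse train.
carrier = np.sin(2 * np.pi * 1000 * t)              # stand-in for one filter band
modulator = 1.0 + 0.5 * np.sin(2 * np.pi * 5 * t)   # known speech-like envelope
x = modulator * carrier

# Envelope: full-wave rectification + moving-average low-pass (~10 ms window).
win = fs // 100
env = np.convolve(np.abs(x), np.ones(win) / win, mode="same")

# Pulse train at an assumed 800 pulses/s, amplitude-modulated by the envelope.
idx = np.arange(0, len(x), fs // 800)
pulses = np.zeros_like(x)
pulses[idx] = env[idx]

# The recovered envelope should track the known modulator closely.
r = np.corrcoef(env, modulator)[0, 1]
print(round(r, 3))
```

In a full CIS implementation this runs in parallel across several bandpass channels, each driving one electrode with interleaved pulses.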

  14. Findings in Experimental Psychology as Functioning Principles of Theatrical Design.

    ERIC Educational Resources Information Center

    Caldwell, George

    A gestalt approach to theatrical design seems to provide some ready and stable explanations for a number of issues in the scenic arts. Gestalt serves as the theoretical base for a number of experiments in psychology whose findings appear to delineate the principles of art to be used in scene design. The fundamental notion of gestalt theory…

  15. The experimental design of the Missouri Ozark Forest Ecosystem Project

    Treesearch

    Steven L. Sheriff; Shuoqiong. He

    1997-01-01

    The Missouri Ozark Forest Ecosystem Project (MOFEP) is an experiment that examines the effects of three forest management practices on the forest community. MOFEP is designed as a randomized complete block design using nine sites divided into three blocks. Treatments of uneven-aged, even-aged, and no-harvest management were randomly assigned to sites within each block...
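
    The randomized complete block assignment described here can be sketched as follows; the site labels and block grouping are hypothetical stand-ins for the nine MOFEP sites:

```python
import random

random.seed(42)

sites = [f"site{i}" for i in range(1, 10)]        # 9 sites
blocks = [sites[0:3], sites[3:6], sites[6:9]]     # 3 blocks of 3 sites
treatments = ["uneven-aged", "even-aged", "no-harvest"]

# Randomized complete block design: each treatment appears exactly once per
# block, with the assignment randomized independently within each block.
assignment = {}
for block in blocks:
    shuffled = treatments[:]
    random.shuffle(shuffled)
    for site, trt in zip(block, shuffled):
        assignment[site] = trt

for site in sites:
    print(site, "->", assignment[site])
```

Blocking this way lets site-to-site variation within a block be separated from the treatment effect in the subsequent analysis of variance.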

  16. Return to Our Roots: Raising Radishes To Teach Experimental Design.

    ERIC Educational Resources Information Center

    Stallings, William M.

    To provide practice in making design decisions, collecting and analyzing data, and writing and documenting results, a professor of statistics has his graduate students in statistics and research methodology classes design and perform an experiment on the effects of fertilizers on the growth of radishes. This project has been required of students…

  17. C-MOS array design techniques: SUMC multiprocessor system study

    NASA Technical Reports Server (NTRS)

    Clapp, W. A.; Helbig, W. A.; Merriam, A. S.

    1972-01-01

    The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Three previous systems studies for a space ultrareliable modular computer multiprocessing system are evaluated, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through the I/O processors, and multiport access to multiple and shared memory units.

  18. Optimal experimental design for a nonlinear response in environmental toxicology.

    PubMed

    Wright, Stephen E; Bailer, A John

    2006-09-01

    A start-stop experiment in environmental toxicology provides a backdrop for this design discussion. The basic problem is to decide when to sample a nonlinear response in order to minimize the generalized variance of the estimated parameters. An easily coded heuristic optimization strategy can be applied to this problem to obtain optimal or nearly optimal designs. The efficiency of the heuristic approach allows a straightforward exploration of the sensitivity of the suggested design with respect to such problem-specific concerns as variance heterogeneity, time-grid resolution, design criteria, and interval specification of planning values for parameters. A second illustration of design optimization is briefly presented in the context of concentration spacing for a reproductive toxicity study.
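
    A heuristic search of the kind described, choosing sampling times to minimize the generalized variance (equivalently, maximize the determinant of the Fisher information matrix), can be sketched for an assumed exponential-decay response; the model, planning values, and time grid below are illustrative assumptions, not the paper's problem:

```python
import math
import random

# Assumed nonlinear mean response: y(t) = A * exp(-k * t), with planning values.
A, k = 10.0, 0.5

def fim_det(times):
    """Determinant of the 2x2 Fisher information matrix for (A, k),
    assuming independent, constant-variance errors."""
    m = [[0.0, 0.0], [0.0, 0.0]]
    for t in times:
        g = (math.exp(-k * t), -A * t * math.exp(-k * t))  # gradient wrt (A, k)
        for i in range(2):
            for j in range(2):
                m[i][j] += g[i] * g[j]
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Heuristic random search over 4 sampling times on a grid in [0, 10]:
# maximizing det(FIM) minimizes the generalized variance of the estimates.
grid = [i * 0.25 for i in range(41)]
random.seed(1)
naive = [0.0, 10.0 / 3, 20.0 / 3, 10.0]   # equally spaced baseline design
best, best_det = naive, fim_det(naive)
for _ in range(5000):
    cand = sorted(random.choices(grid, k=4))
    d = fim_det(cand)
    if d > best_det:
        best, best_det = cand, d

print(best, best_det >= fim_det(naive))  # True: at least as good as naive
```

Because the search starts from the equally spaced baseline, the returned design is guaranteed to be no worse than it, which mirrors the paper's point that a cheap heuristic makes sensitivity exploration straightforward.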

  19. The International Thermonuclear Experimental Reactor (ITER): Design and materials selection

    SciTech Connect

    Summers, L.T.; Miller, J.R.; Heim, J.R.

    1989-08-08

    The success of ITER relies on aggressive design of the superconducting magnet systems. This design emphasizes high radiation-damage tolerance, acceptance of high nuclear heat loads, and high operational stresses in the Toroidal Field (TF) magnets. The design of the Central Solenoid (CS) magnets, although they will be well shielded from the plasma, is equally aggressive because of the need for very high magnetic fields (14 T) and long-term operation at high cyclic stresses. Success of these magnet designs depends, in part, on sound selection and fabrication of materials for structural, superconducting, and insulating components. Here we review the design of ITER and the selection of structural materials for some of the systems that will operate at cryogenic temperatures. In addition, we introduce some of the data on which the materials selection is based and suggest opportunities for future research in support of ITER. 10 refs., 1 fig., 4 tabs.

  20. Robust control design techniques for active flutter suppression

    NASA Technical Reports Server (NTRS)

    Ozbay, Hitay; Bachmann, Glen R.

    1994-01-01

    In this paper, an active flutter suppression problem is studied for a thin airfoil in unsteady aerodynamics. The mathematical model of this system is infinite dimensional because of Theodorsen's function, which is irrational. Several second-order approximations of Theodorsen's function are compared, and a finite-dimensional model is obtained from such an approximation. We use H-infinity control techniques to find a robustly stabilizing controller for active flutter suppression.
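
    One widely cited second-order rational approximation of Theodorsen's function is R. T. Jones's. A minimal sketch (not necessarily one of the approximations compared in the paper) that checks the exact function's known limits:

```python
import cmath

def theodorsen_jones(k):
    """R. T. Jones's second-order rational approximation to Theodorsen's
    function C(k), evaluated at reduced frequency k > 0."""
    s = 1j * k   # nondimensional Laplace variable on the imaginary axis
    return 1.0 - 0.165 * s / (s + 0.0455) - 0.335 * s / (s + 0.300)

# Known limits of the exact Theodorsen function:
#   C(k) -> 1 as k -> 0 (quasi-steady) and C(k) -> 1/2 as k -> infinity.
print(abs(theodorsen_jones(1e-8) - 1.0) < 1e-6)   # True
print(abs(theodorsen_jones(1e8) - 0.5) < 1e-6)    # True
```

Being rational in s, an approximation of this form yields a finite-dimensional state-space model suitable for the robust-control synthesis the abstract describes.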

  1. Joint Tactics, Techniques, and Procedures for Laser Designation Operations

    DTIC Science & Technology

    2007-11-02

    Front-matter excerpt (Standard Form 298). Report date: 28 May 1999. 1. Scope: This publication provides joint tactics, techniques, and procedures for laser designation operations.

  2. EXPERIMENTAL STUDIES ON PARTICLE IMPACTION AND BOUNCE: EFFECTS OF SUBSTRATE DESIGN AND MATERIAL. (R825270)

    EPA Science Inventory

    This paper presents an experimental investigation of the effects of impaction substrate designs and material in reducing particle bounce and reentrainment. Particle collection without coating by using combinations of different impaction substrate designs and surface materials was...

  4. Music and video iconicity: theory and experimental design.

    PubMed

    Kendall, Roger A

    2005-01-01

    Experimental studies on the relationship between quasi-musical patterns and visual movement have largely focused either on referential, associative aspects or on syntactical, accent-oriented alignments. Both are very important; however, between the referential and the areferential lies a domain where visual pattern perceptually connects to musical pattern; this is iconicity. The temporal syntax of accent structures in iconicity is hypothesized to be important. Beyond that, a multidimensional visual space connects to musical patterning through the mapping of visual time/space onto musical time/magnitudes. Experimental visual and musical correlates are presented, and comparisons to previous research are provided.

  5. Advanced study techniques: tools for HVDC systems design

    SciTech Connect

    Degeneff, R.C.

    1984-01-01

    High voltage direct current (HVDC) transmission systems, which offer functional as well as environmental and economic advantages, could see a 15% growth rate over the next decade. Design studies of HVDC system components are complicated by the need to cover 11 major elements: power system, insulation coordination, filter design, subsynchronous torsional interaction, circuit breaker requirements, power line carrier and radio interference, electric fields and audible noise, protective relaying, availability and reliability, efficiency, equipment specification, and HVDC simulator and Transient Network Analyzers. The author summarizes and illustrates each element. 6 figures, 1 table.

  6. Design Techniques for Radiation Hardened Phase-Locked Loops

    DTIC Science & Technology

    2005-08-23

    Reference-list excerpt: papers by Nemmani, Vandepas, Ok, Brownlee, Mayaram, and Moon on radiation-hard PLL design tolerant to noise and process variations (CDADIC report, July 2004) and on characterization of 1.2 GHz phase-locked loops and voltage-controlled oscillators in a total dose radiation environment (Proceedings of the 2005 MAPLD International Conference, Sept. 2005).

  7. Respiratory protective device design using control system techniques

    NASA Technical Reports Server (NTRS)

    Burgess, W. A.; Yankovich, D.

    1972-01-01

    The feasibility of a control-system analysis approach to providing a design base for respiratory protective devices (RPDs) is considered. A system design approach requires that all functions and components of the system be identified mathematically in a model of the RPD, with the mathematical notation describing the operation of the components as closely as possible. The individual component descriptions are then combined to describe the complete RPD. Finally, analysis of the model using control system theory is used to derive compensating component values that force the system to operate in a stable and predictable manner.

  8. Time-Domain Optimal Experimental Design in Human Seated Postural Control Testing.

    PubMed

    Cody Priess, M; Choi, Jongeun; Radcliffe, Clark; Popovich, John M; Cholewicki, Jacek; Peter Reeves, N

    2015-05-01

    We are developing a series of systems science-based clinical tools that will assist in modeling, diagnosing, and quantifying postural control deficits in human subjects. In line with this goal, we have designed and constructed a seated balance device and an associated experimental task for identification of the human seated postural control system. In this work, we present a quadratic programming (QP) technique for optimizing a time-domain experimental input signal for this device. The goal of this optimization is to maximize the information content of the experiment, and therefore its ability to produce accurate estimates of several desired seated postural control parameters. To achieve this, we formulate the problem as a nonconvex QP and attempt to locally maximize a measure (the T-optimality condition) of the experiment's Fisher information matrix (FIM) under several constraints. These constraints include limits on the input amplitude, physiological output magnitude, subject control amplitude, and input signal autocorrelation. Because the autocorrelation constraint takes the form of a quadratic constraint (QC), we replace it with a conservative linear relaxation about a nominal point, which is iteratively updated during the course of optimization. We show that this iterative descent algorithm generates a convergent suboptimal solution that guarantees a monotonically nonincreasing cost-function value while satisfying all constraints across iterations. Finally, we present successful experimental results using an optimized input sequence.

  9. Experimental Guidelines for Studies Designed to Investigate the Impact of Antioxidant Supplementation on Exercise Performance

    PubMed Central

    Powers, Scott K.; Smuder, Ashley J.; Kavazis, Andreas N.; Hudson, Matthew B.

    2010-01-01

    Research interest in the effects of antioxidants on exercise-induced oxidative stress and human performance continues to grow as new scientists enter this field. Consequently, there is a need to establish an acceptable set of criteria for monitoring antioxidant capacity and oxidative damage in tissues. Numerous reports have described a wide range of assays to detect both antioxidant capacity and oxidative damage to biomolecules, but many techniques are not appropriate in all experimental conditions. Here, the authors present guidelines for selecting and interpreting methods that can be used by scientists to investigate the impact of antioxidants on both exercise performance and the redox status of tissues. Moreover, these guidelines will be useful for reviewers who are assigned the task of evaluating studies on this topic. The set of guidelines contained in this report is not designed to be a strict set of rules, because often the appropriate procedures depend on the question being addressed and the experimental model. Furthermore, because no individual assay is guaranteed to be the most appropriate in every experimental situation, the authors strongly recommend using multiple assays to verify a change in biomarkers of oxidative stress or redox balance. PMID:20190346

  10. Parameter Space Techniques for Robust Control System Design.

    DTIC Science & Technology

    1980-07-01

    been further investigated by Cruz [2] and Desoer and Wang [3]. In frequency design methods the concept to compensate the loop, such that high gains...of Feedback Systems, McGraw-Hill, New York, 1972. 3. C. A. Desoer and Y. T. Wang, "Foundations of Feedback Theory for Nonlinear Dynamical Systems

  11. Teaching by Design: Tools and Techniques to Improve Instruction

    ERIC Educational Resources Information Center

    Burke, Jim

    2015-01-01

    The Common Core State Standards (CCSS) and other state standards have challenged teachers to rethink how they plan units and design their assignments within constraints of time and increasingly diverse classrooms. This article describes the author's efforts to create a coherent, useable set of tools to make his teaching at the unit and daily…

  12. Integration of Risk Management Techniques into Outdoor Adventure Program Design.

    ERIC Educational Resources Information Center

    Bruner, Eric V.

    This paper is designed to acquaint the outdoor professional with the risk management decision making process required for the operation and management of outdoor adventure activities. The document examines the programming implications of fear in adventure activities; the risk management process in adventure programming; a definition of an…

  14. Association mapping: critical considerations shift from genotyping to experimental design

    USDA-ARS?s Scientific Manuscript database

    The goal of many plant scientists’ research is to explain natural phenotypic variation in terms of simple changes in DNA sequence. Traditionally, linkage mapping has been the most commonly employed method to reach this goal: experimental crosses are made to generate a family with known relatedness ...

  15. Leveraging the Experimental Method to Inform Solar Cell Design

    ERIC Educational Resources Information Center

    Rose, Mary Annette; Ribblett, Jason W.; Hershberger, Heather Nicole

    2010-01-01

    In this article, the underlying logic of experimentation is exemplified within the context of a photoelectrical experiment for students taking a high school engineering, technology, or chemistry class. Students assume the role of photochemists as they plan, fabricate, and experiment with a solar cell made of copper and an aqueous solution of…

  17. An experimental investigation of the spray issued from a pMDI using laser diagnostic techniques.

    PubMed

    Dunbar, C A; Watkins, A P; Miller, J F

    1997-01-01

    This research investigated the spray issued from a pressurised metered-dose inhaler (pMDI) using laser diagnostic techniques, motivated by the urgent need to find suitable replacements for the environmentally destructive CFC propellants currently used in the device. The experimental work was conducted using phase-Doppler particle analysis (PDPA), a single particle light scattering technique that provides the simultaneous measurement of drop size, velocity, and concentration, yielding the most detailed temporal and spatial analysis of the pMDI spray to date. Three formulations were studied to compare the performance of an "ozone-friendly" hydrofluoroalkane propellant against that of a traditional CFC propellant mixture and a commercially available CFC formulation containing drug and surfactant. The PDPA analysis was complemented by a visual investigation of the near-orifice flow field using copper laser strobe microcinematography to obtain information on the primary atomization process of the pMDI. This work was conducted in parallel with the theoretical investigation of the spray issued from a pMDI.

  18. Experimental and Imaging Techniques for Examining Fibrin Clot Structures in Normal and Diseased States

    PubMed Central

    Fan, Natalie K.; Keegan, Philip M.; Platt, Manu O.; Averett, Rodney D.

    2015-01-01

    Fibrin is an extracellular matrix protein that is responsible for maintaining the structural integrity of blood clots. Much research has been done on fibrin in recent years, including investigations of clot synthesis, structure-function relationships, and lysis. However, much remains unknown about the morphological and structural features of clots that ensue from patients with disease. In this research study, experimental techniques are presented that allow for the examination of morphological differences of abnormal clot structures due to diseased states such as diabetes and sickle cell anemia. Our study focuses on the preparation and evaluation of fibrin clots in order to assess morphological differences using various experimental assays and confocal microscopy. In addition, a method is also described that allows for continuous, real-time calculation of lysis rates in fibrin clots. The techniques described herein are important for researchers and clinicians seeking to elucidate comorbid thrombotic pathologies such as myocardial infarctions, ischemic heart disease, and strokes in patients with diabetes or sickle cell disease. PMID:25867016

  19. Application of Monte Carlo technique to time-resolved transillumination: a comparison with experimental data

    NASA Astrophysics Data System (ADS)

    Scampoli, Paola; Curto, C. A.; Guida, Giovanni; Roberti, Giuseppe

    1998-01-01

    The growing number of laser applications in medicine and biology has led to renewed interest in the study of light transport in turbid media such as biological tissues. Monte Carlo techniques are among the most powerful methods for describing this kind of process. We have developed a FORTRAN90 code, running on an Alpha Vax AXP DEC 2100, to simulate the transport of a photon beam with a Gaussian temporal and spatial profile through a multilayered sample. The code provides the sample transmittance and reflectance (both time and space resolved) that can be compared to the experimental data. Monte Carlo calculations have been performed to simulate time-resolved transillumination through latex-water and intralipid-water solutions with optical properties similar to those of biological tissues. The comparison of Monte Carlo results with experimental data and with analytical solutions to the diffusion equation shows good agreement, suggesting that Monte Carlo techniques are indeed a powerful tool for predictions on light transport in turbid media.
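
    The core of such a photon-transport simulation is a simple random walk. The sketch below is far simpler than the code the abstract describes (no Gaussian beam profile, no layers, no time resolution): it estimates only the total transmittance of a single homogeneous slab with isotropic scattering, and the optical coefficients are illustrative tissue-like values, not measured ones.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical, tissue-like optical properties (units: 1/mm)
mu_a = 0.01       # absorption coefficient
mu_s = 1.0        # scattering coefficient
mu_t = mu_a + mu_s
thickness = 2.0   # slab thickness in mm
n_photons = 20000

transmitted = 0.0
for _ in range(n_photons):
    z = 0.0       # depth into the slab
    uz = 1.0      # direction cosine along z (photon launched normally)
    weight = 1.0  # photon packet weight (implicit-capture scheme)
    while True:
        # Sample a free path from the exponential distribution
        step = -np.log(1.0 - rng.random()) / mu_t
        z += uz * step
        if z >= thickness:        # photon exits the far side
            transmitted += weight
            break
        if z <= 0.0:              # photon exits back through the surface
            break
        weight *= mu_s / mu_t     # absorb a fraction of the weight
        if weight < 1e-4:         # crude termination (roulette omitted)
            break
        # Isotropic scattering: new z-direction cosine uniform on [-1, 1]
        uz = 2.0 * rng.random() - 1.0

print(f"estimated transmittance: {transmitted / n_photons:.3f}")
```

A production code would add Russian roulette, anisotropic phase functions, layer boundaries, and time-of-flight binning to produce the time-resolved quantities compared against experiment.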

  20. Propagation effects handbook for satellite systems design. A summary of propagation impairments on 10 to 100 GHz satellite links with techniques for system design

    NASA Technical Reports Server (NTRS)

    Ippolito, Louis J.

    1989-01-01

    The NASA Propagation Effects Handbook for Satellite Systems Design provides a systematic compilation of the major propagation effects experienced on space-Earth paths in the 10 to 100 GHz frequency band region. It provides both a detailed description of the propagation phenomena and a summary of the impact of each effect on communications system design and performance. Chapters 2 through 5 describe the propagation effects, prediction models, and available experimental databases. In Chapter 6, design techniques and prediction methods available for evaluating propagation effects on space-Earth communication systems are presented. Chapter 7 addresses the system design process, how propagation effects on system design and performance should be considered, and how they can be mitigated. Examples of operational and planned Ku, Ka, and EHF satellite communications systems are given.
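
    A representative calculation from this kind of link design work is the standard power-law model for specific rain attenuation, gamma = k * R**alpha (dB/km), combined with an effective path length. The coefficients and path length below are illustrative placeholders, not values from the handbook or from ITU-R coefficient tables, which depend on frequency and polarization.

```python
def rain_attenuation_db(rain_rate_mm_per_h: float,
                        path_km: float,
                        k: float,
                        alpha: float) -> float:
    """Total rain attenuation in dB over an effective path length,
    using the power-law specific-attenuation model gamma = k * R**alpha."""
    gamma = k * rain_rate_mm_per_h ** alpha   # specific attenuation, dB/km
    return gamma * path_km

# Example: hypothetical Ku-band-like coefficients, a 25 mm/h rain rate,
# and a 5 km effective slant-path length through rain.
print(rain_attenuation_db(rain_rate_mm_per_h=25.0, path_km=5.0,
                          k=0.03, alpha=1.1))
```

In an actual design, k and alpha would be read from frequency- and polarization-dependent tables, and the effective path length would itself come from a prediction model rather than being fixed.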