Science.gov

Sample records for experimental design techniques

  1. Image processing of correlated data by experimental design techniques

    SciTech Connect

    Stern, D.

    1987-01-01

New classes of algorithms are developed for processing two-dimensional image data embedded in correlated noise. The algorithms are based on modifications of standard analysis of variance (ANOVA) techniques that ensure their proper operation in dependent noise. The approach taken in the development of procedures is deductive. First, the theory of modified ANOVA (MANOVA) techniques involving one- and two-way layouts is considered for noise models with an autocorrelation matrix (ACM) formed by direct multiplication of row and column, or tensored, correlation matrices (TCM), stressing the special case of the first-order Markov process. Next, the techniques are generalized to include arbitrary wide-sense stationary (WSS) processes. This permits dealing with diagonal masks whose ACM has a general form even for TCM. As a further extension, the theory of Latin square (LS) masks is generalized to include dependent noise with TCM. This permits dealing with three different effects of m levels using only m^2 observations rather than m^3. Since replication of data is possible in many image-processing problems, the masking techniques are generalized to replicated data for which the replication is TCM dependent. For all procedures developed, algorithms are implemented which ensure real-time processing of images.
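The Latin-square masking idea (m effects from m^2 rather than m^3 observations) can be illustrated with a minimal sketch. This assumes independent noise and a simple cyclic square, unlike the TCM-dependent case the abstract addresses; all names and numbers are illustrative:

```python
import numpy as np

def latin_square(m):
    # cyclic m x m Latin square: cell (i, j) gets treatment (i + j) mod m,
    # so every treatment appears exactly once in each row and each column
    return (np.arange(m)[:, None] + np.arange(m)[None, :]) % m

def treatment_effects(y, L):
    # in the additive model y[i,j] = mu + row_i + col_j + treat_L[i,j] + e,
    # row and column effects average out over each treatment's m cells,
    # so m effects are estimated from m^2 (not m^3) observations
    m = y.shape[0]
    return np.array([y[L == k].mean() - y.mean() for k in range(m)])

rng = np.random.default_rng(0)
m = 4
L = latin_square(m)
treat = np.array([-1.5, -0.5, 0.5, 1.5])          # centered true effects
y = (5.0 + rng.normal(size=(m, 1))                # row effects
         + rng.normal(size=(1, m))                # column effects
         + treat[L]
         + 0.01 * rng.normal(size=(m, m)))        # white noise
est = treatment_effects(y, L)
```

Averaging over a treatment's cells cancels the row and column effects precisely because of the Latin-square balance.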

  2. Taking evolutionary circuit design from experimentation to implementation: some useful techniques and a silicon demonstration

    NASA Technical Reports Server (NTRS)

    Stoica, A.; Zebulum, R. S.; Guo, X.; Keymeulen, D.; Ferguson, M. I.; Duong, V.

    2004-01-01

Current techniques in the evolutionary synthesis of analogue and digital circuits designed at the transistor level have focused on achieving the desired functional response, without paying sufficient attention to issues needed for a practical implementation of the resulting solution. No silicon fabrication of circuits with topologies designed by evolution has been done before, leaving open questions on the feasibility of the evolutionary circuit design approach, as well as on how high-performance, robust, or portable such designs could be when implemented in hardware. It is argued that moving from evolutionary 'design-for-experimentation' to 'design-for-implementation' requires, beyond the inclusion in the fitness function of measures indicative of circuit evaluation factors such as power consumption and robustness to temperature variations, the addition of certain evaluation techniques that are not common in conventional design. Several such techniques that were found to be useful in evolving designs for implementation are presented; some are general, and some are particular to the problem domain of transistor-level logic design, used here as a target application. The example used here is a multifunction NAND/NOR logic gate circuit, for which evolution obtained a creative circuit topology more compact than what has been achieved by multiplexing a NAND and a NOR gate. The circuit was fabricated in a 0.5 µm CMOS technology, and silicon tests showed good correspondence with the simulations.

  3. Optimization and enhancement of soil bioremediation by composting using the experimental design technique.

    PubMed

    Sayara, Tahseen; Sarrà, Montserrat; Sánchez, Antoni

    2010-06-01

The objective of this study was the application of the experimental design technique to optimize the conditions for the bioremediation of contaminated soil by means of composting. A low-cost material, compost from the Organic Fraction of Municipal Solid Waste, was used as amendment, with pyrene as model pollutant. The effect of three factors was considered: pollutant concentration (0.1-2 g/kg), soil:compost mixing ratio (1:0.5-1:2 w/w) and compost stability measured as respiration index (0.78, 2.69 and 4.52 mg O2 g(-1) Organic Matter h(-1)). Stable compost permitted almost complete degradation of pyrene in a short time (10 days). Results indicated that compost stability is a key parameter to optimize PAH biodegradation. A factor analysis indicated that the optimal conditions for bioremediation after 10, 20 and 30 days of process were (1.4, 0.78, 1:1.4), (1.4, 2.18, 1:1.3) and (1.3, 2.18, 1:1.3) for concentration (g/kg), compost stability (mg O2 g(-1) Organic Matter h(-1)) and soil:compost mixing ratio, respectively.
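A three-factor layout like the one described can be enumerated as a full factorial design; a minimal sketch, with factor levels loosely taken from the ranges in the abstract (the level choices themselves are illustrative):

```python
from itertools import product

# candidate levels loosely based on the ranges reported in the abstract
concentration = [0.1, 1.0, 2.0]       # pyrene, g/kg
stability = [0.78, 2.69, 4.52]        # respiration index, mg O2 g^-1 OM h^-1
mix_ratio = [0.5, 1.0, 2.0]           # parts compost per part of soil

# full factorial: every combination of levels is one experimental run
design = list(product(concentration, stability, mix_ratio))
```

With three levels per factor this gives 3^3 = 27 runs; fractional or optimal designs reduce that count when full enumeration is too expensive.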

  4. Return to Our Roots: Raising Radishes to Teach Experimental Design. Methods and Techniques.

    ERIC Educational Resources Information Center

    Stallings, William M.

    1993-01-01

    Reviews research in teaching applied statistics. Concludes that students should analyze data from studies they have designed and conducted. Describes an activity in which students study germination and growth of radish seeds. Includes a table providing student instructions for both the experimental procedure and data analysis. (CFR)

  5. Experimental evaluation of shape memory alloy actuation technique in adaptive antenna design concepts

    NASA Technical Reports Server (NTRS)

    Kefauver, W. Neill; Carpenter, Bernie F.

    1994-01-01

    Creation of an antenna system that could autonomously adapt contours of reflecting surfaces to compensate for structural loads induced by a variable environment would maximize performance of space-based communication systems. Design of such a system requires the comprehensive development and integration of advanced actuator, sensor, and control technologies. As an initial step in this process, a test has been performed to assess the use of a shape memory alloy as a potential actuation technique. For this test, an existing, offset, cassegrain antenna system was retrofit with a subreflector equipped with shape memory alloy actuators for surface contour control. The impacts that the actuators had on both the subreflector contour and the antenna system patterns were measured. The results of this study indicate the potential for using shape memory alloy actuation techniques to adaptively control antenna performance; both variations in gain and beam steering capabilities were demonstrated. Future development effort is required to evolve this potential into a useful technology for satellite applications.

  6. Axisymmetric and non-axisymmetric exhaust jet induced effects on a V/STOL vehicle design. Part 3: Experimental technique

    NASA Technical Reports Server (NTRS)

    Schnell, W. C.

    1982-01-01

The jet-induced effects of several exhaust nozzle configurations (axisymmetric, and vectoring/modulating variants) on the aeropropulsive performance of a twin-engine V/STOL fighter design were determined. A 1/8-scale model was tested in an 11 ft transonic tunnel at static conditions and over a range of Mach numbers from 0.4 to 1.4. The experimental aspects of the static and wind-on programs are discussed. Jet-effects test techniques in general, flow-through balance calibrations and tare-force corrections, ASME nozzle thrust and mass-flow calibrations, and test problems and solutions are emphasized.

  7. Modern Experimental Techniques in Turbine Engine Testing

    NASA Technical Reports Server (NTRS)

    Lepicovsky, J.; Bruckner, R. J.; Bencic, T. J.; Braunscheidel, E. P.

    1996-01-01

    The paper describes application of two modern experimental techniques, thin-film thermocouples and pressure sensitive paint, to measurement in turbine engine components. A growing trend of using computational codes in turbomachinery design and development requires experimental techniques to refocus from overall performance testing to acquisition of detailed data on flow and heat transfer physics to validate these codes for design applications. The discussed experimental techniques satisfy this shift in focus. Both techniques are nonintrusive in practical terms. The thin-film thermocouple technique improves accuracy of surface temperature and heat transfer measurements. The pressure sensitive paint technique supplies areal surface pressure data rather than discrete point values only. The paper summarizes our experience with these techniques and suggests improvements to ease the application of these techniques for future turbomachinery research and code verifications.

  8. Integrated Bayesian Experimental Design

    NASA Astrophysics Data System (ADS)

    Fischer, R.; Dreier, H.; Dinklage, A.; Kurzan, B.; Pasch, E.

    2005-11-01

Any scientist planning experiments wants to optimize the design of a future experiment with respect to best performance within the scheduled experimental scenarios. Bayesian Experimental Design (BED) aims at finding optimal experimental settings based on an information-theoretic utility function. Optimal design parameters are found by maximizing an expected utility function in which the future data and the parameters of the physical scenarios of interest are marginalized. The goal of the Integrated Bayesian Experimental Design (IBED) concept is to combine experiments as early as the design phase to mutually exploit the benefits of the other experiments. The Bayesian Integrated Data Analysis (IDA) concept of linking interdependent measurements to provide a validated data base and to exploit synergistic effects will be used to design meta-diagnostics. An example is given by the Thomson scattering (TS) and interferometry (IF) diagnostics individually, and by a set of both. In finding the optimal experimental design for the meta-diagnostic, TS and IF, the strengths of both experiments can be combined to synergistically increase the reliability of results.
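The expected-utility maximization at the heart of BED can be sketched for a toy linear-Gaussian model. This has nothing to do with the Thomson-scattering/interferometry setting of the abstract; the model, numbers, and names are purely illustrative:

```python
import math

def expected_info_gain(d, prior_var=1.0, noise_var=1.0):
    # toy model y = d*theta + e with Gaussian prior on theta and Gaussian
    # noise e: the posterior variance does not depend on the observed y,
    # so the expected KL divergence from prior to posterior (the expected
    # information-gain utility) has a closed form
    post_var = 1.0 / (1.0 / prior_var + d * d / noise_var)
    return 0.5 * math.log(prior_var / post_var)

# optimal design: the candidate setting maximizing the expected utility
candidates = [0.1, 0.5, 1.0, 2.0]
best = max(candidates, key=expected_info_gain)
```

In realistic nonlinear models the marginalization over future data has no closed form and is done by Monte Carlo, but the structure (maximize an expected utility over design settings) is the same.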

  9. Novel/experimental bariatric techniques.

    PubMed

    Thorell, Anders

    2014-01-01

    Due to the documented effects regarding durable and pronounced weight loss as well as improvement/resolution of obesity-associated morbidity, the number of bariatric surgical procedures performed has increased in an 'epidemiologic' fashion during the last decade. Most common/established procedures used today have well-documented effects but are all associated with technique-specific advantages as well as shortcomings. In particular, complications in the short as well as long term constitute a drive for continuous development of new techniques. A common feature of such new techniques is to reduce the degree of surgical trauma by being less invasive. Some of the new techniques used for bariatric treatment have been in clinical practice for a long time. However, due to the lack of controlled data with documentation of their efficacy and risk of complications, these are still to be considered as experimental. Other techniques are newly being introduced, and therefore data on their potential use for treatment of morbidly obese patients are limited. In this article, an overview of some of the most important of such new techniques is given. Some recently presented methodologies in which very sparse documentation is present, but which have been appreciated for being innovative and sometimes controversial, are also mentioned. PMID:24819498

  10. Teaching experimental design.

    PubMed

    Fry, Derek J

    2014-01-01

    Awareness of poor design and published concerns over study quality stimulated the development of courses on experimental design intended to improve matters. This article describes some of the thinking behind these courses and how the topics can be presented in a variety of formats. The premises are that education in experimental design should be undertaken with an awareness of educational principles, of how adults learn, and of the particular topics in the subject that need emphasis. For those using laboratory animals, it should include ethical considerations, particularly severity issues, and accommodate learners not confident with mathematics. Basic principles, explanation of fully randomized, randomized block, and factorial designs, and discussion of how to size an experiment form the minimum set of topics. A problem-solving approach can help develop the skills of deciding what are correct experimental units and suitable controls in different experimental scenarios, identifying when an experiment has not been properly randomized or blinded, and selecting the most efficient design for particular experimental situations. Content, pace, and presentation should suit the audience and time available, and variety both within a presentation and in ways of interacting with those being taught is likely to be effective. Details are given of a three-day course based on these ideas, which has been rated informative, educational, and enjoyable, and can form a postgraduate module. It has oral presentations reinforced by group exercises and discussions based on realistic problems, and computer exercises which include some analysis. Other case studies consider a half-day format and a module for animal technicians. PMID:25541547
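One of the minimum topics named above, the randomized block design, can be sketched in a few lines (treatment and block names are invented for illustration):

```python
import random

def randomized_block(treatments, blocks, seed=0):
    # every block receives each treatment exactly once, in an
    # independently shuffled order: block effects cancel out of
    # treatment comparisons while order stays randomized
    rng = random.Random(seed)
    return {b: rng.sample(treatments, k=len(treatments)) for b in blocks}

layout = randomized_block(["control", "low_dose", "high_dose"],
                          ["cage_rack_1", "cage_rack_2"])
```

Exercises built around layouts like this let learners check for themselves when an experiment has, or has not, been properly randomized.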

  12. The role of chemometrics in single and sequential extraction assays: a review. Part II. Cluster analysis, multiple linear regression, mixture resolution, experimental design and other techniques.

    PubMed

    Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo

    2011-03-01

Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, like soils, sediments, sludge, and airborne particulate matter. In the first part of this review we gave an overview of these procedures and described the applications of chemometric uni- and bivariate techniques and of multivariate pattern recognition techniques based on variable reduction to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; other multivariate techniques less commonly applied. PMID:21334477
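Of the techniques listed, multiple linear regression is the simplest to sketch; a minimal least-squares fit on synthetic data (the variable names and data are illustrative, not from the review):

```python
import numpy as np

def mlr_fit(X, y):
    # ordinary least squares with an intercept column; lstsq solves the
    # least-squares problem in a numerically stable way
    Xa = np.c_[np.ones(len(X)), X]
    coef, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    return coef

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))              # e.g. three extraction conditions
y = 2.0 + X @ np.array([1.0, -0.5, 0.3])  # noiseless linear response
coef = mlr_fit(X, y)                      # [intercept, b1, b2, b3]
```

With noiseless data the true coefficients are recovered exactly; in real extraction data the residuals carry the diagnostic information.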

  13. Model for Vaccine Design by Prediction of B-Epitopes of IEDB Given Perturbations in Peptide Sequence, In Vivo Process, Experimental Techniques, and Source or Host Organisms

    PubMed Central

    González-Díaz, Humberto; Pérez-Montoto, Lázaro G.; Ubeira, Florencio M.

    2014-01-01

Perturbation methods add variation terms to a known experimental solution of one problem to approach a solution for a related problem without known exact solution. One problem of this type in immunology is the prediction of the possible action of an epitope of one peptide after a perturbation or variation in the structure of a known peptide and/or other boundary conditions (host organism, biological process, and experimental assay). However, to the best of our knowledge, there are no reports of general-purpose perturbation models to solve this problem. In a recent work, we introduced a new quantitative structure-property relationship theory for the study of perturbations in complex biomolecular systems. In this work, we developed the first model able to classify more than 200,000 cases of perturbations with accuracy, sensitivity, and specificity >90% both in training and validation series. The perturbations include structural changes in >50000 peptides determined in experimental assays with boundary conditions involving >500 source organisms, >50 host organisms, >10 biological processes, and >30 experimental techniques. The model may be useful for the prediction of new epitopes or the optimization of known peptides towards computational vaccine design. PMID:24741624

  14. Design and experimental demonstration of low-power CMOS magnetic cell manipulation platform using charge recycling technique

    NASA Astrophysics Data System (ADS)

    Niitsu, Kiichi; Yoshida, Kohei; Nakazato, Kazuo

    2016-03-01

We present the world's first charge-recycling-based low-power technique for complementary metal-oxide-semiconductor (CMOS) magnetic cell manipulation. CMOS magnetic cell manipulation using magnetic beads is a promising tool for on-chip biomedical-analysis applications such as drug screening, because CMOS can integrate control electronics and electrochemical sensors. However, conventional CMOS cell manipulation requires considerable power consumption. In this work, by concatenating multiple unit circuits and recycling electric charge among them, power consumption is reduced by a factor equal to the number of concatenated unit circuits (1/N). To verify the effectiveness, a test chip was fabricated in a 0.6-µm CMOS process. The chip successfully manipulated magnetic microbeads, achieving a 49% power reduction (from 51 to 26.2 mW). Even considering the additional series resistance of the concatenated inductors, a nearly theoretical power-reduction effect was confirmed.
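The 1/N claim reduces to simple arithmetic; a sketch checking the reported figures against the ideal two-unit case (the assumption that N = 2 here is ours, inferred from the near-halving of power):

```python
def ideal_recycled_power(p_conventional, n_units):
    # ideal charge recycling among N concatenated unit circuits divides
    # the total power by N; losses (e.g. inductor series resistance)
    # make the real chip slightly worse than this bound
    return p_conventional / n_units

ideal = ideal_recycled_power(51.0, 2)   # 25.5 mW predicted
measured_saving = 1 - 26.2 / 51.0       # ~0.49, the reported 49% reduction
```

The measured 26.2 mW sits just above the ideal 25.5 mW, consistent with the "nearly theoretical" reduction the abstract reports.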

  15. EET theoretical design techniques

    NASA Technical Reports Server (NTRS)

    Dwoyer, D. L.

    1981-01-01

    As a part of the EET aerodynamics program an out-of-house program was developed and monitored to provide theoretical procedures useful in the design of transport aircraft. The focus of the effort was to provide tools valid in the nonlinear transonic speed range. The effort was divided into two basic areas, inviscid configuration analysis and design procedures and viscous correction procedures.

  16. Designing an Experimental "Accident"

    ERIC Educational Resources Information Center

    Picker, Lester

    1974-01-01

    Describes an experimental "accident" that resulted in much student learning, seeks help in the identification of nematodes, and suggests biology teachers introduce similar accidents into their teaching to stimulate student interest. (PEB)

  17. Experimental design of a waste glass study

    SciTech Connect

    Piepel, G.F.; Redgate, P.E.; Hrma, P.

    1995-04-01

A Composition Variation Study (CVS) is being performed to support a future high-level waste glass plant at Hanford. A total of 147 glasses, covering a broad region of compositions melting at approximately 1150 °C, were tested in five statistically designed experimental phases. This paper focuses on the goals, strategies, and techniques used in designing the five phases. The overall strategy was to investigate glass compositions on the boundary and interior of an experimental region defined by single-component, multiple-component, and property constraints. Statistical optimal experimental design techniques were used to cover various subregions of the experimental region in each phase. Empirical mixture models for glass properties (as functions of glass composition) from previous phases were used in designing subsequent CVS phases.
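The "statistical optimal experimental design techniques" mentioned include D-optimal selection; a greedy sketch on a toy one-factor candidate set (not the CVS glass-composition space, and simpler than the exchange algorithms typically used):

```python
import numpy as np

def greedy_d_optimal(candidates, n_select):
    # greedily add the candidate maximizing det(X'X) of the chosen design
    # (D-optimality); the determinant is 0 until the design reaches full
    # column rank, after which each greedy step becomes informative
    X = np.c_[np.ones(len(candidates)), candidates]   # intercept + factor
    chosen = []
    for _ in range(n_select):
        remaining = [i for i in range(len(candidates)) if i not in chosen]
        best = max(remaining,
                   key=lambda i: np.linalg.det(X[chosen + [i]].T
                                               @ X[chosen + [i]]))
        chosen.append(best)
    return chosen

cands = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)  # one factor, 5 levels
sel = greedy_d_optimal(cands, 3)                  # the extremes get picked
```

For a straight-line model the extreme levels carry the most information, which is exactly what the determinant criterion selects.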

  18. Experimental Techniques for Thermodynamic Measurements of Ceramics

    NASA Technical Reports Server (NTRS)

    Jacobson, Nathan S.; Putnam, Robert L.; Navrotsky, Alexandra

    1999-01-01

Experimental techniques for thermodynamic measurements on ceramic materials are reviewed. For total molar quantities, calorimetry is used. Total enthalpies are determined with combustion calorimetry or solution calorimetry. Heat capacities and entropies are determined with drop calorimetry, differential thermal methods, and adiabatic calorimetry. Three major techniques for determining partial molar quantities are discussed. These are gas equilibration techniques, Knudsen cell methods, and electrochemical techniques. Throughout this report, issues unique to ceramics are emphasized. Ceramic materials encompass a wide range of stabilities, and this must be considered. In general, data at high temperatures are required, and the need for inert container materials presents a particular challenge.

  19. Experimental design in analytical chemistry--part II: applications.

    PubMed

    Ebrahimi-Najafabadi, Heshmatollah; Leardi, Riccardo; Jalali-Heravi, Mehdi

    2014-01-01

    This paper reviews the applications of experimental design to optimize some analytical chemistry techniques such as extraction, chromatography separation, capillary electrophoresis, spectroscopy, and electroanalytical methods.

  20. New experimental techniques for solar cells

    NASA Technical Reports Server (NTRS)

    Lenk, R.

    1993-01-01

    Solar cell capacitance has special importance for an array controlled by shunting. Experimental measurements of solar cell capacitance in the past have shown disagreements of orders of magnitude. Correct measurement technique depends on maintaining the excitation voltage less than the thermal voltage. Two different experimental methods are shown to match theory well, and two effective capacitances are defined for quantifying the effect of the solar cell capacitance on the shunting system.
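The "thermal voltage" bound on the excitation amplitude is kT/q; a quick sketch of the number it implies at room temperature (the 300 K operating point is our assumption for illustration):

```python
def thermal_voltage(temp_k=300.0):
    # V_T = k*T/q; small-signal capacitance measurements stay in the
    # diode's linear regime only while the excitation amplitude is
    # below this value
    k = 1.380649e-23       # Boltzmann constant, J/K
    q = 1.602176634e-19    # elementary charge, C
    return k * temp_k / q

vt = thermal_voltage()     # roughly 26 mV at 300 K
```

Exceeding this amplitude drives the junction nonlinear, which is one plausible source of the order-of-magnitude disagreements the abstract mentions.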

  1. Elements of Bayesian experimental design

    SciTech Connect

    Sivia, D.S.

    1997-09-01

    We consider some elements of the Bayesian approach that are important for optimal experimental design. While the underlying principles used are very general, and are explained in detail in a recent tutorial text, they are applied here to the specific case of characterising the inferential value of different resolution peakshapes. This particular issue was considered earlier by Silver, Sivia and Pynn (1989, 1990a, 1990b), and the following presentation confirms and extends the conclusions of their analysis.

  2. Involving students in experimental design: three approaches.

    PubMed

    McNeal, A P; Silverthorn, D U; Stratton, D B

    1998-12-01

    Many faculty want to involve students more actively in laboratories and in experimental design. However, just "turning them loose in the lab" is time-consuming and can be frustrating for both students and faculty. We describe three different ways of providing structures for labs that require students to design their own experiments but guide the choices. One approach emphasizes invertebrate preparations and classic techniques that students can learn fairly easily. Students must read relevant primary literature and learn each technique in one week, and then design and carry out their own experiments in the next week. Another approach provides a "design framework" for the experiments so that all students are using the same technique and the same statistical comparisons, whereas their experimental questions differ widely. The third approach involves assigning the questions or problems but challenging students to design good protocols to answer these questions. In each case, there is a mixture of structure and freedom that works for the level of the students, the resources available, and our particular aims.

  3. Graphical Models for Quasi-Experimental Designs

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter M.; Hall, Courtney E.; Su, Dan

    2016-01-01

    Experimental and quasi-experimental designs play a central role in estimating cause-effect relationships in education, psychology, and many other fields of the social and behavioral sciences. This paper presents and discusses the causal graphs of experimental and quasi-experimental designs. For quasi-experimental designs the authors demonstrate…

  4. Statistical problems in design technique validation

    SciTech Connect

    Cohen, J.S.

    1980-04-01

This work is concerned with the statistical validation process for measuring the accuracy of design techniques for solar energy systems. This includes a discussion of the statistical variability inherent in the design and measurement processes and the way in which this variability can dictate the choice of experimental design, choice of data, accuracy of the results, and choice of questions that can be reliably answered in such a study. The approach here is primarily concerned with design procedure validation in the context of the realistic process of system design, where the discrepancy between measured and predicted results is due to limitations in the mathematical models employed by the procedures and the inaccuracies of input data. A set of guidelines for successful validation methodologies is discussed, and a simplified validation methodology for domestic hot water heaters is presented.

  5. Animal husbandry and experimental design.

    PubMed

    Nevalainen, Timo

    2014-01-01

    If the scientist needs to contact the animal facility after any study to inquire about husbandry details, this represents a lost opportunity, which can ultimately interfere with the study results and their interpretation. There is a clear tendency for authors to describe methodological procedures down to the smallest detail, but at the same time to provide minimal information on animals and their husbandry. Controlling all major variables as far as possible is the key issue when establishing an experimental design. The other common mechanism affecting study results is a change in the variation. Factors causing bias or variation changes are also detectable within husbandry. Our lives and the lives of animals are governed by cycles: the seasons, the reproductive cycle, the weekend-working days, the cage change/room sanitation cycle, and the diurnal rhythm. Some of these may be attributable to routine husbandry, and the rest are cycles, which may be affected by husbandry procedures. Other issues to be considered are consequences of in-house transport, restrictions caused by caging, randomization of cage location, the physical environment inside the cage, the acoustic environment audible to animals, olfactory environment, materials in the cage, cage complexity, feeding regimens, kinship, and humans. Laboratory animal husbandry issues are an integral but underappreciated part of investigators' experimental design, which if ignored can cause major interference with the results. All researchers should familiarize themselves with the current routine animal care of the facility serving them, including their capabilities for the monitoring of biological and physicochemical environment.
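The cage-location point can be made concrete with a short sketch (cage and slot names are invented; real facilities would also re-randomize at cage-change cycles):

```python
import random

def randomize_cage_locations(cage_ids, rack_slots, seed=7):
    # random assignment of cages to rack slots keeps position effects
    # (light gradient, airflow, noise near the door) from lining up
    # with any treatment group
    rng = random.Random(seed)
    slots = rng.sample(rack_slots, k=len(rack_slots))
    return dict(zip(cage_ids, slots))

placement = randomize_cage_locations(
    ["cage_A", "cage_B", "cage_C", "cage_D"],
    ["top_left", "top_right", "bottom_left", "bottom_right"])
```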

  6. Design for reliability of BEoL and 3-D TSV structures - A joint effort of FEA and innovative experimental techniques

    NASA Astrophysics Data System (ADS)

    Auersperg, Jürgen; Vogel, Dietmar; Auerswald, Ellen; Rzepka, Sven; Michel, Bernd

    2014-06-01

    Copper-TSVs for 3D-IC-integration generate novel challenges for reliability analysis and prediction, e.g. the need to master multiple failure criteria for combined loading including residual stress, interface delamination, cracking and fatigue issues. So, the thermal expansion mismatch between copper and silicon leads to a stress situation in silicon surrounding the TSVs which is influencing the electron mobility and as a result the transient behavior of transistors. Furthermore, pumping and protrusion of copper is a challenge for Back-end of Line (BEoL) layers of advanced CMOS technologies already during manufacturing. These effects depend highly on the temperature dependent elastic-plastic behavior of the TSV-copper and the residual stresses determined by the electro deposition chemistry and annealing conditions. That's why the authors pushed combined simulative/experimental approaches to extract the Young's-modulus, initial yield stress and hardening coefficients in copper-TSVs from nanoindentation experiments, as well as the temperature dependent initial yield stress and hardening coefficients from bow measurements due to electroplated thin copper films on silicon under thermal cycling conditions. A FIB trench technique combined with digital image correlation is furthermore used to capture the residual stress state near the surface of TSVs. The extracted properties are discussed and used accordingly to investigate the pumping and protrusion of copper-TSVs during thermal cycling. Moreover, the cracking and delamination risks caused by the elevated temperature variation during BEoL ILD deposition are investigated with the help of fracture mechanics approaches.

  7. Quasi-Experimental Designs for Causal Inference

    ERIC Educational Resources Information Center

    Kim, Yongnam; Steiner, Peter

    2016-01-01

    When randomized experiments are infeasible, quasi-experimental designs can be exploited to evaluate causal treatment effects. The strongest quasi-experimental designs for causal inference are regression discontinuity designs, instrumental variable designs, matching and propensity score designs, and comparative interrupted time series designs. This…

  8. Bioinspiration: applying mechanical design to experimental biology.

    PubMed

    Flammang, Brooke E; Porter, Marianne E

    2011-07-01

    The production of bioinspired and biomimetic constructs has fostered much collaboration between biologists and engineers, although the extent of biological accuracy employed in the designs produced has not always been a priority. Even the exact definitions of "bioinspired" and "biomimetic" differ among biologists, engineers, and industrial designers, leading to confusion regarding the level of integration and replication of biological principles and physiology. By any name, biologically-inspired mechanical constructs have become an increasingly important research tool in experimental biology, offering the opportunity to focus research by creating model organisms that can be easily manipulated to fill a desired parameter space of structural and functional repertoires. Innovative researchers with both biological and engineering backgrounds have found ways to use bioinspired models to explore the biomechanics of organisms from all kingdoms to answer a variety of different questions. Bringing together these biologists and engineers will hopefully result in an open discourse of techniques and fruitful collaborations for experimental and industrial endeavors.

  9. Design for reliability of BEoL and 3-D TSV structures – A joint effort of FEA and innovative experimental techniques

    SciTech Connect

    Auersperg, Jürgen; Vogel, Dietmar; Auerswald, Ellen; Rzepka, Sven; Michel, Bernd

    2014-06-19

    Copper TSVs for 3D-IC integration pose novel challenges for reliability analysis and prediction, e.g. the need to master multiple failure criteria under combined loading, including residual stress, interface delamination, cracking and fatigue. In particular, the thermal expansion mismatch between copper and silicon produces a stress field in the silicon surrounding the TSVs that influences the electron mobility and, as a result, the transient behavior of transistors. Furthermore, pumping and protrusion of copper challenge the back-end-of-line (BEoL) layers of advanced CMOS technologies already during manufacturing. These effects depend strongly on the temperature-dependent elastic-plastic behavior of the TSV copper and on the residual stresses set by the electrodeposition chemistry and annealing conditions. The authors therefore pursued combined simulation/experimental approaches: the Young's modulus, initial yield stress and hardening coefficients of copper TSVs were extracted from nanoindentation experiments, and the temperature-dependent initial yield stress and hardening coefficients from bow measurements of electroplated thin copper films on silicon under thermal cycling. A FIB trench technique combined with digital image correlation was furthermore used to capture the residual stress state near the surface of TSVs. The extracted properties are discussed and used to investigate the pumping and protrusion of copper TSVs during thermal cycling. Moreover, the cracking and delamination risks caused by the elevated temperature variation during BEoL ILD deposition are investigated with the help of fracture mechanics approaches.

  10. Control design for the SERC experimental testbeds

    NASA Technical Reports Server (NTRS)

    Jacques, Robert; Blackwood, Gary; Macmartin, Douglas G.; How, Jonathan; Anderson, Eric

    1992-01-01

    Viewgraphs on control design for the Space Engineering Research Center experimental testbeds are presented. Topics covered include: SISO control design and results; sensor and actuator location; model identification; control design; experimental results; preliminary LAC experimental results; active vibration isolation problem statement; base flexibility coupling into isolation feedback loop; cantilever beam testbed; and closed loop results.

  11. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. The approach relies on an experimental design to generate training points and on regression/interpolation to build the surrogate. In this work, it is argued that conventional experimental designs may render a surrogate model inefficient. To address this issue, this paper presents a novel distribution-adaptive sequential experimental design (DA-SED). The proposed DA-SED is coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach is applied to predicting the failure probability of three structural mechanics problems and is observed to yield accurate and computationally efficient estimates.
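    The moment expressions the abstract refers to take a simple form when the surrogate is written in an orthonormal polynomial chaos basis: the mean is the zeroth coefficient and the variance is the sum of squares of the remaining coefficients. A minimal sketch (illustrative, not the authors' implementation; the coefficients below are made up):

```python
import numpy as np

def pce_moments(coeffs):
    """First two moments of a response expanded in an orthonormal
    polynomial chaos basis: E[Y] = c_0, Var[Y] = sum_{i>=1} c_i**2."""
    c = np.asarray(coeffs, dtype=float)
    return c[0], float(np.sum(c[1:] ** 2))

mean, var = pce_moments([2.0, 0.5, 0.1])
# mean = 2.0, var = 0.5**2 + 0.1**2 = 0.26
```

    With the mean and variance in hand, a first-order failure-probability estimate against a limit state follows from a normal approximation; the basis and coefficients here are assumptions for illustration, not values from the paper.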

  12. Experimental design methods for bioengineering applications.

    PubMed

    Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri

    2016-01-01

    Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
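    Several of the designs named above are easy to enumerate directly; as a minimal sketch (plain Python, no statistics package assumed), a two-level full factorial design is just the Cartesian product of the factor levels:

```python
from itertools import product

def full_factorial(levels_per_factor):
    """All combinations of factor levels: a full factorial design.
    For k factors at 2 levels each this yields 2**k runs."""
    return list(product(*levels_per_factor))

# Three factors at two coded levels (-1/+1): 2**3 = 8 runs.
design = full_factorial([[-1, 1], [-1, 1], [-1, 1]])
```

    A fractional factorial keeps a subset of these runs chosen by a generating relation, while the other designs listed (Plackett-Burman, Box-Behnken, central composite) add or rearrange runs according to their own rules.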

  13. Experimental Design and Some Threats to Experimental Validity: A Primer

    ERIC Educational Resources Information Center

    Skidmore, Susan

    2008-01-01

    Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…

  14. Experimental Investigation of Centrifugal Compressor Stabilization Techniques

    NASA Technical Reports Server (NTRS)

    Skoch, Gary J.

    2003-01-01

    Results from a series of experiments to investigate techniques for extending the stable flow range of a centrifugal compressor are reported. The research was conducted in a high-speed centrifugal compressor at the NASA Glenn Research Center. The stabilizing effect of steadily flowing air-streams injected into the vaneless region of a vane-island diffuser through the shroud surface is described. Parametric variations of injection angle, injection flow rate, number of injectors, injector spacing, and injection versus bleed were investigated for a range of impeller speeds and tip clearances. Both the compressor discharge and an external source were used for the injection air supply. The stabilizing effect of flow obstructions created by tubes that were inserted into the diffuser vaneless space through the shroud was also investigated. Tube immersion into the vaneless space was varied in the flow obstruction experiments. Results from testing done at impeller design speed and tip clearance are presented. Surge margin improved by 1.7 points using injection air that was supplied from within the compressor. Externally supplied injection air was used to return the compressor to stable operation after being throttled into surge. The tubes, which were capped to prevent mass flux, provided 9.3 points of additional surge margin over the baseline surge margin of 11.7 points.

  15. The Experimental Design Ability Test (EDAT)

    ERIC Educational Resources Information Center

    Sirum, Karen; Humburg, Jennifer

    2011-01-01

    Higher education goals include helping students develop evidence based reasoning skills; therefore, scientific thinking skills such as those required to understand the design of a basic experiment are important. The Experimental Design Ability Test (EDAT) measures students' understanding of the criteria for good experimental design through their…

  16. Optimizing Experimental Designs: Finding Hidden Treasure.

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Classical experimental design theory, the predominant treatment in most textbooks, promotes the use of blocking designs for control of spatial variability in field studies and other situations in which there is significant heterogeneity among experimental units. Many blocking design...

  17. Telecommunications Systems Design Techniques Handbook

    NASA Technical Reports Server (NTRS)

    Edelson, R. E. (Editor)

    1972-01-01

    The Deep Space Network (DSN) increasingly supports deep space missions sponsored and managed by organizations without long experience in DSN design and operation. The document is intended as a textbook for those DSN users inexperienced in the design and specification of a DSN-compatible spacecraft telecommunications system. For experienced DSN users, the document provides a reference source of telecommunication information which summarizes knowledge previously available only in a multitude of sources. Extensive references are quoted for those who wish to explore specific areas more deeply.

  18. Design Techniques for Integrated Feedback.

    ERIC Educational Resources Information Center

    Markesjo, Gunnar; Graham, Peter

    A model for courses in which media are used has been designed by a research group at the Royal Institute of Technology in Stockholm. The model suggests that instruction be planned in weekly packages. These should include a limited number of instructional aids, should begin with a motivating section, and should offer training in the solving of…

  19. GCFR shielding design and supporting experimental programs

    SciTech Connect

    Perkins, R.G.; Hamilton, C.J.; Bartine, D.

    1980-05-01

    The shielding for the conceptual design of the gas-cooled fast breeder reactor (GCFR) is described, and the component exposure design criteria which determine the shield design are presented. The experimental programs for validating the GCFR shielding design methods and data (which have been in existence since 1976) are also discussed.

  20. Experimental Design for the Evaluation of Detection Techniques of Hidden Corrosion Beneath the Thermal Protective System of the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Kemmerer, Catherine C.; Jacoby, Joseph A.; Lomness, Janice K.; Hintze, Paul E.; Russell, Richard W.

    2007-01-01

    The detection of corrosion beneath the Space Shuttle Orbiter's thermal protective system is traditionally accomplished by removing the Reusable Surface Insulation tiles and performing a visual inspection of the aluminum substrate and corrosion protection system. This process is time consuming and has the potential to damage high-cost tiles. To evaluate non-intrusive NDE methods, a Proof of Concept (PoC) experiment was designed and test panels were manufactured. The objective of the test plan was three-fold: to establish the ability to detect corrosion hidden from view by tiles, to determine the key factors affecting detectability, and to roughly quantify the detection threshold. The plan consisted of artificially inducing dimensionally controlled corrosion spots in two panels and rebonding tile over the spots to model the thermal protective system of the orbiter. The corrosion spot diameter ranged from 0.100" to 0.600" and the depth from 0.003" to 0.020". One panel consisted of a complete factorial array of corrosion spots with and without tile coverage. The second panel consisted of randomized factorial points, replicated and hidden by tile. Conventional methods such as ultrasonic, infrared, eddy current and microwave methods have shortcomings: ultrasonics and IR cannot sufficiently penetrate the tiles, while eddy current and microwaves have inadequate resolution. As such, the panels were interrogated using Backscatter Radiography and Terahertz Imaging. The terahertz system successfully detected artificially induced corrosion spots under orbiter tile, and functional testing is in progress in preparation for implementation.

  1. OSHA and Experimental Safety Design.

    ERIC Educational Resources Information Center

    Sichak, Stephen, Jr.

    1983-01-01

    Suggests that a governmental agency, most likely the Occupational Safety and Health Administration (OSHA), be considered in the safety design stage of any experiment. Focusing on OSHA's role, discusses such topics as occupational health hazards of toxic chemicals in laboratories, occupational exposure to benzene, and the role/regulations of other agencies.…

  2. Experimental Design for the LATOR Mission

    NASA Technical Reports Server (NTRS)

    Turyshev, Slava G.; Shao, Michael; Nordtvedt, Kenneth, Jr.

    2004-01-01

    This paper discusses experimental design for the Laser Astrometric Test Of Relativity (LATOR) mission. LATOR is designed to reach unprecedented accuracy of 1 part in 10(exp 8) in measuring the curvature of the solar gravitational field as given by the value of the key Eddington post-Newtonian parameter gamma. This mission will demonstrate the accuracy needed to measure effects of the next post-Newtonian order (proportional to G(exp 2)) of light deflection resulting from gravity's intrinsic non-linearity. LATOR will provide the first precise measurement of the solar quadrupole moment parameter, J(sub 2), and will improve determination of a variety of relativistic effects, including Lense-Thirring precession. The mission will benefit from recent progress in optical communication technologies, the immediate and natural step beyond standard radio-metric techniques. The key element of LATOR is the geometric redundancy provided by laser ranging and long-baseline optical interferometry. We discuss the mission and optical designs, as well as the expected performance of this proposed mission. LATOR will lead to very robust advances in tests of fundamental physics: this mission could discover a violation or extension of general relativity, or reveal the presence of an additional long-range interaction in the physical law. There are no analogs to the LATOR experiment; it is unique and a natural culmination of solar system gravity experiments.

  3. New Theoretical Technique for Alloy Design

    NASA Technical Reports Server (NTRS)

    Ferrante, John

    2005-01-01

    During the last 2 years, there has been a breakthrough in alloy design at the NASA Lewis Research Center. A new semi-empirical theoretical technique for alloys, the BFS Theory (Bozzolo, Ferrante, and Smith), has been used to design alloys on a computer. BFS was used, along with Monte Carlo techniques, to predict the phases of ternary alloys of NiAl with Ti or Cr additions. High concentrations of each additive were used to demonstrate the resulting structures.

  4. Helioseismology in a bottle: an experimental technique

    NASA Astrophysics Data System (ADS)

    Triana, S. A.; Zimmerman, D. S.; Nataf, H.; Thorette, A.; Cabanes, S.; Roux, P.; Lekic, V.; Lathrop, D. P.

    2013-12-01

    Measurement of the differential rotation of the Sun's interior is one of the great achievements of helioseismology, providing important constraints for stellar physics. The technique relies on observing and analyzing rotationally-induced splittings of p-modes in the star. Here we demonstrate the first use of the technique in a laboratory setting. We apply it in a spherical cavity with a spinning central core (spherical Couette flow) to determine the azimuthal velocity of the air filling the cavity. We excite a number of acoustic resonances (analogous to p-modes in the Sun) using a speaker and record the response with an array of small microphones and/or accelerometers on the outer sphere. Many observed acoustic modes show rotationally-induced splittings which allow us to perform an inversion to determine the air's azimuthal velocity as a function of both radius and latitude. We validate the method by comparing the velocity field obtained through inversion against the velocity profile measured with a calibrated hot film anemometer. The technique has great potential for laboratory setups involving rotating fluids in axisymmetric cavities, and we hope it will be especially useful in liquid metals. [Figure: acoustic spectra showing rotationally induced splittings; top, a microphone near the equator; bottom, a microphone at high latitude. Color indicates the core's rotation rate in Hz.]
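    The inversion idea can be illustrated in its simplest (solid-body) limit: a mode of azimuthal order m is shifted by roughly m times the rotation rate, so a set of observed splittings constrains the rate by least squares. The numbers below are illustrative, not data from the experiment:

```python
import numpy as np

# Toy splitting inversion for solid-body rotation: splitting ~ m * omega.
m_values = np.array([1.0, 2.0, 3.0, 4.0])   # azimuthal orders of observed modes
true_omega = 0.5                            # assumed rotation rate, Hz
observed = m_values * true_omega            # ideal (noise-free) splittings, Hz

# Least-squares estimate of omega from the observed splittings.
omega_est, *_ = np.linalg.lstsq(m_values[:, None], observed, rcond=None)
# omega_est[0] recovers 0.5 Hz
```

    The real inversion replaces the single rate with a rotation profile in radius and latitude, and the factor m with mode-specific sensitivity kernels, but the linear least-squares structure is the same.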

  5. Designing High Quality Research in Special Education: Group Experimental Designs.

    ERIC Educational Resources Information Center

    Gersten, Russell; Lloyd, John Wills; Baker, Scott

    This paper, a result of a series of meetings of researchers, discusses critical issues related to the conduct of high-quality intervention research in special education using experimental and quasi-experimental designs that compare outcomes for different groups of students. It stresses the need to balance design components that satisfy laboratory…

  6. Optimal Experimental Design for Model Discrimination

    PubMed Central

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
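    The spirit of the approach can be sketched with the retention example: favor the design variable (here, the retention interval) at which two competing forgetting models disagree most. The model forms are standard, but the parameter values below are assumptions for illustration, and a real design search would weigh predictive uncertainty rather than just the raw gap:

```python
import numpy as np

def power_model(t, a=0.9, b=0.4):
    """Power-law forgetting curve (illustrative parameters)."""
    return a * (t + 1.0) ** (-b)

def exp_model(t, a=0.9, b=0.1):
    """Exponential forgetting curve (illustrative parameters)."""
    return a * np.exp(-b * t)

# Candidate retention intervals; choose the one where the two models
# make the most different predictions.
candidates = np.linspace(0.0, 40.0, 401)
separation = (power_model(candidates) - exp_model(candidates)) ** 2
best_interval = candidates[np.argmax(separation)]
```

    Both curves start at the same point, so very short intervals are uninformative; the search pushes the test toward intervals where the power law's slow tail separates from the exponential decay.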

  7. Presentation and Impact of Experimental Techniques in Chemistry

    ERIC Educational Resources Information Center

    Sojka, Zbigniew; Che, Michel

    2008-01-01

    Laboratory and practical courses, where students become familiar with experimental techniques and learn to interpret data and relate them to appropriate theory, play a vital role in chemical education. In the large panoply of currently available techniques, it is difficult to find a rational and easy way to classify the techniques in relation to…

  8. Drought Adaptation Mechanisms Should Guide Experimental Design.

    PubMed

    Gilbert, Matthew E; Medina, Viviana

    2016-08-01

    The mechanism, or hypothesis, of how a plant might be adapted to drought should strongly influence experimental design. For instance, an experiment testing for water conservation should be distinct from a damage-tolerance evaluation. We define here four new, general mechanisms for plant adaptation to drought such that experiments can be more easily designed based upon the definitions. A series of experimental methods are suggested together with appropriate physiological measurements related to the drought adaptation mechanisms. The suggestion is made that the experimental manipulation should match the rate, length, and severity of soil water deficit (SWD) necessary to test the hypothesized type of drought adaptation mechanism. PMID:27090148

  9. Yakima Hatchery Experimental Design : Annual Progress Report.

    SciTech Connect

    Busack, Craig; Knudsen, Curtis; Marshall, Anne

    1991-08-01

    This progress report details the results and status of Washington Department of Fisheries' (WDF) pre-facility monitoring, research, and evaluation efforts, through May 1991, designed to support the development of an Experimental Design Plan (EDP) for the Yakima/Klickitat Fisheries Project (YKFP), previously termed the Yakima/Klickitat Production Project (YKPP or Y/KPP). This pre-facility work has been guided by the planning efforts of the project's various research and quality control teams, which are annually captured as revisions to the experimental design and pre-facility work plans. The current objectives are as follows: to develop a genetic monitoring and evaluation approach for the Y/KPP; to evaluate stock identification monitoring tools, approaches, and opportunities available to meet specific objectives of the experimental plan; and to evaluate adult and juvenile enumeration and sampling/collection capabilities in the Y/KPP necessary to measure experimental response variables.

  10. Two-stage microbial community experimental design.

    PubMed

    Tickle, Timothy L; Segata, Nicola; Waldron, Levi; Weingart, Uri; Huttenhower, Curtis

    2013-12-01

    Microbial community samples can be efficiently surveyed in high throughput by sequencing markers such as the 16S ribosomal RNA gene. Often, a collection of samples is then selected for subsequent metagenomic, metabolomic or other follow-up. Two-stage study design has long been used in ecology but has not yet been studied in-depth for high-throughput microbial community investigations. To avoid ad hoc sample selection, we developed and validated several purposive sample selection methods for two-stage studies (that is, biological criteria) targeting differing types of microbial communities. These methods select follow-up samples from large community surveys, with criteria including samples typical of the initially surveyed population, targeting specific microbial clades or rare species, maximizing diversity, representing extreme or deviant communities, or identifying communities distinct or discriminating among environment or host phenotypes. The accuracies of each sampling technique and their influences on the characteristics of the resulting selected microbial community were evaluated using both simulated and experimental data. Specifically, all criteria were able to identify samples whose properties were accurately retained in 318 paired 16S amplicon and whole-community metagenomic (follow-up) samples from the Human Microbiome Project. Some selection criteria resulted in follow-up samples that were strongly non-representative of the original survey population; diversity maximization particularly undersampled community configurations. Only selection of intentionally representative samples minimized differences in the selected sample set from the original microbial survey. An implementation is provided as the microPITA (Microbiomes: Picking Interesting Taxa for Analysis) software for two-stage study design of microbial communities.
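    One of the selection criteria mentioned, diversity maximization, can be sketched as greedy farthest-point selection on a sample-to-sample distance matrix. This is a sketch of the idea, not microPITA's actual code:

```python
import numpy as np

def select_diverse(distance_matrix, k):
    """Greedy maximum-diversity selection: seed with the most distant
    pair, then repeatedly add the sample whose minimum distance to the
    already-chosen set is largest (farthest-point criterion)."""
    d = np.asarray(distance_matrix, dtype=float)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    chosen = [int(i), int(j)]
    while len(chosen) < k:
        remaining = [s for s in range(d.shape[0]) if s not in chosen]
        chosen.append(max(remaining, key=lambda s: d[s, chosen].min()))
    return chosen

# Four samples at positions 0, 1, 2, 10 on a line: the most mutually
# distant trio is at positions 0, 10, 2 (indices 0, 3, 2).
pts = np.array([0.0, 1.0, 2.0, 10.0])
dm = np.abs(pts[:, None] - pts[None, :])
picked = select_diverse(dm, 3)
```

    As the abstract warns, such a criterion deliberately oversamples extremes and can be strongly non-representative of the surveyed population.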

  11. Using experimental design to define boundary manikins.

    PubMed

    Bertilsson, Erik; Högberg, Dan; Hanson, Lars

    2012-01-01

    When evaluating human-machine interaction it is central to consider anthropometric diversity to ensure intended accommodation levels. A well-known method is the use of boundary cases, where manikins with extreme but likely measurement combinations are derived by mathematical treatment of anthropometric data. The supposition of that method is that the use of these manikins will facilitate accommodation of the expected part of the total, less extreme, population. Literature sources differ in how many such manikins should be defined and in what way. A field similar to the boundary case method is experimental design, in which relationships between the factors affecting a process are studied by a systematic approach. This paper examines the possibility of adopting methodology from experimental design to define a group of manikins. Different experimental designs were adopted for use together with a confidence region and its axes. The results of the study show that it is possible to adapt the methodology of experimental design when creating groups of manikins. The size of these groups depends heavily on the number of key measurements but also on the type of experimental design chosen. PMID:22317428
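    The axis-based construction referred to above can be sketched as follows: diagonalize the anthropometric covariance matrix and place a manikin at both ends of each principal axis of the confidence ellipsoid. A minimal sketch under a multivariate-normal assumption, not the paper's exact enumeration:

```python
import numpy as np

def boundary_manikins(mean, cov, radius=1.96):
    """Boundary cases at both ends of each principal axis of the
    confidence ellipsoid: mean +/- radius * sqrt(eigenvalue) * eigenvector."""
    mean = np.asarray(mean, dtype=float)
    eigvals, eigvecs = np.linalg.eigh(np.asarray(cov, dtype=float))
    manikins = []
    for lam, v in zip(eigvals, eigvecs.T):   # eigh returns column eigenvectors
        offset = radius * np.sqrt(lam) * v
        manikins.append(mean + offset)
        manikins.append(mean - offset)
    return np.array(manikins)

# Two uncorrelated, standardized measurements -> 4 boundary manikins.
mk = boundary_manikins([0.0, 0.0], np.eye(2))
```

    With p key measurements this yields 2p manikins; designs that also combine axes, as in the factorial-style adaptations the paper studies, grow much faster, which is why the choice of experimental design matters.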

  12. A free lunch in linearized experimental design?

    NASA Astrophysics Data System (ADS)

    Coles, Darrell; Curtis, Andrew

    2011-08-01

    The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms. This paper examines the influence of the NFL theorems on linearized statistical experimental design (SED). We consider four design algorithms with three different design objective functions to examine their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent to the study of transverse isotropy in many disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, is generally independent of the criterion-algorithm pairing. We discuss differences in the performance of each design algorithm, providing a guideline for selecting design algorithms for other problems. As a by-product we demonstrate and discuss the principle of diminishing returns in SED, namely, that the value of experimental design decreases with experiment size. Another outcome of this study is a simple rule-of-thumb for prescribing optimal experiments for ellipse fitting, which bypasses the computational expense of SED. This is used to define a template for optimizing survey designs, under simple assumptions, for Amplitude Variations with Azimuth and Offset (AVAZ) seismics in the specialized problem of fracture characterization, such as is of interest in the petroleum industry. Finally, we discuss the scope of our conclusions for the NFL theorems as they apply to nonlinear and Bayesian SED.
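    A concrete instance of a linearized design criterion for the ellipse-fitting problem is to score a set of measurement angles by the determinant of the normal matrix G^T G of the linearized model, the classical D-optimality measure. The parameterization below is an assumption for illustration, not the paper's:

```python
import numpy as np

def d_criterion(angles):
    """D-optimality score det(G^T G) for fitting
    r(theta) = m0 + m1*cos(2*theta) + m2*sin(2*theta),
    a simple ellipse model linear in its parameters; larger is better."""
    G = np.column_stack([np.ones_like(angles),
                         np.cos(2.0 * angles),
                         np.sin(2.0 * angles)])
    return np.linalg.det(G.T @ G)

even = np.linspace(0.0, np.pi, 6, endpoint=False)   # spread over the ellipse
clustered = np.linspace(0.0, 0.3, 6)                # bunched-up angles
# Evenly spread angles give a far better-conditioned design.
```

    For the six evenly spread angles the normal matrix is diagonal (det = 6·3·3 = 54), while the clustered design is nearly rank-deficient; a "spread observations over the full azimuth" rule of thumb of this kind is what the paper formalizes.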

  13. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, L C

    1999-02-01

    This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High- Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  14. FPGAs in Space Environment and Design Techniques

    NASA Technical Reports Server (NTRS)

    Katz, Richard B.; Day, John H. (Technical Monitor)

    2001-01-01

    This viewgraph presentation gives an overview of Field Programmable Gate Arrays (FPGA) in the space environment and design techniques. Details are given on the effects of the space radiation environment, total radiation dose, single event upset, single event latchup, single event transient, antifuse technology and gate rupture, proton upsets and sensitivity, and loss of functionality.

  15. Evaluation of Advanced Retrieval Techniques in an Experimental Online Catalog.

    ERIC Educational Resources Information Center

    Larson, Ray R.

    1992-01-01

    Discusses subject searching problems in online library catalogs; explains advanced information retrieval (IR) techniques; and describes experiments conducted on a test collection database, CHESHIRE (California Hybrid Extended SMART for Hypertext and Information Retrieval Experimentation), which was created to evaluate IR techniques in online…

  16. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a same…
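    The selection rule at the heart of the inquiry engine, choosing the experiment whose distribution of predicted outcomes has maximum entropy, can be sketched directly; the candidate outcome distributions below are illustrative assumptions:

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a discrete outcome distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

# Predicted outcome distributions for three candidate experiments.
candidates = {
    "A": [0.97, 0.01, 0.02],      # outcome nearly certain: little to learn
    "B": [0.50, 0.25, 0.25],
    "C": [1 / 3, 1 / 3, 1 / 3],   # maximally uncertain: most to learn
}
best = max(candidates, key=lambda name: entropy(candidates[name]))
# best == "C"
```

    Nested entropy sampling, as described in the abstract, addresses the case where this arg-max runs over a high-dimensional continuum of candidate experiments rather than a short list.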

  17. Simulation as an Aid to Experimental Design.

    ERIC Educational Resources Information Center

    Frazer, Jack W.; And Others

    1983-01-01

    Discusses simulation program to aid in the design of enzyme kinetic experimentation (includes sample runs). Concentration versus time profiles of any subset or all nine states of reactions can be displayed with/without simulated instrumental noise, allowing the user to estimate the practicality of any proposed experiment given known instrument…

  18. Optimal Experimental Design for Model Discrimination

    ERIC Educational Resources Information Center

    Myung, Jay I.; Pitt, Mark A.

    2009-01-01

    Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…

  19. A free lunch in linearized experimental design?

    NASA Astrophysics Data System (ADS)

    Coles, D.; Curtis, A.

    2009-12-01

The No Free Lunch (NFL) theorems state that no single optimization algorithm is ideally suited for all objective functions and, conversely, that no single objective function is ideally suited for all optimization algorithms (Wolpert and Macready, 1997). It is therefore of limited use to report the performance of a particular algorithm with respect to a particular objective function because the results cannot be safely extrapolated to other algorithms or objective functions. We examine the influence of the NFL theorems on linearized statistical experimental design (SED). We are aware of no publication that compares multiple design criteria in combination with multiple design algorithms. We examine four design algorithms in concert with three design objective functions to assess their interdependency. As a foundation for the study, we consider experimental designs for fitting ellipses to data, a problem pertinent, for example, to the study of transverse isotropy in a variety of disciplines. Surprisingly, we find that the quality of optimized experiments, and the computational efficiency of their optimization, is generally independent of the criterion-algorithm pairing. This is promising for linearized SED. While the NFL theorems must generally be true, the criterion-algorithm pairings we investigated are fairly robust to the theorems, indicating that we need not account for interdependency when choosing design algorithms and criteria from the set examined here. However, particular design algorithms do show patterns of performance, irrespective of the design criterion, and from this we establish a rough guideline for choosing from the examined algorithms for other design problems. As a by-product of our study we demonstrate that SED is subject to the principle of diminishing returns. That is, we see that the value of experimental design decreases with survey size, a fact that must be considered when deciding whether or not to design an experiment at all.
Another outcome…
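For concreteness, here is a minimal sketch of one criterion-algorithm pairing of the kind the study compares: a D-optimality criterion (maximize det(J^T J)) driven by a greedy sequential algorithm, applied to a toy two-parameter linearized model. The model and candidate angles are invented for illustration and are not from the paper:

```python
import math

def det2(m):
    """Determinant of a 2x2 matrix."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def d_value(angles):
    """D-optimality value det(J^T J) for the linearized model
    y = a*cos(t) + b*sin(t), whose Jacobian row at t is [cos t, sin t]."""
    jtj = [[0.0, 0.0], [0.0, 0.0]]
    for t in angles:
        row = (math.cos(t), math.sin(t))
        for i in range(2):
            for j in range(2):
                jtj[i][j] += row[i] * row[j]
    return det2(jtj)

def greedy_design(candidates, k):
    """Greedy design algorithm: repeatedly add the candidate
    observation angle that most increases det(J^T J)."""
    design = []
    for _ in range(k):
        best = max(candidates, key=lambda t: d_value(design + [t]))
        design.append(best)
    return design

candidates = [i * math.pi / 12 for i in range(12)]
design = greedy_design(candidates, 2)  # two angles roughly pi/2 apart
```

For two observations of this model, det(J^T J) equals sin² of the angle separation, so the greedy algorithm recovers the classical result that orthogonal observation angles are D-optimal.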

  20. Conceptual design of Fusion Experimental Reactor

    NASA Astrophysics Data System (ADS)

    Seki, Yasushi; Takatsu, Hideyuki; Iida, Hiromasa

    1991-08-01

    Safety analysis and evaluation have been made for the FER (Fusion Experimental Reactor) as well as for the ITER (International Thermonuclear Experimental Reactor) which are basically the same in terms of safety. This report describes the results obtained in fiscal years 1988 - 1990, in addition to a summary of the results obtained prior to 1988. The report shows the philosophy of the safety design, safety analysis and evaluation for each of the operation conditions, namely, normal operation, repair and maintenance, and accident. Considerations for safety regulations and standards are also added.

  1. Design automation techniques for custom LSI arrays

    NASA Technical Reports Server (NTRS)

    Feller, A.

    1975-01-01

    The standard cell design automation technique is described as an approach for generating random logic PMOS, CMOS or CMOS/SOS custom large scale integration arrays with low initial nonrecurring costs and quick turnaround time or design cycle. The system is composed of predesigned circuit functions or cells and computer programs capable of automatic placement and interconnection of the cells in accordance with an input data net list. The program generates a set of instructions to drive an automatic precision artwork generator. A series of support design automation and simulation programs are described, including programs for verifying correctness of the logic on the arrays, performing dc and dynamic analysis of MOS devices, and generating test sequences.

  2. Rational Experimental Design for Electrical Resistivity Imaging

    NASA Astrophysics Data System (ADS)

    Mitchell, V.; Pidlisecky, A.; Knight, R.

    2008-12-01

    Over the past several decades advances in the acquisition and processing of electrical resistivity data, through multi-channel acquisition systems and new inversion algorithms, have greatly increased the value of these data to near-surface environmental and hydrological problems. There has, however, been relatively little advancement in the design of actual surveys. Data acquisition still typically involves using a small number of traditional arrays (e.g. Wenner, Schlumberger) despite a demonstrated improvement in data quality from the use of non-standard arrays. While optimized experimental design has been widely studied in applied mathematics and the physical and biological sciences, it is rarely implemented for non-linear problems, such as electrical resistivity imaging (ERI). We focus specifically on using ERI in the field for monitoring changes in the subsurface electrical resistivity structure. For this application we seek an experimental design method that can be used in the field to modify the data acquisition scheme (spatial and temporal sampling) based on prior knowledge of the site and/or knowledge gained during the imaging experiment. Some recent studies have investigated optimized design of electrical resistivity surveys by linearizing the problem or with computationally-intensive search algorithms. We propose a method for rational experimental design based on the concept of informed imaging, the use of prior information regarding subsurface properties and processes to develop problem-specific data acquisition and inversion schemes. Specifically, we use realistic subsurface resistivity models to aid in choosing source configurations that maximize the information content of our data. Our approach is based on first assessing the current density within a region of interest, in order to provide sufficient energy to the region of interest to overcome a noise threshold, and then evaluating the direction of current vectors, in order to maximize the

  3. Achieving optimal SERS through enhanced experimental design

    PubMed Central

    Fisk, Heidi; Westley, Chloe; Turner, Nicholas J.

    2016-01-01

    One of the current limitations surrounding surface‐enhanced Raman scattering (SERS) is the perceived lack of reproducibility. SERS is indeed challenging, and for analyte detection, it is vital that the analyte interacts with the metal surface. However, as this is analyte dependent, there is not a single set of SERS conditions that are universal. This means that experimental optimisation for optimum SERS response is vital. Most researchers optimise one factor at a time, where a single parameter is altered first before going onto optimise the next. This is a very inefficient way of searching the experimental landscape. In this review, we explore the use of more powerful multivariate approaches to SERS experimental optimisation based on design of experiments and evolutionary computational methods. We particularly focus on colloidal‐based SERS rather than thin film preparations as a result of their popularity. © 2015 The Authors. Journal of Raman Spectroscopy published by John Wiley & Sons, Ltd. PMID:27587905
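As a sketch of what a design-of-experiments approach means in this setting, the snippet below enumerates a full-factorial design over three hypothetical SERS factors, in contrast to one-factor-at-a-time optimisation. The factor names and levels are invented; a real study would choose them for the analyte and colloid at hand:

```python
from itertools import product

# Hypothetical SERS factors and levels (invented for illustration).
factors = {
    "colloid_nM": [0.1, 0.5, 1.0],
    "salt_mM": [5, 10, 20],
    "pH": [4, 7, 10],
}

def full_factorial(factors):
    """All level combinations: the multivariate alternative to
    varying one factor at a time."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

# 3 x 3 x 3 = 27 runs cover the whole experimental landscape,
# including the factor interactions one-at-a-time search misses.
runs = full_factorial(factors)
```

In practice a fractional-factorial or evolutionary search would trim this run count while still probing interactions.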

  4. Experimental data of solubility at different temperatures: a simple technique

    NASA Astrophysics Data System (ADS)

    Burghoff, J.; Nolte, S.; Tünnermann, A.

    2007-10-01

This article describes a simple and inexpensive experimental technique, easy to set up in a laboratory, for the measurement of solute solubilities in liquids (or gases). Experimental values of solubility were determined for the dissolution of benzoic acid in water, at 293-338 K, of 2-naphthol in water, at 293-373 K, and of salicylic acid in water, at 293-343 K. The experimental results obtained are in good agreement with the theoretical values of solubilities presented in the literature. Empirical correlations are presented for the prediction of solubility over the entire range of temperatures studied, and they are shown to give the solubility value with very good accuracy.

  5. Irradiation Design for an Experimental Murine Model

    SciTech Connect

    Ballesteros-Zebadua, P.; Moreno-Jimenez, S.; Suarez-Campos, J. E.; Celis, M. A.; Larraga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Rubio-Osornio, M. C.; Custodio-Ramirez, V.; Paz, C.

    2010-12-07

In radiotherapy and stereotactic radiosurgery, small animal experimental models are frequently used, since there are still a lot of unsolved questions about the biological and biochemical effects of ionizing radiation. This work presents a method for small-animal brain radiotherapy compatible with a dedicated 6MV Linac. This rodent model is focused on the research of the inflammatory effects produced by ionizing radiation in the brain. In this work, comparisons between Pencil Beam and Monte Carlo techniques were used in order to evaluate the accuracy of the calculated dose using a commercial planning system. Challenges in this murine model are discussed.

  6. Set membership experimental design for biological systems

    PubMed Central

    2012-01-01

    Background Experimental design approaches for biological systems are needed to help conserve the limited resources that are allocated for performing experiments. The assumptions used when assigning probability density functions to characterize uncertainty in biological systems are unwarranted when only a small number of measurements can be obtained. In these situations, the uncertainty in biological systems is more appropriately characterized in a bounded-error context. Additionally, effort must be made to improve the connection between modelers and experimentalists by relating design metrics to biologically relevant information. Bounded-error experimental design approaches that can assess the impact of additional measurements on model uncertainty are needed to identify the most appropriate balance between the collection of data and the availability of resources. Results In this work we develop a bounded-error experimental design framework for nonlinear continuous-time systems when few data measurements are available. This approach leverages many of the recent advances in bounded-error parameter and state estimation methods that use interval analysis to generate parameter sets and state bounds consistent with uncertain data measurements. We devise a novel approach using set-based uncertainty propagation to estimate measurement ranges at candidate time points. We then use these estimated measurements at the candidate time points to evaluate which candidate measurements furthest reduce model uncertainty. A method for quickly combining multiple candidate time points is presented and allows for determining the effect of adding multiple measurements. Biologically relevant metrics are developed and used to predict when new data measurements should be acquired, which system components should be measured and how many additional measurements should be obtained. Conclusions The practicability of our approach is illustrated with a case study. This study shows that our
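The flavour of this bounded-error approach can be sketched with interval propagation through a toy decay model. The model, rate bounds, and candidate times below are invented; the paper's framework handles general nonlinear continuous-time systems:

```python
import math

def predicted_interval(t, x0, k_lo, k_hi):
    """Bounded-error prediction for x(t) = x0 * exp(-k*t) when the
    decay rate k is only known to lie in the interval [k_lo, k_hi]."""
    return (x0 * math.exp(-k_hi * t), x0 * math.exp(-k_lo * t))

def most_informative_time(times, x0, k_lo, k_hi):
    """Heuristic: measure where the predictions of the consistent
    models spread the most, so a measurement there excludes the
    largest set of parameter values."""
    def width(t):
        lo, hi = predicted_interval(t, x0, k_lo, k_hi)
        return hi - lo
    return max(times, key=width)

# Candidate measurement times (invented); k is bounded in [0.5, 1.5].
times = [0.0, 0.5, 1.0, 2.0, 4.0, 8.0]
t_star = most_informative_time(times, x0=1.0, k_lo=0.5, k_hi=1.5)
```

At t = 0 all consistent models agree and a measurement is worthless; at very late times every trajectory has decayed to near zero, so intermediate times win.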

  7. Ceramic processing: Experimental design and optimization

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.; Lauben, David N.; Madrid, Philip

    1992-01-01

    The objectives of this paper are to: (1) gain insight into the processing of ceramics and how green processing can affect the properties of ceramics; (2) investigate the technique of slip casting; (3) learn how heat treatment and temperature contribute to density, strength, and effects of under and over firing to ceramic properties; (4) experience some of the problems inherent in testing brittle materials and learn about the statistical nature of the strength of ceramics; (5) investigate orthogonal arrays as tools to examine the effect of many experimental parameters using a minimum number of experiments; (6) recognize appropriate uses for clay based ceramics; and (7) measure several different properties important to ceramic use and optimize them for a given application.
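Item (5), orthogonal arrays, can be illustrated with the standard L4 array, which screens three two-level processing factors in four runs instead of the eight a full factorial would need. The strength numbers below are invented for illustration:

```python
# L4 (2^3) orthogonal array: 4 runs probe three two-level factors so
# that every pair of factors sees all four level combinations equally.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def main_effect(results, factor):
    """Average response at level 1 minus average response at level 0."""
    hi = [r for run, r in zip(L4, results) if run[factor] == 1]
    lo = [r for run, r in zip(L4, results) if run[factor] == 0]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Hypothetical fired-strength results (MPa) for the four runs.
results = [210.0, 250.0, 230.0, 270.0]
effects = [main_effect(results, f) for f in range(3)]  # → [20.0, 40.0, 0.0]
```

With these invented numbers, factor 1 dominates, factor 0 matters somewhat, and factor 2 has no effect, which is exactly the screening information the array is designed to deliver cheaply.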

  8. Automatic Molecular Design using Evolutionary Techniques

    NASA Technical Reports Server (NTRS)

    Globus, Al; Lawton, John; Wipke, Todd; Saini, Subhash (Technical Monitor)

    1998-01-01

    Molecular nanotechnology is the precise, three-dimensional control of materials and devices at the atomic scale. An important part of nanotechnology is the design of molecules for specific purposes. This paper describes early results using genetic software techniques to automatically design molecules under the control of a fitness function. The fitness function must be capable of determining which of two arbitrary molecules is better for a specific task. The software begins by generating a population of random molecules. The population is then evolved towards greater fitness by randomly combining parts of the better individuals to create new molecules. These new molecules then replace some of the worst molecules in the population. The unique aspect of our approach is that we apply genetic crossover to molecules represented by graphs, i.e., sets of atoms and the bonds that connect them. We present evidence suggesting that crossover alone, operating on graphs, can evolve any possible molecule given an appropriate fitness function and a population containing both rings and chains. Prior work evolved strings or trees that were subsequently processed to generate molecular graphs. In principle, genetic graph software should be able to evolve other graph representable systems such as circuits, transportation networks, metabolic pathways, computer networks, etc.

  9. An Experimental Test of a Craving Management Technique for Adolescents in Substance-Abuse Treatment

    ERIC Educational Resources Information Center

    Florsheim, Paul; Heavin, Sarah; Tiffany, Stephen; Colvin, Peter; Hiraoka, Regina

    2008-01-01

    This paper describes an experiment designed to test an imagery-based craving management technique with a sample of adolescents diagnosed with substance-use disorders. Seventy adolescents between the ages of 14 and 18 (41 males) were recruited through two substance-abuse treatment programs. The experimental procedure involved stimulating craving…

  10. Fourier transform approach in modulation technique of experimental measurements.

    PubMed

    Khazimullin, M V; Lebedev, Yu A

    2010-04-01

An application of the Fourier transform approach to the modulation technique of experimental studies is considered. This method has obvious advantages over the traditional lock-in amplifier technique: a simple experimental setup, quick access to all the required harmonics, and high-speed data processing using the fast Fourier transform algorithm. A computationally simple, fast, and accurate Fourier coefficients interpolation (FCI) method has been implemented to extract useful information from the harmonics of a multimode signal. Our analysis shows that in this case the FCI method has a systematic error (bias) in the estimation of signal parameters, which becomes significant for short data sets. Hence, a new differential Fourier coefficients interpolation (DFCI) method is suggested, which is less sensitive to the presence of several modes in a signal. The analysis has been confirmed by simulations and by measurements of the birefringence of a quartz wedge by means of a photoelastic modulator. The bias, noise level, and measuring speed obtained are comparable to, and even better than, those of the lock-in amplifier technique. Moreover, the presented DFCI method is a promising candidate for use in actively developing imaging systems based on the modulation technique, which require fast digital signal processing of large data sets.
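The core idea, reading off modulation harmonics from a Fourier transform rather than from one lock-in channel per harmonic, can be sketched as follows. A plain DFT stands in for the FFT, and the signal and its harmonic amplitudes are invented:

```python
import cmath
import math

def dft_coeff(samples, k):
    """k-th Fourier coefficient of one period of a sampled signal
    (what an FFT delivers for all k at once)."""
    n = len(samples)
    return sum(s * cmath.exp(-2j * math.pi * k * i / n)
               for i, s in enumerate(samples)) / n

# Invented modulated signal with a 1st and a 2nd harmonic, as a
# photoelastic-modulator setup might produce; amplitudes 0.8 and 0.3.
n = 64
signal = [0.8 * math.cos(2 * math.pi * i / n) +
          0.3 * math.cos(2 * math.pi * 2 * i / n) for i in range(n)]

a1 = 2 * abs(dft_coeff(signal, 1))  # recovered 1st-harmonic amplitude
a2 = 2 * abs(dft_coeff(signal, 2))  # recovered 2nd-harmonic amplitude
```

Both harmonics come out of a single transform of one record, which is the advantage over sweeping a lock-in amplifier across harmonics.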

  11. Nonlinear potential analysis techniques for supersonic-hypersonic aerodynamic design

    NASA Technical Reports Server (NTRS)

    Shankar, V.; Clever, W. C.

    1984-01-01

Approximate nonlinear inviscid theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at supersonic and moderate hypersonic speeds were developed. Emphasis was placed on approaches that would be responsive to a conceptual configuration design level of effort. Second-order small-disturbance and full-potential theories were utilized to meet this objective. Numerical codes were developed for relatively general three-dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with experimental results for a variety of wing, body, and wing-body shapes.

  12. Machine Learning Techniques in Optimal Design

    NASA Technical Reports Server (NTRS)

    Cerbone, Giuseppe

    1992-01-01

Many important applications can be formalized as constrained optimization tasks. For example, we are studying the engineering domain of two-dimensional (2-D) structural design. In this task, the goal is to design a structure of minimum weight that bears a set of loads. A solution to a design problem with a single load (L) and two stationary support points (S1 and S2), consisting of four members, E1, E2, E3, and E4, that connect the load to the support points, is discussed. In principle, optimal solutions to problems of this kind can be found by numerical optimization techniques. However, in practice [Vanderplaats, 1984] these methods are slow and they can produce different local solutions whose quality (ratio to the global optimum) varies with the choice of starting points. Hence, their applicability to real-world problems is severely restricted. To overcome these limitations, we propose to augment numerical optimization by first performing a symbolic compilation stage to produce: (a) objective functions that are faster to evaluate and that depend less on the choice of the starting point and (b) selection rules that associate problem instances to a set of recommended solutions. These goals are accomplished by successive specializations of the problem class and of the associated objective functions. In the end, this process reduces the problem to a collection of independent functions that are fast to evaluate, that can be differentiated symbolically, and that represent smaller regions of the overall search space. However, the specialization process can produce a large number of sub-problems. This is overcome by inductively deriving selection rules which associate problems with small sets of specialized independent sub-problems. Each set of candidate solutions is chosen to minimize a cost function which expresses the tradeoff between the quality of the solution that can be obtained from the sub-problem and the time it takes to produce it.
The overall solution…

  13. Optimizing an experimental design for an electromagnetic experiment

    NASA Astrophysics Data System (ADS)

    Roux, Estelle; Garcia, Xavier

    2013-04-01

Most geophysical studies focus on data acquisition and analysis, but another aspect which is gaining importance is the discussion on acquisition of suitable datasets. This can be done through the design of an optimal experiment. Optimizing an experimental design implies a compromise between maximizing the information we get about the target and reducing the cost of the experiment, considering a wide range of constraints (logistical, financial, experimental …). We are currently developing a method to design an optimal controlled-source electromagnetic (CSEM) experiment to detect a potential CO2 reservoir and monitor this reservoir during and after CO2 injection. Our statistical algorithm combines the use of linearized inverse theory (to evaluate the quality of one given design via the objective function) and stochastic optimization methods like genetic algorithms (to examine a wide range of possible surveys). The particularity of our method is that it uses a multi-objective genetic algorithm that searches for designs that fit several objective functions simultaneously. One main advantage of this kind of technique to design an experiment is that it does not require the acquisition of any data and can thus be easily conducted before any geophysical survey. Our new experimental design algorithm has been tested with a realistic one-dimensional resistivity model of the Earth in the region of study (northern Spain CO2 sequestration test site). We show that a small number of well distributed observations have the potential to resolve the target. This simple test also points out the importance of a well chosen objective function. Finally, in the context of CO2 sequestration that motivates this study, we might be interested in maximizing the information we get about the reservoir layer. In that case, we show how the combination of two different objective functions considerably improves its resolution.
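The multi-objective idea, keeping only the survey designs that no other design beats on every objective at once, reduces to a Pareto-dominance test. A minimal sketch with invented (cost, misfit) scores for candidate surveys, both objectives minimized:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and
    strictly better in at least one (objectives are minimized)."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def pareto_front(designs):
    """Keep the designs no other design dominates; a multi-objective
    genetic algorithm evolves its population toward this set."""
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]

# Invented (cost, misfit) scores for four candidate CSEM surveys.
designs = [(4.0, 1.0), (2.0, 3.0), (3.0, 2.0), (5.0, 2.5)]
front = pareto_front(designs)  # the last design is dominated
```

The surviving front exposes the cost-versus-information trade-off, leaving the final pick to the experimenter rather than to a single weighted objective.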

  14. Hybridizing experimental, numerical, and analytical stress analysis techniques

    NASA Astrophysics Data System (ADS)

    Rowlands, Robert E.

    2001-06-01

Good measurements enjoy the advantage of conveying what actually occurs. However, recognizing that vast amounts of displacement, strain, and/or stress-related information can now be recorded at high resolution, effective and reliable means of processing the data become important. It can therefore be advantageous to combine measured results with analytical and computational methods. This presentation describes such synergism and its applications to engineering problems, including static and transient analysis, notched and perforated composites, and fracture of composites and fiber-filled cement. The experimental methods of moiré, thermoelasticity, and strain gages are emphasized. Numerical techniques utilized include pseudo finite-element and boundary-element concepts.

  15. [Design and experimentation of marine optical buoy].

    PubMed

    Yang, Yue-Zhong; Sun, Zhao-Hua; Cao, Wen-Xi; Li, Cai; Zhao, Jun; Zhou, Wen; Lu, Gui-Xin; Ke, Tian-Cun; Guo, Chao-Ying

    2009-02-01

Marine optical buoys are of great value for the calibration and validation of ocean color remote sensing, scientific observation, coastal environment monitoring, etc. A marine optical buoy system was designed which consists of a main and a slave buoy. The system can synchronously measure the distribution of irradiance and radiance over the sea surface, in the layer near the sea surface, and in the euphotic zone, while also acquiring other parameters such as the spectral absorption and scattering coefficients of the water column and the velocity and direction of the wind. The buoy was positioned by GPS. A low-power integrated PC104 computer was used as the control core to collect data automatically. Data and commands were transmitted in real time over CDMA/GPRS wireless networks or via maritime satellite. Coastal marine experimentation demonstrated that the buoy has small pitch and roll rates in high sea state conditions and thus can meet the needs of underwater radiometric measurements, that the data collection and remote transmission are reliable, and that the auto-operated anti-biofouling devices can ensure the optical sensors work effectively for a period of several months.

  16. Experimental Technique for Studying Aerosols of Lyophilized Bacteria

    PubMed Central

    Cox, Christopher S.; Derr, John S.; Flurie, Eugene G.; Roderick, Roger C.

    1970-01-01

    An experimental technique is presented for studying aerosols generated from lyophilized bacteria by using Escherichia coli B, Bacillus subtilis var. niger, Enterobacter aerogenes, and Pasteurella tularensis. An aerosol generator capable of creating fine particle aerosols of small quantities (10 mg) of lyophilized powder under controlled conditions of exposure to the atmosphere is described. The physical properties of the aerosols are investigated as to the distribution of number of aerosol particles with particle size as well as to the distribution of number of bacteria with particle size. Biologically unstable vegetative cells were quantitated physically by using 14C and Europium chelate stain as tracers, whereas the stable heat-shocked B. subtilis spores were assayed biologically. The physical persistence of the lyophilized B. subtilis aerosol is investigated as a function of size of spore-containing particles. The experimental result that physical persistence of the aerosol in a closed aerosol chamber increases as particle size is decreased is satisfactorily explained on the bases of electrostatic, gravitational, inertial, and diffusion forces operating to remove particles from the particular aerosol system. The net effect of these various forces is to provide, after a short time interval in the system (about 2 min), an aerosol of fine particles with enhanced physical stability. The dependence of physical stability of the aerosol on the species of organism and the nature of the suspending medium for lyophilization is indicated. Also, limitations and general applicability of both the technique and results are discussed. PMID:4992657

  17. Comparison of deaerator performance using experimental and numerical techniques

    NASA Astrophysics Data System (ADS)

    Majji, Sri Harsha

A deaerator is a component of the integrated drive generator (IDG), used to separate air from oil. The integrated drive generator is the main power generation unit used in aircraft to generate electric power, and it must be cooled to give maximum efficiency. Mobil Jet Oil II is used in these IDGs as a lubricant and coolant, so, in order to get high-quality oil, a deaerator is used to remove entrapped air from the oil using the centrifugal principle. The entrapment of air may be due to the operation of vacuum and high-pressure pumps. In this study, the performance of the generic 75/90 IDG and A320 classic deaerators was evaluated using both experimental and numerical techniques. Experimental data were collected from a deaerator test rig, and numerical data were obtained from CFD simulations (the software used for the CFD simulations is ANSYS CFX). The experimental and numerical results were compared, and the 75/90 generic and A320 classic deaerators were also compared with each other. A parametric study of deaerator flow separation and inner geometry was also carried out, along with a comparison of different multiphase models and different meshes applied to the deaerator numerical test methodology.

  18. An infrared technique for evaluating turbine airfoil cooling designs

    SciTech Connect

    Sweeney, P.C.; Rhodes, J.F.

    2000-01-01

An experimental approach is used to evaluate turbine airfoil cooling designs for advanced gas turbine engine applications by incorporating double-wall film-cooled design features into large-scale flat plate specimens. An infrared (IR) imaging system is used to make detailed, two-dimensional steady-state measurements of flat plate surface temperature with spatial resolution on the order of 0.4 mm. The technique employs a cooled zinc selenide window transparent to infrared radiation and calibrates the IR temperature readings to reference thermocouples embedded in each specimen, yielding a surface temperature measurement accuracy of ±4 °C. With minimal thermocouple installation required, the flat plate/IR approach is cost effective, essentially nonintrusive, and produces abundant results quickly. Design concepts can proceed from art to part to data in a manner consistent with aggressive development schedules. The infrared technique is demonstrated here by considering the effect of film hole injection angle for a staggered array of film cooling holes integrated with a highly effective internal cooling pattern. Heated free stream air and room temperature cooling air are used to produce a nominal temperature ratio of 2 over a range of blowing ratios from 0.7 to 1.5. Results were obtained at hole angles of 90 and 30 deg for two different hole spacings and are presented in terms of overall cooling effectiveness.

  19. Experimental Methods Using Photogrammetric Techniques for Parachute Canopy Shape Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Thomas W.; Downey, James M.; Lunsford, Charles B.; Desabrais, Kenneth J.; Noetscher, Gregory

    2007-01-01

    NASA Langley Research Center in partnership with the U.S. Army Natick Soldier Center has collaborated on the development of a payload instrumentation package to record the physical parameters observed during parachute air drop tests. The instrumentation package records a variety of parameters including canopy shape, suspension line loads, payload 3-axis acceleration, and payload velocity. This report discusses the instrumentation design and development process, as well as the photogrammetric measurement technique used to provide shape measurements. The scaled model tests were conducted in the NASA Glenn Plum Brook Space Propulsion Facility, OH.

  20. Entropy-Based Search Algorithm for Experimental Design

    NASA Astrophysics Data System (ADS)

    Malakar, N. K.; Knuth, K. H.

    2011-03-01

    The scientific method relies on the iterated processes of inference and inquiry. The inference phase consists of selecting the most probable models based on the available data; whereas the inquiry phase consists of using what is known about the models to select the most relevant experiment. Optimizing inquiry involves searching the parameterized space of experiments to select the experiment that promises, on average, to be maximally informative. In the case where it is important to learn about each of the model parameters, the relevance of an experiment is quantified by Shannon entropy of the distribution of experimental outcomes predicted by a probable set of models. If the set of potential experiments is described by many parameters, we must search this high-dimensional entropy space. Brute force search methods will be slow and computationally expensive. We present an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment for efficient experimental design. This algorithm is inspired by Skilling's nested sampling algorithm used in inference and borrows the concept of a rising threshold while a set of experiment samples are maintained. We demonstrate that this algorithm not only selects highly relevant experiments, but also is more efficient than brute force search. Such entropic search techniques promise to greatly benefit autonomous experimental design.
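In the spirit of the algorithm described, a rising-threshold search over a toy one-dimensional experiment space might look like the following. This is a sketch only, not the authors' implementation; the entropy landscape and all parameters are invented:

```python
import random

def nested_entropy_search(entropy, sample, n=20, iters=200, seed=0):
    """Nested-sampling-style search: maintain n candidate experiments
    and repeatedly replace the least informative one with a fresh
    sample that beats the rising entropy threshold."""
    rng = random.Random(seed)
    population = [sample(rng) for _ in range(n)]
    for _ in range(iters):
        worst = min(population, key=entropy)
        threshold = entropy(worst)       # threshold rises over time
        candidate = sample(rng)
        if entropy(candidate) > threshold:
            population.remove(worst)
            population.append(candidate)
    return max(population, key=entropy)

# Toy 1-D entropy landscape peaked at x = 0.7 (invented for illustration).
entropy = lambda x: -(x - 0.7) ** 2
best = nested_entropy_search(entropy, lambda rng: rng.random())
```

Because only samples above the current threshold are ever accepted, the population concentrates near the entropy peak without evaluating the entire experiment space on a grid, which is the claimed advantage over brute-force search.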

  1. Web Based Learning Support for Experimental Design in Molecular Biology.

    ERIC Educational Resources Information Center

    Wilmsen, Tinri; Bisseling, Ton; Hartog, Rob

    An important learning goal of a molecular biology curriculum is a certain proficiency level in experimental design. Currently students are confronted with experimental approaches in textbooks, in lectures and in the laboratory. However, most students do not reach a satisfactory level of competence in the design of experimental approaches. This…

  2. Techniques for designing rotorcraft control systems

    NASA Technical Reports Server (NTRS)

    Levine, William S.; Barlow, Jewel

    1993-01-01

    This report summarizes the work that was done on the project from 1 Apr. 1992 to 31 Mar. 1993. The main goal of this research is to develop a practical tool for rotorcraft control system design based on interactive optimization tools (CONSOL-OPTCAD) and classical rotorcraft design considerations (ADOCS). This approach enables the designer to combine engineering intuition and experience with parametric optimization. The combination should make it possible to produce a better design faster than would be possible using either pure optimization or pure intuition and experience. We emphasize that the goal of this project is not to develop an algorithm. It is to develop a tool. We want to keep the human designer in the design process to take advantage of his or her experience and creativity. The role of the computer is to perform the calculation necessary to improve and to display the performance of the nominal design. Briefly, during the first year we have connected CONSOL-OPTCAD, an existing software package for optimizing parameters with respect to multiple performance criteria, to a simplified nonlinear simulation of the UH-60 rotorcraft. We have also created mathematical approximations to the Mil-specs for rotorcraft handling qualities and input them into CONSOL-OPTCAD. Finally, we have developed the additional software necessary to use CONSOL-OPTCAD for the design of rotorcraft controllers.

  3. The transcendental meditation technique and acute experimental pain.

    PubMed

    Mills, W W; Farrow, J T

    1981-04-01

The Transcendental Meditation (TM) technique decreases the distress associated with the experience of acute experimental pain. Fifteen advanced meditators and 15 controls were administered the cold pressor test before and after a 20 minute period of meditation (TM group) or relaxation (control group). Verbal reports of the intensity of pain sensation and pain distress were obtained at intervals during the cold pressor trials. Skin resistance and heart rate were measured throughout. The mean distress level for the TM group was significantly lower than that of controls during both trials; the mean pain sensation level for the TM group did not differ significantly from controls during either trial. Heart rate and skin resistance changed for both groups in the expected manner, with no significant differences between groups. The validity, implications, and possible causes of these results are discussed.

  4. ESR dating: is it still an 'experimental' technique?

    PubMed

    Skinner, A R

    2000-05-01

    Nearly 25 years ago, Motoji Ikeya demonstrated the potential of ESR dating. From a single substance (stalagmitic carbonate) and a single site (Akiyoshi Cavern), the field has grown to include materials from all over the world and time periods from a few thousand years ago to several million years ago. A vigorous program of instrumentation development has increased the precision of measurements as well as opening up new ways of collecting and interpreting spectra. Yet there are still references to ESR dating as an 'experimental' technique, one which cannot be trusted to produce dates that are accurate or precise. This paper discusses areas for which this is true and suggests what should be done to convince skeptics. Other areas for which the evidence suggests that ESR is at least as reliable as 'standard' methods will also be covered.

  5. Experimental Verification of Structural-Acoustic Modelling and Design Optimization

    NASA Astrophysics Data System (ADS)

    MARBURG, S.; BEER, H.-J.; GIER, J.; HARDTKE, H.-J.; RENNERT, R.; PERRET, F.

    2002-05-01

A number of papers have been published on the simulation of structural-acoustic design optimization. However, extensive work is required to verify these results in practical applications. Herein, a steel box of 1·0×1·1×1·5 m with an external beam structure welded on three surface plates was investigated. This investigation included experimental modal analysis and experimental measurements of certain noise transfer functions (sound pressure at points inside the box due to force excitation at the beam structure). Using these experimental data, the finite element model of the structure was tuned to provide similar results. With a first structural mode at less than 20 Hz, the reliable frequency range was identified up to about 60 Hz. Obviously, the finite element model could not be further improved only by mesh refinement. The tuning process will be explained in detail since there were a number of changes that helped to improve the structure. Other changes did not improve the structure. Although this box might be expected to be a rather simple structure, it can be considered a complex structure for simulation purposes. A defined modification of the physical model verified the simulation model. In a final step, the optimal location of stiffening beam structures was predicted by simulation. Their effect on the noise transfer function was experimentally verified. This paper critically discusses modelling techniques that are applied for structural-acoustic simulation of sedan bodies.

  6. An experimental technique for determining middle ear impedance.

    PubMed

    Blayney, A W; McAvoy, G J; Rice, H J; Williams, K R

    1996-03-01

A two-microphone technique was used to determine the middle ear impedance of a live subject. The procedure involved the application of standing wave tube theory and the assumption that the ear canal behaves like a homogeneous cylinder with plane acoustic wave propagation up to a certain frequency (2 kHz for the current analysis). During experimentation the subject lay on a bench with his head braced against a wooden fixture. Acoustic pressures were recorded from the ear canal by the use of a spectrum analyser and probe microphones with flexible tips. Resultant impedance curves show middle ear natural frequencies at 831 Hz and 1,970 Hz with high levels of damping. The reactive impedance curves show the influence of stiffness and ossicular mass on middle ear sound transmission. An advantage of the approach is that, using features of the recorded data, it is possible to calculate the effective probe tip to eardrum distance required for the calculation of the middle ear impedance. The two-microphone technique appears to be a promising tool for assessing healthy and diseased middle ear function. PMID:8725514
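The standing-wave-tube relation behind such measurements can be sketched with the generic two-microphone transfer-function method (the textbook plane-wave relation, not necessarily the authors' exact processing; the geometry, frequency, and reflection coefficient below are illustrative):

```python
import numpy as np

# Two-microphone transfer-function method: the complex transfer function H12
# between two probe positions yields the reflection coefficient R at the
# termination (here, standing in for the eardrum) and hence its impedance Z.
rho, c = 1.21, 343.0            # air density (kg/m^3), speed of sound (m/s)
f = 1000.0                      # frequency, Hz (below the plane-wave limit)
k = 2 * np.pi * f / c           # wavenumber
x1, s = 0.030, 0.015            # mic 1 to termination distance, mic spacing (m)

# Fabricate H12 from a known reflection coefficient, then recover it
R_true = 0.6 * np.exp(1j * 0.4)
p = lambda x: np.exp(-1j * k * x) + R_true * np.exp(1j * k * x)  # field p(x)
H12 = p(x1 + s) / p(x1)         # "measured" transfer function mic2/mic1

R = (H12 - np.exp(-1j * k * s)) / (np.exp(1j * k * s) - H12) * np.exp(-2j * k * x1)
Z = rho * c * (1 + R) / (1 - R)     # normal-incidence impedance at termination
print("recovered R:", np.round(R, 3), " Z:", np.round(Z, 1))
```

The inversion recovers exactly the reflection coefficient used to fabricate the transfer function; with real probe data, H12 would instead come from the spectrum analyser.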

  7. Equipment and Experimental Technique For Temperature Measurements In Deep Boreholes

    NASA Astrophysics Data System (ADS)

    Khristoforov, A.

The technique of temperature measurement is highly informative, since any dynamical processes in boreholes and their vicinity are accompanied by thermal effects. Electronics and equipment for remote measurements in boreholes are briefly discussed in the report. The system includes a downhole instrument, a cable winch, and a surface recording unit mounted on a vehicle. The downhole instrument uses a temperature-dependent, frequency-modulated signal. A cable of original construction was developed for the lowering and raising operations; it serves simultaneously as signal and power channel and also acts as the depth meter. The surface recording unit includes a power supply for the downhole instrument, a receiver, a frequency meter, and an indicator. A personal computer provides numerical control of the measurements, and the electronics are powered by the vehicle battery. Self-sufficiency and high accuracy are distinguishing features of the equipment. Using this technique and equipment, we carried out experimental temperature studies in boreholes of the East European platform, Middle Asia, West Siberia, Kamchatka, and other regions. Most of our temperature and temperature-gradient data have been used for mapping.

  8. Cloud Computing Techniques for Space Mission Design

    NASA Technical Reports Server (NTRS)

    Arrieta, Juan; Senent, Juan

    2014-01-01

The overarching objective of space mission design is to tackle complex problems, producing better results faster. In developing the methods and tools to fulfill this objective, the user interacts with the different layers of a computing system.

  9. CMOS-array design-automation techniques

    NASA Technical Reports Server (NTRS)

    Feller, A.; Lombardt, T.

    1979-01-01

Thirty-four-page report discusses design of 4,096-bit complementary metal oxide semiconductor (CMOS) read-only memory (ROM). CMOS ROM is either mask or laser programmable. Report is divided into six sections: section one describes background of ROM chips; section two presents design goals for chip; section three discusses chip implementation and chip statistics; conclusions and recommendations are given in sections four through six.

  10. Nonlinear potential analysis techniques for supersonic-hypersonic configuration design

    NASA Technical Reports Server (NTRS)

    Clever, W. C.; Shankar, V.

    1983-01-01

Approximate nonlinear inviscid theoretical techniques for predicting aerodynamic characteristics and surface pressures for relatively slender vehicles at moderate hypersonic speeds were developed. Emphasis was placed on approaches that would be responsive to a preliminary configuration design level of effort. Second-order small-disturbance and full-potential theories were utilized to meet this objective. Numerical pilot codes were developed for relatively general three dimensional geometries to evaluate the capability of the approximate equations of motion considered. Results from the computations indicate good agreement with higher order solutions and experimental results for a variety of wing, body, and wing-body shapes for values of the hypersonic similarity parameter M delta approaching one. Computational times of about one minute per case were achieved for practical aircraft arrangements.

  11. Damage tolerant design using collapse techniques

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1982-01-01

A new approach to the design of structures for improved global damage tolerance is presented. In its undamaged condition the structure is designed subject to strength, displacement and buckling constraints. In the damaged condition the only constraint is that the structure will not collapse. The collapse load calculation is formulated as a maximization problem and solved by an interior extended penalty function. The design for minimum weight subject to constraints on the undamaged structure and a specified level of the collapse load is a minimization problem which is also solved by a penalty function formulation. Thus the overall problem is a nested, or multilevel, optimization. Examples are presented to demonstrate the difference between the present and more traditional approaches.
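The interior (barrier) penalty idea used at both levels can be illustrated on a toy member-sizing problem; the member forces, allowable stress, and barrier schedule below are invented for illustration and are not from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical two-member sizing: minimize weight (proportional to total
# cross-sectional area) subject to stress limits F_i / A_i <= sigma_allow.
F = np.array([100.0, 60.0])      # member forces (assumed)
sigma_allow = 50.0               # allowable stress (assumed)

def barrier(A, r):
    # Interior penalty: objective plus a term that blows up at the boundary,
    # keeping every iterate strictly feasible.
    if np.any(A <= 0):
        return np.inf
    g = sigma_allow - F / A      # constraints g_i(A) > 0 when feasible
    if np.any(g <= 0):
        return np.inf
    return A.sum() + r * np.sum(1.0 / g)

A = np.array([5.0, 5.0])         # strictly feasible start: stresses 20, 12 < 50
for r in [1.0, 0.1, 0.01, 0.001]:        # shrink the penalty parameter
    A = minimize(lambda x: barrier(x, r), A, method="Nelder-Mead").x
print("areas:", np.round(A, 3), "stresses:", np.round(F / A, 1))
```

As r shrinks, the design approaches the stress-constraint boundary, recovering the fully stressed sizes A_i ≈ F_i / sigma_allow while every intermediate design remains feasible.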

  12. Techniques for designing rotorcraft control systems

    NASA Technical Reports Server (NTRS)

    Yudilevitch, Gil; Levine, William S.

    1994-01-01

    Over the last two and a half years we have been demonstrating a new methodology for the design of rotorcraft flight control systems (FCS) to meet handling qualities requirements. This method is based on multicriterion optimization as implemented in the optimization package CONSOL-OPTCAD (C-O). This package has been developed at the Institute for Systems Research (ISR) at the University of Maryland at College Park. This design methodology has been applied to the design of a FCS for the UH-60A helicopter in hover having the ADOCS control structure. The controller parameters have been optimized to meet the ADS-33C specifications. Furthermore, using this approach, an optimal (minimum control energy) controller has been obtained and trade-off studies have been performed.

  13. Techniques for Molecular Imaging Probe Design

    PubMed Central

    Reynolds, Fred; Kelly, Kimberly A.

    2011-01-01

    Molecular imaging allows clinicians to visualize disease specific molecules, thereby providing relevant information in the diagnosis and treatment of patients. With advances in genomics and proteomics and underlying mechanisms of disease pathology, the number of targets identified has significantly outpaced the number of developed molecular imaging probes. There has been a concerted effort to bridge this gap with multidisciplinary efforts in chemistry, proteomics, physics, material science, and biology; all essential to progress in molecular imaging probe development. In this review, we will discuss target selection, screening techniques and probe optimization with the aim of developing clinically relevant molecularly targeted imaging agents. PMID:22201532

  14. Conceptual design report, CEBAF basic experimental equipment

    SciTech Connect

    1990-04-13

    The Continuous Electron Beam Accelerator Facility (CEBAF) will be dedicated to basic research in Nuclear Physics using electrons and photons as projectiles. The accelerator configuration allows three nearly continuous beams to be delivered simultaneously in three experimental halls, which will be equipped with complementary sets of instruments: Hall A--two high resolution magnetic spectrometers; Hall B--a large acceptance magnetic spectrometer; Hall C--a high-momentum, moderate resolution, magnetic spectrometer and a variety of more dedicated instruments. This report contains a short description of the initial complement of experimental equipment to be installed in each of the three halls.

  15. Verification of Experimental Techniques for Flow Surface Determination

    NASA Technical Reports Server (NTRS)

    Lissenden, Cliff J.; Lerch, Bradley A.; Ellis, John R.; Robinson, David N.

    1996-01-01

The concept of a yield surface is central to the mathematical formulation of a classical plasticity theory. However, at elevated temperatures, material response can be highly time-dependent, which is beyond the realm of classical plasticity. Viscoplastic theories have been developed for just such conditions. In viscoplastic theories, the flow law is given in terms of inelastic strain rate rather than the inelastic strain increment used in time-independent plasticity. Thus, surfaces of constant inelastic strain rate or flow surfaces are to viscoplastic theories what yield surfaces are to classical plasticity. The purpose of the work reported herein was to validate experimental procedures for determining flow surfaces at elevated temperatures. Since experimental procedures for determining yield surfaces in axial/torsional stress space are well established, they were employed, except that inelastic strain rates were used rather than total inelastic strains. In yield-surface determinations, the use of small-offset definitions of yield minimizes the change of material state and allows multiple loadings to be applied to a single specimen. The key to the experiments reported here was precise, decoupled measurement of axial and torsional strain. With this requirement in mind, the performance of a high-temperature multi-axial extensometer was evaluated by comparing its results with strain gauge results at room temperature. Both the extensometer and strain gauges gave nearly identical yield surfaces (both initial and subsequent) for type 316 stainless steel (316 SS). The extensometer also successfully determined flow surfaces for 316 SS at 650 C. Furthermore, to judge the applicability of the technique for composite materials, yield surfaces were determined for unidirectional tungsten/Kanthal (Fe-Cr-Al).
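The small-offset definition of yield can be sketched in a few lines; the modulus, hardening curve, and the 10 microstrain threshold below are synthetic stand-ins for a single probe's data:

```python
import numpy as np

# Small-offset yield detection: inelastic strain = total strain - stress/E;
# yield is declared when it first exceeds a small offset (10 microstrain here).
E = 193e3                       # assumed Young's modulus for 316 SS, MPa
stress = np.linspace(0, 260, 600)                    # MPa, loading ramp
plastic = np.where(stress > 200,                     # synthetic hardening
                   ((stress - 200) / 60.0) ** 2 * 1e-3, 0.0)
total_strain = stress / E + plastic                  # synthetic probe response

offset = 10e-6                  # 10 microstrain offset criterion
inelastic = total_strain - stress / E
idx = np.argmax(inelastic > offset)                  # first crossing
print(f"offset yield stress = {stress[idx]:.0f} MPa")
```

Because the offset is tiny, the material state is left essentially unchanged, which is what allows many such probes on one specimen.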

  16. Skin Microbiome Surveys Are Strongly Influenced by Experimental Design.

    PubMed

    Meisel, Jacquelyn S; Hannigan, Geoffrey D; Tyldsley, Amanda S; SanMiguel, Adam J; Hodkinson, Brendan P; Zheng, Qi; Grice, Elizabeth A

    2016-05-01

    Culture-independent studies to characterize skin microbiota are increasingly common, due in part to affordable and accessible sequencing and analysis platforms. Compared to culture-based techniques, DNA sequencing of the bacterial 16S ribosomal RNA (rRNA) gene or whole metagenome shotgun (WMS) sequencing provides more precise microbial community characterizations. Most widely used protocols were developed to characterize microbiota of other habitats (i.e., gastrointestinal) and have not been systematically compared for their utility in skin microbiome surveys. Here we establish a resource for the cutaneous research community to guide experimental design in characterizing skin microbiota. We compare two widely sequenced regions of the 16S rRNA gene to WMS sequencing for recapitulating skin microbiome community composition, diversity, and genetic functional enrichment. We show that WMS sequencing most accurately recapitulates microbial communities, but sequencing of hypervariable regions 1-3 of the 16S rRNA gene provides highly similar results. Sequencing of hypervariable region 4 poorly captures skin commensal microbiota, especially Propionibacterium. WMS sequencing, which is resource and cost intensive, provides evidence of a community's functional potential; however, metagenome predictions based on 16S rRNA sequence tags closely approximate WMS genetic functional profiles. This study highlights the importance of experimental design for downstream results in skin microbiome surveys. PMID:26829039

  17. Experimental Stream Facility: Design and Research

    EPA Science Inventory

    The Experimental Stream Facility (ESF) is a valuable research tool for the U.S. Environmental Protection Agency’s (EPA) Office of Research and Development’s (ORD) laboratories in Cincinnati, Ohio. This brochure describes the ESF, which is one of only a handful of research facilit...

  18. Experimental Reality: Principles for the Design of Augmented Environments

    NASA Astrophysics Data System (ADS)

    Lahlou, Saadi

The Laboratory of Design for Cognition at EDF R&D (LDC) is a living laboratory, which we created to develop Augmented Environments (AEs) for collaborative work, more specifically “cognitive work” (white collars, engineers, office workers). It is a corporate laboratory in a large industrial company, where the natural activity of real users is observed in a continuous manner in various spaces (project space, meeting room, lounge, etc.). The RAO room, an augmented meeting room, is used daily for “normal” meetings; it is also the “mother room” of all augmented meeting rooms in the company, where new systems, services, and devices are tested. The LDC has gathered a unique set of data on the use of AEs, and developed various observation and design techniques, described in this chapter. LDC uses novel techniques of digital ethnography, some of which were invented there (SubCam, offsat) and some of which were developed elsewhere and adapted (360° video, WebDiver, etc.). At LDC, some new theories have also been developed to explain behavior and guide innovation: cognitive attractors, experimental reality, and the triple-determination framework.

  19. Experimental Validation of an Integrated Controls-Structures Design Methodology

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.

    1996-01-01

    The first experimental validation of an integrated controls-structures design methodology for a class of large order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Guassian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.

  20. Practical Motivational Techniques for Preservice Teachers and Instructional Design Strategies.

    ERIC Educational Resources Information Center

    Schnackenberg, Heidi L.

    This paper describes educational units for preservice teachers that pertain to specific practical motivational techniques for the preservice teachers to use in their classrooms (grades K-12). The units are designed so that students will be able to name four motivational techniques, select the strategy that exemplifies a motivational technique, and…

  1. The Implications of "Contamination" for Experimental Design in Education

    ERIC Educational Resources Information Center

    Rhoads, Christopher H.

    2011-01-01

    Experimental designs that randomly assign entire clusters of individuals (e.g., schools and classrooms) to treatments are frequently advocated as a way of guarding against contamination of the estimated average causal effect of treatment. However, in the absence of contamination, experimental designs that randomly assign intact clusters to…

  2. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    ERIC Educational Resources Information Center

    Deane, Thomas; Nomme, Kathy; Jeffery, Erica; Pollock, Carol; Birol, Gulnur

    2014-01-01

    Interest in student conception of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the…

  3. Autism genetics: Methodological issues and experimental design.

    PubMed

    Sacco, Roberto; Lintas, Carla; Persico, Antonio M

    2015-10-01

    Autism is a complex neuropsychiatric disorder of developmental origin, where multiple genetic and environmental factors likely interact resulting in a clinical continuum between "affected" and "unaffected" individuals in the general population. During the last two decades, relevant progress has been made in identifying chromosomal regions and genes in linkage or association with autism, but no single gene has emerged as a major cause of disease in a large number of patients. The purpose of this paper is to discuss specific methodological issues and experimental strategies in autism genetic research, based on fourteen years of experience in patient recruitment and association studies of autism spectrum disorder in Italy.

  4. Experimental Design for Vector Output Systems

    PubMed Central

    Banks, H.T.; Rehm, K.L.

    2013-01-01

    We formulate an optimal design problem for the selection of best states to observe and optimal sampling times for parameter estimation or inverse problems involving complex nonlinear dynamical systems. An iterative algorithm for implementation of the resulting methodology is proposed. Its use and efficacy is illustrated on two applied problems of practical interest: (i) dynamic models of HIV progression and (ii) modeling of the Calvin cycle in plant metabolism and growth. PMID:24563655

  5. Information measures in nonlinear experimental design

    NASA Technical Reports Server (NTRS)

    Niple, E.; Shaw, J. H.

    1980-01-01

    Some different approaches to the problem of designing experiments which estimate the parameters of nonlinear models are discussed. The assumption in these approaches that the information in a set of data can be represented by a scalar is criticized, and the nonscalar discrimination information is proposed as the proper measure to use. The two-step decay example in Box and Lucas (1959) is used to illustrate the main points of the discussion.
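For context, the scalar-information approach being criticized is typified by D-optimality, which compresses the information in a design into the determinant of the Fisher information matrix. A minimal sketch for the Box and Lucas (1959) two-step decay model follows (the rate constants 0.7 and 0.2 are the values commonly quoted for that example; the grid search is an illustrative shortcut):

```python
import numpy as np
from itertools import combinations

def eta(t, k1, k2):
    # Box-Lucas two-step decay response
    return k1 / (k1 - k2) * (np.exp(-k2 * t) - np.exp(-k1 * t))

def sensitivities(t, k1, k2, h=1e-6):
    # central-difference sensitivities d(eta)/d(k1), d(eta)/d(k2)
    return np.array([
        (eta(t, k1 + h, k2) - eta(t, k1 - h, k2)) / (2 * h),
        (eta(t, k1, k2 + h) - eta(t, k1, k2 - h)) / (2 * h),
    ])

def d_criterion(times, k1, k2):
    J = np.array([sensitivities(t, k1, k2) for t in times])  # n x 2 Jacobian
    return np.linalg.det(J.T @ J)   # D-optimality: det of information matrix

k1, k2 = 0.7, 0.2                   # prior guesses for the rate constants
grid = np.linspace(0.1, 20.0, 200)
best = max(combinations(grid, 2), key=lambda ts: d_criterion(ts, k1, k2))
print("D-optimal sampling times:", [round(float(t), 2) for t in best])
```

The whole design collapses to one scalar per candidate pair of times; the paper's point is that such a scalar summary can discard information relevant to discriminating parameter values.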

  6. Relation between experimental and non-experimental study designs. HB vaccines: a case study

    PubMed Central

    Jefferson, T.; Demicheli, V.

    1999-01-01

STUDY OBJECTIVE: To examine the relation between experimental and non-experimental study design in vaccinology. DESIGN: Assessment of each study design's capability of testing four aspects of vaccine performance, namely immunogenicity (the capacity to stimulate the immune system), duration of immunity conferred, incidence and seriousness of side effects, and number of infections prevented by vaccination. SETTING: Experimental and non-experimental studies on hepatitis B (HB) vaccines in the Cochrane Vaccines Field Database. RESULTS: Experimental and non-experimental vaccine study designs are frequently complementary, but some aspects of vaccine quality can only be assessed by one of the types of study. More work needs to be done on the relation between study quality and its significance in terms of effect size. PMID:10326054

  7. Collimator design for experimental minibeam radiation therapy

    SciTech Connect

    Babcock, Kerry; Sidhu, Narinder; Kundapur, Vijayananda; Ali, Kaiser

    2011-04-15

    Purpose: To design and optimize a minibeam collimator for minibeam radiation therapy studies using a 250 kVp x-ray machine as a simulated synchrotron source. Methods: A Philips RT250 orthovoltage x-ray machine was modeled using the EGSnrc/BEAMnrc Monte Carlo software. The resulting machine model was coupled to a model of a minibeam collimator with a beam aperture of 1 mm. Interaperture spacing and collimator thickness were varied to produce a minibeam with the desired peak-to-valley ratio. Results: Proper design of a minibeam collimator with Monte Carlo methods requires detailed knowledge of the x-ray source setup. For a cathode-ray tube source, the beam spot size, target angle, and source shielding all determine the final valley-to-peak dose ratio. Conclusions: A minibeam collimator setup was created, which can deliver a 30 Gy peak dose minibeam radiation therapy treatment at depths less than 1 cm with a valley-to-peak dose ratio on the order of 23%.

  8. Experimental Design for Composite Face Transplantation.

    PubMed

    Park, Jihoon; Yim, Sangjun; Eun, Seok-Chan

    2016-06-01

Face allotransplantation represents a novel frontier in complex human facial defect reconstruction. To develop more refined surgical techniques and yield fine results, it is first imperative to make a suitable animal model. The development of a composite facial allograft model in swine is appealing because the facial anatomy, including facial nerve and vascular anatomy, is similar to that of humans. Two operative teams worked simultaneously, one assigned to harvesting the donor tissue and the other to preparing the recipient, in an effort to shorten operative time. The flap was harvested with the common carotid artery and external jugular vein, and it was transferred to the recipient. After insetting the maxilla, mandible, muscles, and skin, anastomosis of the external jugular vein, external carotid artery, and facial nerve was performed. The total mean time of transplantation was 7 hours, and most allografts survived without vascular problems. The authors documented that this model is well qualified to be used as a standard transplantation training model and for future research work, in every aspect. PMID:27244198

  9. Experimental Techniques Verified for Determining Yield and Flow Surfaces

    NASA Technical Reports Server (NTRS)

    Lerch, Brad A.; Ellis, Rod; Lissenden, Cliff J.

    1998-01-01

Structural components in aircraft engines are subjected to multiaxial loads when in service. For such components, life prediction methodologies are dependent on the accuracy of the constitutive models that determine the elastic and inelastic portions of a loading cycle. A threshold surface (such as a yield surface) is customarily used to differentiate between reversible and irreversible flow. For elastoplastic materials, a yield surface can be used to delimit the elastic region in a given stress space. The concept of a yield surface is central to the mathematical formulation of a classical plasticity theory, but at elevated temperatures, material response can be highly time dependent. Thus, viscoplastic theories have been developed to account for this time dependency. Since the key to many of these theories is experimental validation, the objective of this work (refs. 1 and 2) at the NASA Lewis Research Center was to verify that current laboratory techniques and equipment are sufficient to determine flow surfaces at elevated temperatures. By probing many times in the axial-torsional stress space, we could define the yield and flow surfaces. A small offset definition of yield (10 με) was used to delineate the boundary between reversible and irreversible behavior so that the material state remained essentially unchanged and multiple probes could be done on the same specimen. The strain was measured with an off-the-shelf multiaxial extensometer that could measure the axial and torsional strains over a wide range of temperatures. The accuracy and resolution of this extensometer was verified by comparing its data with strain gauge data at room temperature. The extensometer was found to have sufficient resolution for these experiments. In addition, the amount of crosstalk (i.e., the accumulation of apparent strain in one direction when strain in the other direction is applied) was found to be negligible. Tubular specimens were induction heated to determine the flow

  10. An investigation of radiometer design using digital processing techniques

    NASA Technical Reports Server (NTRS)

    Lawrence, R. W.

    1981-01-01

The use of digital signal processing techniques in Dicke switching radiometer design was investigated. The general approach was to develop an analytical model of the existing analog radiometer and identify factors which adversely affect its performance. A digital processor was then proposed to verify the feasibility of using digital techniques to minimize these adverse effects and improve the radiometer performance. Analysis and preliminary test results comparing the digital and analog processing approaches in radiometer design are presented.
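The essential digital operation in a Dicke-switched radiometer, synchronous demodulation of the switched detector output, can be sketched as follows (the rates, powers, and noise model are illustrative assumptions, not the report's processor design):

```python
import numpy as np

# Minimal sketch of Dicke-switched synchronous detection in software: the
# receiver alternates between the antenna and a reference load, and the
# demodulator differences the two half-cycle means to cancel gain drift.
rng = np.random.default_rng(42)
fs, f_switch, T = 100_000, 1_000, 0.5        # sample rate, switch rate, seconds
n = int(fs * T)
t = np.arange(n) / fs

antenna_power, reference_power = 1.05, 1.00  # relative noise powers (assumed)
gate = (np.floor(t * f_switch * 2) % 2 == 0) # True during antenna half-cycle

# Square-law detector output: fluctuating power from whichever port is selected
power = np.where(gate, antenna_power, reference_power)
samples = power * rng.chisquare(2, n) / 2    # exponential radiometric noise

# Synchronous demodulation: difference of the two half-cycle means
delta = samples[gate].mean() - samples[~gate].mean()
print(f"estimated antenna - reference power: {delta:.4f}")
```

The estimate converges on the true 0.05 power difference as the integration time grows, which is the property the digital processor is meant to preserve while removing analog drift effects.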

  11. Tabletop Games: Platforms, Experimental Games and Design Recommendations

    NASA Astrophysics Data System (ADS)

    Haller, Michael; Forlines, Clifton; Koeffel, Christina; Leitner, Jakob; Shen, Chia

    While the last decade has seen massive improvements in not only the rendering quality, but also the overall performance of console and desktop video games, these improvements have not necessarily led to a greater population of video game players. In addition to continuing these improvements, the video game industry is also constantly searching for new ways to convert non-players into dedicated gamers. Despite the growing popularity of computer-based video games, people still love to play traditional board games, such as Risk, Monopoly, and Trivial Pursuit. Both video and board games have their strengths and weaknesses, and an intriguing conclusion is to merge both worlds. We believe that a tabletop form-factor provides an ideal interface for digital board games. The design and implementation of tabletop games will be influenced by the hardware platforms, form factors, sensing technologies, as well as input techniques and devices that are available and chosen. This chapter is divided into three major sections. In the first section, we describe the most recent tabletop hardware technologies that have been used by tabletop researchers and practitioners. In the second section, we discuss a set of experimental tabletop games. The third section presents ten evaluation heuristics for tabletop game design.

  12. Intracanal placement of calcium hydroxide: a comparison of specially designed paste carrier technique with other techniques

    PubMed Central

    2013-01-01

Background This study compared the effectiveness of a Specially Designed Paste Carrier technique with the Syringe-Spreader technique and the Syringe-Lentulo spiral technique in the intracanal placement of calcium hydroxide. Methods Three groups, each containing 15 single-rooted human anterior teeth, were prepared using standardized Mtwo rotary instruments to a master apical file size 40 with 0.04 taper. Each group was filled with calcium hydroxide paste using: Syringe and #25 finger spreader (Group 1); Syringe and #4 rotary Lentulo spiral (Group 2); Specially Designed Paste Carrier (Group 3). Using pre-filling and post-filling radiographs in buccolingual and mesiodistal planes, the radiodensities at 1 mm, 3 mm, 5 mm, and 7 mm from the apical foramen were analyzed by ANOVA and Bonferroni post hoc tests. Results Overall, the Specially Designed Paste Carrier technique showed a statistically significantly higher mean radiodensity than the two other compared techniques. No significant difference was detected between the Syringe-Lentulo spiral and the Syringe-Spreader techniques. Conclusion The Specially Designed Paste Carrier technique was more effective than the Syringe-Spreader technique and the Syringe-Lentulo spiral technique in the intracanal placement of calcium hydroxide. PMID:24098931
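The statistical pipeline described (one-way ANOVA followed by Bonferroni-corrected post hoc comparisons) can be sketched on synthetic radiodensity readings; the group means and spreads below are invented and are not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Illustrative radiodensity readings (arbitrary units) for three placement
# techniques, 15 teeth per group -- synthetic numbers for the sketch.
groups = {
    "spreader":      rng.normal(120, 15, 15),
    "lentulo":       rng.normal(125, 15, 15),
    "paste_carrier": rng.normal(150, 15, 15),
}

# One-way ANOVA across the three techniques
F, p = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {F:.2f}, p = {p:.4g}")

# Bonferroni correction: divide alpha by the number of pairwise comparisons
names = list(groups)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
alpha_corrected = 0.05 / len(pairs)
for a, b in pairs:
    t, pp = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if pp < alpha_corrected else "n.s."
    print(f"{a} vs {b}: p = {pp:.4g} ({verdict})")
```

With these synthetic means, the pattern mirrors the reported result: the paste-carrier group separates from both syringe groups, which do not differ from each other.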

  13. Simultaneous optimal experimental design for in vitro binding parameter estimation.

    PubMed

    Ernest, C Steven; Karlsson, Mats O; Hooker, Andrew C

    2013-10-01

    This study performed simultaneous optimization of in vitro ligand binding studies using an optimal design software package that can incorporate multiple design variables through non-linear mixed-effect models and provide a general optimized design regardless of the binding site capacity and relative binding rates for a two-binding-site system. Experimental design optimization was employed with D- and ED-optimality using PopED 2.8, including commonly encountered factors during experimentation (residual error, between-experiment variability and non-specific binding) for in vitro ligand binding experiments: association, dissociation, equilibrium and non-specific binding experiments. Moreover, a method for optimizing several design parameters (ligand concentrations, measurement times and total number of samples) was examined. With changes in relative binding site density and relative binding rates, different measurement times and ligand concentrations were needed to provide precise estimation of binding parameters. However, using optimized design variables, significant reductions in the number of samples provided as good or better precision of the parameter estimates compared with the original extensive sampling design. Employing ED-optimality led to a general experimental design regardless of the relative binding site density and relative binding rates. Precision of the parameter estimates was as good as with the extensive sampling design for most parameters and better for the poorly estimated parameters. Optimized designs for in vitro ligand binding studies provided robust parameter estimation while allowing more efficient and cost-effective experimentation by reducing the measurement times and separate ligand concentrations required and, in some cases, the total number of samples. PMID:23943088
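    The D-optimality criterion used above can be illustrated outside PopED. Below is a minimal Python sketch, assuming a hypothetical one-site association model y(t) = Bmax(1 - exp(-k t)) with made-up parameter and noise values; it scores two candidate sampling schedules by the determinant of the Fisher information matrix, the quantity a D-optimal design maximizes.

```python
import numpy as np

def sensitivities(times, bmax, k):
    """Jacobian of the one-site association model y(t) = bmax*(1 - exp(-k*t))
    with respect to the parameters (bmax, k)."""
    e = np.exp(-k * times)
    d_bmax = 1.0 - e              # dy/d bmax
    d_k = bmax * times * e        # dy/d k
    return np.column_stack([d_bmax, d_k])

def d_criterion(times, bmax=1.0, k=0.3, sigma=0.05):
    """D-optimality criterion: determinant of the Fisher information matrix."""
    J = sensitivities(np.asarray(times, dtype=float), bmax, k)
    fim = J.T @ J / sigma**2
    return np.linalg.det(fim)

# Compare two candidate sampling schedules (hypothetical values).
naive = [1, 2, 3, 4]     # early, clustered samples
spread = [1, 3, 8, 20]   # samples spanning the rise and the plateau
print(d_criterion(naive), d_criterion(spread))
```

    Spreading the samples across both the rise and the plateau increases the determinant, i.e. it jointly pins down Bmax and k more precisely with the same number of samples.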

  14. Extended mapping and characteristics techniques for inverse aerodynamic design

    NASA Technical Reports Server (NTRS)

    Sobieczky, H.; Qian, Y. J.

    1991-01-01

    Some ideas for using hodograph theory, mapping techniques and methods of characteristics to formulate typical aerodynamic design boundary value problems are developed. The inverse method of characteristics is shown to be a fast tool for design of transonic flow elements as well as supersonic flows with given shock waves.

  15. Experimental Design: Utilizing Microsoft Mathematics in Teaching and Learning Calculus

    ERIC Educational Resources Information Center

    Oktaviyanthi, Rina; Supriani, Yani

    2015-01-01

    The experimental design was conducted to investigate the use of Microsoft Mathematics, free software made by Microsoft Corporation, in teaching and learning Calculus. This paper reports results from experimental study details on implementation of Microsoft Mathematics in Calculus, students' achievement and the effects of the use of Microsoft…

  16. Design Issues and Inference in Experimental L2 Research

    ERIC Educational Resources Information Center

    Hudson, Thom; Llosa, Lorena

    2015-01-01

    Explicit attention to research design issues is essential in experimental second language (L2) research. Too often, however, such careful attention is not paid. This article examines some of the issues surrounding experimental L2 research and its relationships to causal inferences. It discusses the place of research questions and hypotheses,…

  17. A Bayesian experimental design approach to structural health monitoring

    SciTech Connect

    Farrar, Charles; Flynn, Eric; Todd, Michael

    2010-01-01

    Optimal system design for SHM involves two primary challenges. The first is the derivation of a proper performance function for a given system design. The second is the development of an efficient optimization algorithm for choosing a design that maximizes, or nearly maximizes, the performance function. In this paper we will outline how an SHM practitioner can construct the proper performance function by casting the entire design problem into a framework of Bayesian experimental design. The approach demonstrates how the design problem necessarily ties together all steps of the SHM process.
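    As a toy illustration of casting an SHM design choice as Bayesian experimental design, the sketch below scores a single design variable (a detection threshold) by its expected Bayes cost under a prior on damage; the Gaussian feature model, prior, and costs are all hypothetical, not taken from the paper.

```python
import numpy as np
from scipy.stats import norm

# A damage-sensitive feature is modeled as Gaussian: mean 0 if undamaged,
# mean mu_d if damaged. The design variable is the alarm threshold.
p_damage = 0.01                 # prior probability of damage (illustrative)
c_miss, c_false = 100.0, 1.0    # relative costs of a miss and a false alarm
mu_d, sigma = 2.0, 1.0

def expected_cost(threshold):
    """Bayes risk: prior-weighted cost of false alarms and missed detections."""
    p_false = 1.0 - norm.cdf(threshold, loc=0.0, scale=sigma)
    p_miss = norm.cdf(threshold, loc=mu_d, scale=sigma)
    return (1 - p_damage) * c_false * p_false + p_damage * c_miss * p_miss

thresholds = np.linspace(-2, 5, 701)
costs = np.array([expected_cost(t) for t in thresholds])
best = thresholds[np.argmin(costs)]
print(f"cost-minimizing threshold: {best:.2f}")
```

    The same expected-cost machinery extends to richer design variables (sensor count, placement, excitation), which is the generality the Bayesian framing buys.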

  18. Flight control system design factors for applying automated testing techniques

    NASA Technical Reports Server (NTRS)

    Sitz, Joel R.; Vernon, Todd H.

    1990-01-01

    Automated validation of flight-critical embedded systems is being done at ARC Dryden Flight Research Facility. The automated testing techniques are being used to perform closed-loop validation of man-rated flight control systems. The principal design features and operational experiences of the X-29 forward-swept-wing aircraft and F-18 High Alpha Research Vehicle (HARV) automated test systems are discussed. Operationally applying automated testing techniques has accentuated flight control system features that either help or hinder the application of these techniques. The paper also discusses flight control system features which foster the use of automated testing techniques.

  19. Fundamentals of experimental design: lessons from beyond the textbook world

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We often think of experimental designs as analogous to recipes in a cookbook. We look for something that we like and frequently return to those that have become our long-standing favorites. We can easily become complacent, favoring the tried-and-true designs (or recipes) over those that contain unkn...

  20. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    ERIC Educational Resources Information Center

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…
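    The two concepts the activity targets, random selection and random assignment, can be demonstrated in a few lines of Python; the roster and group sizes below are invented for illustration.

```python
import random

random.seed(42)  # reproducible for a classroom demo

# Hypothetical roster; only the two concepts come from the article.
population = [f"roach_{i:02d}" for i in range(30)]

# Random selection: draw a sample from the population (supports generalization).
sample = random.sample(population, 10)

# Random assignment: split the sample into treatment and control
# (supports causal inference by balancing unmeasured differences).
shuffled = sample[:]
random.shuffle(shuffled)
treatment, control = shuffled[:5], shuffled[5:]
print(treatment, control)
```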

  1. Experimental Design and Multiplexed Modeling Using Titrimetry and Spreadsheets

    NASA Astrophysics Data System (ADS)

    Harrington, Peter De B.; Kolbrich, Erin; Cline, Jennifer

    2002-07-01

    The topics of experimental design and modeling are important for inclusion in the undergraduate curriculum. Many general chemistry and quantitative analysis courses introduce students to spreadsheet programs, such as MS Excel. Students in the laboratory sections of these courses use titrimetry as a quantitative measurement method. Unfortunately, the only model that students may be exposed to in introductory chemistry courses is the working curve that uses the linear model. A novel experiment based on a multiplex model has been devised for titrating several vinegar samples at a time. The multiplex titration can be applied to many other routine determinations. An experimental design model is fit to titrimetric measurements using the MS Excel LINEST function to estimate concentration from each sample. This experiment provides valuable lessons in error analysis, Class A glassware tolerances, experimental simulation, statistics, modeling, and experimental design.
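    The multiplexed fit itself is an ordinary linear least-squares problem, which is what Excel's LINEST solves. A Python sketch with a hypothetical design matrix (which samples went into each titration) and simulated titrant volumes:

```python
import numpy as np

# Each titration consumes titrant in proportion to which samples are in the
# flask, so per-sample acid contents are recovered by least squares.
design = np.array([[1, 1, 0],
                   [1, 0, 1],
                   [0, 1, 1],
                   [1, 1, 1]], dtype=float)   # runs x samples (illustrative)
true_mmol = np.array([0.8, 1.2, 0.5])         # hypothetical acid content
rng = np.random.default_rng(0)
measured = design @ true_mmol + rng.normal(0, 0.01, size=4)  # titrant used

# Equivalent of LINEST: least-squares solution of design @ x = measured.
est, *_ = np.linalg.lstsq(design, measured, rcond=None)
print(np.round(est, 2))
```

    Four titrations determine three concentrations with one degree of freedom left over for error estimation, which is where the lessons in error analysis come in.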

  2. Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.

    PubMed

    Reed Johnson, F; Lancsar, Emily; Marshall, Deborah; Kilambi, Vikram; Mühlbacher, Axel; Regier, Dean A; Bresnahan, Brian W; Kanninen, Barbara; Bridges, John F P

    2013-01-01

    Stated-preference methods are a class of evaluation techniques for studying the preferences of patients and other stakeholders. While these methods span a variety of techniques, conjoint-analysis methods-and particularly discrete-choice experiments (DCEs)-have become the most frequently applied approach in health care in recent years. Experimental design is an important stage in the development of such methods, but establishing a consensus on standards is hampered by lack of understanding of available techniques and software. This report builds on the previous ISPOR Conjoint Analysis Task Force Report: Conjoint Analysis Applications in Health-A Checklist: A Report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. This report aims to assist researchers specifically in evaluating alternative approaches to experimental design, a difficult and important element of successful DCEs. While this report does not endorse any specific approach, it does provide a guide for choosing an approach that is appropriate for a particular study. In particular, it provides an overview of the role of experimental designs for the successful implementation of the DCE approach in health care studies, and it provides researchers with an introduction to constructing experimental designs on the basis of study objectives and the statistical model researchers have selected for the study. The report outlines the theoretical requirements for designs that identify choice-model preference parameters and summarizes and compares a number of available approaches for constructing experimental designs. The task-force leadership group met via bimonthly teleconferences and in person at ISPOR meetings in the United States and Europe. An international group of experimental-design experts was consulted during this process to discuss existing approaches for experimental design and to review the task force's draft reports. In addition, ISPOR members contributed to developing a consensus

  4. Using an Animal Group Vigilance Practical Session to Give Learners a "Heads-Up" to Problems in Experimental Design

    ERIC Educational Resources Information Center

    Rands, Sean A.

    2011-01-01

    The design of experimental ecological fieldwork is difficult to teach to classes, particularly when protocols for data collection are normally carefully controlled by the class organiser. Normally, reinforcement of some of the problems of experimental design, such as the avoidance of pseudoreplication and appropriate sampling techniques, does not occur…

  5. Experimental Methodology in English Teaching and Learning: Method Features, Validity Issues, and Embedded Experimental Design

    ERIC Educational Resources Information Center

    Lee, Jang Ho

    2012-01-01

    Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…

  6. Determination of dynamic fracture toughness using a new experimental technique

    NASA Astrophysics Data System (ADS)

    Cady, Carl M.; Liu, Cheng; Lovato, Manuel L.

    2015-09-01

    In other studies, dynamic fracture toughness has been measured using Charpy impact and modified Hopkinson bar techniques. In this paper, results will be shown for the measurement of fracture toughness using a new test geometry. The crack propagation velocities range from ~0.15 mm/s to 2.5 m/s. Digital image correlation (DIC) is the technique used to measure both the strain and the crack growth rates. The boundary of the crack is determined using the correlation coefficient generated during image analysis, and with interframe timing the crack growth rate and crack opening can be determined. A comparison of static and dynamic loading experiments will be made for brittle polymeric materials. The analysis technique presented by Sammis et al. [1] is a semi-empirical solution; however, additional linear elastic fracture mechanics analysis of the strain fields generated as part of the DIC analysis allows for the more commonly used method resembling the crack tip opening displacement (CTOD) experiment. It should be noted that this technique was developed because limited amounts of material were available and crack growth rates were too fast for a standard CTOD method.

  7. Enhanced Multiobjective Optimization Technique for Comprehensive Aerospace Design. Part A

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John N.

    1997-01-01

    A multidisciplinary design optimization procedure which couples formal multiobjective techniques and complex analysis procedures (such as computational fluid dynamics (CFD) codes) has been developed. The procedure has been demonstrated on a specific high-speed flow application involving aerodynamics and acoustics (sonic boom minimization). In order to account for multiple design objectives arising from complex performance requirements, multiobjective formulation techniques are used to formulate the optimization problem. Techniques to enhance the existing Kreisselmeier-Steinhauser (K-S) function multiobjective formulation approach have been developed. The K-S function procedure used in the proposed work transforms a constrained multiple-objective-function problem into an unconstrained problem which is then solved using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. Weight factors are assigned to each objective function during the transformation process. This enhanced procedure provides the designer the capability to emphasize specific design objectives during the optimization process. The demonstration of the procedure utilizes a computational fluid dynamics (CFD) code which solves the three-dimensional parabolized Navier-Stokes (PNS) equations for the flow field, along with an appropriate sonic boom evaluation procedure, thus introducing both aerodynamic performance and sonic boom as the design objectives to be optimized simultaneously. Sensitivity analysis is performed using a discrete differentiation approach. An approximation technique has been used within the optimizer to improve the overall computational efficiency of the procedure in order to make it suitable for design applications in an industrial setting.
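    The K-S aggregation at the heart of the procedure can be sketched briefly. The code below is an illustrative toy, not the authors' CFD-coupled implementation: it forms the standard K-S envelope of two scaled objectives and minimizes the result with BFGS, the unconstrained solver named in the abstract.

```python
import numpy as np
from scipy.optimize import minimize

RHO = 50.0  # draw-down factor; larger values track max(g) more tightly

def ks(g, rho=RHO):
    """Kreisselmeier-Steinhauser envelope: a smooth, conservative max(g),
    computed in the numerically stable shifted form."""
    g = np.asarray(g, dtype=float)
    m = g.max()
    return m + np.log(np.exp(rho * (g - m)).sum()) / rho

# Hypothetical two-objective toy problem: each objective is scaled by an
# illustrative target value, turning the pair into "g" values for the envelope.
def objectives(x):
    f1 = (x[0] - 1.0) ** 2 + x[1] ** 2        # stand-in for one objective
    f2 = x[0] ** 2 + (x[1] - 2.0) ** 2        # stand-in for the other
    return np.array([f1 / 1.0 - 1.0, f2 / 4.0 - 1.0])

res = minimize(lambda x: ks(objectives(x)), x0=np.zeros(2), method="BFGS")
print(res.x, ks(objectives(res.x)))
```

    Scaling each objective before aggregation plays the role of the weight factors mentioned above: rescaling a target shifts how strongly that objective dominates the envelope.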

  8. [Experimental liver transplantation in pigs. Surgical technique and complications].

    PubMed

    Laino, G M; Anastasi, A; Fabbri, L P; Gandini, E; Valanzano, R; Fontanari, P; Venneri, F; Mazzoni, P; Ieri, A; Spini, S; Scalzi, E; Batignani, G

    1996-10-01

    Only recently, in our laboratory of experimental surgery, did we start a protocol for orthotopic liver transplantation (OLT) in a pig model. This was felt to be mandatory for experimental purposes as well as for future clinical applications at our center. We report herein our experience with 41 OLTs. Intraoperative "lethal" complications occurred in up to 32% (14/41), whereas postoperative complications occurred in the remainder at different intervals of time, with a maximum survival of 30 days. No attention was paid to preventing rejection-infection episodes. The main cause of death was primary non-function (PNF) or dysfunction (PDF), manifested either intra- or postoperatively in 16 of the 41 OLTs (39%). Intraoperative technical errors accounted for up to 9% (4/41 OLTs). Acute hemorrhagic gastritis and gastric perforations occurred postoperatively in 6 animals (14%) and represent one of the peculiar aspects of OLT in the pig model.

  9. Experimental Validation of Simulations Using Full-field Measurement Techniques

    SciTech Connect

    Hack, Erwin

    2010-05-28

    The calibration by reference materials of dynamic full-field measurement systems is discussed, together with their use to validate numerical simulations in structural mechanics. The discussion addresses three challenges that are faced in these processes: (i) how to calibrate a measuring instrument that provides full-field data; (ii) how to calibrate an instrument that is dynamic; and (iii) how to compare data from simulation and experimentation.

  10. Guided Inquiry in a Biochemistry Laboratory Course Improves Experimental Design Ability

    ERIC Educational Resources Information Center

    Goodey, Nina M.; Talgar, Cigdem P.

    2016-01-01

    Many biochemistry laboratory courses expose students to laboratory techniques through pre-determined experiments in which students follow stepwise protocols provided by the instructor. This approach fails to provide students with sufficient opportunities to practice experimental design and critical thinking. Ten inquiry modules were created for a…

  11. Application of multivariable search techniques to structural design optimization

    NASA Technical Reports Server (NTRS)

    Jones, R. T.; Hague, D. S.

    1972-01-01

    Multivariable optimization techniques are applied to a particular class of minimum weight structural design problems: the design of an axially loaded, pressurized, stiffened cylinder. Minimum weight designs are obtained by a variety of search algorithms: first- and second-order, elemental perturbation, and randomized techniques. An exterior penalty function approach to constrained minimization is employed. Some comparisons are made with solutions obtained by an interior penalty function procedure. In general, it would appear that an interior penalty function approach may not be as well suited to the class of design problems considered as the exterior penalty function approach. It is also shown that a combination of search algorithms will tend to arrive at an extremal design in a more reliable manner than a single algorithm. The effect of incorporating realistic geometrical constraints on stiffener cross-sections is investigated. A limited comparison is made between minimum weight cylinders designed on the basis of a linear stability analysis and cylinders designed on the basis of empirical buckling data. Finally, a technique for locating more than one extremal is demonstrated.
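    The exterior penalty function approach used above can be sketched on a toy problem (not the stiffened-cylinder design itself): the constrained minimum is approached from the infeasible side as the penalty weight grows, which is the characteristic behavior that distinguishes exterior from interior penalty methods.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative problem: minimize f(x) subject to g(x) <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2       # stand-in for structural weight
g = lambda x: 2.0 - x[0] - x[1]           # feasible when x0 + x1 >= 2

x = np.array([0.0, 0.0])                  # infeasible starting point
for r in [1.0, 10.0, 100.0, 1000.0]:
    # Exterior penalty: the constraint only costs something when violated.
    pen = lambda x, r=r: f(x) + r * max(0.0, g(x)) ** 2
    x = minimize(pen, x, method="BFGS").x  # warm-start each stage
print(np.round(x, 3))  # analytic constrained optimum is (1, 1)
```

    Each stage's solution seeds the next, so the iterates march toward the constraint boundary from outside; an interior penalty method would instead stay feasible throughout, which is the contrast the abstract draws.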

  12. Experimental study of digital image processing techniques for LANDSAT data

    NASA Technical Reports Server (NTRS)

    Rifman, S. S. (Principal Investigator); Allendoerfer, W. B.; Caron, R. H.; Pemberton, L. J.; Mckinnon, D. M.; Polanski, G.; Simon, K. W.

    1976-01-01

    The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4 pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision processed subscenes. Full scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.
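    The two resampling schemes compared above can be sketched in one dimension. The kernel below is the standard Keys cubic-convolution kernel (a = -0.5), which may differ in detail from the TRW implementation evaluated in the study; the signal and sampling position are illustrative.

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Keys cubic-convolution interpolation kernel (a = -0.5 is the
    conventional choice)."""
    s = abs(s)
    if s <= 1:
        return (a + 2) * s**3 - (a + 3) * s**2 + 1
    if s < 2:
        return a * s**3 - 5 * a * s**2 + 8 * a * s - 4 * a
    return 0.0

def resample(samples, x, method="cubic"):
    """Resample a 1-D signal at fractional position x (edge handling omitted)."""
    if method == "nearest":
        return samples[int(round(x))]
    i = int(np.floor(x))
    # Cubic convolution uses the four nearest samples.
    return sum(samples[i + k] * cubic_kernel(x - (i + k)) for k in range(-1, 3))

sig = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
print(resample(sig, 2.5), resample(sig, 2.5, "nearest"))
```

    On this linear ramp, cubic convolution reproduces the intermediate value exactly, while nearest neighbor snaps to a grid sample; the 2-D version used for imagery applies the kernel separably along rows and columns.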

  13. Characterizing the Experimental Procedure in Science Laboratories: A Preliminary Step towards Students Experimental Design

    ERIC Educational Resources Information Center

    Girault, Isabelle; d'Ham, Cedric; Ney, Muriel; Sanchez, Eric; Wajeman, Claire

    2012-01-01

    Many studies have stressed students' lack of understanding of experiments in laboratories. Some researchers suggest that if students design all or part of an entire experiment, as part of an inquiry-based approach, it would overcome certain difficulties. It requires that a procedure be written for experimental design. The aim of this paper is to…

  14. Development and Validation of a Hypersonic Vehicle Design Tool Based On Waverider Design Technique

    NASA Astrophysics Data System (ADS)

    Dasque, Nastassja

    Methodologies for a tool capable of assisting design initiatives for practical waverider based hypersonic vehicles were developed and validated. The design space for vehicle surfaces was formed using an algorithm that coupled directional derivatives with the conservation laws to determine a flow field defined by a set of post-shock streamlines. The design space is used to construct an ideal waverider with a sharp leading edge. A blunting method was developed to modify the ideal shapes to a more practical geometry for real-world application. Empirical and analytical relations were then systematically applied to the resulting geometries to determine local pressure, skin friction and heat flux. For the ideal portion of the geometry, flat plate relations for compressible flow were applied. For the blunted portion of the geometry, modified Newtonian theory, Fay-Riddell theory and modified Reynolds analogy were applied. The design and analysis methods were validated using analytical solutions as well as empirical and numerical data. The streamline solution for the flow field generation technique was compared with a Taylor-Maccoll solution and showed very good agreement. The relationship between the local Stanton number and skin friction coefficient with local Reynolds number along the ideal portion of the body showed good agreement with experimental data. In addition, an automated grid generation routine was formulated to construct a structured mesh around resulting geometries in preparation for computational fluid dynamics (CFD) analysis. The overall analysis of the waverider body using the tool was then compared to CFD studies. The CFD flow field showed very good agreement with the design space. However, the distribution of the surface properties was close to the CFD results but did not show the same level of agreement.
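    Of the surface-property relations named above, modified Newtonian theory is compact enough to sketch: the stagnation-point pressure coefficient from the Rayleigh pitot formula scales the square of the sine of the local surface inclination. The Mach number and angles below are illustrative, not values from the thesis.

```python
import math

GAMMA = 1.4  # ratio of specific heats for air

def cp_max(mach):
    """Stagnation-point pressure coefficient behind a normal shock
    (Rayleigh pitot formula), the modified-Newtonian scale factor."""
    g, m2 = GAMMA, mach * mach
    pt2_pinf = (((g + 1) ** 2 * m2 / (4 * g * m2 - 2 * (g - 1))) ** (g / (g - 1))
                * (1 - g + 2 * g * m2) / (g + 1))
    return 2 * (pt2_pinf - 1) / (g * m2)

def cp_modified_newtonian(mach, theta_deg):
    """Cp = Cp_max * sin^2(theta) on the shock-facing (windward) surface."""
    return cp_max(mach) * math.sin(math.radians(theta_deg)) ** 2

print(round(cp_max(6.0), 3))  # classical Newtonian theory would give 2.0
```

    Replacing the Newtonian constant 2 with the shock-derived Cp_max is what makes the "modified" theory track blunt-body data at finite Mach numbers.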

  15. The design of aircraft using the decision support problem technique

    NASA Technical Reports Server (NTRS)

    Mistree, Farrokh; Marinopoulos, Stergios; Jackson, David M.; Shupe, Jon A.

    1988-01-01

    The Decision Support Problem Technique for unified design, manufacturing and maintenance is being developed at the Systems Design Laboratory at the University of Houston. This involves the development of a domain-independent method (and the associated software) that can be used to process domain-dependent information and thereby provide support for human judgment. In a computer assisted environment, this support is provided in the form of optimal solutions to Decision Support Problems.

  16. The Photoshop Smile Design technique (part 1): digital dental photography.

    PubMed

    McLaren, Edward A; Garber, David A; Figueira, Johan

    2013-01-01

    The proliferation of digital photography and imaging devices is enhancing clinicians' ability to visually document patients' intraoral conditions. By understanding the elements of esthetics and learning how to incorporate technology applications into clinical dentistry, clinicians can predictably plan smile design and communicate anticipated results to patients and ceramists alike. This article discusses camera, lens, and flash selection and setup, and how to execute specific types of images using the Adobe Photoshop Smile Design (PSD) technique.

  17. Experimental Test of New Technique to Overcome Spin Depolarizing Resonances

    SciTech Connect

    Raymond, R. S.; Chao, A. W.; Krisch, A. D.; Leonova, M. A.; Morozov, V. S.; Sivers, D. W.; Wong, V. K.; Ganshvili, A.; Gebel, R.; Lehrach, A.; Lorentz, B.; Maier, R.; Prasuhn, D.; Stockhorst, H.; Welsch, D.; Hinterberger, F.; Kondratenko, A. M.

    2009-08-04

    We recently tested a new spin resonance crossing technique, Kondratenko Crossing (KC), by sweeping an rf solenoid's frequency through an rf-induced spin resonance with both the KC and traditional Fast Crossing (FC) patterns. Using both rf-bunched and unbunched 1.85 GeV/c polarized deuterons stored in COSY, we varied the parameters of both crossing patterns. Compared to FC with the same crossing speed, KC reduced the depolarization by measured factors of 4.7 ± 0.3 and 19 (+12/-5) for unbunched and bunched beams, respectively. This clearly showed the large potential benefit of Kondratenko Crossing over Fast Crossing.

  18. Active Flow Control: Instrumentation Automation and Experimental Technique

    NASA Technical Reports Server (NTRS)

    Gimbert, N. Wes

    1995-01-01

    In investigating the potential of a new actuator for use in an active flow control system, several objectives had to be accomplished, the largest of which was the experimental setup. The work was conducted at the NASA Langley 20x28 Shear Flow Control Tunnel. The actuator, named THUNDER, is a high-deflection piezoelectric device recently developed at Langley Research Center. This research involved setting up the instrumentation, the lighting, the smoke, and the recording devices. The instrumentation was automated by means of a Power Macintosh running LabVIEW, a graphical instrumentation package developed by National Instruments. Routines were written to allow the tunnel conditions to be determined at a given instant at the push of a button. This included determination of tunnel pressures, speed, density, temperature, and viscosity. Other aspects of the experimental equipment included the setup of a CCD video camera with a video frame grabber, monitor, and VCR to capture the motion. A strobe light was used to highlight the smoke that was used to visualize the flow. Additional effort was put into creating a scale drawing of another tunnel on site and a limited literature search in the area of active flow control.
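    The push-button tunnel-condition routines described above presumably reduce raw sensor readings with standard relations. A Python sketch under low-speed assumptions (incompressible Bernoulli, ideal gas, Sutherland's law), with made-up sensor readings rather than values from the facility:

```python
import math

R_AIR = 287.05  # specific gas constant for air, J/(kg*K)

def tunnel_conditions(p_static, p_total, temperature):
    """Reduce raw pressure/temperature readings to density, speed and
    viscosity (illustrative low-speed relations, not the LabVIEW code)."""
    rho = p_static / (R_AIR * temperature)            # ideal gas law
    q = p_total - p_static                            # dynamic pressure, Pa
    speed = math.sqrt(2.0 * q / rho)                  # incompressible Bernoulli
    mu = 1.458e-6 * temperature**1.5 / (temperature + 110.4)  # Sutherland
    return rho, speed, mu

# Hypothetical readings: ~1 atm static pressure, 250 Pa dynamic head, 20 C.
rho, v, mu = tunnel_conditions(p_static=101000.0, p_total=101250.0,
                               temperature=293.0)
print(round(rho, 3), round(v, 1), round(mu * 1e5, 3))
```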

  19. Application of optimization techniques to vehicle design: A review

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Magee, C. L.

    1984-01-01

    The work that has been done in the last decade or so in the application of optimization techniques to vehicle design is discussed. Much of the work reviewed deals with the design of body or suspension (chassis) components for reduced weight. Also reviewed are studies dealing with system optimization problems for improved functional performance, such as ride or handling. In reviewing the work on the use of optimization techniques, one notes the transition from the rare mention of the methods in the 70's to an increased effort in the early 80's. Efficient and convenient optimization and analysis tools still need to be developed so that they can be regularly applied in the early design stage of the vehicle development cycle to be most effective. Based on the reported applications, an attempt is made to assess the potential for automotive application of optimization techniques. The major issue involved remains the creation of quantifiable means of analysis to be used in vehicle design. The conventional process of vehicle design still contains much experience-based input because it has not yet proven possible to quantify all important constraints. This limitation of the analysis will continue to be a major factor restricting the application of optimization to vehicle design.

  20. Optical design for LED dental lighting with imaging optic technique

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Hoon; Bae, Seung-Chul; Lim, Hae-Ryong; Jang, Ja-Soon

    2011-10-01

    We carried out the research as follows. First, we selected optimal LEDs and mixed their output for a higher CRI, the target CCT, and the required illuminance. The next step was optical module design. The light from a dental luminaire must be concentrated on the treatment area; because this area is the oral cavity, the illumination pattern is rectangular. For uniform illuminance and a sharper pattern boundary at the reference distance, we designed the module as a direct type (without a reflector) using imaging-optic techniques. In general imaging optics, the object must have the same rectangular shape as the image, up to magnification, but the emitting surface of a 1 W-class LED is usually square or circular. For that reason, we formed a rectangular source using a rectangular lightguide; this optical component was designed for higher efficiency using illumination-optic techniques. Next, we designed the imaging lenses in Code V to image the object shape, setting a high NA for light efficiency. Finally, since the product is a luminaire, illumination simulation and result analysis were performed in LightTools as the illumination design software.

  1. Low Cost Gas Turbine Off-Design Prediction Technique

    NASA Astrophysics Data System (ADS)

    Martinjako, Jeremy

    This thesis seeks to further explore off-design point operation of gas turbines and to examine the capabilities of GasTurb 12 as a tool for off-design analysis. It is a continuation of previous thesis work which initially explored the capabilities of GasTurb 12. The research is conducted in order to: 1) validate GasTurb 12 and 2) predict off-design performance of the Garrett GTCP85-98D located at the Arizona State University Tempe campus. GasTurb 12 is validated as an off-design point tool by using the program to predict performance of an LM2500+ marine gas turbine. Haglind and Elmegaard (2009) published a paper detailing a second off-design point method, which includes the manufacturer's off-design point data for the LM2500+. GasTurb 12 is used to predict off-design point performance of the LM2500+ and compared to the manufacturer's data. The GasTurb 12 predictions show good correlation. Garrett has published specification data for the GTCP85-98D. This specification data is analyzed to determine the design point and to comment on off-design trends. Arizona State University GTCP85-98D off-design experimental data is evaluated. Trends presented in the data are commented on and explained. The trends match the expected behavior demonstrated in the specification data for the same gas turbine system. It was originally intended that a model of the GTCP85-98D be constructed in GasTurb 12 and used to predict off-design performance, with the prediction compared to the collected experimental data. This is not possible because the free version of GasTurb 12 used in this research does not have a module to model a single-spool turboshaft; this module must be purchased for such an analysis.

  2. Refinement of experimental design and conduct in laboratory animal research.

    PubMed

    Bailoo, Jeremy D; Reichlin, Thomas S; Würbel, Hanno

    2014-01-01

    The scientific literature of laboratory animal research is replete with papers reporting poor reproducibility of results as well as failure to translate results to clinical trials in humans. This may stem in part from poor experimental design and conduct of animal experiments. Despite widespread recognition of these problems and implementation of guidelines to attenuate them, a review of the literature suggests that experimental design and conduct of laboratory animal research are still in need of refinement. This paper will review and discuss possible sources of biases, highlight advantages and limitations of strategies proposed to alleviate them, and provide a conceptual framework for improving the reproducibility of laboratory animal research.

  3. Experimental Design and Power Calculation for RNA-seq Experiments.

    PubMed

    Wu, Zhijin; Wu, Hao

    2016-01-01

    Power calculation is a critical component of RNA-seq experimental design. The flexibility of RNA-seq experiment and the wide dynamic range of transcription it measures make it an attractive technology for whole transcriptome analysis. These features, in addition to the high dimensionality of RNA-seq data, bring complexity in experimental design, making an analytical power calculation no longer realistic. In this chapter we review the major factors that influence the statistical power of detecting differential expression, and give examples of power assessment using the R package PROPER. PMID:27008024
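As a rough illustration of the simulation-based power assessment the chapter describes, the sketch below estimates the power to detect a given fold change between two small groups of negative-binomial counts. This is a hedged toy example, not the method of the R package PROPER; the mean, dispersion, and sample-size values are assumptions chosen for demonstration.

```python
# Simulation-based power estimate for a single gene's differential
# expression. Counts are drawn from a negative binomial distribution
# (the standard RNA-seq count model) and each simulated experiment is
# tested with a Mann-Whitney U test.
import numpy as np
from scipy import stats

def simulate_power(n_per_group=5, mean=100, fold_change=2.0,
                   dispersion=0.1, alpha=0.05, n_sim=500, seed=0):
    """Fraction of simulated experiments rejecting H0 at level alpha."""
    rng = np.random.default_rng(seed)

    def nb_draws(m, size):
        # Parameterize NB by mean m and dispersion d: var = m + d * m^2
        r = 1.0 / dispersion        # NB "number of successes"
        p = r / (r + m)             # NB success probability
        return rng.negative_binomial(r, p, size=size)

    rejections = 0
    for _ in range(n_sim):
        a = nb_draws(mean, n_per_group)
        b = nb_draws(mean * fold_change, n_per_group)
        _, pval = stats.mannwhitneyu(a, b, alternative="two-sided")
        rejections += pval < alpha
    return rejections / n_sim
```

Raising `n_per_group` or `fold_change`, or lowering `dispersion`, increases the estimated power, which mirrors the design trade-offs the chapter reviews.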

  4. CMOS array design automation techniques. [metal oxide semiconductors

    NASA Technical Reports Server (NTRS)

    Ramondetta, P.; Feller, A.; Noto, R.; Lombardi, T.

    1975-01-01

    A low cost, quick turnaround technique for generating custom metal oxide semiconductor arrays using the standard cell approach was developed, implemented, tested and validated. Basic cell design topology and guidelines are defined based on an extensive analysis that includes circuit, layout, process, array topology and required performance considerations, particularly high circuit speed.

  5. Application of hazard assessment techniques in the CISF design process

    SciTech Connect

    Thornton, J.R.; Henry, T.

    1997-10-29

    The Department of Energy has submitted to the NRC staff for review a topical safety analysis report (TSAR) for a Centralized Interim Storage Facility (CISF). The TSAR will be used in licensing the CISF when and if a site is designated. CISF design events are identified based on thorough review of design basis events (DBEs) previously identified by dry storage system suppliers and licensees and through the application of hazard assessment techniques. A Preliminary Hazards Assessment (PHA) is performed to identify design events applicable to a Phase 1 non-site-specific CISF. A PHA is deemed necessary since the Phase 1 CISF is distinguishable from previous dry storage applications in several significant operational scope and design basis aspects. In addition to assuring that all design events applicable to the Phase 1 CISF are identified, the PHA served as an integral part of the CISF design process by identifying potential important-to-safety and defense-in-depth facility design and administrative control features. This paper describes the Phase 1 CISF design event identification process and summarizes significant PHA contributions to the CISF design.

  6. Advanced Computational and Experimental Techniques for Nacelle Liner Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Gerhold, Carl H.; Jones, Michael G.; Brown, Martha C.; Nark, Douglas

    2009-01-01

    The Curved Duct Test Rig (CDTR) has been developed to investigate sound propagation through a duct of size comparable to the aft bypass duct of typical aircraft engines. The axial dimension of the bypass duct is often curved and this geometric characteristic is captured in the CDTR. The semiannular bypass duct is simulated by a rectangular test section in which the height corresponds to the circumferential dimension and the width corresponds to the radial dimension. The liner samples are perforate over honeycomb core and are installed on the side walls of the test section. The top and bottom surfaces of the test section are acoustically rigid to simulate a hard wall bifurcation or pylon. A unique feature of the CDTR is the control system that generates sound incident on the liner test section in specific modes. Uniform air flow, at ambient temperature and flow speed Mach 0.275, is introduced through the duct. Experiments to investigate configuration effects such as curvature along the flow path on the acoustic performance of a sample liner are performed in the CDTR and reported in this paper. Combinations of treated and acoustically rigid side walls are investigated. The scattering of modes of the incident wave, both by the curvature and by the asymmetry of wall treatment, is demonstrated in the experimental results. The effect that mode scattering has on total acoustic effectiveness of the liner treatment is also shown. Comparisons of measured liner attenuation with numerical results predicted by an analytic model based on the parabolic approximation to the convected Helmholtz equation are reported. The spectra of attenuation produced by the analytic model are similar to experimental results for both walls treated, straight and curved flow path, with plane wave and higher order modes incident. 
The numerical model is used to define the optimized resistance and reactance of a liner that significantly improves liner attenuation in the frequency range 1900-2400 Hz.

  7. A technique for optimizing the design of power semiconductor devices

    NASA Technical Reports Server (NTRS)

    Schlegel, E. S.

    1976-01-01

    A technique is described that provides a basis for predicting whether any device design change will improve or degrade the unavoidable trade-off that must be made between the conduction loss and the turn-off speed of fast-switching high-power thyristors. The technique makes use of a previously reported method by which, for a given design, this trade-off was determined for a wide range of carrier lifetimes. It is shown that by extending this technique, one can predict how other design variables affect this trade-off. The results show that for relatively slow devices the design can be changed to decrease the current gains to improve the turn-off time without significantly degrading the losses. On the other hand, for devices having fast turn-off times design changes can be made to increase the current gain to decrease the losses without a proportionate increase in the turn-off time. Physical explanations for these results are proposed.

  8. Experimental investigation of iterative reconstruction techniques for high resolution mammography

    NASA Astrophysics Data System (ADS)

    Vengrinovich, Valery L.; Zolotarev, Sergei A.; Linev, Vladimir N.

    2014-02-01

    Further development of new iterative reconstruction algorithms to improve the quality of three-dimensional breast images restored from incomplete and noisy mammograms is presented. The algebraic reconstruction method with simultaneous iterations, the Simultaneous Algebraic Reconstruction Technique (SART), and the iterative statistical method Bayesian Iterative Reconstruction (BIR) are identified here as the preferable iterative methods for improving image quality. For faster processing we use the Graphics Processing Unit (GPU). Minimization of the Total Variation (TV) is used as a priori support to regularize the iteration process and to reduce the noise level in the reconstructed image. Preliminary results with physical phantoms show that all examined methods are capable of reconstructing structures layer by layer and of separating layers whose images overlap in the Z-direction. It was found that traditional Shift-And-Add tomosynthesis (SAA) is inferior to the iterative methods SART and BIR in terms of suppressing anatomical noise and image blurring between adjacent layers. Although the measured contrast-to-noise ratio in the presence of low-contrast internal structures is higher for the SAA tomosynthesis method than for the SART and BIR methods, its effectiveness in the presence of a structured background is rather poor. In our opinion the optimal results can be achieved using Bayesian iterative reconstruction (BIR).
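The SART update used in the abstract can be written compactly for a generic linear model A x = b. The sketch below is a dense-matrix toy version under assumed dimensions; practical implementations use sparse projection operators and GPU acceleration as the abstract describes, and TV regularization would enter as an additional step between sweeps.

```python
# Minimal SART iteration for a linear tomography model A x = b.
import numpy as np

def sart(A, b, n_iter=50, relax=1.0, x0=None):
    """Simultaneous Algebraic Reconstruction Technique.

    Update: x += relax * D_c^-1 A^T D_r^-1 (b - A x),
    where D_r = diag(row sums of A) and D_c = diag(column sums of A).
    """
    A = np.asarray(A, dtype=float)
    row_sums = A.sum(axis=1)
    col_sums = A.sum(axis=0)
    # Guard against empty rows/columns of the system matrix
    row_sums[row_sums == 0] = 1.0
    col_sums[col_sums == 0] = 1.0
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float)
    for _ in range(n_iter):
        residual = (b - A @ x) / row_sums   # row-normalized residual
        x += relax * (A.T @ residual) / col_sums
    return x
```

Each sweep back-projects the row-normalized residual and scales it by the column sums, which is what makes the update "simultaneous" rather than row-by-row as in classic ART.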

  9. Stem cell clonality -- theoretical concepts, experimental techniques, and clinical challenges.

    PubMed

    Glauche, Ingmar; Bystrykh, Leonid; Eaves, Connie; Roeder, Ingo

    2013-04-01

    Here we report highlights of discussions and results presented at an International Workshop on Concepts and Models of Stem Cell Organization held on July 16th and 17th, 2012 in Dresden, Germany. The goal of the workshop was to undertake a systematic survey of state-of-the-art methods and results of clonality studies of tissue regeneration and maintenance with a particular emphasis on the hematopoietic system. The meeting was the 6th in a series of similar conceptual workshops, termed StemCellMathLab,(2) all of which have had the general objective of using an interdisciplinary approach to discuss specific aspects of stem cell biology. The StemCellMathLab 2012, which was jointly organized by the Institute for Medical Informatics and Biometry, Medical Faculty Carl Gustav Carus, Dresden University of Technology and the Institute for Medical Informatics, Statistics and Epidemiology, Medical Faculty, University of Leipzig, brought together 32 scientists from 8 countries, with scientific backgrounds in medicine, cell biology, virology, physics, computer sciences, bioinformatics and mathematics. The workshop focused on the following questions: (1) How heterogeneous are stem cells and their progeny? and (2) What are the characteristic differences in the clonal dynamics between physiological and pathophysiological situations? In discussing these questions, particular emphasis was placed on (a) the methods for quantifying clones and their dynamics in experimental and clinical settings and (b) general concepts and models for their description. In this workshop summary we start with an introduction to the current state of clonality research and a proposal for clearly defined terminology. Major topics of discussion include clonal heterogeneity in unperturbed tissues, clonal dynamics due to physiological and pathophysiological pressures and conceptual and technical issues of clone quantification. We conclude that an interactive cross-disciplinary approach to research in this

  10. Translocations of amphibians: Proven management method or experimental technique

    USGS Publications Warehouse

    Seigel, Richard A.; Dodd, C. Kenneth

    2002-01-01

    In an otherwise excellent review of metapopulation dynamics in amphibians, Marsh and Trenham (2001) make the following provocative statements (emphasis added): If isolation effects occur primarily in highly disturbed habitats, species translocations may be necessary to promote local and regional population persistence. Because most amphibians lack parental care, they are prime candidates for egg and larval translocations. Indeed, translocations have already proven successful for several species of amphibians. Where populations are severely isolated, translocations into extinct subpopulations may be the best strategy to promote regional population persistence. We take issue with these statements for a number of reasons. First, the authors fail to cite much of the relevant literature on species translocations in general and for amphibians in particular. Second, to those unfamiliar with current research in amphibian conservation biology, these comments might suggest that translocations are a proven management method. This is not the case, at least in most instances where translocations have been evaluated for an appropriate period of time. Finally, the authors fail to point out some of the negative aspects of species translocation as a management method. We realize that Marsh and Trenham's paper was not concerned primarily with translocations. However, because Marsh and Trenham (2001) made specific recommendations for conservation planners and managers (many of whom are not herpetologists or may not be familiar with the pertinent literature on amphibians), we believe that it is essential to point out that not all amphibian biologists are as comfortable with translocations as these authors appear to be. We especially urge caution about advocating potentially unproven techniques without a thorough review of available options.

  11. Single-Subject Experimental Design for Evidence-Based Practice

    ERIC Educational Resources Information Center

    Byiers, Breanne J.; Reichle, Joe; Symons, Frank J.

    2012-01-01

    Purpose: Single-subject experimental designs (SSEDs) represent an important tool in the development and implementation of evidence-based practice in communication sciences and disorders. The purpose of this article is to review the strategies and tactics of SSEDs and their application in speech-language pathology research. Method: The authors…

  12. Applications of Chemiluminescence in the Teaching of Experimental Design

    ERIC Educational Resources Information Center

    Krawczyk, Tomasz; Slupska, Roksana; Baj, Stefan

    2015-01-01

    This work describes a single-session laboratory experiment devoted to teaching the principles of factorial experimental design. Students undertook the rational optimization of a luminol oxidation reaction, using a two-level experiment that aimed to create a long-lasting bright emission. During the session students used only simple glassware and…

  13. Bands to Books: Connecting Literature to Experimental Design

    ERIC Educational Resources Information Center

    Bintz, William P.; Moore, Sara Delano

    2004-01-01

    This article describes an interdisciplinary unit of study on the inquiry process and experimental design that seamlessly integrates math, science, and reading using a rubber band cannon. This unit was conducted over an eight-day period in two sixth-grade classes (one math and one science with each class consisting of approximately 27 students and…

  14. Experimental design for single point diamond turning of silicon optics

    SciTech Connect

    Krulewich, D.A.

    1996-06-16

    The goal of these experiments is to determine optimum cutting factors for the machining of silicon optics. This report describes experimental design, a systematic method of selecting optimal settings for a limited set of experiments, and its use in the silicon-optics turning experiments. 1 fig., 11 tabs.
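The systematic selection of settings described above starts from the space of candidate runs; a two-level full factorial enumerates that space, from which a fractional subset can then be chosen. The sketch below generates such a run table; the factor names and levels are illustrative assumptions, not the actual machining parameters of the report.

```python
# Generate a two-level full-factorial run table for machining factors.
from itertools import product

def full_factorial(levels):
    """Return every combination of factor settings as a list of dicts."""
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in product(*(levels[n] for n in names))]

# Hypothetical factors for a diamond-turning experiment
factors = {
    "feed_rate":     ["low", "high"],
    "spindle_speed": ["low", "high"],
    "depth_of_cut":  ["low", "high"],
}
runs = full_factorial(factors)   # 2^3 = 8 candidate runs
```

A designed experiment with a limited budget would run only a balanced fraction of `runs` rather than all eight.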

  15. A Realistic Experimental Design and Statistical Analysis Project

    ERIC Educational Resources Information Center

    Muske, Kenneth R.; Myers, John A.

    2007-01-01

    A realistic applied chemical engineering experimental design and statistical analysis project is documented in this article. This project has been implemented as part of the professional development and applied statistics courses at Villanova University over the past five years. The novel aspects of this project are that the students are given a…

  16. Evaluation of CFD Turbulent Heating Prediction Techniques and Comparison With Hypersonic Experimental Data

    NASA Technical Reports Server (NTRS)

    Dilley, Arthur D.; McClinton, Charles R. (Technical Monitor)

    2001-01-01

    Results from a study to assess the accuracy of turbulent heating and skin friction prediction techniques for hypersonic applications are presented. The study uses the original and a modified Baldwin-Lomax turbulence model with a space marching code. Grid converged turbulent predictions using the wall damping formulation (original model) and local damping formulation (modified model) are compared with experimental data for several flat plates. The wall damping and local damping results are similar for hot wall conditions, but differ significantly for cold walls, i.e., T_w/T_t < 0.3, with the wall damping heating and skin friction 10-30% above the local damping results. Furthermore, the local damping predictions have reasonable or good agreement with the experimental heating data for all cases. The impact of the two formulations on the van Driest damping function and the turbulent eddy viscosity distribution for a cold wall case indicates the importance of including temperature gradient effects. Grid requirements for accurate turbulent heating predictions are also studied. These results indicate that a cell Reynolds number of 1 is required for grid converged heating predictions, but coarser grids with a y+ less than 2 are adequate for design of hypersonic vehicles. Based on the results of this study, it is recommended that the local damping formulation be used with the Baldwin-Lomax and Cebeci-Smith turbulence models in design and analysis of Hyper-X and future hypersonic vehicles.
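For reference, the van Driest damping function at the heart of both formulations shrinks the near-wall mixing length. The sketch below uses the standard constants; it is a simplified illustration, not the full Baldwin-Lomax model, and the wall-versus-local distinction studied above lies in which flow properties are used to form y+.

```python
# Van Driest-damped mixing length: l = kappa * y * (1 - exp(-y+/A+)).
import math

KAPPA = 0.4    # von Karman constant
A_PLUS = 26.0  # van Driest damping constant

def mixing_length(y, y_plus):
    """Near-wall mixing length with van Driest damping applied."""
    damping = 1.0 - math.exp(-y_plus / A_PLUS)
    return KAPPA * y * damping
```

The damping factor vanishes at the wall (y+ = 0) and approaches 1 far from it, recovering the undamped log-law mixing length kappa * y.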

  17. Experimental Design to Evaluate Directed Adaptive Mutation in Mammalian Cells

    PubMed Central

    Chiaro, Christopher R; May, Tobias

    2014-01-01

    Background We describe the experimental design for a methodological approach to determine whether directed adaptive mutation occurs in mammalian cells. Identification of directed adaptive mutation would have profound practical significance for a wide variety of biomedical problems, including disease development and resistance to treatment. In adaptive mutation, the genetic or epigenetic change is not random; instead, the presence and type of selection influences the frequency and character of the mutation event. Adaptive mutation can contribute to the evolution of microbial pathogenesis, cancer, and drug resistance, and may become a focus of novel therapeutic interventions. Objective Our experimental approach was designed to distinguish between 3 types of mutation: (1) random mutations that are independent of selective pressure, (2) undirected adaptive mutations that arise when selective pressure induces a general increase in the mutation rate, and (3) directed adaptive mutations that arise when selective pressure induces targeted mutations that specifically influence the adaptive response. The purpose of this report is to introduce an experimental design and describe limited pilot experiment data (not to describe a complete set of experiments); hence, it is an early report. Methods An experimental design based on immortalization of mouse embryonic fibroblast cells is presented that links clonal cell growth to reversal of an inactivating polyadenylation site mutation. Thus, cells exhibit growth only in the presence of both the countermutation and an inducing agent (doxycycline). The type and frequency of mutation in the presence or absence of doxycycline will be evaluated. Additional experimental approaches would determine whether the cells exhibit a generalized increase in mutation rate and/or whether the cells show altered expression of error-prone DNA polymerases or of mismatch repair proteins. 
Results We performed the initial stages of characterizing our system

  18. Taguchi method of experimental design in materials education

    NASA Technical Reports Server (NTRS)

    Weiser, Martin W.

    1993-01-01

    Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
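The fractional-factorial economy described above can be made concrete with the smallest Taguchi orthogonal array, L4, which studies three two-level factors in four runs instead of the full factorial's eight, together with one of the standard signal-to-noise ratios. The array and the SN formula are standard; the rest is illustrative.

```python
import math

# L4 orthogonal array: 4 runs x 3 two-level factors (levels coded 0/1).
# Every column is balanced, and every pair of columns contains each
# level combination exactly once.
L4 = [
    [0, 0, 0],
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0],
]

def sn_larger_is_better(replicates):
    """Taguchi 'larger is better' signal-to-noise ratio:
    SN = -10 * log10(mean(1 / y_i^2))."""
    n = len(replicates)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in replicates) / n)
```

The loss-function emphasis on reproducibility shows up directly: for a fixed mean response, more variable replicates yield a lower SN ratio.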

  19. Development of a complex experimental system for controlled ecological life support technique

    NASA Astrophysics Data System (ADS)

    Guo, S.; Tang, Y.; Zhu, J.; Wang, X.; Feng, H.; Ai, W.; Qin, L.; Deng, Y.

    A complex experimental system for controlled ecological life support technique can be used as a test platform for plant-man integrated experiments and material closed-loop experiments of the controlled ecological life support system (CELSS). Based on extensive planning, scheme design, and engineering drawing work, the system was built through the steps of processing, installation, and joint debugging. The system contains a volume of about 40.0 m³; its interior atmospheric parameters, such as temperature, relative humidity, oxygen concentration, carbon dioxide concentration, total pressure, lighting intensity, photoperiod, water content in the growing matrix, and ethylene concentration, are all monitored and controlled automatically and effectively. Its growing system consists of two rows of racks along its left and right sides, each of which holds two layers, upper and lower. Eight growing beds provide a total area of about 8.4 m², and their vertical spacing can be adjusted automatically and independently. Lighting sources consist of both red and blue light-emitting diodes. Successful development of the test platform will create an essential condition for the next large-scale integrated study of controlled ecological life support technique.

  20. Experimental techniques for evaluating steady-state jet engine performance in an altitude facility

    NASA Technical Reports Server (NTRS)

    Smith, J. M.; Young, C. Y.; Antl, R. J.

    1971-01-01

    Jet engine calibration tests were conducted in an altitude facility using a contoured bellmouth inlet duct, four fixed-area water-cooled exhaust nozzles, and an accurately calibrated thrust measuring system. Accurate determination of the airflow measuring station flow coefficient, the flow and thrust coefficients of the exhaust nozzles, and the experimental and theoretical terms in the nozzle gross thrust equation were some of the objectives of the tests. A primary objective was to develop a technique to determine gross thrust for the turbojet engine used in this test that could also be used for future engine and nozzle evaluation tests. The probable error in airflow measurement was found to be approximately 0.6 percent at the bellmouth throat design Mach number of 0.6. The probable error in nozzle gross thrust measurement was approximately 0.6 percent at the load cell full-scale reading.
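Probable-error figures like the 0.6 percent values above are normally built up from independent elemental uncertainties. A common combination rule is root-sum-square; the component values in the sketch below are illustrative assumptions, not the actual error budget of these tests.

```python
# Root-sum-square (RSS) combination of independent percentage errors.
import math

def rss(*components_percent):
    """Combine independent percentage errors: sqrt(sum of squares)."""
    return math.sqrt(sum(c ** 2 for c in components_percent))

# e.g. hypothetical pressure, temperature, and area contributions to a
# flow-measurement error
total_error = rss(0.4, 0.3, 0.3)   # about 0.58 percent
```

Because errors add in quadrature, the total is dominated by the largest component, which is why calibration effort concentrates on the worst elemental uncertainty first.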

  1. Computational design and experimental validation of new thermal barrier systems

    SciTech Connect

    Guo, Shengmin

    2015-03-31

    The focus of this project is on the development of a reliable and efficient ab initio based computational high temperature material design method which can be used to assist the Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm the new TBCs' properties. Southern University is the subcontractor on this project, with a focus on computational simulation method development. We have performed ab initio density functional theory (DFT) calculations and molecular dynamics simulations to screen top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validation, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  2. Active flutter suppression - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1991-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind-tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in flutter dynamic pressure and flutter frequency in the mathematical model. The flutter suppression controller was also successfully operated in combination with a roll maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  3. New head gradient coil design and construction techniques

    PubMed Central

    Handler, William B; Harris, Chad T; Scholl, Timothy J; Parker, Dennis L; Goodrich, K Craig; Dalrymple, Brian; Van Sass, Frank; Chronik, Blaine A

    2013-01-01

    Purpose To design and build a head insert gradient coil to use in conjunction with body gradients for superior imaging. Materials and Methods The use of the Boundary Element Method to solve for a gradient coil wire pattern on an arbitrary surface has allowed us to incorporate engineering changes into the electromagnetic design of a gradient coil directly. Improved wire pattern design has been combined with robust manufacturing techniques and novel cooling methods. Results The finished coil had an efficiency of 0.15 mT/m/A in all three axes and allowed the imaging region to extend across the entire head and upper part of the neck. Conclusion The ability to adapt your electromagnetic design to necessary changes from an engineering perspective leads to superior coil performance. PMID:24123485

  4. Development of Experimental Setup of Metal Rapid Prototyping Machine using Selective Laser Sintering Technique

    NASA Astrophysics Data System (ADS)

    Patil, S. N.; Mulay, A. V.; Ahuja, B. B.

    2016-08-01

    Unlike traditional manufacturing processes, additive manufacturing such as rapid prototyping allows designers to produce parts that were previously considered too complex to make economically. A shift is taking place from plastic prototypes to fully functional metallic parts made by direct deposition of metallic powders, as the produced parts can be used directly for their intended purpose. This work is directed towards the development of an experimental setup for a metal rapid prototyping machine using selective laser sintering, and studies the various parameters that play an important role in metal rapid prototyping using the SLS technique. The machine structure is mainly divided into three categories: (1) Z-movement of the bed and table, (2) X-Y movement arrangement for LASER movement, and (3) the feeder mechanism. Z-movement of the bed is controlled using a lead screw, bevel gear pair, and stepper motor, which maintains the accuracy of the layer thickness. X-Y movements are controlled using a timing belt and stepper motors for precise movement of the LASER source. A feeder mechanism was then developed to control the uniformity of the metal powder layer thickness. Simultaneously, a study was carried out on the selection of material. Various types of metal powders can be used for metal RP: a single metal powder, a mixture of two metal powders, or a combination of metal and polymer powders. The conclusion favors the use of a mixture of two metal powders to minimize problems such as the balling effect and porosity. The developed system can be validated by conducting various experiments on a manufactured part to check its mechanical and metallurgical properties. After studying the results of these experiments, various process parameters such as LASER properties (power, speed, etc.) and material properties (grain size, structure, etc.) will be optimized. 
This work is mainly focused on the design and development of a cost-effective experimental setup for metal rapid prototyping using the SLS technique, which will give the feel of

  5. Design and experimental evaluation of compact radial-inflow turbines

    NASA Technical Reports Server (NTRS)

    Fredmonski, A. J.; Huber, F. W.; Roelke, R. J.; Simonyi, S.

    1991-01-01

    The application of a multistage 3D Euler solver to the aerodynamic design of two compact radial-inflow turbines is presented, along with experimental results evaluating and validating the designs. The objectives of the program were to design, fabricate, and rig test compact radial-inflow turbines with equal or better efficiency relative to conventional designs, while having 40 percent less rotor length than current traditionally-sized radial turbines. The approach to achieving these objectives was to apply a calibrated 3D multistage Euler code to accurately predict and control the high rotor flow passage velocities and high aerodynamic loadings resulting from the reduction in rotor length. A comparison of the advanced compact designs to current state-of-the-art configurations is presented.

  6. Analytical and experimental performance of optimal controller designs for a supersonic inlet

    NASA Technical Reports Server (NTRS)

    Zeller, J. R.; Lehtinen, B.; Geyser, L. C.; Batterton, P. G.

    1973-01-01

    The techniques of modern optimal control theory were applied to the design of a control system for a supersonic inlet. The inlet control problem was approached as a linear stochastic optimal control problem using as the performance index the expected frequency of unstarts. The details of the formulation of the stochastic inlet control problem are presented. The computational procedures required to obtain optimal controller designs are discussed, and the analytically predicted performance of controllers designed for several different inlet conditions is tabulated. The experimental implementation of the optimal control laws is described, and the experimental results obtained in a supersonic wind tunnel are presented. The control laws were implemented with analog and digital computers. Comparisons are made between the experimental and analytically predicted performance results. Comparisons are also made between the results obtained with continuous analog computer controllers and discrete digital computer versions.
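In the LQG setting the abstract describes, the optimal state-feedback gain follows from an algebraic Riccati equation. The sketch below computes a continuous-time LQR gain for an assumed two-state plant; the matrices are illustrative stand-ins, not the inlet model or the unstart-frequency performance index of the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed two-state plant dynamics (illustrative, not the inlet model)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state penalty
R = np.array([[1.0]])  # control penalty

# Solve A'P + P A - P B R^-1 B' P + Q = 0 for the cost-to-go matrix P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)  # optimal feedback u = -K x

# LQR guarantees a stable closed loop: eigenvalues of A - B K lie in
# the open left half-plane
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
```

A discrete digital implementation, as flown alongside the analog controllers in the wind tunnel tests, would instead solve the discrete-time Riccati equation (`solve_discrete_are`) at the chosen sample rate.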

  7. Distributed processing techniques: interface design for interactive information sharing.

    PubMed

    Wagner, J R; Krumbholz, S D; Silber, L K; Aniello, A J

    1978-01-01

    The Information Systems Division of the University of Iowa Hospitals and Clinics has successfully designed and implemented a set of generalized interface data-handling routines that control message traffic between a satellite minicomputer in a clinical laboratory and a large main-frame computer. A special queue status inquiry transaction has also been developed that displays the current message-processing backlog and other system performance information. The design and operation of these programs are discussed in detail, with special emphasis on the message-queuing and verification techniques required in a distributed processing environment.

  8. Design and construction techniques for permeable reactive barriers.

    PubMed

    Gavaskar, A R

    1999-08-12

    Adequate site characterization, bench-scale column testing, and hydrogeologic modeling formed the basis for the design and construction of permeable reactive barriers for groundwater remediation at various sites, such as Dover Air Force Base, DE and Naval Air Station, Moffett Field, CA. Dissolved chlorinated solvents, such as perchloroethylene (PCE) and trichloroethylene (TCE), have been the focus at many sites because the passive nature of the reactive barrier operation makes such barriers particularly useful for treating groundwater contaminants that can persist in the aquifer for several years. A combination of conventional and innovative site characterization, design, and construction techniques were used at these sites to increase the potential cost effectiveness of field application.

  9. Photographic-assisted prosthetic design technique for the anterior teeth.

    PubMed

    Zaccaria, Massimiliano; Squadrito, Nino

    2015-01-01

    The aim of this article is to propose a standardized protocol, based on a well-planned clinical and photographic technique, for treating all unesthetic anterior maxillary situations. As unesthetic aspects should be treated as a pathology, instruments for making a diagnosis are necessary. The prosthetic design that resolves unesthetic aspects while respecting function should be considered a therapy and, as such, requires instruments for making a prognosis. A prospective study was conducted to compare patient involvement regarding the alterations to be made, initially with only a graphic esthetic previsualization and later with an intraoral functional and esthetic previsualization. The two techniques yielded significantly different results. The instruments and steps necessary for the intraoral functional and esthetic previsualization technique are explained in detail in this article.

  10. Design and experimental results for the S805 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    An airfoil for horizontal-axis wind-turbine applications, the S805, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  11. Design and experimental results for the S809 airfoil

    SciTech Connect

    Somers, D M

    1997-01-01

    A 21-percent-thick, laminar-flow airfoil, the S809, for horizontal-axis wind-turbine applications, has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of restrained maximum lift, insensitive to roughness, and low profile drag have been achieved. The airfoil also exhibits a docile stall. Comparisons of the theoretical and experimental results show good agreement. Comparisons with other airfoils illustrate the restrained maximum lift coefficient as well as the lower profile-drag coefficients, thus confirming the achievement of the primary objectives.

  12. Design and Implementation of an Experimental Segway Model

    NASA Astrophysics Data System (ADS)

    Younis, Wael; Abdelati, Mohammed

    2009-03-01

    The Segway is the first transportation product to stand, balance, and move in the same way we do. It is a truly 21st-century idea. The aim of this research is to study the theory behind building Segway vehicles, based on the stabilization of an inverted pendulum. An experimental model has been designed and implemented through this study. The model has been tested for its balance by running a proportional-derivative (PD) algorithm on a microprocessor chip. The model has been identified in order to serve as an educational experimental platform for Segways.
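The inverted-pendulum stabilization at the heart of this model can be sketched in a few lines. The pendulum parameters and PD gains below are illustrative assumptions, not the values used on the authors' hardware; the simulation shows a PD law driving the linearized tilt angle back to zero.

```python
# PD stabilization of a linearized inverted pendulum:
# theta'' = (g/l)*theta + u, with u = -kp*theta - kd*omega.
# Parameters and gains are illustrative only.

g_over_l = 9.81          # gravity / pendulum length (1 m rod)
kp, kd = 30.0, 8.0       # proportional and derivative gains (kp > g/l required)
dt, steps = 0.001, 5000  # 5 s of simulated time

theta, omega = 0.1, 0.0  # initial tilt (rad) and tilt rate (rad/s)
for _ in range(steps):
    u = -kp * theta - kd * omega     # PD control torque per unit inertia
    alpha = g_over_l * theta + u     # linearized angular acceleration
    omega += alpha * dt              # semi-implicit Euler integration
    theta += omega * dt

print(theta, omega)
```

With these gains the closed loop is theta'' + 8*theta' + 20.19*theta = 0, an underdamped but rapidly decaying response; on real hardware the derivative term would come from a rate gyro rather than exact differentiation.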

  13. Unique considerations in the design and experimental evaluation of tailored wings with elastically produced chordwise camber

    NASA Technical Reports Server (NTRS)

    Rehfield, Lawrence W.; Zischka, Peter J.; Fentress, Michael L.; Chang, Stephen

    1992-01-01

    Some of the unique considerations that are associated with the design and experimental evaluation of chordwise deformable wing structures are addressed. Since chordwise elastic camber deformations are desired and must be free to develop, traditional rib concepts and experimental methodology cannot be used. New rib design concepts are presented and discussed. An experimental methodology based upon the use of a flexible sling support and load application system has been created and utilized to evaluate a model box beam experimentally. Experimental data correlate extremely well with design analysis predictions based upon a beam model for the global properties of camber compliance and spanwise bending compliance. Local strain measurements exhibit trends in agreement with intuition and theory but depart slightly from theoretical perfection based upon beam-like behavior alone. It is conjectured that some additional refinement of experimental technique is needed to explain or eliminate these (minor) departures from asymmetric behavior of upper and lower box cover strains. Overall, a solid basis for the design of box structures based upon the bending method of elastic camber production has been confirmed by the experiments.

  14. A Course in Heterogeneous Catalysis: Principles, Practice, and Modern Experimental Techniques.

    ERIC Educational Resources Information Center

    Wolf, Eduardo E.

    1981-01-01

    Outlines a multidisciplinary course which comprises fundamental, practical, and experimental aspects of heterogeneous catalysis. The course structure is a combination of lectures and demonstrations dealing with the use of spectroscopic techniques for surface analysis. (SK)

  15. Designing the Balloon Experimental Twin Telescope for Infrared Interferometry

    NASA Technical Reports Server (NTRS)

    Rinehart, Stephen

    2011-01-01

    While infrared astronomy has revolutionized our understanding of galaxies, stars, and planets, further progress on major questions is stymied by the inescapable fact that the spatial resolution of single-aperture telescopes degrades at long wavelengths. The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII) is an 8-meter boom interferometer to operate in the FIR (30-90 micron) on a high altitude balloon. The long baseline will provide unprecedented angular resolution (approx. 5") in this band. In order for BETTII to be successful, the gondola must be designed carefully to provide a high level of stability with optics designed to send a collimated beam into the cryogenic instrument. We present results from the first 5 months of design effort for BETTII. Over this short period of time, we have made significant progress and are on track to complete the design of BETTII during this year.

  16. Optimal active vibration absorber: Design and experimental results

    NASA Technical Reports Server (NTRS)

    Lee-Glauser, Gina; Juang, Jer-Nan; Sulla, Jeffrey L.

    1992-01-01

    An optimal active vibration absorber can provide guaranteed closed-loop stability and control for large flexible space structures with collocated sensors and actuators. The active vibration absorber is a second-order dynamic system designed to suppress unwanted structural vibration, and it can be designed with minimal knowledge of the controlled system. Two methods for optimizing the active vibration absorber parameters are illustrated: minimum resonant amplitude and frequency-matched active controllers. The Controls-Structures Interaction Phase-1 Evolutionary Model at NASA LaRC is used to demonstrate the effectiveness of the active vibration absorber for vibration suppression. Performance is compared numerically and experimentally using acceleration feedback.

  17. Experimental design principles for isotopically instationary 13C labeling experiments.

    PubMed

    Nöh, Katharina; Wiechert, Wolfgang

    2006-06-01

    13C metabolic flux analysis (MFA) is a well-established tool in metabolic engineering that has found numerous applications in recent years. However, one strong limitation of the current method is the requirement of an (at least approximately) isotopically stationary state at sampling time. This requirement imposes, in principle, a lower limit on the duration of a 13C labeling experiment. A new methodological development is based on repeated sampling during the instationary transient of the 13C labeling dynamics. The statistical and computational treatment of such instationary experiments is completely new terrain. The computational effort is very high because large systems of differential equations have to be solved and, moreover, the intracellular pool sizes play a significant role. For this reason, the present contribution works out principles and strategies for the experimental design of instationary experiments based on a simple example network. Hereby, the potential of isotopically instationary experiments is investigated in detail. Various statistical results on instationary flux identifiability are presented, and possible pitfalls of experimental design are discussed. Finally, a framework for almost optimal experimental design of isotopically instationary experiments is proposed, which provides a practical guideline for the analysis of large-scale networks.
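The role of the instationary transient can be seen in a toy simulation. The two-pool linear pathway, flux, and pool sizes below are invented for illustration, not the example network of the paper; the point is that the transient enrichment curves depend on the pool sizes, which become invisible once labeling reaches stationarity.

```python
# Toy isotopically instationary labeling simulation: a linear pathway
# feed -> A -> B at flux v, with pool sizes VA and VB.  xA, xB are the
# 13C enrichments of the pools; the feed switches to fully labeled
# substrate at t = 0.  Network and numbers are illustrative only.

def simulate(v, VA, VB, t_end=10.0, dt=0.001):
    xA = xB = 0.0
    t = 0.0
    trace = []
    while t < t_end:
        xA += (v / VA) * (1.0 - xA) * dt   # labeled feed enters pool A
        xB += (v / VB) * (xA - xB) * dt    # pool B is fed from pool A
        t += dt
        trace.append((t, xA, xB))
    return trace

trace = simulate(v=1.0, VA=0.5, VB=2.0)
t, xA, xB = trace[-1]
print(xA, xB)  # both enrichments approach the stationary value 1.0
```

At stationarity both enrichments equal 1 regardless of VA and VB, so only samples taken during the transient can identify the pool sizes alongside the flux, which is why repeated transient sampling carries extra information.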

  18. Development of the Biological Experimental Design Concept Inventory (BEDCI)

    PubMed Central

    Deane, Thomas; Jeffery, Erica; Pollock, Carol; Birol, Gülnur

    2014-01-01

    Interest in student conceptions of experimentation inspired the development of a fully validated 14-question inventory on experimental design in biology (BEDCI) by following established best practices in concept inventory (CI) design. This CI can be used to diagnose specific examples of non-expert-like thinking in students and to evaluate the success of teaching strategies that target conceptual change. We used BEDCI to diagnose non-expert-like student thinking in experimental design at the pre- and posttest stages in five courses (total n = 580 students) at a large research university in western Canada. Calculated difficulty and discrimination metrics indicated that BEDCI questions are able to effectively capture learning changes at the undergraduate level. A high correlation (r = 0.84) between responses by students in similar courses and at the same stage of their academic career also suggests that the test is reliable. Students showed significant positive learning changes by the posttest stage, but some non-expert-like responses were widespread and persistent. BEDCI is a reliable and valid diagnostic tool that can be used in a variety of life sciences disciplines. PMID:25185236

  19. Quiet Clean Short-Haul Experimental Engine (QCSEE). Preliminary analyses and design report, volume 2

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental and flight propulsion systems are presented. The following areas are discussed: engine core and low pressure turbine design; bearings and seals design; controls and accessories design; nacelle aerodynamic design; nacelle mechanical design; weight; and aircraft systems design.

  20. Experimental design and desirability function approach for development of novel anticancer nanocarrier delivery systems.

    PubMed

    Rafati, H; Mirzajani, F

    2011-01-01

    The therapeutic effects of anticancer drugs would improve greatly if problems with low water solubility and toxic adverse reactions could be solved. In this work, a full factorial experimental design was used to develop a polymeric nanoparticulate delivery system as an alternative technique for anticancer drug delivery. Nanoparticles containing tamoxifen citrate were prepared using an O/W emulsification-solvent evaporation technique and characterized by different analytical methods, including scanning electron microscopy (SEM), particle size analysis, and high-pressure liquid chromatography (HPLC). Nanoparticle characteristics, including size, size distribution, drug loading, and encapsulation efficiency, were optimized by means of a full factorial experimental design over four independent variables and a desirability function, using Design-Expert software. The resulting tamoxifen-loaded nanoparticles showed the best response with particle sizes less than 200 nm, improved encapsulation efficiency of more than 80%, and optimum loading above 30%. The overall results demonstrate that the desirability function in experimental design is a beneficial approach in nanoparticle drug delivery design. PMID:21391432
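The combination of a two-level full factorial design with a Derringer-type desirability function can be sketched as follows. The four coded factors, the linear response models, and the desirability bounds are hypothetical stand-ins for the formulation variables in the study, purely for illustration.

```python
# Two-level full factorial design (2^4 = 16 runs) scored with a
# Derringer-style overall desirability D (geometric mean of individual
# desirabilities).  Response models and bounds are hypothetical.

from itertools import product

def d_smaller(y, lo, hi):     # smaller-is-better (e.g. particle size, nm)
    return 1.0 if y <= lo else 0.0 if y >= hi else (hi - y) / (hi - lo)

def d_larger(y, lo, hi):      # larger-is-better (e.g. encapsulation, %)
    return 0.0 if y <= lo else 1.0 if y >= hi else (y - lo) / (hi - lo)

best = None
for x1, x2, x3, x4 in product((-1, 1), repeat=4):        # coded factor levels
    size = 220 - 25 * x1 + 15 * x2 - 10 * x3 + 5 * x4    # hypothetical model
    ee = 70 + 8 * x1 + 5 * x3 - 3 * x4                   # hypothetical model
    D = (d_smaller(size, 150, 300) * d_larger(ee, 50, 90)) ** 0.5
    if best is None or D > best[0]:
        best = (D, (x1, x2, x3, x4), size, ee)

print(best)  # run with the highest overall desirability
```

In practice the response models would be fitted to the measured runs (e.g. in Design-Expert) before the desirability search; the enumeration above simply picks the factorial run that best balances the competing responses.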

  1. Techniques for Conducting Effective Concept Design and Design-to-Cost Trade Studies

    NASA Technical Reports Server (NTRS)

    Di Pietro, David A.

    2015-01-01

    Concept design plays a central role in project success as its product effectively locks the majority of system life cycle cost. Such extraordinary leverage presents a business case for conducting concept design in a credible fashion, particularly for first-of-a-kind systems that advance the state of the art and that have high design uncertainty. A key challenge, however, is to know when credible design convergence has been achieved in such systems. Using a space system example, this paper characterizes the level of convergence needed for concept design in the context of technical and programmatic resource margins available in preliminary design and highlights the importance of design and cost evaluation learning curves in determining credible convergence. It also provides techniques for selecting trade study cases that promote objective concept evaluation, help reveal unknowns, and expedite convergence within the trade space and conveys general practices for conducting effective concept design-to-cost studies.

  2. Design and experimental study of a novel giant magnetostrictive actuator

    NASA Astrophysics Data System (ADS)

    Xue, Guangming; Zhang, Peilin; He, Zhongbo; Li, Dongwei; Huang, Yingjie; Xie, Wenqiang

    2016-12-01

    Giant magnetostrictive actuators are widely used in precision-driving applications for their excellent performance. However, in driving a switching valve, especially the ball valve in an electronically controlled injector, the actuator cannot deliver its full performance because of limits on output displacement and response speed. A novel giant magnetostrictive actuator, which reaches its maximum displacement without a bias magnetic field, is designed in this paper. The elongation of the giant magnetostrictive material is converted into a shortening of the actuator's axial dimension by a T-shaped output rod. Furthermore, to reduce response time, a driving voltage with a high opening level and a low holding level is designed. Response time and output displacement are studied experimentally with a measuring system. The measured results show that the designed driving voltage improves the response speed of the actuator displacement quite effectively, and that the actuator can output various steady-state displacements to achieve a range of driving effects.

  3. LeRC rail accelerators - Test designs and diagnostic techniques

    NASA Technical Reports Server (NTRS)

    Zana, L. M.; Kerslake, W. R.; Sturman, J. C.; Wang, S. Y.; Terdan, F. F.

    1984-01-01

    The feasibility of using rail accelerators for various in-space and to-space propulsion applications was investigated. A 1 meter, 24 sq mm bore accelerator was designed with the goal of demonstrating projectile velocities of 15 km/sec using a peak current of 200 kA. A second rail accelerator, 1 meter long with a 156.25 sq mm bore, was designed with clear polycarbonate sidewalls to permit visual observation of the plasma arc. A study of available diagnostic techniques and their application to the rail accelerator is presented. Specific topics of discussion include the use of interferometry and spectroscopy to examine the plasma armature as well as the use of optical sensors to measure rail displacement during acceleration. Standard diagnostics such as current and voltage measurements are also discussed. Previously announced in STAR as N83-35053

  4. Design considerations and construction techniques for successive alkalinity producing systems

    SciTech Connect

    Skovran, G.A.; Clouser, C.R.

    1998-12-31

    Successive Alkalinity Producing Systems (SAPS) have been utilized for several years for the passive treatment of acid mine drainage. The SAPS technology is an effective method for inducing alkalinity to neutralize acid mine water and promote the precipitation of contaminating metals. Several design considerations and construction techniques are important for proper system function and longevity. This paper discusses SAPS design, water collection and introduction to the SAPS, SAPS hydraulics, construction, operation and maintenance, and safety; these factors were found to be critical to obtaining maximum alkalinity at several SAPS treatment sites in southwestern Pennsylvania. Incorporating these factors into future SAPS will aid effective treatment, reduce maintenance costs, and maximize the long-term effectiveness of successive alkalinity producing systems.

  5. Amplified energy harvester from footsteps: design, modeling, and experimental analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ya; Chen, Wusi; Guzman, Plinio; Zuo, Lei

    2014-04-01

    This paper presents the design, modeling, and experimental analysis of an amplified footstep energy harvester. With the unique amplified piezoelectric stack design, the kinetic energy generated by footsteps can be effectively captured and converted into usable DC power that could potentially drive many electric devices, such as smart phones, sensors, and monitoring cameras. This doormat-like energy harvester can be used in crowded places such as train stations, malls, concerts, and airport escalator/elevator/stair entrances, or anywhere large groups of people walk. The harvested energy provides an alternative source of renewable power, reducing demand on grids that run largely on polluting, global-warming-inducing fossil fuels. In this paper, two modeling approaches are compared to calculate power output. The first method starts from the single-degree-of-freedom (SDOF) constitutive equations and applies a correction factor to the resulting electromechanically coupled equations of motion. The second approach derives the coupled equations of motion from Hamilton's principle and the constitutive equations, and then formulates them with the finite element method (FEM). Experimental testing results are presented to validate the modeling approaches. Simulation results from both approaches agree very well with experimental results, with percentage errors of 2.09% for FEM and 4.31% for SDOF.
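The first modeling approach, an electromechanically coupled SDOF model, can be sketched as follows. The mass, stiffness, coupling, and load values below are illustrative assumptions, not the parameters of the footstep harvester, and no correction factor is applied; the sketch only shows the two coupled equations being integrated and the energy dissipated in the load being accumulated.

```python
# Minimal SDOF piezoelectric harvester model with illustrative parameters:
#   m x'' + c x' + k x + theta V = F(t)      (mechanical equation)
#   Cp V' + V/R = theta x'                   (electrical equation, load R)
import math

m, c, k = 0.01, 0.2, 4000.0    # mass (kg), damping (N s/m), stiffness (N/m)
theta = 0.005                  # electromechanical coupling (N/V)
Cp, R = 1e-7, 1e5              # piezo capacitance (F), load resistance (ohm)

wn = math.sqrt(k / m)          # drive at the short-circuit natural frequency
dt, x, v, V = 1e-6, 0.0, 0.0, 0.0
energy, t = 0.0, 0.0
while t < 0.5:
    F = 0.1 * math.sin(wn * t)                   # harmonic forcing (N)
    a = (F - c * v - k * x - theta * V) / m      # mechanical acceleration
    Vdot = (theta * v - V / R) / Cp              # voltage dynamics across R
    v += a * dt                                  # explicit Euler integration
    x += v * dt
    V += Vdot * dt
    energy += (V * V / R) * dt                   # energy dissipated in the load
    t += dt

print(energy)  # harvested energy (J) over 0.5 s
```

A correction factor, as in the paper's first approach, would rescale the lumped parameters so the SDOF prediction matches the distributed-parameter (or FEM) solution.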

  6. Structural design and fabrication techniques of composite unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Hunt, Daniel Stephen

    The popularity of unmanned aerial vehicles has grown substantially in recent years, both in the private sector and in government applications. This growth can be attributed largely to the increased performance of the technology that controls these vehicles, as well as its decreasing cost and size. What is sometimes forgotten, though, is that research on and advancement of the airframes themselves are equally as important as what is done with them. With current computer-aided design programs, the limits of design optimization can be pushed further than ever before, resulting in lighter and faster airframes that can achieve longer endurances, higher altitudes, and more complex missions. However, realization of a paper design is still limited by the physical restrictions of the real world and the structural constraints associated with it. The purpose of this paper is not only to step through current design and manufacturing processes of composite UAVs at Oklahoma State University, but also to focus on composite spars, utilizing and relating both calculated and empirical data. Most of the experience gained for this thesis came from the Cessna Longitude project. The Longitude is a 1/8-scale flying demonstrator that Oklahoma State University constructed for Cessna. For the project, Cessna required dynamic flight data for their design process in order to meet their 2017 release date. Oklahoma State University was privileged to assist Cessna with the mission of supporting the design validation of their largest business jet to date. This paper details the steps of the fabrication process used in construction of the Longitude, as well as several other projects, beginning with structural design, machining, molding, and skin layup, and ending with final assembly. Attention is also paid specifically to spar design and testing in an effort to ease the design phase. This document is intended to act not only as a further development of current

  7. Design and experimental results for the S814 airfoil

    SciTech Connect

    Somers, D.M.

    1997-01-01

    A 24-percent-thick airfoil, the S814, for the root region of a horizontal-axis wind-turbine blade has been designed and analyzed theoretically and verified experimentally in the low-turbulence wind tunnel of the Delft University of Technology Low Speed Laboratory, The Netherlands. The two primary objectives of high maximum lift, insensitive to roughness, and low profile drag have been achieved. The constraints on the pitching moment and the airfoil thickness have been satisfied. Comparisons of the theoretical and experimental results show good agreement with the exception of maximum lift which is overpredicted. Comparisons with other airfoils illustrate the higher maximum lift and the lower profile drag of the S814 airfoil, thus confirming the achievement of the objectives.

  8. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2012-10-01

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop and implement novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed Durability Test Rig.

  9. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2014-04-01

    This project (10/01/2010-9/30/2014), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from Louisiana State University (LSU) Mechanical Engineering Department and Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. In this project, the focus is to develop and implement novel molecular dynamics method to improve the efficiency of simulation on novel TBC materials; perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; perform material characterizations and oxidation/corrosion tests; and demonstrate our new thermal barrier coating (TBC) systems experimentally under integrated gasification combined cycle (IGCC) environments.

  10. Technological issues and experimental design of gene association studies.

    PubMed

    Distefano, Johanna K; Taverna, Darin M

    2011-01-01

    Genome-wide association studies (GWAS), in which thousands of single-nucleotide polymorphisms (SNPs) spanning the genome are genotyped in individuals who are phenotypically well characterized, currently represent the most popular strategy for identifying gene regions associated with common diseases and related quantitative traits. Improvements in technology and throughput capability, development of powerful statistical tools, and more widespread acceptance of pooling-based genotyping approaches have led to greater utilization of GWAS in human genetics research. However, important considerations for optimal experimental design, including selection of the most appropriate genotyping platform, can enhance the utility of the approach even further. This chapter reviews experimental and technological issues that may affect the success of GWAS findings and proposes strategies for developing the most comprehensive, logical, and cost-effective approaches for genotyping given the population of interest.

  11. Acting like a physicist: Student approach study to experimental design

    NASA Astrophysics Data System (ADS)

    Karelina, Anna; Etkina, Eugenia

    2007-12-01

    National studies of science education have unanimously concluded that preparing our students for the demands of the 21st-century workplace is one of the major goals. This paper describes a study of student activities in introductory college physics labs, which were designed to help students acquire abilities that are valuable in the workplace. In these labs [called Investigative Science Learning Environment (ISLE) labs], students design their own experiments. Our previous studies have shown that students in these labs acquire scientific abilities such as the ability to design an experiment to solve a problem, the ability to collect and analyze data, the ability to evaluate assumptions and uncertainties, and the ability to communicate. These studies mostly concentrated on analyzing students' writing, evaluated by specially designed scientific ability rubrics. Recently, we started to study whether the ISLE labs make students not only write like scientists but also engage in discussions and act like scientists while doing the labs. For example, do students plan an experiment, validate assumptions, evaluate results, and revise the experiment if necessary? A brief report of some of our findings from monitoring student activity during ISLE and nondesign labs was presented in the Physics Education Research Conference Proceedings. We found differences in student behavior and discussions indicating that ISLE labs do in fact encourage a scientist-like approach to experimental design and promote high-quality discussions. This paper presents a full description of the study.

  12. OPTIMIZATION OF EXPERIMENTAL DESIGNS BY INCORPORATING NIF FACILITY IMPACTS

    SciTech Connect

    Eder, D C; Whitman, P K; Koniges, A E; Anderson, R W; Wang, P; Gunney, B T; Parham, T G; Koerner, J G; Dixit, S N; . Suratwala, T I; Blue, B E; Hansen, J F; Tobin, M T; Robey, H F; Spaeth, M L; MacGowan, B J

    2005-08-31

    For experimental campaigns on the National Ignition Facility (NIF) to be successful, they must obtain useful data without causing unacceptable impact on the facility. Of particular concern is excessive damage to optics and diagnostic components. There are 192 fused silica main debris shields (MDS) exposed to the potentially hostile target chamber environment on each shot. Damage in these optics results either from the interaction of laser light with contamination and pre-existing imperfections on the optic surface or from the impact of shrapnel fragments. Mitigation of this second damage source is possible by identifying shrapnel sources and shielding optics from them. It was recently demonstrated that the addition of 1.1-mm thick borosilicate disposable debris shields (DDS) block the majority of debris and shrapnel fragments from reaching the relatively expensive MDS's. However, DDS's cannot stop large, faster moving fragments. We have experimentally demonstrated one shrapnel mitigation technique showing that it is possible to direct fast moving fragments by changing the source orientation, in this case a Ta pinhole array. Another mitigation method is to change the source material to one that produces smaller fragments. Simulations and validating experiments are necessary to determine which fragments can penetrate or break 1-3 mm thick DDS's. Three-dimensional modeling of complex target-diagnostic configurations is necessary to predict the size, velocity, and spatial distribution of shrapnel fragments. The tools we are developing will be used to set the allowed level of debris and shrapnel generation for all NIF experimental campaigns.

  13. Development of prilling process for biodegradable microspheres through experimental designs.

    PubMed

    Fabien, Violet; Minh-Quan, Le; Michelle, Sergent; Guillaume, Bastiat; Van-Thanh, Tran; Marie-Claire, Venier-Julienne

    2016-02-10

    The prilling process offers a microparticle formulation that is easily transferable to pharmaceutical production, yielding monodisperse and highly controllable microspheres. PLGA microspheres carrying an encapsulated protein, with stem cells adhered to their surface, are proposed as a tool for regenerative therapy of injured tissue. This work focused on developing the production of PLGA microspheres by the prilling process without toxic solvents. The required production quality demanded a complete optimization of the process. Seventeen parameters were studied through experimental designs, leading to an acceptable production. The key parameters and mechanisms of formation are highlighted. PMID:26656302

  14. Designing artificial enzymes from scratch: Experimental study and mesoscale simulation

    NASA Astrophysics Data System (ADS)

    Komarov, Pavel V.; Zaborina, Olga E.; Klimova, Tamara P.; Lozinsky, Vladimir I.; Khalatur, Pavel G.; Khokhlov, Alexey R.

    2016-09-01

    We present a new concept for designing biomimetic analogs of enzymatic proteins, based on synthetic protein-like copolymers. α-Chymotrypsin is used as a prototype of the artificial catalyst. Our experimental study shows that in the course of free-radical copolymerization of hydrophobic and hydrophilic monomers, target globular nanostructures of a "core-shell" morphology appear in a selective solvent. Using a mesoscale computer simulation, we show that the protein-like globules can have a large number of catalytic centers located at the hydrophobic core/hydrophilic shell interface.

  15. An improved retinal densitometer: design concepts and experimental applications.

    PubMed

    Baker, H D; Henderson, R; O'Keefe, L P

    1989-07-01

    A photon-counting retinal densitometer is described that has been designed optically and electronically for improved sensitivity and reliability. The device allows measurement of visual pigments through the undilated natural pupils of subjects at relatively low levels of measuring lights, and serves also as an adaptometer for direct comparisons between pigment bleaching or regeneration and light or dark adaptation. Instrumental control and data collection are by computer to permit rapid and simple data analysis and comparisons between subjects. The methods by which the sensitivity and reliability have been enhanced are described in detail, and some examples of experimental results are presented.

  16. On the proper study design applicable to experimental balneology

    NASA Astrophysics Data System (ADS)

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies rest on the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm is highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A successful design for experimental mechanistic studies is proposed.

  17. On the proper study design applicable to experimental balneology.

    PubMed

    Varga, Csaba

    2016-08-01

    The simple message of this paper is that it is high time to reevaluate the strategies and optimize the efforts for the investigation of thermal (spa) waters. Several articles attempting to clarify the mode of action of medicinal waters have been published to date. Almost all studies rest on the unproven hypothesis that the inorganic ingredients are closely connected with the healing effects of bathing. A change of paradigm is highly necessary in this field, taking into consideration the presence of several biologically active organic substances in these waters. A successful design for experimental mechanistic studies is proposed.

  18. Design of vibration compensation interferometer for Experimental Advanced Superconducting Tokamak.

    PubMed

    Yang, Y; Li, G S; Liu, H Q; Jie, Y X; Ding, W X; Brower, D L; Zhu, X; Wang, Z X; Zeng, L; Zou, Z Y; Wei, X C; Lan, T

    2014-11-01

    A vibration compensation interferometer (wavelength 0.532 μm) has been designed and tested for the Experimental Advanced Superconducting Tokamak (EAST). It is designed as a sub-system of the EAST far-infrared (wavelength 432.5 μm) polarimeter/interferometer system. Two acousto-optic modulators are used to produce the 1 MHz intermediate frequency. The path-length drift of the system was lower than 2 wavelengths over a 10 min test, demonstrating the system's stability. The system's sensitivity was tested by applying a periodic vibration source to one mirror in the system; the measured vibration matches the source period. The system is expected to be installed on EAST by the end of 2014.

  19. Experimental design and quality assurance: in situ fluorescence instrumentation

    USGS Publications Warehouse

    Conmy, Robyn N.; Del Castillo, Carlos E.; Downing, Bryan D.; Chen, Robert F.

    2014-01-01

    Both instrument design and the capabilities of fluorescence spectroscopy have greatly advanced over the last several decades. Advancements include solid-state excitation sources, integration of fiber-optic technology, highly sensitive multichannel detectors, rapid-scan monochromators, sensitive spectral-correction techniques, and improved data-manipulation software (Christian et al., 1981; Lochmuller and Saavedra, 1986; Cabniss and Shuman, 1987; Lakowicz, 2006; Hudson et al., 2007). The cumulative effect of these improvements has pushed the limits and expanded the application of fluorescence techniques to numerous scientific research fields. One of the more powerful advancements is the ability to obtain in situ fluorescence measurements of natural waters (Moore, 1994). The development of submersible fluorescence instruments has been made possible by component miniaturization and power reduction, including advances in light-source technologies (light-emitting diodes, xenon lamps, ultraviolet [UV] lasers) and the compatible integration of new optical instruments with various sampling platforms (Twardowski et al., 2005 and references therein). The development of robust field sensors skirts the need for cumbersome and/or time-consuming filtration techniques, the potential artifacts associated with sample storage, and coarse sampling designs, by increasing spatiotemporal resolution (Chen, 1999; Robinson and Glenn, 1999). The ability to obtain rapid, high-quality, highly sensitive measurements over steep gradients has revolutionized investigations of dissolved organic matter (DOM) optical properties, thereby enabling researchers to address novel biogeochemical questions regarding colored or chromophoric DOM (CDOM). This chapter is dedicated to the origin, design, calibration, and use of in situ field fluorometers. It will serve as a review of considerations to be accounted for during the operation of fluorescence field sensors and call attention to areas of concern when making

  20. An Experimental Evaluation of the Effectiveness of Selected Techniques and Resources on Instruction in Vocational Agriculture.

    ERIC Educational Resources Information Center

    Kahler, Alan A.

    The study was designed to test new instructional techniques in vocational agriculture, determine their effectiveness on student achievement, and compare individual and group instructional techniques. Forty-eight Iowa high school vocational agriculture programs with enrollments of 35 students or more were randomly selected for testing the…

  1. Silicone Rubber Superstrate Loaded Patch Antenna Design Using Slotting Technique

    NASA Astrophysics Data System (ADS)

    Kaur, Bhupinder; Saini, Garima; Saini, Ashish

    2016-09-01

    To protect an antenna from external environmental conditions, the antenna should be covered with a stable, non-reactive, highly durable, and weather-resistant material that is insensitive to the changing external environment. Hence, in this paper silicone rubber is proposed as a superstrate layer to protect a patch antenna. The electrical properties of silicone rubber sealant were experimentally determined, and the effect of using it as a superstrate on a coaxial-fed microstrip patch antenna designed with the transmission-line model was observed. The overall performance degraded slightly after the use of the superstrate. To further improve the performance of the superstrate-loaded antenna, patch slots and ground defects are proposed. The proposed design achieves a wide band of 790 MHz (13.59 %), a gain of 7.12 dB, a VSWR of 1.12, and an efficiency of 83.02 %.

  2. Experimental study of elementary collection efficiency of aerosols by spray: Design of the experimental device

    SciTech Connect

    Ducret, D.; Vendel, J.; Garrec, S.L.

    1995-02-01

    The safety of a nuclear power plant containment building, in which pressure and temperature could increase because of an overheating-reactor accident, can be ensured by spraying water drops. The spray reduces the pressure and temperature levels by condensation of steam on cold water drops. The most stringent thermodynamic conditions are a pressure of 5.10{sup 5} Pa (due to steam emission) and a temperature of 413 K. Beyond its energy-dissipation function, the spray leads to the washout of fission-product particles emitted into the reactor building atmosphere. The present study is part of a large program devoted to the evaluation of realistic washout rates. The aim of this work is to develop experiments to determine the collection efficiency of aerosols by a single drop. To do this, the experimental device has to be designed against fundamental criteria: thermodynamic conditions have to be representative of the post-accident atmosphere; thermodynamic equilibrium has to be attained between the water drops and the gaseous phase; thermophoretic, diffusiophoretic, and mechanical effects have to be studied independently; and operating conditions have to be homogeneous and constant during each experiment. This paper presents the design of the experimental device. In practice, the consequences of each of these criteria on the design, and the necessity of being representative of the real conditions, will be described.

  3. Experimental Vertical Stability Studies for ITER Performance and Design Guidance

    SciTech Connect

    Humphreys, D A; Casper, T A; Eidietis, N; Ferrera, M; Gates, D A; Hutchinson, I H; Jackson, G L; Kolemen, E; Leuer, J A; Lister, J; LoDestro, L L; Meyer, W H; Pearlstein, L D; Sartori, F; Walker, M L; Welander, A S; Wolfe, S M

    2008-10-13

    Operating experimental devices have provided key inputs to the design process for ITER axisymmetric control. In particular, experiments have quantified controllability and robustness requirements in the presence of realistic noise and disturbance environments, which are difficult or impossible to characterize with modeling and simulation alone. This kind of information is particularly critical for ITER vertical control, which poses some of the highest demands on poloidal field system performance, since the consequences of loss of vertical control can be very severe. The present work describes results of multi-machine studies performed under a joint ITPA experiment on fundamental vertical control performance and controllability limits. We present experimental results from Alcator C-Mod, DIII-D, NSTX, TCV, and JET, along with analysis of these data to provide vertical control performance guidance to ITER. Useful metrics to quantify this control performance include the stability margin and maximum controllable vertical displacement. Theoretical analysis of the maximum controllable vertical displacement suggests effective approaches to improving performance in terms of this metric, with implications for ITER design modifications. Typical levels of noise in the vertical position measurement which can challenge the vertical control loop are assessed and analyzed.

  4. Prediction uncertainty and optimal experimental design for learning dynamical systems

    NASA Astrophysics Data System (ADS)

    Letham, Benjamin; Letham, Portia A.; Rudin, Cynthia; Browne, Edward P.

    2016-06-01

    Dynamical systems are frequently used to model biological systems. When these models are fit to data, it is necessary to ascertain the uncertainty in the model fit. Here, we present prediction deviation, a metric of uncertainty that determines the extent to which observed data have constrained the model's predictions. This is accomplished by solving an optimization problem that searches for a pair of models that each provides a good fit for the observed data, yet has maximally different predictions. We develop a method for estimating a priori the impact that additional experiments would have on the prediction deviation, allowing the experimenter to design a set of experiments that would most reduce uncertainty. We use prediction deviation to assess uncertainty in a model of interferon-alpha inhibition of viral infection, and to select a sequence of experiments that reduces this uncertainty. Finally, we prove a theoretical result which shows that prediction deviation provides bounds on the trajectories of the underlying true model. These results show that prediction deviation is a meaningful metric of uncertainty that can be used for optimal experimental design.
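    The pair-of-models idea behind prediction deviation can be illustrated with a hypothetical one-parameter decay model: among all parameter values that fit the observed data nearly as well as the best fit, the spread of their predictions at an unobserved point measures how strongly the data constrain that prediction. A grid-search sketch under these assumptions (the model, data, and tolerance below are invented for illustration; the authors use a proper optimization formulation):

    ```python
    import numpy as np

    # Hypothetical 1-parameter decay model y = exp(-k*t), observed at a few times.
    t_obs = np.array([0.0, 1.0, 2.0])
    y_obs = np.exp(-0.5 * t_obs) + np.array([0.01, -0.02, 0.01])  # noisy observations

    def loss(k):
        """Sum-of-squares misfit of rate constant k against the observations."""
        return np.sum((np.exp(-k * t_obs) - y_obs) ** 2)

    ks = np.linspace(0.1, 1.0, 901)                  # candidate rate constants
    losses = np.array([loss(k) for k in ks])
    feasible = ks[losses <= losses.min() + 0.001]    # all models that fit "well enough"

    # Prediction deviation at an unobserved time: spread of feasible predictions.
    t_new = 5.0
    preds = np.exp(-feasible * t_new)
    deviation = preds.max() - preds.min()
    print(deviation)
    ```

    A large deviation signals that an experiment probing the system near `t_new` would be informative; a pair of well-fitting models with maximally different predictions realizes the extremes of this spread.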

  5. Experimental Design for the INL Sample Collection Operational Test

    SciTech Connect

    Amidan, Brett G.; Piepel, Gregory F.; Matzke, Brett D.; Filliben, James J.; Jones, Barbara

    2007-12-13

    This document describes the test events and numbers of samples comprising the experimental design that was developed for the contamination, decontamination, and sampling of a building at the Idaho National Laboratory (INL). This study is referred to as the INL Sample Collection Operational Test. Specific objectives were developed to guide the construction of the experimental design. The main objective is to assess the relative abilities of judgmental and probabilistic sampling strategies to detect contamination in individual rooms or on a whole floor of the INL building. A second objective is to assess the use of probabilistic and Bayesian (judgmental + probabilistic) sampling strategies to make clearance statements of the form “X% confidence that at least Y% of a room (or floor of the building) is not contaminated.” The experimental design described in this report includes five test events. The test events (i) vary the floor of the building on which the contaminant will be released, (ii) provide for varying or adjusting the concentration of contaminant released to obtain the ideal concentration gradient across a floor of the building, and (iii) investigate overt as well as covert release of contaminants. The ideal contaminant gradient would have high concentrations of contaminant in rooms near the release point, with concentrations decreasing to zero in rooms at the opposite end of the building floor. For each of the five test events, the specified floor of the INL building will be contaminated with BG, a stand-in for Bacillus anthracis. The BG contaminant will be disseminated from a point-release device located in the room specified in the experimental design for each test event. Then judgmental and probabilistic samples will be collected according to the pre-specified sampling plan. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples will be selected in sufficient numbers to provide desired confidence
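    The clearance statement quoted above maps onto a standard zero-failure acceptance-sampling bound: if n randomly placed samples all test clean, then a true clean fraction below Y would yield all-clean results with probability at most Y^n, so requiring Y^n ≤ 1 − X gives X% confidence that at least Y% of the area is uncontaminated. A minimal sketch of that textbook bound (the report's actual sample-size calculations are not given here and may differ):

    ```python
    import math

    def clearance_sample_size(confidence, clean_fraction):
        """Minimum number of random samples, all of which must test clean, to state
        'confidence sure that at least clean_fraction of the area is not contaminated'
        (zero-failure acceptance-sampling bound: clean_fraction**n <= 1 - confidence)."""
        return math.ceil(math.log(1.0 - confidence) / math.log(clean_fraction))

    # 95% confidence that at least 99% of a floor is clean:
    print(clearance_sample_size(0.95, 0.99))  # → 299
    ```

    The bound applies only to purely probabilistic (random) samples; combining it with judgmental samples is exactly the Bayesian extension the study investigates.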

  6. Experimental design in phylogenetics: testing predictions from expected information.

    PubMed

    San Mauro, Diego; Gower, David J; Cotton, James A; Zardoya, Rafael; Wilkinson, Mark; Massingham, Tim

    2012-07-01

    Taxon and character sampling are central to phylogenetic experimental design; yet, we lack general rules. Goldman introduced a method to construct efficient sampling designs in phylogenetics, based on the calculation of expected Fisher information given a probabilistic model of sequence evolution. The considerable potential of this approach remains largely unexplored. In an earlier study, we applied Goldman's method to a problem in the phylogenetics of caecilian amphibians and made an a priori evaluation and testable predictions of which taxon additions would increase information about a particular weakly supported branch of the caecilian phylogeny by the greatest amount. We have now gathered mitogenomic and rag1 sequences (some newly determined for this study) from additional caecilian species and studied how information (both expected and observed) and bootstrap support vary as each new taxon is individually added to our previous data set. This provides the first empirical test of specific predictions made using Goldman's method for phylogenetic experimental design. Our results empirically validate the top 3 (more intuitive) taxon addition predictions made in our previous study, but only information results validate unambiguously the 4th (less intuitive) prediction. This highlights a complex relationship between information and support, reflecting that each measures different things: Information is related to the ability to estimate branch length accurately and support to the ability to estimate the tree topology accurately. Thus, an increase in information may be correlated with but does not necessitate an increase in support. Our results also provide the first empirical validation of the widely held intuition that additional taxa that join the tree proximal to poorly supported internal branches are more informative and enhance support more than additional taxa that join the tree more distally. 
Our work supports the view that adding more data for a single (well

  7. Interplanetary mission design techniques for flagship-class missions

    NASA Astrophysics Data System (ADS)

    Kloster, Kevin W.

    Trajectory design, given the current level of propulsive technology, requires knowledge of orbital mechanics, computational resources, extensive use of tools such as gravity-assist and V infinity leveraging, as well as insight and finesse. Designing missions that deliver a capable science package to a celestial body of interest that are robust and affordable is a difficult task. Techniques are presented here that assist the mission designer in constructing trajectories for flagship-class missions in the outer Solar System. These techniques are applied in this work to spacecraft that are currently in flight or in the planning stages. By escaping the Saturnian system, the Cassini spacecraft can reach other destinations in the Solar System while satisfying planetary quarantine. The patched-conic method was used to search for trajectories that depart Saturn via gravity assist at Titan. Trajectories were found that fly by Jupiter to reach Uranus or Neptune, capture at Jupiter or Neptune, escape the Solar System, fly by Uranus during its 2049 equinox, or encounter Centaurs. A "grand tour," which visits Jupiter, Uranus, and Neptune, departs Saturn in 2014. New tools were built to search for encounters with Centaurs, small Solar System bodies between the orbits of Jupiter and Neptune, and to minimize the DeltaV to target these encounters. Cassini could reach Chiron, the first-discovered Centaur, in 10.5 years after a 2022 Saturn departure. For a Europa Orbiter mission, the strategy for designing Jovian System tours that include Io flybys differs significantly from schemes developed for previous versions of the mission. Assuming that the closest approach distance of the incoming hyperbola at Jupiter is below the orbit of Io, then an Io gravity assist gives the greatest energy pump-down for the least decrease in perijove radius. Using Io to help capture the spacecraft can increase the savings in Jupiter orbit insertion DeltaV over a Ganymede-aided capture. 
The tour design is

  8. BOLD-based Techniques for Quantifying Brain Hemodynamic and Metabolic Properties – Theoretical Models and Experimental Approaches

    PubMed Central

    Yablonskiy, Dmitriy A.; Sukstanskii, Alexander L.; He, Xiang

    2012-01-01

    Quantitative evaluation of brain hemodynamics and metabolism, particularly the relationship between brain function and oxygen utilization, is important for understanding normal human brain operation as well as the pathophysiology of neurological disorders. It can also be of great importance for evaluation of hypoxia within tumors of the brain and other organs. A fundamental discovery by Ogawa and co-workers of the BOLD (Blood Oxygenation Level Dependent) contrast opened a possibility to use this effect to study brain hemodynamic and metabolic properties by means of MRI measurements. Such measurements require developing theoretical models connecting the MRI signal to brain structure and functioning, and designing experimental techniques allowing MR measurements of the salient features of those models. In our review we discuss several such theoretical models and experimental methods for quantifying brain hemodynamic and metabolic properties. Our review aims mostly at methods for measuring the oxygen extraction fraction, OEF, based on measuring blood oxygenation level. Combining measurement of OEF with measurement of CBF allows evaluation of oxygen consumption, CMRO2. We first consider in detail the magnetic properties of blood – magnetic susceptibility, MR relaxation, and theoretical models of the intravascular contribution to the MR signal under different experimental conditions. Then, we describe a “through-space” effect – the influence of inhomogeneous magnetic fields, created in the extravascular space by intravascular deoxygenated blood, on MR signal formation. Further, we describe several experimental techniques taking advantage of these theoretical models. Some of these techniques – MR susceptometry and T2-based quantification of OEF – utilize the intravascular MR signal. Another technique – qBOLD – evaluates OEF by making use of through-space effects. In this review we targeted both scientists just entering the MR field and more experienced MR researchers

  9. Design, experimentation, and modeling of a novel continuous biodrying process

    NASA Astrophysics Data System (ADS)

    Navaee-Ardeh, Shahram

    Massive production of sludge in the pulp and paper industry has made effective sludge management an increasingly critical issue for the industry due to high landfill and transportation costs, and complex regulatory frameworks for options such as sludge landspreading and composting. Sludge dewatering challenges are exacerbated at many mills due to improved in-plant fiber recovery coupled with increased production of secondary sludge, leading to a mixed sludge with a high proportion of biological matter which is difficult to dewater. In this thesis, a novel continuous biodrying reactor was designed and developed for drying pulp and paper mixed sludge to an economic dry solids level so that the dried sludge can be economically and safely combusted in a biomass boiler for energy recovery. In all experimental runs the economic dry solids level was achieved, proving the process successful. In the biodrying process, in addition to the forced aeration, the drying rates are enhanced by biological heat generated through the microbial activity of mesophilic and thermophilic microorganisms naturally present in the porous matrix of mixed sludge. This makes the biodrying process more attractive compared to conventional drying techniques because the reactor is a self-heating process. The reactor is divided into four nominal compartments and the mixed sludge dries as it moves downward in the reactor. The residence times were 4-8 days, which are 2-3 times shorter than the residence times achieved in a batch biodrying reactor previously studied by our research group for mixed sludge drying. A process variable analysis was performed to determine the key variable(s) in the continuous biodrying reactor. Several variables were investigated, namely: type of biomass feed, pH of biomass, nutrition level (C/N ratio), residence times, recycle ratio of biodried sludge, and outlet relative humidity profile along the reactor height. 
The key variables that were identified in the continuous

  11. Comparing simulated emission from molecular clouds using experimental design

    SciTech Connect

    Yeremi, Miayan; Flynn, Mallory; Loeppky, Jason; Rosolowsky, Erik; Offner, Stella

    2014-03-10

    We propose a new approach to comparing simulated observations that enables us to determine the significance of the underlying physical effects. We utilize the methodology of experimental design, a subfield of statistical analysis, to establish a framework for comparing simulated position-position-velocity data cubes to each other. We propose three similarity metrics based on methods described in the literature: principal component analysis, the spectral correlation function, and the Cramer multi-variate two-sample similarity statistic. Using these metrics, we intercompare a suite of mock observational data of molecular clouds generated from magnetohydrodynamic simulations with varying physical conditions. Using this framework, we show that all three metrics are sensitive to changing Mach number and temperature in the simulation sets, but cannot detect changes in magnetic field strength and initial velocity spectrum. We highlight the shortcomings of one-factor-at-a-time designs commonly used in astrophysics and propose fractional factorial designs as a means to rigorously examine the effects of changing physical properties while minimizing the investment of computational resources.
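    The contrast drawn above between one-factor-at-a-time designs and fractional factorial designs can be illustrated with a half-fraction of a four-factor two-level design, in which the fourth factor is aliased with the three-way interaction (a generic 2^(4-1) construction, not the authors' specific simulation design):

    ```python
    import itertools

    # 2^(4-1) fractional factorial: run a full 2^3 design in factors A, B, C
    # and set the fourth factor D equal to the interaction A*B*C (D = ABC).
    runs = [(a, b, c, a * b * c)
            for a, b, c in itertools.product([-1, 1], repeat=3)]

    # 8 runs probe 4 factors; a full 2^4 design would need 16 runs, and a
    # one-factor-at-a-time sweep of the same budget yields no interaction information.
    print(len(runs))  # → 8
    ```

    The design stays balanced: each factor, including the aliased D, sits at each level in exactly half the runs, so main effects can still be estimated from the reduced run set.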

  12. Design and construction of an experimental pervious paved parking area to harvest reusable rainwater.

    PubMed

    Gomez-Ullate, E; Novo, A V; Bayon, J R; Hernandez, Jorge R; Castro-Fresno, Daniel

    2011-01-01

    Pervious pavements are sustainable urban drainage systems, already known as rainwater infiltration techniques, which reduce runoff formation and diffuse pollution in cities. The present research focuses on the design and construction of an experimental parking area composed of 45 pervious-pavement parking bays. Each pervious pavement was experimentally designed to store rainwater and to allow measurement of the level of the stored water and its quality over time. Six different pervious surfaces are combined with four different geotextiles in order to test which materials best maintain the quality of the stored rainwater over time under the specific weather conditions of northern Spain. The aim of this research was to obtain pervious pavements that simultaneously offer a positive urban service and help harvest rainwater of sufficient quality for non-potable uses.

  13. Validation of Design and Analysis Techniques of Tailored Composite Structures

    NASA Technical Reports Server (NTRS)

    Jegley, Dawn C. (Technical Monitor); Wijayratne, Dulnath D.

    2004-01-01

    Aeroelasticity is the relationship between the elasticity of an aircraft structure and its aerodynamics. This relationship can cause instabilities such as flutter in a wing. Engineers have long studied aeroelasticity to ensure such instabilities do not become a problem within normal operating conditions. In recent decades structural tailoring has been used to take advantage of aeroelasticity. It is possible to tailor an aircraft structure to respond favorably to multiple different flight regimes such as takeoff, landing, cruise, 2-g pull up, etc. Structures can be designed so that these responses provide an aerodynamic advantage. This research investigates the ability to design and analyze tailored structures made from filamentary composites. Specifically, the accuracy of tailored-composite analysis must be verified if this design technique is to become feasible. To pursue this idea, a validation experiment has been performed on a small-scale filamentary composite wing box. The box is tailored such that its cover panels induce a global bend-twist coupling under an applied load. Two types of analysis were chosen for the experiment. The first is a closed-form analysis based on a theoretical model of a single-cell tailored box beam, and the second is a finite element analysis. The predicted results are compared with the measured data to validate the analyses. The comparison of results shows that the finite element analysis is capable of predicting displacements and strains to within 10% on the small-scale structure. The closed-form code is consistently able to predict the wing-box bending to within 25% of the measured value. This error is expected due to simplifying assumptions in the closed-form analysis. Differences between the closed-form representation and the wing-box specimen caused large errors in the twist prediction. The closed-form analysis prediction of twist has not been validated by this test.

  14. The Use of Techniques of Sensory Evaluation as a Framework for Teaching Experimental Methods.

    ERIC Educational Resources Information Center

    Bennett, R.; Hamilton, M.

    1981-01-01

    Describes sensory assessment techniques and conditions for their satisfactory performance, including how they can provide open-ended exercises and advantages as relatively inexpensive and simple methods of teaching experimentation. Experiments described focus on diffusion of salt into potatoes after being cooked in boiled salted water. (Author/JN)

  15. Recent Progress in χ(3)-Related Optical Process Experimental Technique. Raman Lasing

    NASA Technical Reports Server (NTRS)

    Matsko, A. B.; Savchenkov, Anatoliy A.; Strekalov, Dmitry; Maleki, Lute

    2006-01-01

    We describe theoretically and verify experimentally a simple technique for analyzing the conversion efficiency and threshold of all-resonant intracavity Raman lasers. The method is based on the dependence of the ring-down time of the pump cavity mode on the energy accumulated in the cavity.

  16. Multidisciplinary Design Techniques Applied to Conceptual Aerospace Vehicle Design. Ph.D. Thesis Final Technical Report

    NASA Technical Reports Server (NTRS)

    Olds, John Robert; Walberg, Gerald D.

    1993-01-01

    Multidisciplinary design optimization (MDO) is an emerging discipline within aerospace engineering. Its goal is to bring structure and efficiency to the complex design process associated with advanced aerospace launch vehicles. Aerospace vehicles generally require input from a variety of traditional aerospace disciplines - aerodynamics, structures, performance, etc. As such, traditional optimization methods cannot always be applied. Several multidisciplinary techniques and methods were proposed as potentially applicable to this class of design problem. Among the candidate options are calculus-based (or gradient-based) optimization schemes and parametric schemes based on design-of-experiments theory. A brief overview of several applicable multidisciplinary design optimization methods is included. Methods from the calculus-based class and the parametric class are reviewed, but the research application reported focuses on methods from the parametric class. A vehicle of current interest was chosen as a test application for this research. The rocket-based combined-cycle (RBCC) single-stage-to-orbit (SSTO) launch vehicle combines elements of rocket and airbreathing propulsion in an attempt to produce an attractive option for launching medium-sized payloads into low earth orbit. The RBCC SSTO presents a particularly difficult problem for traditional one-variable-at-a-time optimization methods because of the lack of an adequate experience base and the highly coupled nature of the design variables. MDO, however, with its structured approach to design, is well suited to this problem. The results of the application of Taguchi methods, central composite designs, and response surface methods to the design optimization of the RBCC SSTO are presented. Attention is given to the aspect of Taguchi methods that attempts to locate a 'robust' design - that is, a design that is least sensitive to uncontrollable influences on the design. Near-optimum minimum dry weight solutions are
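    The central composite designs mentioned above augment a two-level factorial with axial ("star") and center points so that a quadratic response surface can be fit; their run count grows far more slowly than a full multi-level grid. A minimal sketch of the standard run-count bookkeeping (illustrative only; the thesis's actual designs and factor counts are not specified here):

    ```python
    def ccd_runs(k, n_center=1):
        """Runs in a full-factorial central composite design for k factors:
        2**k corner points + 2*k axial ('star') points + center replicates."""
        return 2 ** k + 2 * k + n_center

    # Three design variables: 8 corners + 6 axial points + 1 center point.
    print(ccd_runs(3))  # → 15
    ```

    By comparison, a full three-level grid over three factors already needs 3**3 = 27 runs, and the gap widens rapidly as factors are added, which is why design-of-experiments methods suit highly coupled vehicle design problems.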

  17. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    PubMed Central

    Smith, Justin D.

    2013-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have precluded widespread implementation and acceptance of the SCED as a viable complementary methodology to the predominant group design. This article includes a description of the research design, measurement, and analysis domains distinctive to the SCED; a discussion of the results within the framework of contemporary standards and guidelines in the field; and a presentation of updated benchmarks for key characteristics (e.g., baseline sampling, method of analysis), and overall, it provides researchers and reviewers with a resource for conducting and evaluating SCED research. The results of the systematic review of 409 studies suggest that recently published SCED research is largely in accordance with contemporary criteria for experimental quality. Analytic method emerged as an area of discord. Comparison of the findings of this review with historical estimates of the use of statistical analysis indicates an upward trend, but visual analysis remains the most common analytic method and also garners the most support amongst those entities providing SCED standards. Although consensus exists along key dimensions of single-case research design and researchers appear to be practicing within these parameters, there remains a need for further evaluation of assessment and sampling techniques and data analytic methods. PMID:22845874
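
    As a small concrete illustration of the analytic side discussed above, the sketch below computes one widely used nonoverlap effect-size metric for single-case data, Nonoverlap of All Pairs (NAP); the phase observations are hypothetical.

```python
def nap(baseline, treatment, larger_is_better=True):
    """Nonoverlap of All Pairs: share of (baseline, treatment) pairs showing improvement.

    Ties count as half an improvement, following the usual convention.
    """
    pairs = [(b, t) for b in baseline for t in treatment]
    if not larger_is_better:
        pairs = [(t, b) for b, t in pairs]
    wins = sum(1.0 for b, t in pairs if t > b)
    ties = sum(0.5 for b, t in pairs if t == b)
    return (wins + ties) / len(pairs)

baseline = [2, 3, 3, 4]      # hypothetical A-phase observations
treatment = [5, 6, 4, 7, 6]  # hypothetical B-phase observations
print(nap(baseline, treatment))  # → 0.975
```

Metrics like this complement, rather than replace, the visual analysis that the review found to remain the most common analytic method.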

  18. An experimental high energy therapeutic ultrasound equipment: design and characterisation.

    PubMed

    Kirkhorn, T; Almquist, L O; Persson, H W; Holmer, N G

    1997-05-01

    High energy ultrasound equipment for well controlled experimental work on extracorporeal shockwave lithotripsy (ESWL) and hyperthermia has been built. The design of two sets of equipment with operating frequencies of 0.5 and 1.6 MHz, respectively, is described and characterised in terms of measured generated pressure fields. The treatment heads consist of six or seven focused ultrasound transducers. The transducers have a diameter of 50 mm each and are mounted in a hemispherical Plexiglass fixture with a geometrical focus 100 mm from the transducer surfaces. Measurements were performed in a water bath in several planes perpendicular to the central axis of the ultrasound beam, using a miniature hydrophone which was positioned with a computer controlled stepping motor system. Resulting diagram plots show well defined pressure foci, located at the geometrical foci of the transducer units.

  19. A rationally designed CD4 analogue inhibits experimental allergic encephalomyelitis

    NASA Astrophysics Data System (ADS)

    Jameson, Bradford A.; McDonnell, James M.; Marini, Joseph C.; Korngold, Robert

    1994-04-01

    Experimental allergic encephalomyelitis (EAE) is an acute inflammatory autoimmune disease of the central nervous system that can be elicited in rodents and is the major animal model for the study of multiple sclerosis (MS)1,2. The pathogenesis of both EAE and MS directly involves the CD4+ helper T-cell subset3-5. Anti-CD4 monoclonal antibodies inhibit the development of EAE in rodents6-9, and are currently being used in human clinical trials for MS. We report here that similar therapeutic effects can be achieved in mice using a small (rationally designed) synthetic analogue of the CD4 protein surface. It greatly inhibits both clinical incidence and severity of EAE with a single injection, but does so without depletion of the CD4+ subset and without the inherent immunogenicity of an antibody. Furthermore, this analogue is capable of exerting its effects on disease even after the onset of symptoms.

  20. Statistics in experimental design, preprocessing, and analysis of proteomics data.

    PubMed

    Jung, Klaus

    2011-01-01

    High-throughput experiments in proteomics, such as 2-dimensional gel electrophoresis (2-DE) and mass spectrometry (MS), usually yield high-dimensional data sets of expression values for hundreds or thousands of proteins which are, however, observed on only a relatively small number of biological samples. Statistical methods for the planning and analysis of experiments are important to avoid false conclusions and to obtain tenable results. In this chapter, the most frequent experimental designs for proteomics experiments are illustrated. In particular, focus is put on studies for the detection of differentially regulated proteins. Furthermore, issues of sample size planning, statistical analysis of expression levels, as well as methods for data preprocessing are covered.
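
    One routine piece of the statistical analysis mentioned above is correcting per-protein tests for multiplicity. The sketch below applies the Benjamini-Hochberg false-discovery-rate procedure to a hypothetical list of p-values; it is an illustration, not code from the chapter.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of p-values rejected at FDR level alpha (step-up procedure)."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest rank i with p_(i) <= i*alpha/m
        reject[order[:k + 1]] = True      # reject all hypotheses up to that rank
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.21, 0.5]
print(benjamini_hochberg(pvals).sum())  # → 2
```

With few samples and many proteins, such FDR control is what keeps the list of "differentially regulated" proteins from being dominated by chance findings.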

  1. Computational Design and Experimental Validation of New Thermal Barrier Systems

    SciTech Connect

    Guo, Shengmin; Yang, Shizhong; Khosravi, Ebrahim

    2011-12-31

    This project (10/01/2010-9/30/2013), “Computational Design and Experimental Validation of New Thermal Barrier Systems”, originates from the Louisiana State University (LSU) Mechanical Engineering Department and the Southern University (SU) Department of Computer Science. This project will directly support the technical goals specified in DE-FOA-0000248, Topic Area 3: Turbine Materials, by addressing key technologies needed to enable the development of advanced turbines and turbine-based systems that will operate safely and efficiently using coal-derived synthesis gases. We will develop a novel molecular dynamics method to improve the efficiency of simulation of novel thermal barrier coating (TBC) materials; we will perform high performance computing (HPC) on complex TBC structures to screen the most promising TBC compositions; we will perform material characterizations and oxidation/corrosion tests; and we will demonstrate our new TBC systems experimentally under integrated gasification combined cycle (IGCC) environments. The durability of the coating will be examined using the proposed High Temperature/High Pressure Durability Test Rig under real syngas product compositions.

  2. A retrospective mathematical analysis of controlled release design and experimentation.

    PubMed

    Rothstein, Sam N; Kay, Jennifer E; Schopfer, Francisco J; Freeman, Bruce A; Little, Steven R

    2012-11-01

    The development and performance evaluation of new biodegradable polymer controlled release formulations rely on successful interpretation and evaluation of in vitro release data. However, depending upon the extent of empirical characterization, release data may be open to more than one qualitative interpretation. In this work, a predictive model for release from degradable polymer matrices was applied to a number of published release data sets in order to extend the characterization of release behavior. Where possible, the model was also used to interpolate and extrapolate upon collected release data to clarify the overall duration of release and also the kinetics of release between widely spaced data points. In each case examined, mathematical predictions of release coincided well with experimental results, offering a more definitive description of each formulation's performance than was previously available. This information may prove particularly helpful in the design of future studies, such as when calculating proper dosing levels or determining experimental end points, in order to more comprehensively evaluate a controlled release system's performance.
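
    A toy version of the kind of model-based interpolation described above (the paper's actual mechanistic model for degradable matrices is more detailed): fit a simple first-order release profile to sparse in vitro data, then query the fitted curve between widely spaced time points. The data and the fitted rate constant are invented for illustration.

```python
import numpy as np

t = np.array([0.0, 1.0, 3.0, 7.0, 14.0, 28.0])            # sampling days
released = np.array([0.0, 18.0, 44.0, 72.0, 90.0, 98.0])  # cumulative % released

def fit_first_order(t, y, m_inf=100.0):
    """Grid-search the rate constant k of M(t) = m_inf * (1 - exp(-k*t))."""
    ks = np.linspace(0.01, 1.0, 1000)
    sse = [np.sum((m_inf * (1 - np.exp(-k * t)) - y) ** 2) for k in ks]
    return ks[int(np.argmin(sse))]

k = fit_first_order(t, released)
predict = lambda tq: 100.0 * (1 - np.exp(-k * tq))
print(round(k, 3), round(predict(10.0), 1))  # release interpolated at day 10
```

The fitted curve supplies exactly the information the abstract highlights: release kinetics between widely spaced data points and an estimate of the overall duration of release.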

  3. Quiet Clean Short-Haul Experimental Engine (QSCEE). Preliminary analyses and design report, volume 1

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The experimental propulsion systems to be built and tested in the 'quiet, clean, short-haul experimental engine' program are presented. The flight propulsion systems are also presented. The following areas are discussed: acoustic design; emissions control; engine cycle and performance; fan aerodynamic design; variable-pitch actuation systems; fan rotor mechanical design; fan frame mechanical design; and reduction gear design.

  4. Estimating Intervention Effects across Different Types of Single-Subject Experimental Designs: Empirical Illustration

    ERIC Educational Resources Information Center

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M.; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S. Natasha; Van den Noortgate, Wim

    2015-01-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs…

  5. Experimental Charging Behavior of Orion UltraFlex Array Designs

    NASA Technical Reports Server (NTRS)

    Golofaro, Joel T.; Vayner, Boris V.; Hillard, Grover B.

    2010-01-01

    The present ground-based investigations give the first definitive look at the charging behavior of Orion UltraFlex arrays in both the Low Earth Orbital (LEO) and geosynchronous (GEO) environments. Note that the LEO charging environment also applies to the International Space Station (ISS). The GEO charging environment includes the bounding case for all lunar mission environments. The UltraFlex photovoltaic array technology is targeted to become the sole power system for life support and on-orbit power for the manned Orion Crew Exploration Vehicle (CEV). The purpose of the experimental tests is to gain an understanding of the complex charging behavior and to answer some of the basic performance and survivability issues, to ascertain whether a single UltraFlex array design will be able to cope with the projected worst-case LEO and GEO charging environments. Stage 1 LEO plasma testing revealed that all four arrays successfully passed arc threshold bias tests down to -240 V. Stage 2 GEO electron gun charging tests revealed that only the front side area of indium tin oxide coated array designs successfully passed the arc frequency tests.

  6. Experimental design considerations in microbiota/inflammation studies.

    PubMed

    Moore, Robert J; Stanley, Dragana

    2016-07-01

    There is now convincing evidence that many inflammatory diseases are precipitated, or at least exacerbated, by unfavourable interactions of the host with the resident microbiota. The role of gut microbiota in the genesis and progression of diseases such as inflammatory bowel disease, obesity, metabolic syndrome and diabetes have been studied both in human and in animal, mainly rodent, models of disease. The intrinsic variation in microbiota composition, both within one host over time and within a group of similarly treated hosts, presents particular challenges in experimental design. This review highlights factors that need to be taken into consideration when designing animal trials to investigate the gastrointestinal tract microbiota in the context of inflammation studies. These include the origin and history of the animals, the husbandry of the animals before and during experiments, details of sampling, sample processing, sequence data acquisition and bioinformatic analysis. Because of the intrinsic variability in microbiota composition, it is likely that the number of animals required to allow meaningful statistical comparisons across groups will be higher than researchers have generally used for purely immune-based analyses. PMID:27525065
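
    The closing point about group sizes can be made concrete with a simulation-based power calculation. The sketch below estimates, for an assumed effect size and inter-animal variability, how often a simple two-group comparison would reach significance; the effect, variance, and test are illustrative assumptions, not values from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

def power(n_per_group, effect=0.5, sd=1.0, z_crit=1.96, n_sim=2000):
    """Fraction of simulated trials in which a two-sample z-test rejects H0."""
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(0.0, sd, n_per_group)     # control group
        b = rng.normal(effect, sd, n_per_group)  # treated group
        se = np.sqrt(a.var(ddof=1) / n_per_group + b.var(ddof=1) / n_per_group)
        hits += abs(b.mean() - a.mean()) / se > z_crit
    return hits / n_sim

print(power(10), power(30))  # power rises markedly with group size
```

With the moderate effect size assumed here, ten animals per group leaves most true differences undetected, illustrating why microbiota studies typically need larger groups than purely immune-based analyses.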

  7. Large-scale experimental design for decentralized SLAM

    NASA Astrophysics Data System (ADS)

    Cunningham, Alex; Dellaert, Frank

    2012-06-01

    This paper presents an analysis of large scale decentralized SLAM under a variety of experimental conditions to illustrate design trade-offs relevant to multi-robot mapping in challenging environments. As a part of work through the MAST CTA, the focus of these robot teams is on the use of small-scale robots with limited sensing, communication and computational resources. To evaluate mapping algorithms with large numbers (50+) of robots, we developed a simulation incorporating sensing of unlabeled landmarks, line-of-sight blocking obstacles, and communication modeling. Scenarios are randomly generated with variable models for sensing, communication, and robot behavior. The underlying Decentralized Data Fusion (DDF) algorithm in these experiments enables robots to construct a map of their surroundings by fusing local sensor measurements with condensed map information from neighboring robots. Each robot maintains a cache of previously collected condensed maps from neighboring robots, and actively distributes these maps throughout the network to ensure resilience to communication and node failures. We bound the size of the robot neighborhoods to control the growth of the size of neighborhood maps. We present the results of experiments conducted in these simulated scenarios under varying measurement models and conditions while measuring mapping performance. We discuss the trade-offs between mapping performance and scenario design, including robot teams separating and joining, multi-robot data association, exploration bounding, and neighborhood sizes.

  9. Computational design of an experimental laser-powered thruster

    NASA Technical Reports Server (NTRS)

    Jeng, San-Mou; Litchford, Ronald; Keefer, Dennis

    1988-01-01

    An extensive numerical experiment, using the developed computer code, was conducted to design an optimized laser-sustained hydrogen plasma thruster. The plasma was sustained using a 30 kW CO2 laser beam operated at 10.6 micrometers focused inside the thruster. The adopted physical model considers the two-dimensional compressible Navier-Stokes equations coupled with the laser power absorption process, geometric ray tracing for the laser beam, and the local thermodynamic equilibrium (LTE) assumption for the plasma thermophysical and optical properties. A pressure-based Navier-Stokes solver using body-fitted coordinates was used to calculate the laser-supported rocket flow, which consists of both recirculating and transonic flow regions. The computer code was used to study the behavior of laser-sustained plasmas within a pipe over a wide range of forced convection and optical arrangements before it was applied to the thruster design, and these theoretical calculations agree well with existing experimental results. Several thrusters with different throat sizes operated at 150 and 300 kPa chamber pressure were evaluated in the numerical experiment. It is found that the thruster performance (vacuum specific impulse) is highly dependent on the operating conditions, and that an adequately designed laser-supported thruster can have a specific impulse of around 1500 s. The heat loading on the walls of the calculated thrusters was also estimated, and it is comparable to the heat loading on a conventional chemical rocket. It was also found that the specific impulse of the calculated thrusters can be reduced by 200 s due to finite chemical reaction rates.

  10. Design, Evaluation and Experimental Effort Toward Development of a High Strain Composite Wing for Navy Aircraft

    NASA Technical Reports Server (NTRS)

    Bruno, Joseph; Libeskind, Mark

    1990-01-01

    This design development effort addressed significant technical issues concerning the use and benefits of high strain composite wing structures (Epsilon(sub ult) = 6000 micro-in/in) for future Navy aircraft. These issues concerned primarily the structural integrity and durability of the innovative design concepts and manufacturing techniques which permitted a 50 percent increase in design ultimate strain level (while maintaining the same fiber/resin system), as well as damage tolerance and survivability requirements. An extensive test effort, consisting of a progressive series of coupon and major element tests, was an integral part of this development effort, and culminated in the design, fabrication and test of a major full-scale wing box component. The successful completion of the tests demonstrated the structural integrity, durability and benefits of the design. Low energy impact testing followed by fatigue cycling verified the damage tolerance concepts incorporated within the structure. Finally, live fire ballistic testing confirmed the survivability of the design. The potential benefits of combining newer/emerging composite materials with new or previously developed high strain wing designs to maximize structural efficiency and reduce fabrication costs were the subject of a subsequent preliminary design and experimental evaluation effort.

  11. Logical Graphics Design Technique for Drawing Distribution Networks

    NASA Astrophysics Data System (ADS)

    Al-A`Ali, Mansoor

    Electricity distribution networks normally consist of tens of primary feeders, thousands of substations and switching stations spread over large geographical areas, and thus require a complex system to manage them properly from within the distribution control centre. We show techniques for using Delphi object-oriented components to automatically generate, display and manage, graphically and logically, the circuits of the network. The graphics components are dynamically interactive, and thus the system allows switching operations as well as displays. The object-oriented approach was developed to replace an older system, which used Microstation with MDL as the programming language and ORACLE as the DBMS. Before this, the circuits could only be displayed schematically, which has many inherent problems in the speed and readability of large displays. Schematic graphics displays were cumbersome when adding or deleting stations; this problem is now resolved in our approach by logically generating the graphics from the database connectivity information. This paper demonstrates the method of designing these object-oriented components and how they can be used in specially created algorithms to generate the necessary interactive graphics. Four different logical display algorithms were created, and in this study we present samples of their four different outputs, which show that distribution engineers can work with logical displays of the circuits, aimed at speeding up switching operations and improving the clarity of the display.

  12. Game Design Narrative for Learning: Appropriating Adventure Game Design Narrative Devices and Techniques for the Design of Interactive Learning Environments

    ERIC Educational Resources Information Center

    Dickey, Michele D.

    2006-01-01

    The purpose of this conceptual analysis is to investigate how contemporary video and computer games might inform instructional design by looking at how narrative devices and techniques support problem solving within complex, multimodal environments. Specifically, this analysis presents a brief overview of game genres and the role of narrative in…

  13. Experimental source characterization techniques for studying the acoustic properties of perforates under high level acoustic excitation.

    PubMed

    Bodén, Hans

    2011-11-01

    This paper discusses experimental techniques for obtaining the acoustic properties of in-duct samples with non-linear acoustic characteristics. The methods developed are intended both for studies of non-linear energy transfer to higher harmonics for samples only accessible from one side, such as wall treatment in aircraft engine ducts or automotive exhaust systems, and for samples accessible from both sides, such as perforates or other top sheets. When harmonic sound waves are incident on the sample, nonlinear energy transfer results in sound generation at higher harmonics at the sample (perforate) surface. The idea is that these sources can be characterized using linear system identification techniques similar to the one-port or two-port techniques which are traditionally used for obtaining source data for in-duct sources such as IC engines or fans. The starting point is so-called polyharmonic distortion modeling, which is used for characterization of the nonlinear properties of microwave systems. It is shown how acoustic source data models can be expressed using this theory. Source models of different complexity are developed and experimentally tested. The results of the experimental tests show that these techniques can give results which are useful for understanding non-linear energy transfer to higher harmonics.
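
    The nonlinear energy transfer described above shows up as higher harmonics in the measured spectrum. As a hedged sketch of that measurement idea (not the paper's source-model formalism), the code below synthesizes a pressure signal with energy at the second and third harmonics and recovers their amplitudes from an FFT.

```python
import numpy as np

fs = 48000.0  # sample rate, Hz
f0 = 500.0    # excitation frequency, Hz
t = np.arange(0, 1.0, 1 / fs)

# Synthetic response: fundamental plus energy transferred to higher harmonics.
p = (1.0 * np.sin(2 * np.pi * f0 * t)
     + 0.2 * np.sin(2 * np.pi * 2 * f0 * t)
     + 0.05 * np.sin(2 * np.pi * 3 * f0 * t))

spec = np.abs(np.fft.rfft(p)) * 2 / len(p)  # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(p), 1 / fs)

def harmonic_amplitude(n):
    """Amplitude at the n-th harmonic of the excitation frequency."""
    return spec[np.argmin(np.abs(freqs - n * f0))]

print([float(round(harmonic_amplitude(n), 3)) for n in (1, 2, 3)])  # → [1.0, 0.2, 0.05]
```

A real measurement would repeat this at several excitation levels, since the harmonic amplitudes of a nonlinear sample depend on the incident level.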

  14. A New Tour Design Technique to Enable an Enceladus Orbiter

    NASA Astrophysics Data System (ADS)

    Strange, N.; Campagnola, S.; Russell, R.

    2009-12-01

    As a result of discoveries made by the Cassini spacecraft, Saturn's moon Enceladus has emerged as a high science-value target for a future orbiter mission. [1] However, past studies of an Enceladus orbiter mission [2] found that entering Enceladus orbit either requires a prohibitively large orbit insertion ΔV (> 3.5 km/s) or a prohibitively long flight time. In order to reach Enceladus with a reasonable flight time and ΔV budget, a new tour design method has been developed that uses gravity assists of the low-mass moons Rhea, Dione, and Tethys combined with v-infinity leveraging maneuvers. This new method can achieve Enceladus orbit with a combined leveraging and insertion ΔV of ~1 km/s and a 2.5 year Saturn tour. Among many challenges in designing a trajectory for an Enceladus mission, the two most prominent arise because Enceladus is a low-mass moon (its GM is only ~7 km^3/s^2), deep within Saturn's gravity well (its orbit is at 4 Saturn radii). Designing a ΔV-efficient rendezvous with Enceladus is the first challenge, while the second involves finding a stable orbit which can achieve the desired science measurements. A paper by Russell and Lara [3] has recently addressed the second problem, and a paper this past August by Strange, Campagnola, and Russell [4] has addressed the first. The method developed to solve the first problem, the leveraging tour, and the science possibilities of this trajectory will be the subject of this presentation. Using the new methods in [4], a leveraging tour with Titan, Rhea, Dione, and Tethys can reach Enceladus orbit with less than half of the ΔV of a direct Titan-Enceladus transfer. Starting from the TSSM Saturn arrival conditions [5], with a chemical bi-prop system, this new tour design technique could place ~2800 kg into Enceladus orbit, compared to ~1100 kg from a direct Titan-Enceladus transfer. Moreover, the 2.5 year leveraging tour provides many low-speed, high science-value flybys of Rhea, Dione, and Tethys. This exciting
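
    The ΔV figures above can be sanity-checked with a vis-viva sketch. The code below estimates the insertion burn at Enceladus for a given hyperbolic excess speed (v-infinity), assuming GM ≈ 7.2 km^3/s^2, a mean radius of ~252 km, and a 100 km circular science orbit; the approach speeds are illustrative round numbers, not the paper's trajectory solutions.

```python
import math

GM = 7.2           # Enceladus gravitational parameter, km^3/s^2
R = 252.0 + 100.0  # orbit radius: mean radius + 100 km altitude, km

def insertion_dv(v_inf):
    """Delta-v (km/s) from a hyperbolic approach into a circular orbit at radius R."""
    v_peri = math.sqrt(v_inf**2 + 2 * GM / R)  # vis-viva speed at periapsis
    v_circ = math.sqrt(GM / R)
    return v_peri - v_circ

# A fast (roughly direct-transfer) approach versus a leveraged slow approach:
print(round(insertion_dv(4.0), 2), round(insertion_dv(1.0), 2))
```

Because Enceladus's gravity well is so shallow, the insertion burn is dominated by the approach v-infinity, which is exactly the quantity that leveraging with the low-mass moons reduces.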

  15. Synthetic tracked aperture ultrasound imaging: design, simulation, and experimental evaluation.

    PubMed

    Zhang, Haichong K; Cheng, Alexis; Bottenus, Nick; Guo, Xiaoyu; Trahey, Gregg E; Boctor, Emad M

    2016-04-01

    Ultrasonography is a widely used imaging modality to visualize anatomical structures due to its low cost and ease of use; however, it is challenging to acquire acceptable image quality in deep tissue. Synthetic aperture (SA) is a technique used to increase image resolution by synthesizing information from multiple subapertures, but the resolution improvement is limited by the physical size of the array transducer. With a large F-number, it is difficult to achieve high resolution in deep regions without extending the effective aperture size. We propose a method to extend the available aperture size for SA-called synthetic tracked aperture ultrasound (STRATUS) imaging-by sweeping an ultrasound transducer while tracking its orientation and location. Tracking information of the ultrasound probe is used to synthesize the signals received at different positions. Considering the practical implementation, we estimated the effect of tracking and ultrasound calibration error to the quality of the final beamformed image through simulation. In addition, to experimentally validate this approach, a 6 degree-of-freedom robot arm was used as a mechanical tracker to hold an ultrasound transducer and to apply in-plane lateral translational motion. Results indicate that STRATUS imaging with robotic tracking has the potential to improve ultrasound image quality. PMID:27088108
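
    The synthesis step can be illustrated with a minimal delay-and-sum sketch: echoes recorded at several (tracked) aperture positions are coherently summed using the geometric round-trip delay to each image point. The geometry, sound speed, pulse shape, and point scatterer below are synthetic, and perfect tracking is assumed.

```python
import numpy as np

c = 1540.0  # assumed speed of sound in tissue, m/s
fs = 40e6   # sample rate, Hz
f0 = 2e6    # pulse center frequency, Hz
scatterer = np.array([0.0, 0.03])  # point target 30 mm deep, on axis
elements = [np.array([x, 0.0]) for x in np.linspace(-0.02, 0.02, 9)]

def echo(elem):
    """Round-trip pulse from the scatterer as recorded at one aperture position."""
    tau = 2 * np.linalg.norm(scatterer - elem) / c
    t = np.arange(0, 60e-6, 1 / fs)
    return np.cos(2 * np.pi * f0 * (t - tau)) * np.exp(-(((t - tau) / 0.2e-6) ** 2))

signals = [echo(e) for e in elements]

def beamform(point):
    """Delay-and-sum: sample each trace at that point's round-trip delay and add."""
    out = 0.0
    for e, s in zip(elements, signals):
        idx = int(round(2 * np.linalg.norm(point - e) / c * fs))
        if 0 <= idx < len(s):
            out += s[idx]
    return out

on_target = beamform(scatterer)
off_target = beamform(scatterer + np.array([0.004, 0.0]))
print(on_target, off_target)  # coherent gain only at the true target
```

In STRATUS the aperture positions come from the tracker, so tracking and calibration errors (the effects studied in the paper) perturb exactly the delays used in this sum.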

  16. Predicting experimental properties of proteins from sequence by machine learning techniques.

    PubMed

    Smialowski, Pawel; Martin-Galiano, Antonio J; Cox, Jürgen; Frishman, Dmitrij

    2007-04-01

    Efficient target selection methods are an important prerequisite for increasing the success rate and reducing the cost of high-throughput structural genomics efforts. There is a high demand for sequence-based methods capable of predicting experimentally tractable proteins and filtering out potentially difficult targets at different stages of the structural genomics pipeline. Simple empirical rules based on anecdotal evidence are being increasingly superseded by rigorous machine-learning algorithms. Although the simplicity of less advanced methods makes them easier for humans to understand, more sophisticated formalized algorithms possess superior classification power. The quickly growing corpus of experimental success and failure data gathered by structural genomics consortia creates a unique opportunity for retrospective data mining using machine learning techniques and results in increased quality of classifiers. For example, current solubility prediction methods are reaching accuracies of over 70%. Furthermore, automated feature selection leads to better insight into the nature of the correlation between amino acid sequence and experimental outcome. In this review we summarize methods for predicting experimental success in cloning, expression, soluble expression, purification and crystallization of proteins, with a special focus on publicly available resources. We also describe experimental data repositories and machine learning techniques used for classification and feature selection. PMID:17430194
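
    A flavor of the sequence-derived features such predictors consume (the reviewed methods use far richer feature sets plus trained classifiers): the sketch below computes the Kyte-Doolittle GRAVY score and the charged-residue fraction, two classic solubility cues, for a toy sequence.

```python
# Kyte-Doolittle hydropathy scale (standard published values).
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
      'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
      'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
      'Y': -1.3, 'V': 4.2}

def features(seq):
    """GRAVY (grand average of hydropathy) and fraction of charged residues."""
    seq = seq.upper()
    gravy = sum(KD[a] for a in seq) / len(seq)
    charged = sum(seq.count(a) for a in 'DEKR') / len(seq)
    return {'gravy': gravy, 'charged_fraction': charged}

print(features('MKKLLDE'))  # hypothetical 7-residue peptide
```

Features like these would then be fed, together with many others, into a classifier trained on the consortia's success/failure records.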

  17. Experimental generation of longitudinally-modulated electron beams using an emittance exchange technique

    SciTech Connect

    Sun, Y.-E; Piot, P.; Johnson, A.; Lumpkin, A.; Maxwell, T.; Ruan, J.; Thurman-Keup, R.; /FERMILAB

    2010-08-01

    We report our experimental demonstration of longitudinal phase space modulation using a transverse-to-longitudinal emittance exchange technique. The experiment is carried out at the A0 photoinjector at Fermi National Accelerator Lab. A vertical multi-slit plate is inserted into the beamline prior to the emittance exchange, thus introducing beam horizontal profile modulation. After the emittance exchange, the longitudinal phase space coordinates (energy and time structures) of the beam are modulated accordingly. This is a clear demonstration of the transverse-to-longitudinal phase space exchange. In this paper, we present our experimental results on the measurement of energy profile as well as numerical simulations of the experiment.

  18. Analytical and experimental evaluation of techniques for the fabrication of thermoplastic hologram storage devices

    NASA Technical Reports Server (NTRS)

    Rogers, J. W.

    1975-01-01

    The results of an experimental investigation of recording information on thermoplastic are given. A typical fabrication configuration, the recording sequence, and the samples which were examined are described. There are basically three configurations which can be used for recording information on thermoplastic. The most popular technique uses a corona, which furnishes free charge. The energy necessary for deformation is derived from a charge layer atop the thermoplastic. The other two techniques simply use a dc potential in place of the corona for deformation energy.

  19. Automated measurement of birefringence - Development and experimental evaluation of the techniques

    NASA Technical Reports Server (NTRS)

    Voloshin, A. S.; Redner, A. S.

    1989-01-01

    Traditional photoelasticity has started to lose its appeal since it requires a well-trained specialist to acquire and interpret results. A spectral-contents-analysis approach may help to revive this old, but still useful technique. Light intensity of the beam passed through the stressed specimen contains all the information necessary to automatically extract the value of retardation. This is done by using a photodiode array to investigate the spectral contents of the light beam. Three different techniques to extract the value of retardation from the spectral contents of the light are discussed and evaluated. An experimental system was built which demonstrates the ability to evaluate retardation values in real time.
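
    The spectral-contents idea reduces to a simple inverse problem: in a crossed (dark-field) polariscope the transmitted intensity varies roughly as sin^2(pi*delta/lambda), so the retardation delta can be recovered by matching a model spectrum to the measured one. The sketch below synthesizes a "measured" spectrum with a known retardation and recovers it by grid search; an ideal white-light source and normalized intensities are assumed.

```python
import numpy as np

wavelengths = np.linspace(400e-9, 700e-9, 301)  # visible band, m

def model(delta):
    """Normalized dark-field transmission spectrum for retardation delta."""
    return np.sin(np.pi * delta / wavelengths) ** 2

true_delta = 1250e-9          # retardation to be recovered, m
measured = model(true_delta)  # stand-in for the photodiode-array measurement

candidates = np.linspace(100e-9, 3000e-9, 2901)  # 1 nm search grid
errors = [np.sum((model(d) - measured) ** 2) for d in candidates]
recovered = candidates[int(np.argmin(errors))]
print(recovered * 1e9)  # recovered retardation, nm
```

Because the whole spectrum is matched at once, the fringe-order ambiguity of single-wavelength photoelasticity disappears, which is what enables the automated, real-time evaluation described above.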

  20. Solar Ion Sputter Deposition in the Lunar Regolith: Experimental Simulation Using Focused-Ion Beam Techniques

    NASA Technical Reports Server (NTRS)

    Christoffersen, R.; Rahman, Z.; Keller, L. P.

    2012-01-01

    As regions of the lunar regolith undergo space weathering, their component grains develop compositionally and microstructurally complex outer coatings or "rims" ranging in thickness from a few 10s to a few 100s of nm. Rims on grains in the finest size fractions (e.g., <20 μm) of mature lunar regoliths contain optically-active concentrations of nm-size metallic Fe spherules, or "nanophase Fe(sup o)", that redden and attenuate optical reflectance spectral features important in lunar remote sensing. Understanding the mechanisms for rim formation is therefore a key part of connecting the drivers of mineralogical and chemical changes in the lunar regolith with how lunar terrains are observed to become space weathered from a remotely-sensed point of view. As interpreted based on analytical transmission electron microscope (TEM) studies, rims are produced from varying relative contributions from: 1) direct solar ion irradiation effects that amorphize or otherwise modify the outer surface of the original host grain, and 2) nanoscale, layer-like deposition of extrinsic material processed from the surrounding soil. This extrinsic/deposited material is the dominant physical host for nanophase Fe(sup o) in the rims. An important lingering uncertainty is whether this deposited material condensed from regolith components locally vaporized in micrometeorite or larger impacts, or whether it formed as solar wind ions sputtered exposed soil and re-deposited the sputtered ions on less exposed areas. Deciding which of these mechanisms is dominant, or possibly exclusive, has been hampered because there is an insufficient library of chemical and microstructural "fingerprints" to distinguish deposits produced by the two processes. Experimental sputter deposition / characterization studies relevant to rim formation have particularly lagged since the early post-Apollo experiments of Hapke and others, especially with regard to application of TEM-based characterization techniques.

  1. Managing Model Data Introduced Uncertainties in Simulator Predictions for Generation IV Systems via Optimum Experimental Design

    SciTech Connect

    Turinsky, Paul J; Abdel-Khalik, Hany S; Stover, Tracy E

    2011-03-31

    An optimization technique has been developed to select optimized experimental design specifications to produce data specifically designed to be assimilated to optimize a given reactor concept. Data from the optimized experiment are assimilated to generate a posteriori uncertainties on the reactor concept's core attributes, from which the design responses are computed. The reactor concept is then optimized with the new data to realize cost savings by reducing margin. The optimization problem iterates until an optimal experiment is found that maximizes the savings. A new generation of innovative nuclear reactor designs, in particular fast neutron spectrum recycle reactors, is being considered for the application of closing the nuclear fuel cycle in the future. Safe and economical design of these reactors will require uncertainty reduction in the basic nuclear data that are input to the reactor design. These data uncertainties propagate to design responses, which in turn require the reactor designer to incorporate additional safety margin into the design, which often increases the cost of the reactor. Therefore, basic nuclear data need to be improved, and this is accomplished through experimentation. Considering the high cost of nuclear experiments, it is desirable to have an optimized experiment that provides the data needed for uncertainty reduction such that a reactor design concept can meet its target accuracies, or that allows savings to be realized by reducing the margin required due to uncertainty propagated from basic nuclear data. However, this optimization is coupled to the reactor design itself, because with improved data the reactor concept itself can be re-optimized. It is thus desired to find the experiment that gives the best optimized reactor design. Methods are first established to model both the reactor concept and the experiment and to efficiently propagate the basic nuclear data uncertainty through these models to outputs. 
The representativity of the experiment
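
    The first-order "sandwich rule" is the standard way such data uncertainties are propagated through a model to design responses; the sketch below uses made-up sensitivity and covariance numbers purely for illustration, not actual nuclear data.

```python
import numpy as np

# First-order "sandwich" propagation of data covariance to a design
# response: var(R) = S C S^T, where S holds sensitivities dR/dp of the
# response to each data parameter p. All values are illustrative.
S = np.array([[0.8, -0.3, 0.1]])   # sensitivities of one response
C = np.diag([0.04, 0.01, 0.09])    # parameter covariance (relative units)

var_R = (S @ C @ S.T).item()       # 0.8^2*0.04 + 0.3^2*0.01 + 0.1^2*0.09
print(f"response variance: {var_R:.4f}")
```

An optimized experiment, in this framing, is one whose assimilated data shrink C in the directions to which S is most sensitive.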

  2. Problem Solving Techniques for the Design of Algorithms.

    ERIC Educational Resources Information Center

    Kant, Elaine; Newell, Allen

    1984-01-01

    Presents model of algorithm design (activity in software development) based on analysis of protocols of two subjects designing three convex hull algorithms. Automation methods, methods for studying algorithm design, role of discovery in problem solving, and comparison of different designs of case study according to model are highlighted.…

  3. Design review of the Brazilian Experimental Solar Telescope

    NASA Astrophysics Data System (ADS)

    Dal Lago, A.; Vieira, L. E. A.; Albuquerque, B.; Castilho, B.; Guarnieri, F. L.; Cardoso, F. R.; Guerrero, G.; Rodríguez, J. M.; Santos, J.; Costa, J. E. R.; Palacios, J.; da Silva, L.; Alves, L. R.; Costa, L. L.; Sampaio, M.; Dias Silveira, M. V.; Domingues, M. O.; Rockenbach, M.; Aquino, M. C. O.; Soares, M. C. R.; Barbosa, M. J.; Mendes, O., Jr.; Jauer, P. R.; Branco, R.; Dallaqua, R.; Stekel, T. R. C.; Pinto, T. S. N.; Menconi, V. E.; Souza, V. M. C. E. S.; Gonzalez, W.; Rigozo, N.

    2015-12-01

    Brazil's National Institute for Space Research (INPE), in collaboration with the Engineering School of Lorena/University of São Paulo (EEL/USP), the Federal University of Minas Gerais (UFMG), and Brazil's National Laboratory for Astrophysics (LNA), is developing a solar vector magnetograph and visible-light imager to study solar processes through observations of the solar surface magnetic field. The Brazilian Experimental Solar Telescope is designed to obtain full-disk magnetic field and line-of-sight velocity observations in the photosphere. Here we discuss the system requirements and the first design review of the instrument. The instrument is composed of a Ritchey-Chrétien telescope with a 500 mm aperture and 4000 mm focal length. LCD polarization modulators will be employed for the polarization analysis, and a tunable Fabry-Pérot filter for wavelength scanning near the Fe I 630.25 nm line. Two large field-of-view, high-resolution 5.5 megapixel sCMOS cameras will be employed as sensors. Additionally, we describe the project management and system engineering approaches employed in this project. As the magnetic field anchored at the solar surface produces most of the structures and energetic events in the upper solar atmosphere and significantly influences the heliosphere, the development of this instrument plays an important role in advancing scientific knowledge in this field. In particular, the Brazilian Space Weather program will benefit most from the development of this technology. We expect that this project will be the starting point for establishing a strong research program in Solar Physics in Brazil. Our main aim is to progressively acquire the know-how to build state-of-the-art solar vector magnetographs and visible-light imagers for space-based platforms.

  4. Behavior Modification Techniques for Teachers of the Developmentally Young. Experimental Version.

    ERIC Educational Resources Information Center

    Anderson, Daniel R.; And Others

    The text is designed for a course in the application of behavior modification techniques with handicapped children. The first part considers eight elements of the behavioral approach, including describing behavior precisely, recording behavior, arranging consequences, and teaching new behaviors. The second section, a practicum outline, provides…

  5. Design technique for nonlinear phase SAW filters using slanted finger interdigital transducers.

    PubMed

    Yatsuda, H

    1998-01-01

    This paper describes a useful design technique to achieve a nonlinear phase SAW filter using slanted finger interdigital transducers (SFITs), or tapered interdigital transducers, which are suitable for wide-band filters in intermediate frequency stages. A required nonlinear phase response in the passband can be obtained by changing the center-to-center distances between the input and output SFITs along an axis perpendicular to the SAW propagation axis. The design is based on a building-block approach in the frequency domain. A nonlinear phase SAW filter with a center frequency of 70 MHz and a fractional bandwidth of about 10% is demonstrated on x-cut 112.2 degrees y-propagating LiTaO₃. Because the substrate has a power flow angle of 1.55 degrees, the SFIT pattern is tilted along that angle. Good agreement between theoretical and experimental results is obtained. PMID:18244156

  6. Columbus meteoroid/debris protection study - Experimental simulation techniques and results

    NASA Astrophysics Data System (ADS)

    Schneider, E.; Kitta, K.; Stilp, A.; Lambert, M.; Reimerdes, H. G.

    1992-08-01

    The methods and measurement techniques used in experimental simulations of micrometeoroid and space debris impacts on ESA's laboratory module Columbus are described. Experiments were carried out at the two-stage light gas gun acceleration facilities of the Ernst-Mach Institute. Results are presented for simulations of normal impacts on bumper systems, oblique impacts on dual bumper systems, impacts into cooled targets, impacts into pressurized targets, and planar impacts of low-density projectiles.

  7. Plackett-Burman experimental design to facilitate syntactic foam development

    DOE PAGES

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; Cordes, Nikolaus L.; Welch, Cynthia F.; Torres, Joseph A.; Goodwin, Lynne A.; Pacheco, Robin M.; Sandoval, Cynthia W.

    2015-09-14

    An eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.
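
    The eight-run design itself is small enough to construct directly; the sketch below builds it from the standard N=8 generating row and recovers main effects from synthetic responses (the factor values and responses are invented for illustration, not the study's data).

```python
import numpy as np

# Eight-run Plackett-Burman design: cyclic shifts of the standard N=8
# generating row, plus a final all-minus run. Seven columns cover six
# real factors plus one dummy factor, as in the abstract.
gen = np.array([1, 1, 1, -1, 1, -1, -1])
design = np.array([np.roll(gen, i) for i in range(7)] + [-np.ones(7, int)])

# Synthetic responses for illustration (e.g. a gel-time measurement):
# factor 1 has a strong positive effect, factor 5 a negative one.
rng = np.random.default_rng(0)
y = 10 + 3 * design[:, 0] - 2 * design[:, 4] + rng.normal(0, 0.1, 8)

# Main effect of each factor: mean response at +1 minus mean at -1.
# Columns are orthogonal with four +1 and four -1 entries each.
effects = design.T @ y / 4
for i, e in enumerate(effects):
    print(f"factor {i + 1}: effect = {e:+.2f}")
```

A factor whose estimated effect is comparable to the dummy factor's is indistinguishable from noise, which is how the dummy column serves the error estimate.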

  8. Experimental designs for testing differences in survival among salmonid populations

    SciTech Connect

    Hoffmann, A.; Busack, C.; Knudsen, C.

    1995-03-01

    The Yakima Fisheries Project (YFP) is a supplementation plan for enhancing salmon runs in the Yakima River basin. It is presumed that inadequate spawning and rearing habitat are limiting factors for the population abundance of spring chinook salmon. Therefore, the supplementation effort for spring chinook salmon is focused on introducing hatchery-raised smolts into the basin to compensate for the lack of spawning habitat. However, based on empirical evidence in the Yakima basin, hatchery-reared salmon have survived poorly compared to wild salmon. Therefore, the YFP has proposed to alter the optimal conventional treatment (OCT), which is the state-of-the-art hatchery rearing method, to a new innovative treatment (NIT). The NIT is intended to produce hatchery fish that mimic wild fish and thereby to enhance their survival over that of OCT fish. A limited application of the NIT (LNIT) has also been proposed to reduce the cost of applying the new treatment, yet retain the benefits of increased survival. This research was conducted to test whether the uncertainty of the experimental design was within the limits specified by the Planning Status Report (PSR).

  9. Validation of a buffet meal design in an experimental restaurant.

    PubMed

    Allirot, Xavier; Saulais, Laure; Disse, Emmanuel; Roth, Hubert; Cazal, Camille; Laville, Martine

    2012-06-01

    We assessed the reproducibility of intakes and meal mechanics parameters (cumulative energy intake (CEI), number of bites, bite rate, mean energy content per bite) during a buffet meal designed in a natural setting, and their sensitivity to food deprivation. Fourteen men were invited to three lunch sessions in an experimental restaurant. Subjects ate their regular breakfast before sessions A and B. They skipped breakfast before session FAST. The same ad libitum buffet was offered each time. Energy intakes and meal mechanics were assessed by weighing foods and by video recording. Intrasubject reproducibility was evaluated by determining intraclass correlation coefficients (ICC). Mixed models were used to assess the effects of the sessions on CEI. We found good reproducibility between A and B for total energy (ICC=0.82), carbohydrate (ICC=0.83), lipid (ICC=0.81) and protein intake (ICC=0.79), and for the meal mechanics parameters. Total energy, lipid and carbohydrate intake were higher in FAST than in A and B. CEI was found to be sensitive to differences in hunger level, while the other meal mechanics parameters were stable between sessions. In conclusion, a buffet meal in a normal eating environment is a valid tool for assessing the effects of interventions on intakes.
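
    The ICC used here for test-retest reproducibility is a ratio of variance components from a one-way ANOVA layout; a minimal sketch with invented intake numbers (not the study's measurements):

```python
import numpy as np

# ICC(1,1): one-way random effects, single measures. Rows are subjects,
# columns are repeated sessions. The intakes (kcal) are illustrative.
a = np.array([2100., 1850., 2400., 1600., 2250.])   # session A
b = np.array([2050., 1900., 2350., 1650., 2300.])   # session B
x = np.stack([a, b], axis=1)                        # subjects x sessions

n, k = x.shape
grand = x.mean()
ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1,1) = {icc:.2f}")
```

An ICC near 1 means between-subject variance dominates session-to-session noise, i.e. the measurement is reproducible.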

  10. Plackett-Burman experimental design to facilitate syntactic foam development

    SciTech Connect

    Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; Cordes, Nikolaus L.; Welch, Cynthia F.; Torres, Joseph A.; Goodwin, Lynne A.; Pacheco, Robin M.; Sandoval, Cynthia W.

    2015-09-14

    An eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.

  11. Experimental design and Bayesian networks for enhancement of delta-endotoxin production by Bacillus thuringiensis.

    PubMed

    Ennouri, Karim; Ayed, Rayda Ben; Hassen, Hanen Ben; Mazzarello, Maura; Ottaviani, Ennio

    2015-12-01

    Bacillus thuringiensis (Bt) is a Gram-positive bacterium. The entomopathogenic activity of Bt is related to the existence of the crystal consisting of protoxins, also called delta-endotoxins. In order to optimize and explain the production of delta-endotoxins by Bacillus thuringiensis kurstaki, we studied seven medium components: soybean meal, starch, KH₂PO₄, K₂HPO₄, FeSO₄, MnSO₄, and MgSO₄, and their relationships with the concentration of delta-endotoxins, using an experimental design (Plackett-Burman design) and Bayesian networks modelling. The effects of the ingredients of the culture medium on delta-endotoxins production were estimated. The developed model showed that different medium components are important for the Bacillus thuringiensis fermentation. The most important factors influencing the production of delta-endotoxins are FeSO₄, K₂HPO₄, starch and soybean meal. Indeed, it was found that soybean meal, K₂HPO₄, KH₂PO₄ and starch showed a positive effect on delta-endotoxins production, whereas FeSO₄ and MnSO₄ showed the opposite effect. The developed model, based on Bayesian techniques, can automatically learn emerging patterns in data to serve in the prediction of delta-endotoxins concentrations. The model constructed in the present study implies that experimental design (Plackett-Burman design) combined with Bayesian network methods could be used to identify the variables affecting delta-endotoxin variation.

  12. Experimental design and Bayesian networks for enhancement of delta-endotoxin production by Bacillus thuringiensis.

    PubMed

    Ennouri, Karim; Ayed, Rayda Ben; Hassen, Hanen Ben; Mazzarello, Maura; Ottaviani, Ennio

    2015-12-01

    Bacillus thuringiensis (Bt) is a Gram-positive bacterium. The entomopathogenic activity of Bt is related to the existence of the crystal consisting of protoxins, also called delta-endotoxins. In order to optimize and explain the production of delta-endotoxins by Bacillus thuringiensis kurstaki, we studied seven medium components: soybean meal, starch, KH₂PO₄, K₂HPO₄, FeSO₄, MnSO₄, and MgSO₄, and their relationships with the concentration of delta-endotoxins, using an experimental design (Plackett-Burman design) and Bayesian networks modelling. The effects of the ingredients of the culture medium on delta-endotoxins production were estimated. The developed model showed that different medium components are important for the Bacillus thuringiensis fermentation. The most important factors influencing the production of delta-endotoxins are FeSO₄, K₂HPO₄, starch and soybean meal. Indeed, it was found that soybean meal, K₂HPO₄, KH₂PO₄ and starch showed a positive effect on delta-endotoxins production, whereas FeSO₄ and MnSO₄ showed the opposite effect. The developed model, based on Bayesian techniques, can automatically learn emerging patterns in data to serve in the prediction of delta-endotoxins concentrations. The model constructed in the present study implies that experimental design (Plackett-Burman design) combined with Bayesian network methods could be used to identify the variables affecting delta-endotoxin variation. PMID:26689874

  13. Optimization of single-walled carbon nanotube solubility by noncovalent PEGylation using experimental design methods.

    PubMed

    Hadidi, Naghmeh; Kobarfard, Farzad; Nafissi-Varcheh, Nastaran; Aboofazeli, Reza

    2011-01-01

    In this study, noncovalent functionalization of single-walled carbon nanotubes (SWCNTs) with phospholipid-polyethylene glycols (Pl-PEGs) was performed to improve the solubility of SWCNTs in aqueous solution. Two kinds of PEG derivatives, ie, Pl-PEG 2000 and Pl-PEG 5000, were used for the PEGylation process. An experimental design technique (D-optimal design and second-order polynomial equations) was applied to investigate the effect of variables on PEGylation and the solubility of SWCNTs. The type of PEG derivative was selected as a qualitative parameter, and the PEG/SWCNT weight ratio and sonication time were applied as quantitative variables for the experimental design. Optimization was performed for two responses, aqueous solubility and loading efficiency. The grafting of PEG to the carbon nanostructure was determined by thermogravimetric analysis, Raman spectroscopy, and scanning electron microscopy. Aqueous solubility and loading efficiency were determined by ultraviolet-visible spectrophotometry and measurement of free amine groups, respectively. Results showed that Pl-PEGs were grafted onto SWCNTs. Aqueous solubility of 0.84 mg/mL and loading efficiency of nearly 98% were achieved for the prepared Pl-PEG 5000-SWCNT conjugates. Evaluation of functionalized SWCNTs showed that our noncovalent functionalization protocol could considerably increase aqueous solubility, which is an essential criterion in the design of a carbon nanotube-based drug delivery system and its biodistribution.
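
    D-optimal design, as used here, selects the runs that maximize the determinant of the information matrix X'X for the assumed model. A toy sketch over a handful of candidate settings (the two coded factors stand in for PEG/SWCNT ratio and sonication time; values are illustrative, not the paper's):

```python
import numpy as np
from itertools import combinations

# Candidate settings in coded units [-1, 1] for two factors.
cands = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0], [1, 0]])

def info_det(pts):
    """det(X'X) for a model with intercept, main effects, interaction."""
    X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                         pts[:, 0] * pts[:, 1]])
    return np.linalg.det(X.T @ X)

# Exhaustive search: pick the 4 of 6 candidate runs with maximal det(X'X).
best = max(combinations(range(len(cands)), 4),
           key=lambda idx: info_det(cands[list(idx)]))
print("chosen runs:", [cands[i].tolist() for i in best])
```

For this model the search recovers the four factorial corners, the classical D-optimal choice on a square region; real D-optimal software uses exchange algorithms rather than exhaustive search.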

  14. Experimental study of liquid level gauge for liquid hydrogen using Helmholtz resonance technique

    NASA Astrophysics Data System (ADS)

    Nakano, Akihiro; Nishizu, Takahisa

    2016-07-01

    The Helmholtz resonance technique was applied to a liquid level gauge for liquid hydrogen to confirm the applicability of the technique in the cryogenic industrial field. A specially designed liquid level gauge comprising a Helmholtz resonator with a small loudspeaker was installed in a glass cryostat. A swept frequency signal was supplied to the loudspeaker, and the acoustic response was detected by measuring the electrical impedance of the loudspeaker's voice coil. The penetration depth obtained from the Helmholtz resonance frequency was compared with the true value, which was read from a scale. In principle, the Helmholtz resonance technique can be used with liquid hydrogen; however, there are certain problems regarding practical applications. The applicability of the Helmholtz resonance technique to liquid hydrogen is discussed in this study.
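
    The principle is the Helmholtz resonance formula f = (c/2π)·√(A/(V·L)): as liquid rises, the gas volume V shrinks and the resonance frequency rises. A sketch with an assumed geometry and sound speed (not the paper's apparatus) showing how a measured frequency maps back to a liquid level:

```python
import math

c, A, L = 355.0, 1.0e-4, 0.05        # sound speed [m/s], neck area [m^2], neck length [m]
A_vessel, V_empty = 5.0e-3, 1.0e-3   # vessel cross-section [m^2], gas volume when empty [m^3]

def resonance(V):
    """Helmholtz resonance frequency [Hz] for gas volume V [m^3]."""
    return c / (2 * math.pi) * math.sqrt(A / (V * L))

def level_from_frequency(f):
    """Invert f = (c/2pi)*sqrt(A/(V*L)) for V, then convert to depth [m]."""
    V = A * (c / (2 * math.pi * f)) ** 2 / L
    return (V_empty - V) / A_vessel

f = resonance(0.4e-3)                # gas volume with vessel partly filled
print(f"resonance {f:.0f} Hz -> level {level_from_frequency(f) * 100:.1f} cm")
```

In practice an end correction on the neck length and the temperature dependence of the sound speed in cold hydrogen gas have to be calibrated out.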

  15. Experimental comparison between speckle and grating-based imaging technique using synchrotron radiation X-rays.

    PubMed

    Kashyap, Yogesh; Wang, Hongchang; Sawhney, Kawal

    2016-08-01

    X-ray phase contrast and dark-field imaging techniques provide important and complementary information that is inaccessible to conventional absorption contrast imaging. Both grating-based imaging (GBI) and speckle-based imaging (SBI) are able to retrieve multi-modal images using synchrotron as well as lab-based sources. However, no systematic comparison has been made between the two techniques so far. We present an experimental comparison between the GBI and SBI techniques with a synchrotron radiation X-ray source. Apart from its simpler experimental setup, we find that SBI does not suffer from the issue of phase unwrapping, which can often be problematic for GBI. In addition, SBI is superior to GBI in that two orthogonal differential phase gradients can be simultaneously extracted with a one-dimensional scan. GBI has less stringent requirements for detector pixel size and transverse coherence length when a second or third grating can be used. This study provides a reference for choosing the most suitable technique for diverse imaging applications at a synchrotron facility.

  16. Experimental Manipulation of a Non-Neutral Ion Plasma Using FT-ICR Techniques

    NASA Astrophysics Data System (ADS)

    Williams, Chad; Peterson, Bryan

    2010-10-01

    The goal of our project is to experimentally determine the half life of beryllium-7. We plan to do this by singly ionizing beryllium atoms and containing them in a non-neutral plasma state as they decay. In order to correctly make this measurement, however, we need a clean plasma of high density containing solely Be-7 atoms. Due to the variable amounts of impurities in the Be-7 samples produced in our lab, it is necessary to implement the technique of Fourier Transform Ion Cyclotron Resonance (FT-ICR). By exciting the cyclotron radius of these particles trapped in a magnetic field we seek to expel these impurities from the plasma, leaving pure Be-7. Also, a technique has been developed for successfully stacking multiple pulses of plasma inside of our Malmberg-Penning trap. Recent changes in the internal structure of trap confinement rings will grant us greater efficiency in the use of these techniques.
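
    Mass selectivity in FT-ICR rests on the cyclotron relation f = qB/(2πm): each ion species resonates at its own frequency, so driving the trap at an impurity's frequency grows that species' cyclotron radius until it is expelled. A sketch with an assumed 1 T trap field (the field value and the N₂⁺ impurity are illustrative):

```python
import math

Q = 1.602176634e-19      # elementary charge [C]
U = 1.66053906660e-27    # atomic mass unit [kg]
B = 1.0                  # trap magnetic field [T] (assumed)

def cyclotron_freq(mass_u, charge=1, field=B):
    """Cyclotron frequency f = qB/(2*pi*m) for a singly charged ion."""
    return charge * Q * field / (2 * math.pi * mass_u * U)

for name, m in [("Be-7+", 7.0169), ("N2+ impurity", 28.006)]:
    print(f"{name}: {cyclotron_freq(m) / 1e3:.1f} kHz")
```

The wide frequency separation between species is what lets a swept or targeted excitation remove impurities while leaving the Be-7 ions confined.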

  17. Optimizing an experimental design for a CSEM experiment: methodology and synthetic tests

    NASA Astrophysics Data System (ADS)

    Roux, E.; Garcia, X.

    2014-04-01

    Optimizing an experimental design is a compromise between maximizing the information we get about the target and limiting the cost of the experiment, subject to a wide range of constraints. We present a statistical algorithm for experiment design that combines the use of linearized inverse theory and a stochastic optimization technique. Linearized inverse theory is used to quantify the quality of a given experiment design, while a genetic algorithm (GA) enables us to examine a wide range of possible surveys. The particularity of our algorithm is the use of the multi-objective GA NSGA II, which searches for designs that fit several objective functions (OFs) simultaneously. This ability of NSGA II helps us define an experiment design that focuses on a specified target area. We present a test of our algorithm using a 1-D electrical subsurface structure. The model we use represents a simple but realistic scenario in the context of CO2 sequestration that motivates this study. Our first synthetic test using a single OF shows that a limited number of well-distributed observations from a chosen design have the potential to resolve the given model. This synthetic test also points out the importance of a well-chosen OF, depending on the target. In order to improve these results, we show how the combination of two OFs using a multi-objective GA enables us to determine an experimental design that maximizes information about the reservoir layer. Finally, we present several tests of our statistical algorithm in more challenging environments by exploring the influence of noise, specific site characteristics or its potential for reservoir monitoring.
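
    The multi-objective core of an algorithm like NSGA II is Pareto dominance ranking: a design survives if no other design is at least as good on every objective and strictly better on one. A minimal sketch over toy objective values (both objectives to be minimized, e.g. model-resolution error and survey cost; the numbers are invented):

```python
# Candidate survey designs mapped to (resolution error, cost) pairs.
designs = {"A": (0.2, 9.0), "B": (0.5, 4.0), "C": (0.3, 8.0),
           "D": (0.6, 5.0), "E": (0.9, 2.0)}

def dominates(p, q):
    """p dominates q: no worse in all objectives, strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

# The non-dominated (Pareto) front: designs no other design dominates.
front = [k for k, v in designs.items()
         if not any(dominates(w, v) for w in designs.values() if w != v)]
print("non-dominated designs:", sorted(front))
```

NSGA II builds on this by sorting the whole population into successive fronts and adding a crowding-distance measure to keep the front well spread.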

  18. Optimization of Experimental Design for Estimating Groundwater Pumping Using Model Reduction

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Cheng, W.; Yeh, W. W.

    2012-12-01

    An optimal experimental design algorithm is developed to choose locations for a network of observation wells for estimating unknown groundwater pumping rates in a confined aquifer. The design problem can be expressed as an optimization problem that employs a maximal information criterion to choose among competing designs subject to the specified design constraints. Because of the combinatorial search required in this optimization problem, given a realistic, large-scale groundwater model, the dimensionality of the optimal design problem becomes very large and can be difficult, if not impossible, to solve using mathematical programming techniques such as integer programming or the Simplex method with relaxation. Global search techniques, such as Genetic Algorithms (GAs), can be used to solve this type of combinatorial optimization problem; however, because a GA requires an inordinately large number of calls to a groundwater model, this approach may still be infeasible for finding the optimal design in a realistic groundwater model. Proper Orthogonal Decomposition (POD) is therefore applied to the groundwater model to reduce the model space and thereby reduce the computational burden of solving the optimization problem. Results for a one-dimensional test case show identical results among GA, integer programming, and an exhaustive search, demonstrating that GA is a valid method for a global optimum search and has potential for solving large-scale optimal design problems. Additionally, other results show that the algorithm using GA with POD model reduction is several orders of magnitude faster than an algorithm that employs GA without POD model reduction, in terms of time required to find the optimal solution. Application of the proposed methodology is being made to a large-scale, real-world groundwater problem.
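
    POD model reduction can be sketched as an SVD of a snapshot matrix: collect model states as columns, keep the leading singular vectors as a reduced basis, and work with a handful of coefficients instead of the full state. The "model" below is a random low-rank stand-in, not a groundwater simulator:

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_snaps = 500, 40
modes_true = rng.normal(size=(n_nodes, 3))            # hidden low-rank structure
snapshots = modes_true @ rng.normal(size=(3, n_snaps))
snapshots += 1e-6 * rng.normal(size=snapshots.shape)  # slight noise

# POD: SVD of the snapshot matrix; keep modes carrying 99.9% of energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1
basis = U[:, :r]                                      # reduced basis, n_nodes x r

# A state is now r coefficients instead of n_nodes values.
x = snapshots[:, 0]
x_hat = basis @ (basis.T @ x)                         # project and reconstruct
rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"kept {r} of {n_snaps} modes; reconstruction error {rel_err:.2e}")
```

The expensive model calls inside the GA can then be replaced by the reduced model, which is the source of the reported speedup.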

  19. Simulation verification techniques study: Simulation self test hardware design and techniques report

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The final results of the hardware verification task are presented. The basic objectives of the various subtasks are reviewed, along with the ground rules under which the overall task was conducted and which impacted the approach taken in deriving techniques for hardware self test. The results of the first subtask, the definition of simulation hardware, are presented. The hardware definition is based primarily on a brief review of the simulator configurations anticipated for the shuttle training program. The results of the survey of current self test techniques are presented. The data sources that were considered in the search for current techniques are reviewed, and the results of the survey are presented in terms of the specific types of tests that are of interest for training simulator applications. Specifically, these types of tests are readiness tests, fault isolation tests, and incipient fault detection techniques. The most applicable techniques were structured into software flows that are then referenced in discussions of techniques for specific subsystems.

  20. Design of a digital compression technique for shuttle television

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Fultz, G.

    1976-01-01

    The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward link communications, a facsimile coding technique is recommended which provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.
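
    The DPCM idea can be sketched in one dimension: transmit a quantized prediction error instead of each sample, with the encoder tracking the decoder's reconstructed state so errors do not accumulate. (The study's scheme is two-dimensional, predicting from neighbours in both directions; this sketch shows only the predict/quantize/reconstruct loop, with invented sample values.)

```python
def dpcm_encode(samples, step=4):
    pred, out = 0, []
    for s in samples:
        q = round((s - pred) / step)   # quantized prediction error
        out.append(q)
        pred = pred + q * step         # track the decoder's reconstruction
    return out

def dpcm_decode(codes, step=4):
    pred, out = 0, []
    for q in codes:
        pred = pred + q * step
        out.append(pred)
    return out

signal = [100, 104, 109, 113, 110, 98]
codes = dpcm_encode(signal)
print(codes, dpcm_decode(codes))
```

Because successive TV samples are highly correlated, the differences are small and can be coded with far fewer bits than the raw samples, which is where the bandwidth compression comes from.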

  1. Application of Soft Computing Techniques to Experimental Space Plasma Turbulence Observations - Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Bates, I.; Lawton, A.; Breikin, T.; Dunlop, M.

    Space Systems Group, University of Sheffield, U.K.; Automatic Control and Systems Engineering, University of Sheffield, U.K.; Imperial College, London, U.K.

    A Genetic Algorithm (GA) approach is presented to solve a problem in turbulent space plasma system modelling in the form of Generalised Frequency Response Functions (GFRFs), using in-situ multi-satellite magnetic field measurements of the plasma turbulence. Soft Computing techniques have been used for many years in industry for nonlinear system identification. These techniques approach the problem of understanding a system, e.g. a chemical plant or a jet engine, by selecting a model structure and fitting the parameters of the chosen model using measured inputs and outputs of the system, which can then be used to determine physical characteristics of the system. GAs are one such technique, providing essentially a series of solutions that evolve in a way that improves the model. Experimental space plasma turbulence studies have benefited from these system identification techniques: multi-point satellite observations provide input and output measurements of the turbulent plasma system. In previous work it was found natural to fit parameters to GFRFs, which derive from the Volterra series and lead to quantitative measurements of linear wave-field growth and higher order wave-wave interactions; those techniques were applied using a Least Squares (LS) parameter fit. Results using GAs are compared to results obtained from the LS approach.
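
    The contrast between a GA parameter fit and a least-squares fit can be sketched on a toy input-output system (the data and two-parameter model below are invented stand-ins for the GFRF fit, not the satellite measurements):

```python
import numpy as np

rng = np.random.default_rng(7)
u = rng.normal(size=50)                              # input measurements
y = 1.5 * u + 0.4 * u**2 + rng.normal(0, 0.05, 50)   # output with noise

# Direct least-squares fit of the two model parameters.
X = np.column_stack([u, u**2])
theta_ls = np.linalg.lstsq(X, y, rcond=None)[0]

# Minimal elitist GA: keep the best half, spawn mutated children.
def fitness(theta):
    return -np.sum((y - X @ theta) ** 2)             # higher is better

pop = rng.uniform(-3, 3, size=(40, 2))
for _ in range(100):
    scores = np.array([fitness(t) for t in pop])
    parents = pop[np.argsort(scores)[-20:]]
    children = parents[rng.integers(0, 20, size=20)] + rng.normal(0, 0.15, size=(20, 2))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(t) for t in pop])]
print("GA:", best.round(2), " LS:", theta_ls.round(2))
```

For a model linear in its parameters LS is exact and cheap; the GA's value shows when the model structure itself, or non-quadratic cost terms, put the problem outside what LS can handle.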

  2. City Connects: Building an Argument for Effects on Student Achievement with a Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Walsh, Mary; Raczek, Anastasia; Sibley, Erin; Lee-St. John, Terrence; An, Chen; Akbayin, Bercem; Dearing, Eric; Foley, Claire

    2015-01-01

    While randomized experimental designs are the gold standard in education research concerned with causal inference, non-experimental designs are ubiquitous. For researchers who work with non-experimental data and are no less concerned for causal inference, the major problem is potential omitted variable bias. In this presentation, the authors…

  3. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Conditions of an Experimental Permit § 437.85 Allowable design changes; modification of an experimental... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE...

  4. 14 CFR 437.85 - Allowable design changes; modification of an experimental permit.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Conditions of an Experimental Permit § 437.85 Allowable design changes; modification of an experimental... 14 Aeronautics and Space 4 2011-01-01 2011-01-01 false Allowable design changes; modification of an experimental permit. 437.85 Section 437.85 Aeronautics and Space COMMERCIAL SPACE...

  5. Design Techniques for Power-Aware Combinational Logic SER Mitigation

    NASA Astrophysics Data System (ADS)

    Mahatme, Nihaar N.

    approaches are invariably straddled with overheads in terms of area or speed and, more importantly, power. Thus, the cost of protecting combinational logic through the use of power-hungry mitigation approaches can disrupt the power budget significantly. There is therefore a strong need to develop techniques that can provide both power minimization and combinational logic soft error mitigation. This dissertation advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft-error-resilient designs. Circuit as well as architectural approaches are employed to achieve this objective, and the advantages of cross-layer optimization for power and soft error reliability are emphasized.

  6. Artificial tektites: an experimental technique for capturing the shapes of spinning drops.

    PubMed

    Baldwin, Kyle A; Butler, Samuel L; Hill, Richard J A

    2015-01-07

    Determining the shapes of a rotating liquid droplet bound by surface tension is an archetypal problem in the study of the equilibrium shapes of a spinning and charged droplet, a problem that unites models of the stability of the atomic nucleus with the shapes of astronomical-scale, gravitationally-bound masses. The shapes of highly deformed droplets and their stability must be calculated numerically. Although the accuracy of such models has increased with the use of progressively more sophisticated computational techniques and increases in computing power, direct experimental verification is still lacking. Here we present an experimental technique for making wax models of these shapes using diamagnetic levitation. The wax models resemble splash-form tektites, glassy stones formed from molten rock ejected from asteroid impacts. Many tektites have elongated or 'dumb-bell' shapes due to their rotation mid-flight before solidification, just as we observe here. Measurements of the dimensions of our wax 'artificial tektites' show good agreement with equilibrium shapes calculated by our numerical model, and with previous models. These wax models provide the first direct experimental validation for numerical models of the equilibrium shapes of spinning droplets, of importance to fundamental physics and also to studies of tektite formation.

  10. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
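    As an illustration of the search strategy described above, the sketch below is a toy genetic algorithm, not the authors' implementation: it selects a fixed number of observation-well locations (rows of a hypothetical sensitivity matrix J, one column per unknown pumping parameter) so as to maximize the sum of squared sensitivities. The POD-reduced model that would make J cheap to evaluate is assumed to exist elsewhere.

```python
import random

def fitness(design, J):
    # Maximal information criterion: sum of squared sensitivities
    # over the observation wells included in the design.
    return sum(J[i][p] ** 2 for i in design for p in range(len(J[0])))

def ga_select_wells(J, n_wells, pop_size=30, generations=60, seed=1):
    """Evolve a set of n_wells row indices of J maximizing the criterion."""
    rng = random.Random(seed)
    n_loc = len(J)
    pop = [rng.sample(range(n_loc), n_wells) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda d: fitness(d, J), reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = list(set(a[: n_wells // 2] + b))[:n_wells]
            while len(child) < n_wells:           # repair short children
                c = rng.randrange(n_loc)
                if c not in child:
                    child.append(c)
            if rng.random() < 0.3:                # point mutation
                child[rng.randrange(n_wells)] = rng.choice(
                    [i for i in range(n_loc) if i not in child])
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda d: fitness(d, J))
```

On a problem this small the GA agrees with exhaustive search, mirroring the paper's small-scale validation; only the select/crossover/mutate loop is represented here.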

  11. Aberration Theory - A Spectrum Of Design Techniques For The Perplexed

    NASA Astrophysics Data System (ADS)

    Shafer, David

    1986-10-01

    The early medieval scholar Maimonides wrote a famous book called "Guide for the Perplexed", which explained various thorny philosophical and religious questions for the benefit of the puzzled novice. I wish I had had such a person to guide me when I first started a career in lens design. There the novice is often struck by how much of an "art" this endeavor is. The best bet, for a beginner with no experience, should be to turn to optical aberration theory - which, in principle, should explain much of what goes into designing an optical system. Unfortunately, this subject is usually presented in the form of proofs and derivations, with little time spent on the practical implications of aberration theory. Furthermore, a new generation of lens designers, who grew up with the computer, often consider aberration theory an unnecessary relic from the past. My career, by contrast, is based on the conviction that using the results of aberration theory is the only intelligent way to design optical systems. Computers are an invaluable aid, but we must, ultimately, bite the bullet and think. Along these lines, I have given several papers over the last few years which deal directly with the philosophy of lens design; the kind of guides for the perplexed that I wished I had had from the start. These papers include: "Lens design on a desert island - A simple method of optical design", "A modular method of optical design", "Optical design with air lenses", "Optical design with 'phantom' aspherics", "Optical design methods: your head as a personal computer", "Aberration theory and the meaning of life", and a paper at Innsbruck - "Some interesting correspondences in aberration theory". In all cases, the emphasis is on using your head to think, and the computer to help you out with the numerical work and the "fine-tuning" of a design. To hope that the computer will do the thinking for you is folly. Solutions gained by this route rarely equal the results of an experienced and

  12. The estimation technique of the airframe design for manufacturability

    NASA Astrophysics Data System (ADS)

    Govorkov, A.; Zhilyaev, A.

    2016-04-01

    This paper discusses a method for the quantitative estimation of the manufacturability of airframe parts. The method is based on combining individual indicators, each weighted by an importance factor. The authors introduce an algorithm for assessing the design for manufacturability of parts based on their 3D models.
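    The weighted aggregation the abstract describes can be sketched as follows; the criteria names, scores, and weights here are hypothetical illustrations, not values from the paper.

```python
def manufacturability_index(indicators, weights):
    """Aggregate per-criterion manufacturability scores (each in 0..1)
    into a single index using normalized weighting factors."""
    if set(indicators) != set(weights):
        raise ValueError("indicators and weights must cover the same criteria")
    total = sum(weights.values())
    return sum(indicators[k] * weights[k] for k in indicators) / total

# Hypothetical scores for one airframe part, taken from its 3D model
part = {"material_use": 0.8, "machining": 0.6, "assembly": 0.9}
w = {"material_use": 2.0, "machining": 3.0, "assembly": 1.0}
score = manufacturability_index(part, w)  # (1.6 + 1.8 + 0.9) / 6 = 0.7166...
```

Normalizing by the weight total keeps the index comparable across parts even when the weighting scheme changes.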

  13. Experimental verification of a computational technique for determining ground reactions in human bipedal stance.

    PubMed

    Audu, Musa L; Kirsch, Robert F; Triolo, Ronald J

    2007-01-01

    We have developed a three-dimensional (3D) biomechanical model of human standing that enables us to study the mechanisms of posture and balance simultaneously in various directions in space. Since the two feet are on the ground, the system defines a kinematically closed-chain which has redundancy problems that cannot be resolved using the laws of mechanics alone. We have developed a computational (optimization) technique that avoids the problems with the closed-chain formulation thus giving users of such models the ability to make predictions of joint moments, and potentially, muscle activations using more sophisticated musculoskeletal models. This paper describes the experimental verification of the computational technique that is used to estimate the ground reaction vector acting on an unconstrained foot while the other foot is attached to the ground, thus allowing human bipedal standing to be analyzed as an open-chain system. The computational approach was verified in terms of its ability to predict lower extremity joint moments derived from inverse dynamic simulations performed on data acquired from four able-bodied volunteers standing in various postures on force platforms. Sensitivity analyses performed with model simulations indicated which ground reaction force (GRF) and center of pressure (COP) components were most critical for providing better estimates of the joint moments. Overall, the joint moments predicted by the optimization approach are strongly correlated with the joint moments computed using the experimentally measured GRF and COP (0.78 ≤ r(2) ≤ 0.99, median 0.96) with a best-fit that was not statistically different from a straight line with unity slope (experimental = computational results) for postures of the four subjects examined. These results indicate that this model-based technique can be relied upon to predict reasonable and consistent estimates of the joint moments using the predicted GRF and COP for most standing postures.

  14. Study of an experimental technique for application to structural dynamic problems

    NASA Technical Reports Server (NTRS)

    Snell, R. F.

    1973-01-01

    An experimental program was conducted to determine the feasibility of using subscale plastic models to determine the response of full-scale aerospace structural components to impulsive, pyrotechnic loadings. A monocoque cylinder was impulsively loaded around the circumference of one end, causing a compressive stress wave to propagate in the axial direction. The resulting structural responses of two configurations of the cylinder (with and without a cutout) were recorded by photoelasticity, strain gages, and accelerometers. A maximum dynamic stress concentration was photoelastically determined and the accelerations calculated from strain-gage data were in good agreement with those recorded by accelerometers. It is concluded that reliable, quantitative structural response data can be obtained by the experimental techniques described in this report.

  15. Artificial Warming of Arctic Meadow under Pollution Stress: Experimental design

    NASA Astrophysics Data System (ADS)

    Moni, Christophe; Silvennoinen, Hanna; Fjelldal, Erling; Brenden, Marius; Kimball, Bruce; Rasse, Daniel

    2014-05-01

    Boreal and arctic terrestrial ecosystems are central to the climate change debate, notably because future warming is expected to be disproportionate as compared to world averages. Likewise, greenhouse gas (GHG) release from terrestrial ecosystems exposed to climate warming is expected to be the largest in the arctic. Arctic agriculture, in the form of cultivated grasslands, is a unique and economically relevant feature of Northern Norway (e.g. Finnmark Province). In Eastern Finnmark, these agro-ecosystems are under the additional stressor of heavy metal and sulfur pollution generated by metal smelters of NW Russia. Warming and its interaction with heavy metal dynamics will influence meadow productivity, species composition and GHG emissions, as mediated by responses of soil microbial communities. Adaptation and mitigation measures will be needed. Biochar application, which immobilizes heavy metals, is a promising adaptation method to promote a positive growth response in arctic meadows exposed to a warming climate. In the MeadoWarm project we conduct an ecosystem warming experiment combined with biochar adaptation treatments in the heavy-metal-polluted meadows of Eastern Finnmark. In summary, the general objective of this study is twofold: 1) to determine the response of arctic agricultural ecosystems under environmental stress to increased temperatures, both in terms of plant growth, soil organisms and GHG emissions, and 2) to determine if biochar application can serve as a positive adaptation (plant growth) and mitigation (GHG emission) strategy for these ecosystems under warming conditions. Here, we present the experimental site and the designed open-field warming facility. The selected site is an arctic meadow located at the Svanhovd Research Station less than 10 km west of the Russian mining city of Nikel. A split-plot design with 5 replicates for each treatment is used to test the effect of biochar amendment and a 3 °C warming on the Arctic meadow. 
Ten circular

  16. Experimental Technique and Assessment for Measuring the Convective Heat Transfer Coefficient from Natural Ice Accretions

    NASA Technical Reports Server (NTRS)

    Masiulaniec, K. Cyril; Vanfossen, G. James, Jr.; Dewitt, Kenneth J.; Dukhan, Nihad

    1995-01-01

    A technique was developed to cast frozen ice shapes that had been grown on a metal surface. This technique was applied to a series of ice shapes that were grown in the NASA Lewis Icing Research Tunnel on flat plates. Nine flat plates, 18 inches square, were obtained, from which aluminum castings were made that gave good ice shape characterizations. Test strips taken from these plates were outfitted with heat flux gages such that, when placed in a dry wind tunnel, they can be used to experimentally map out the convective heat transfer coefficient in the direction of flow from the roughened surfaces. The effects on the heat transfer coefficient of both parallel and accelerating flow will be studied. The smooth-plate model verification baseline data as well as one ice-roughened test case are presented.
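    Reducing a heat-flux-gage reading to a local convective coefficient is the standard Newton's-law-of-cooling ratio; the sketch below uses invented numbers, not data from the tunnel tests.

```python
def convective_h(q_flux, t_surface, t_freestream):
    """Local convective heat transfer coefficient h = q'' / (Ts - Tinf),
    with q'' in W/m^2 and both temperatures in the same scale (C or K)."""
    dt = t_surface - t_freestream
    if abs(dt) < 1e-9:
        raise ValueError("temperature difference too small to divide by")
    return q_flux / dt

# Example: 1200 W/m^2 measured on a 35 C surface in 15 C air
h = convective_h(1200.0, 35.0, 15.0)   # 60.0 W/(m^2 K)
```

Mapping h along the plate then amounts to applying this ratio gage by gage in the flow direction.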

  17. New experimental method for lidar overlap factor using a CCD side-scatter technique.

    PubMed

    Wang, Zhenzhu; Tao, Zongming; Liu, Dong; Wu, Decheng; Xie, Chenbo; Wang, Yingjian

    2015-04-15

    In theory, the lidar overlap factor can be derived from the difference between the particle backscatter coefficient retrieved from the lidar elastic signal without overlap correction and the actual particle backscatter coefficient, which can be obtained by other measurement techniques. The side-scatter technique using a CCD camera has been shown to be a powerful tool for detecting the particle backscatter coefficient in the near-ground layer at night. A new experimental approach to determining the overlap factor for a vertically pointing lidar is presented in this study, which can be applied to Mie lidars. The effect of the overlap factor on Mie lidar is corrected by an iteration algorithm combining the particle backscatter coefficient retrieved using the CCD side-scatter method and the Fernald method. This method has been successfully applied to Mie lidar measurements during a routine campaign, and the comparison of experimental results in different atmospheric conditions demonstrated that the method is practical.
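    To first order, the overlap profile is the ratio of the backscatter retrieved from the uncorrected lidar signal to the reference backscatter (here, the CCD side-scatter retrieval). The fragment below shows only that ratio step, clipped to the physical range [0, 1], and stands in for the paper's full Fernald-plus-iteration algorithm.

```python
def overlap_factor(beta_uncorrected, beta_reference):
    """First-order overlap estimate O(z) = beta_lidar(z) / beta_ref(z),
    clipped to [0, 1]; a full correction would iterate the retrieval
    after dividing the raw signal by this profile."""
    return [min(max(bu / br, 0.0), 1.0)
            for bu, br in zip(beta_uncorrected, beta_reference)]
```

The clip to 1.0 encodes the physical fact that the transmitter and receiver fields of view can overlap at most completely.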

  18. Experimental, theoretical and computational study of frequency upshift of electromagnetic radiation using plasma techniques

    SciTech Connect

    Joshi, C.

    1992-09-01

    This is a second-year progress report on "Experimental, Theoretical and Computational Study of Frequency Upshift of Electromagnetic Radiation Using Plasma Techniques." The highlights are: (I) Ionization fronts have been shown to frequency-upshift e.m. radiation by more than a factor of 5. In the experiments, 33 GHz microwave radiation is upshifted to more than 175 GHz using a relativistically propagating ionization front created by a laser beam. (II) A Letter describing the results has been published in Physical Review Letters and an "invited" paper has been submitted to IEEE Trans. on Plasma Science.

  19. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    PubMed

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism, but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks for Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and a non-linear (S-criterion) approach. Both approaches generate almost the same input mixture; however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further increased when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position-one-labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi
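    The "effortless calculation of the D-criterion" mentioned above amounts to a determinant of the Fisher information matrix. A minimal sketch, using an invented sensitivity matrix rather than real (13)C labeling sensitivities:

```python
def d_criterion(J):
    """D-optimality score: determinant of the Fisher information matrix
    J^T J (larger means tighter parameter estimates). J is a list of
    rows, one row of parameter sensitivities per measurement."""
    p = len(J[0])
    n = len(J)
    M = [[sum(J[r][i] * J[r][j] for r in range(n)) for j in range(p)]
         for i in range(p)]
    # Determinant by Gaussian elimination; J^T J is symmetric positive
    # semi-definite, so no pivoting is required.
    det = 1.0
    for k in range(p):
        if M[k][k] == 0.0:
            return 0.0          # singular information matrix
        det *= M[k][k]
        for i in range(k + 1, p):
            f = M[i][k] / M[k][k]
            for j in range(k, p):
                M[i][j] -= f * M[k][j]
    return det
```

High-throughput screening then reduces to evaluating this score for each candidate tracer mixture's Jacobian and ranking the results, optionally against cost.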

  20. Cryogenic refractor design techniques. [for Infrared Astronomy Satellite

    NASA Technical Reports Server (NTRS)

    Darnell, R. J.

    1985-01-01

    The Infrared Astronomical Satellite (IRAS) was designed to operate at 2K, and over the spectral range of 8 to 120 micrometers. The focal plane is approximately 2 by 3 inches in size, and contains 62 individual field stop apertures, each with its own field lens, one or more filters and a detector. The design of the lenses involved a number of difficulties and challenges that are not usually encountered in optical design. Operating temperature is assumed during the design phase, which requires reliable information on dN/dT (Index Coefficient) for the materials. The optics and all supporting structures are then expanded to room temperature, which requires expansion coefficient data on the various materials, and meticulous attention to detail. The small size and dense packaging, as well as the high precision required, further contributed to the magnitude of the task.

  1. INNOVATIVE TECHNIQUE TO EVALUATE LINT CLEANER GRID BAR DESIGNS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Photographic techniques were used to show the path that fibers attached to a gin saw take as they are drawn over a lint cleaner cleaning grid bar. A 1979 study showed that fibers were swept backwards, closer to the saw, as saw speed increased. The angle between the tip of the saw tooth and the fib...

  2. SSSFD manipulator engineering using statistical experiment design techniques

    NASA Technical Reports Server (NTRS)

    Barnes, John

    1991-01-01

    The Satellite Servicer System Flight Demonstration (SSSFD) program is a series of Shuttle flights designed to verify major on-orbit satellite servicing capabilities, such as rendezvous and docking of free flyers, Orbital Replacement Unit (ORU) exchange, and fluid transfer. A major part of this system is the manipulator system that will perform the ORU exchange. The manipulator must possess adequate toolplate dexterity to maneuver a variety of EVA-type tools into position to interface with ORU fasteners, connectors, latches, and handles on the satellite, and to move workpieces and ORUs through 6-degree-of-freedom (dof) space from the Target Vehicle (TV) to the Support Module (SM) and back. Two cost-efficient tools were combined to perform a study of robot manipulator design parameters. These tools are graphical computer simulations and Taguchi Design of Experiment methods. Using a graphics platform, an off-the-shelf robot simulation software package, and an experiment designed with Taguchi's approach, the sensitivities of various manipulator kinematic design parameters to performance characteristics are determined with minimal cost.
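    A Taguchi design replaces a full factorial sweep with an orthogonal array. The sketch below runs a main-effects analysis on the smallest such array (L4); the dexterity scores are made up, standing in for the graphical simulation outputs.

```python
# L4 orthogonal array: 3 two-level factors covered in only 4 runs
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def main_effects(runs, responses):
    """Mean response at each level of each factor; a large gap between
    the two means marks a design parameter the performance is
    sensitive to."""
    effects = []
    for f in range(len(runs[0])):
        lo = [y for r, y in zip(runs, responses) if r[f] == 0]
        hi = [y for r, y in zip(runs, responses) if r[f] == 1]
        effects.append((sum(lo) / len(lo), sum(hi) / len(hi)))
    return effects

# Hypothetical toolplate-dexterity scores for the four simulated runs
effects = main_effects(L4, [7.0, 5.0, 9.0, 6.0])  # factor 0: (6.0, 7.5)
```

Because the array is orthogonal, each level mean averages over all settings of the other factors, which is what makes four runs enough to rank three parameters.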

  3. Experimental verification of Space Platform battery discharger design optimization

    NASA Technical Reports Server (NTRS)

    Sable, Dan M.; Deuty, Scott; Lee, Fred C.; Cho, Bo H.

    1991-01-01

    The detailed design of two candidate topologies for the Space Platform battery discharger, a four module boost converter (FMBC) and a voltage-fed push-pull autotransformer (VFPPAT), is presented. Each has unique problems. The FMBC requires careful design and analysis in order to obtain good dynamic performance. This is due to the presence of a right-half-plane (RHP) zero in the control-to-output transfer function. The VFPPAT presents a challenging power stage design in order to yield high efficiency and light component weight. The authors describe the design of each of these converters and compare their efficiency, weight, and dynamic characteristics.

  4. C-MOS array design techniques: SUMC multiprocessor system study

    NASA Technical Reports Server (NTRS)

    Clapp, W. A.; Helbig, W. A.; Merriam, A. S.

    1972-01-01

    The current capabilities of LSI techniques for speed and reliability, plus the possibilities of assembling large configurations of LSI logic and storage elements, have demanded the study of multiprocessors and multiprocessing techniques, problems, and potentialities. Evaluated are three previous systems studies for a space ultrareliable modular computer multiprocessing system, and a new multiprocessing system is proposed that is flexibly configured with up to four central processors, four I/O processors, and 16 main memory units, plus auxiliary memory and peripheral devices. This multiprocessor system features a multilevel interrupt, qualified S/360 compatibility for ground-based generation of programs, virtual memory management of a storage hierarchy through I/O processors, and multiport access to multiple and shared memory units.

  5. Robust control design techniques for active flutter suppression

    NASA Technical Reports Server (NTRS)

    Ozbay, Hitay; Bachmann, Glen R.

    1994-01-01

    In this paper, an active flutter suppression problem is studied for a thin airfoil in unsteady aerodynamics. The mathematical model of this system is infinite dimensional because of Theodorsen's function which is irrational. Several second order approximations of Theodorsen's function are compared. A finite dimensional model is obtained from such an approximation. We use H infinity control techniques to find a robustly stabilizing controller for active flutter suppression.
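    One widely used second-order rational approximation of Theodorsen's function is R.T. Jones'; whether it is among the approximations the paper compares is not stated here, so treat the sketch as illustrative of the class rather than as the paper's model.

```python
def theodorsen_jones(k):
    """R.T. Jones' classical second-order rational approximation of
    Theodorsen's function C(k), for reduced frequency k > 0."""
    return 1 - 0.165 / (1 - 0.0455j / k) - 0.335 / (1 - 0.3j / k)

# The approximation preserves both limits of the exact function:
# C(k) -> 1 as k -> 0 (quasi-steady) and C(k) -> 0.5 as k -> infinity.
```

Replacing the irrational exact function with a rational one like this is what yields the finite-dimensional model to which H-infinity synthesis can then be applied.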

  6. Respiratory protective device design using control system techniques

    NASA Technical Reports Server (NTRS)

    Burgess, W. A.; Yankovich, D.

    1972-01-01

    The feasibility of a control system analysis approach to provide a design base for respiratory protective devices is considered. A system design approach requires that all functions and components of the system be mathematically identified in a model of the RPD. The mathematical notations describe the operation of the components as closely as possible. The individual component mathematical descriptions are then combined to describe the complete RPD. Finally, analysis of the mathematical notation by control system theory is used to derive compensating component values that force the system to operate in a stable and predictable manner.

  7. Ultrasonic Technique for Experimental Investigation of Statistical Characteristics of Grid Generated Turbulence.

    NASA Astrophysics Data System (ADS)

    Andreeva, Tatiana; Durgin, William

    2001-11-01

    This paper focuses on ultrasonic measurements of a grid-generated turbulent flow using the travel-time technique. In the present work, an attempt to describe a turbulent flow by means of statistics of ultrasound wave propagation time is undertaken in combination with the Kolmogorov (2/3)-power law. There are two objectives in the current research. The first is to demonstrate an application of the travel-time ultrasonic technique for data acquisition in the grid-generated turbulence produced in a wind tunnel. The second is to use the experimental data to verify or refute the analytically obtained expression for travel-time dispersion as a function of velocity fluctuation metrics. The theoretical analysis and derivation of that formula are based on Kolmogorov theory. A series of experiments was conducted at different wind speeds and distances from the grid, giving rise to different values of the dimensional turbulence characteristic coefficient K. Theoretical analysis based on the experimental data reveals a strong dependence of the turbulence characteristic coefficient K on the mean wind velocity. Tabulated values of the coefficient may be used for further understanding of the effect of turbulence on sound propagation.
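    The Kolmogorov (2/3)-power law referenced above concerns the second-order structure function D(r) = <(u(x+r) - u(x))^2>, which scales as r**(2/3) in the inertial range. The sketch below shows only how D(r) is estimated from a sampled velocity record, not the travel-time dispersion derivation itself.

```python
def structure_function(u, dx, r_steps):
    """Estimate D(r) = <(u(x+r) - u(x))^2> from a 1-D velocity record u
    sampled every dx; in inertial-range turbulence D(r) ~ r**(2/3)."""
    out = []
    for s in r_steps:
        diffs = [(u[i + s] - u[i]) ** 2 for i in range(len(u) - s)]
        out.append((s * dx, sum(diffs) / len(diffs)))
    return out
```

Fitting the returned (r, D) pairs to C * r**(2/3) over the inertial range is one way to extract a turbulence characteristic coefficient from hot-wire or ultrasonic data.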

  8. Critical Zone Experimental Design to Assess Soil Processes and Function

    NASA Astrophysics Data System (ADS)

    Banwart, Steve

    2010-05-01

    experimental design studies soil processes across the temporal evolution of the soil profile, from its formation on bare bedrock, through managed use as productive land to its degradation under longstanding pressures from intensive land use. To understand this conceptual life cycle of soil, we have selected 4 European field sites as Critical Zone Observatories. These are to provide data sets of soil parameters, processes and functions which will be incorporated into the mathematical models. The field sites are 1) the BigLink field station which is located in the chronosequence of the Damma Glacier forefield in alpine Switzerland and is established to study the initial stages of soil development on bedrock; 2) the Lysina Catchment in the Czech Republic which is representative of productive soils managed for intensive forestry, 3) the Fuchsenbigl Field Station in Austria which is an agricultural research site that is representative of productive soils managed as arable land and 4) the Koiliaris Catchment in Crete, Greece which represents degraded Mediterranean region soils, heavily impacted by centuries of intensive grazing and farming, under severe risk of desertification.

  9. Return to Our Roots: Raising Radishes To Teach Experimental Design.

    ERIC Educational Resources Information Center

    Stallings, William M.

    To provide practice in making design decisions, collecting and analyzing data, and writing and documenting results, a professor of statistics has his graduate students in statistics and research methodology classes design and perform an experiment on the effects of fertilizers on the growth of radishes. This project has been required of students…

  10. Introduction to Experimental Design: Can You Smell Fear?

    ERIC Educational Resources Information Center

    Willmott, Chris J. R.

    2011-01-01

    The ability to design appropriate experiments in order to interrogate a research question is an important skill for any scientist. The present article describes an interactive lecture-based activity centred around a comparison of two contrasting approaches to investigation of the question "Can you smell fear?" A poorly designed experiment (a video…

  11. Experimental evaluation of agricultural biomass flow sensing behaviour using capacitive technique

    NASA Astrophysics Data System (ADS)

    Khateeb, Khalid A. S. Al; Tasnim Anika, Rumana; Khan, Sheroz; Mohamud, Musse; Arshad, Atika; Hasan, Khalid; Samnam Haider, Syed; Shobaki, Mohammed M.

    2013-12-01

    Precise sensing and measurement of biomass flow is a major concern among researchers worldwide, both to enhance industrial quality control and to uphold economic benefit. Keeping in mind the shortcomings of existing sensing technologies, this paper develops a capacitive sensing method that uses an op-amp-based bridge circuit together with specially designed sensing electrodes. The approach is validated experimentally through a prototype hardware implementation of a flow sensing setup. The experimental results show that the measurement system is able to sense flow variation as a change in the dielectric permittivity of different biomass materials under room conditions. Moreover, the results reveal distinctive effects of the shape and physical characteristics of the electrodes, the locations of the electrodes mounted on the test pipe wall, and the dielectric permittivity and characteristics of the test biomass materials.
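    The sensing principle described above can be illustrated with an idealized parallel-plate model: the capacitance of the electrode cell scales with the relative permittivity of the material in the gap, which a bridge circuit converts into a measurable imbalance. The geometry and permittivity values below are hypothetical placeholders, not taken from the paper.

```python
# Illustrative only: a parallel-plate approximation of a capacitive flow sensor.
# Electrode area, gap, and permittivities are hypothetical, not the paper's values.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(eps_r, area_m2, gap_m):
    """Capacitance of an ideal parallel-plate cell filled with a dielectric."""
    return EPS0 * eps_r * area_m2 / gap_m

# Order-of-magnitude relative permittivities: air vs. a biomass-filled gap.
c_empty = plate_capacitance(1.0, area_m2=4e-4, gap_m=5e-3)
c_filled = plate_capacitance(3.0, area_m2=4e-4, gap_m=5e-3)

# The fractional capacitance change tracks the permittivity change; this is
# the quantity the bridge circuit turns into a voltage imbalance.
fractional_change = (c_filled - c_empty) / c_empty
```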

  12. Integration of Risk Management Techniques into Outdoor Adventure Program Design.

    ERIC Educational Resources Information Center

    Bruner, Eric V.

    This paper is designed to acquaint the outdoor professional with the risk management decision making process required for the operation and management of outdoor adventure activities. The document examines the programming implications of fear in adventure activities; the risk management process in adventure programming; a definition of an…

  13. EXPERIMENTAL STUDIES ON PARTICLE IMPACTION AND BOUNCE: EFFECTS OF SUBSTRATE DESIGN AND MATERIAL. (R825270)

    EPA Science Inventory

    This paper presents an experimental investigation of the effects of impaction substrate designs and material in reducing particle bounce and reentrainment. Particle collection without coating by using combinations of different impaction substrate designs and surface materials was...

  14. New experimental technique for the measurement of the velocity field in thin films falling over obstacles

    NASA Astrophysics Data System (ADS)

    Landel, Julien R.; Daglis, Ana; McEvoy, Harry; Dalziel, Stuart B.

    2014-11-01

    We present a new experimental technique to measure the surface velocity of a thin falling film. Thin falling films are important in various processes such as cooling in heat exchangers or cleaning processes. For instance, in a household dishwasher, cleaning depends on the ability of a thin draining film to remove material from a substrate. We are interested in the impact of obstacles attached to a substrate on the velocity field of a thin film flowing over them. Measuring the velocity field of thin falling films is a challenging experimental problem due to the small depth of the flow and the large velocity gradient across its depth. We propose a new technique based on PIV to measure the in-plane components of the velocity at the surface of the film over an arbitrarily large area and at arbitrarily high resolution, depending mostly on the image acquisition technique. We perform experiments with thin films of water flowing on a flat inclined surface made of glass or stainless steel. The typical Reynolds number of the film is of the order of 100 to 1000, computed using the surface velocity, the film thickness and the kinematic viscosity of the film. We measure the modification to the flow field, from a viscous-gravity regime, caused by small solid obstacles, such as three-dimensional hemispherical obstacles and two-dimensional steps. We compare our results with past theoretical and numerical studies. This material is based upon work supported by the Defense Threat Reduction Agency under Contract No. HDTRA1-12-D-0003-0001.
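    The film Reynolds number quoted above is built from the surface velocity, film thickness and kinematic viscosity. A minimal sketch, using plausible values for a thin water film rather than the authors' measurements:

```python
def film_reynolds(surface_velocity, film_thickness, kinematic_viscosity):
    """Re = U * h / nu, the definition used for thin falling films above."""
    return surface_velocity * film_thickness / kinematic_viscosity

# Hypothetical but plausible values: U = 0.1 m/s, h = 1 mm,
# nu(water, ~20 C) ~ 1e-6 m^2/s, giving Re of order 100.
re = film_reynolds(0.1, 1e-3, 1e-6)
```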

  15. Preliminary Experimental Results on the Technique of Artificial River Replenishment to Mitigate Sediment Loss Downstream Dams

    NASA Astrophysics Data System (ADS)

    Franca, M. J.; Battisacco, E.; Schleiss, A. J.

    2014-12-01

    The transport of sediments by water throughout the river basins, from the steep slopes of the upstream regions to the sea level, is recognizable important to keep the natural conditions of rivers with a role on their ecology processes. Over the last decades, a reduction on the supply of sand and gravel has been observed downstream dams existing in several alpine rivers. Many studies highlight that the presence of a dam strongly modifies the river behavior in the downstream reach, in terms of morphology and hydrodynamics, with consequences on local ecology. Sediment deficit, bed armoring, river incision and bank instability are the main effects which affect negatively the aquatic habitats and the water quality. One of the proposed techniques to solve the problem of sediment deficit downstream dams, already adopted in few Japanese and German rivers although on an unsatisfactory fashion, is the artificial replenishment of these. Generally, it was verified that the erosion of the replenishments was not satisfactory and the transport rate was not enough to move the sediments to sufficient downstream distances. In order to improve and to provide an engineering answer to make this technique more applicable, a series of laboratory tests are ran as preparatory study to understand the hydrodynamics of the river flow when the replenishment technique is applied. Erodible volumes, with different lengths and submergence conditions, reproducing sediment replenishments volumes, are positioned along a channel bank. Different geometrical combinations of erodible sediment volumes are tested as well on the experimental flume. The first results of the experimental research, concerning erosion time evolution, the influence of discharge and the distance travelled by the eroded sediments, will be presented and discussed.

  16. Measurement of scattered radiation in a volumetric 64-slice CT scanner using three experimental techniques

    NASA Astrophysics Data System (ADS)

    Akbarzadeh, A.; Ay, M. R.; Ghadiri, H.; Sarkar, S.; Zaidi, H.

    2010-04-01

    Compton scatter poses a significant challenge to volumetric x-ray computed tomography, introducing cupping and streak artefacts and thus degrading qualitative and quantitative imaging procedures. To perform appropriate scatter compensation, it is necessary to estimate the magnitude and spatial distribution of x-ray scatter. The aim of this study is to compare three experimental methods for measuring the scattered radiation profile in a 64-slice CT scanner. The explored techniques involve the use of the collimator shadow, a single blocker (a lead bar that suppresses the primary radiation) and a blocker array. The latter was recently proposed and validated by our group. The collimator shadow technique was used as the reference for comparison, since it has established itself as the most accurate experimental procedure available today. The mean relative error of measurements over all tube voltages was 3.9 ± 5.5% (with a maximum value of 20%) for the single blocker method, whereas it was 1.4 ± 1.1% (with a maximum value of 5%) for the proposed blocker array method. The calculated scatter-to-primary ratio (SPR) using the blocker array method for tube voltages of 140 kVp and 80 kVp was 0.148 and 1.034, respectively. For a larger polypropylene phantom, the maximum SPR reached 0.803 and 6.458 at 140 kVp and 80 kVp, respectively. Although the three compared methods present reasonable accuracy for calculating the scatter profile in the region corresponding to the object, the collimator shadow method is by far the most accurate empirical technique. Nevertheless, the blocker array method is a relatively straightforward alternative for scatter estimation, adding only minor additional radiation exposure to the patient.
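    The blocker idea can be sketched numerically: behind a lead blocker the detector reads scatter only, so interpolating those readings across the open channels gives an estimated scatter profile, and SPR is the ratio of scatter to primary (total minus scatter). The numbers below are synthetic, not data from the study.

```python
import numpy as np

def estimate_scatter(total, blocker_idx):
    """Interpolate scatter-only readings (behind blockers) across all channels."""
    total = np.asarray(total, dtype=float)
    channels = np.arange(total.size)
    # channels listed in blocker_idx are shadowed, so their reading is pure scatter
    return np.interp(channels, blocker_idx, total[blocker_idx])

# Synthetic detector profile: channels 2 and 5 sit behind blockers.
total = np.array([10.0, 11.0, 2.0, 12.0, 13.0, 2.5, 14.0, 15.0])
blockers = np.array([2, 5])

scatter = estimate_scatter(total, blockers)
open_ch = np.setdiff1d(np.arange(total.size), blockers)
# Scatter-to-primary ratio at the unblocked channels.
spr = scatter[open_ch] / (total[open_ch] - scatter[open_ch])
```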

  17. Investigation on experimental techniques to detect, locate and quantify gear noise in helicopter transmissions

    NASA Technical Reports Server (NTRS)

    Flanagan, P. M.; Atherton, W. J.

    1985-01-01

    A robotic system to automate the detection, location, and quantification of gear noise using acoustic intensity measurement techniques has been successfully developed. Major system components fabricated under this grant include an instrumentation robot arm, a robot digital control unit and system software. A commercial desktop computer, a spectrum analyzer and a two-microphone probe complete the equipment required for the Robotic Acoustic Intensity Measurement System (RAIMS). Large-scale acoustic studies of gear noise in helicopter transmissions cannot be performed accurately and reliably using presently available instrumentation and techniques. Operator safety is a major concern in certain gear noise studies due to the operating environment. The man-hours needed to document a noise field in situ are another shortcoming of present techniques. RAIMS was designed to reduce the labor and hazard involved in collecting data and to improve the accuracy and repeatability of characterizing the acoustic field by automating the measurement process. Using RAIMS, a system operator can remotely control the instrumentation robot to scan surface areas and volumes, generating acoustic intensity information using the two-microphone technique. Acoustic intensity studies requiring hours of scan time can be performed automatically without operator assistance. During a scan sequence, the acoustic intensity probe is positioned by the robot and acoustic intensity data are collected, processed, and stored.
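    The two-microphone (p-p) technique mentioned above estimates active intensity from two closely spaced pressure signals: pressure is taken as the average of the two microphones, and particle velocity comes from time-integrating the finite-difference pressure gradient. The sketch below demonstrates this on a synthetic plane wave; the signal parameters are invented for the demonstration and are not RAIMS settings.

```python
import numpy as np

def pp_intensity(p1, p2, spacing, fs, rho=1.2):
    """Time-averaged active intensity from a two-microphone pressure pair."""
    p = 0.5 * (p1 + p2)                 # pressure at the probe midpoint
    grad = (p2 - p1) / spacing          # finite-difference pressure gradient
    u = -np.cumsum(grad) / (rho * fs)   # Euler time integration of -grad(p)/rho
    u -= u.mean()                       # drop the integration constant
    return np.mean(p * u)

# Plane wave travelling from mic 1 to mic 2; expected intensity ~ A^2 / (2*rho*c).
fs, f, c, d, A = 48000, 500.0, 343.0, 0.02, 1.0
t = np.arange(0, 0.1, 1 / fs)                  # 50 full periods
p1 = A * np.sin(2 * np.pi * f * t)
p2 = A * np.sin(2 * np.pi * f * (t - d / c))   # delayed copy at the second mic
I = pp_intensity(p1, p2, d, fs)
```

    The positive sign of the result indicates propagation from microphone 1 toward microphone 2, which is how a scanned probe localizes a source.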

  18. Experimental Design Optimization of a Sequential Injection Method for Promazine Assay in Bulk and Pharmaceutical Formulations

    PubMed Central

    Idris, Abubakr M.; Assubaie, Fahad N.; Sultan, Salah M.

    2007-01-01

    An experimental design optimization approach was utilized to develop a sequential injection analysis (SIA) method for promazine assay in bulk and pharmaceutical formulations. The method was based on the oxidation of promazine by Ce(IV) in sulfuric acid media, producing a spectrophotometrically detectable species at 512 nm. A 3³ full factorial design and response surface methods were applied to optimize the experimental conditions potentially controlling the analysis. The optimum conditions obtained were 1.0 × 10−4 M sulfuric acid, 0.01 M Ce(IV), and a 10 μL/s flow rate. Good analytical parameters were obtained, including a linear range of 1–150 μg/mL, linearity with correlation coefficient 0.9997, accuracy with mean recovery 98.2%, repeatability with RSD 1.4% (n = 7 consecutive injections), intermediate precision with RSD 2.1% (n = 5 runs over a week), limit of detection 0.34 μg/mL, limit of quantification 0.93 μg/mL, and a sampling frequency of 23 samples/h. The results were validated against the British Pharmacopoeia method, and comparable results were obtained. The provided SIA method enjoys the advantages of the technique with respect to rapidity, reagent/sample saving, and safety both in solution handling and for the environment. PMID:18350124
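    A 3³ full factorial design enumerates every combination of three factors at three levels each, i.e. 27 runs, over which a response-surface model is then fitted. A minimal sketch; the level grids below are illustrative placeholders around the reported optimum, not the study's exact grids:

```python
from itertools import product

# Three factors at three levels each -> 3**3 = 27 runs.
# Level values are assumed for illustration only.
levels = {
    "h2so4_M": [5e-5, 1e-4, 5e-4],
    "ce4_M": [0.005, 0.01, 0.02],
    "flow_uL_per_s": [5, 10, 20],
}

runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
# A response surface fitted over these 27 runs would then locate the optimum
# (reported above as 1e-4 M acid, 0.01 M Ce(IV), 10 uL/s).
```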

  19. Experimental Guidelines for Studies Designed to Investigate the Impact of Antioxidant Supplementation on Exercise Performance

    PubMed Central

    Powers, Scott K.; Smuder, Ashley J.; Kavazis, Andreas N.; Hudson, Matthew B.

    2010-01-01

    Research interest in the effects of antioxidants on exercise-induced oxidative stress and human performance continues to grow as new scientists enter this field. Consequently, there is a need to establish an acceptable set of criteria for monitoring antioxidant capacity and oxidative damage in tissues. Numerous reports have described a wide range of assays to detect both antioxidant capacity and oxidative damage to biomolecules, but many techniques are not appropriate in all experimental conditions. Here, the authors present guidelines for selecting and interpreting methods that can be used by scientists to investigate the impact of antioxidants on both exercise performance and the redox status of tissues. Moreover, these guidelines will be useful for reviewers who are assigned the task of evaluating studies on this topic. The set of guidelines contained in this report is not designed to be a strict set of rules, because often the appropriate procedures depend on the question being addressed and the experimental model. Furthermore, because no individual assay is guaranteed to be the most appropriate in every experimental situation, the authors strongly recommend using multiple assays to verify a change in biomarkers of oxidative stress or redox balance. PMID:20190346

  20. Experimental design: computer simulation for improving the precision of an experiment.

    PubMed

    van Wilgenburg, Henk; Zillesen, Piet G van Schaick; Krulichova, Iva

    2004-06-01

    ExpDesign, an interactive computer-assisted learning program developed for simulating animal experiments, is introduced. The program guides students through the steps of designing animal experiments and estimating optimal sample sizes. Principles are introduced for controlling variation, establishing the experimental unit, selecting randomised block and factorial experimental designs, and applying the appropriate statistical analysis. Sample Power is a supporting tool that visualises the process of estimating the sample size. The aim in developing the ExpDesign program has been to make biomedical research workers more familiar with some basic principles of experimental design and statistics and to facilitate discussions with statisticians.
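    The kind of sample-size estimate such a tool visualises can be sketched with the standard normal-approximation formula for comparing two group means, n per group = 2·σ²·(z₁₋α/₂ + z₁₋β)²/δ². This is the textbook formula, not ExpDesign's actual algorithm:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-group comparison.

    delta: smallest difference in means worth detecting
    sigma: common standard deviation of the response
    """
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return ceil(2 * (sigma / delta) ** 2 * (za + zb) ** 2)

# Detecting a one-standard-deviation difference at alpha=0.05, power=0.80:
n = n_per_group(delta=1.0, sigma=1.0)
```

    Halving the detectable difference roughly quadruples the required group size, which is the trade-off such programs are designed to make visible.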

  1. Experimental investigation of contamination prevention techniques to cryogenic surfaces on board orbiting spacecraft

    NASA Technical Reports Server (NTRS)

    Hetrick, M. A.; Rantanen, R. O.; Ress, E. B.; Froechtenigt, J. F.

    1978-01-01

    Within the simulation limitations of on-orbit conditions, it was demonstrated that a helium purge system could be an effective method for reducing the incoming flux of contaminant species. Although a generalized purge system was employed in conjunction with basic telescope components, the simulation provided data that could be used for further modeling and design of a specific helium injection system. Experimental telescope pressures required for 90% attenuation appeared to be slightly higher than predicted (by a factor of 2 to 5). Cooling the helium purge gas and telescope components from 300 to 140 K had no measurable effect on the stopping efficiency of a given mass flow of helium.

  2. Experimental techniques for ballistic pressure measurements and recent development in means of calibration

    NASA Astrophysics Data System (ADS)

    Elkarous, L.; Coghe, F.; Pirlot, M.; Golinval, J. C.

    2013-09-01

    This paper presents a study carried out with the commonly used experimental techniques of ballistic pressure measurement. The comparison criteria were the peak chamber pressure and its standard deviation for specific weapon/ammunition system configurations. It is impossible to determine exactly how precise the crusher, direct or conformal transducer methods are, as there is no way to know the actual pressure exactly; nevertheless, the combined use of these measuring techniques can improve accuracy. Furthermore, particular attention has been devoted to the problem of calibration. Calibration of crusher gauges and piezoelectric transducers is paramount and an essential task for a correct determination of the pressure inside a weapon. This topic has not been completely addressed yet and still requires further investigation. In this work, state-of-the-art calibration methods are presented together with their specific aspects. Many solutions have been developed to satisfy this demand; nevertheless, current systems do not cover the whole range of needs, calling for further development effort. Research being carried out for the development of suitable practical calibration methods is presented. In particular, the behavior of copper crushers under different high strain rates is investigated using the Split Hopkinson Pressure Bar (SHPB) technique. The Johnson-Cook model was employed as a suitable material model for the numerical study using an FEM code.

  3. Experimental Comparison of the Hemodynamic Effects of Bifurcating Coronary Stent Implantation Techniques

    NASA Astrophysics Data System (ADS)

    Brindise, Melissa; Vlachos, Pavlos; AETheR Lab Team

    2015-11-01

    Stent implantation in coronary bifurcations imposes unique effects on blood flow patterns, and currently there is no universally accepted stent deployment approach. Despite the fact that stent-induced changes can greatly alter clinical outcomes, no concrete understanding exists regarding the hemodynamic effects of each implantation method. This work presents an experimental evaluation of the hemodynamic differences between implantation techniques. We used four common stent implantation methods, including the currently preferred one-stent provisional side branch (PSB) technique and the crush (CRU), Culotte (CUL), and T-stenting (T-PR) two-stent techniques, all deployed by a cardiologist in coronary models. Particle image velocimetry was used to obtain velocity and pressure fields. Wall shear stress (WSS), oscillatory shear index, residence times, and drag and compliance metrics were evaluated and compared against an un-stented case. The results of this study demonstrate that while PSB is preferred, both it and T-PR yielded detrimental hemodynamic effects such as low WSS values. CRU produced polarizing and unbalanced results. CUL demonstrated a symmetric flow field, a balanced WSS distribution, and ultimately the most favorable hemodynamic environment.
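    One of the metrics named above, the oscillatory shear index, has a standard scalar definition: OSI = 0.5·(1 − |mean(τ)|/mean(|τ|)), which is 0 for unidirectional wall shear stress and approaches 0.5 for fully reversing stress. A sketch on synthetic signals (not the study's PIV-derived fields):

```python
import numpy as np

def osi(tau):
    """Oscillatory shear index of a time-resolved wall shear stress signal."""
    tau = np.asarray(tau, dtype=float)
    return 0.5 * (1.0 - abs(tau.mean()) / np.abs(tau).mean())

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
steady = np.ones_like(t)               # unidirectional WSS -> OSI = 0
oscillatory = np.sin(2 * np.pi * t)    # fully reversing WSS -> OSI ~ 0.5
```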

  4. Plant micro- and nanomechanics: experimental techniques for plant cell-wall analysis.

    PubMed

    Burgert, Ingo; Keplinger, Tobias

    2013-11-01

    In the last few decades, micro- and nanomechanical methods have become increasingly important analytical techniques to gain deeper insight into the nanostructure and mechanical design of plant cell walls. The objective of this article is to review the most common micro- and nanomechanical approaches that are utilized to study primary and secondary cell walls from a biomechanics perspective. In light of their quite disparate functions, the common and opposing structural features of primary and secondary cell walls are reviewed briefly. A significant part of the article is devoted to an overview of the methodological aspects of the mechanical characterization techniques with a particular focus on new developments and advancements in the field of nanomechanics. This is followed and complemented by a review of numerous studies on the mechanical role of cellulose fibrils and the various matrix components as well as the polymer interactions in the context of primary and secondary cell-wall function.

  5. An Experimental Study of Turbulent Skin Friction Reduction in Supersonic Flow Using a Microblowing Technique

    NASA Technical Reports Server (NTRS)

    Hwang, Danny P.

    1999-01-01

    A new turbulent skin friction reduction technology, called the microblowing technique, has been tested in supersonic flow (Mach number of 1.9) on specially designed porous plates with microholes. The skin friction was measured directly by a force balance, and the boundary layer development was measured by a total pressure rake at the trailing edge of a test plate. The free stream Reynolds number was 1.0 × 10⁶ per meter. The turbulent skin friction coefficient ratios (C_f/C_f0) of seven porous plates are given in this report. Test results showed that the microblowing technique could reduce the turbulent skin friction in supersonic flow (up to 90 percent below the solid flat plate value, an even greater reduction than in subsonic flow).

  6. Leveraging the Experimental Method to Inform Solar Cell Design

    ERIC Educational Resources Information Center

    Rose, Mary Annette; Ribblett, Jason W.; Hershberger, Heather Nicole

    2010-01-01

    In this article, the underlying logic of experimentation is exemplified within the context of a photoelectrical experiment for students taking a high school engineering, technology, or chemistry class. Students assume the role of photochemists as they plan, fabricate, and experiment with a solar cell made of copper and an aqueous solution of…

  7. Design and experimental results for a compact laser printer optical system with MEMS scanning mirror

    NASA Astrophysics Data System (ADS)

    Suzuki, Takatoshi; Seki, Daisuke; Fujii, Shuichi; Mukai, Yukihiro

    2010-02-01

    Printer users expect many features, including high resolution, low price, compact size, color, high-speed printing and so on. Laser printers generally utilize a polygon mirror as the reflector in their optical configurations, but the usual size of the polygon mirror prevents the laser scanning unit from being made much smaller. We have been conducting research on techniques which can contribute to reducing the optical unit size. Although an oscillating mirror made with MEMS technology enables the system to be compact, it requires a sophisticated optical design with an increased number of constraints, because the angular velocity of the mirror varies with its orientation, whereas a polygon mirror scans at constant speed. Using a small MEMS mirror is also one of the critical issues for reducing cost. We have successfully resolved all the challenges listed above by using high-precision free-form optical surfaces and an optical layout making efficient use of 3D space. Our techniques make the unit much smaller and reduce its price. The optical path is designed to have a ray pass through a lens twice. We report both theoretical and experimental results for this system.
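    The angular-velocity constraint mentioned above can be quantified: a resonant MEMS mirror follows θ(t) = θ_max·sin(2πft), so its angular velocity is proportional to cos(2πft) and drops to zero at the turnarounds, unlike a polygon's constant scan rate. A small sketch of this generic property (not the authors' specific mirror model):

```python
import numpy as np

def angular_velocity_ratio(theta_frac):
    """Angular velocity of a sinusoidally oscillating mirror at deflection
    theta = theta_frac * theta_max, relative to its speed at screen centre.

    For theta(t) = theta_max * sin(w*t):
      d(theta)/dt ~ cos(w*t) = sqrt(1 - sin(w*t)**2) = sqrt(1 - theta_frac**2)
    """
    return np.sqrt(1.0 - theta_frac ** 2)

# At 80% of full deflection the spot moves at only 60% of its centre speed,
# which is why pixel timing or free-form optics must compensate.
ratio = angular_velocity_ratio(0.8)
```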

  8. Design Techniques for Uniform-DFT, Linear Phase Filter Banks

    NASA Technical Reports Server (NTRS)

    Sun, Honglin; DeLeon, Phillip

    1999-01-01

    Uniform-DFT filter banks are an important class of filter banks, and their theory is well known. One notable characteristic is their very efficient implementation using polyphase filters and the FFT. Separately, linear phase filter banks, i.e., filter banks in which the analysis filters have linear phase, are also an important class and are desired in many applications. Unfortunately, it has been proved that one cannot design critically-sampled, uniform-DFT, linear phase filter banks that achieve perfect reconstruction. In this paper, we present a least-squares solution to this problem and, in addition, prove that oversampled, uniform-DFT, linear phase filter banks (which are also useful in many applications) can be constructed for perfect reconstruction. Design examples are included to illustrate the methods.
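    The efficient polyphase/FFT structure referred to above can be sketched and checked against the direct definition: the k-th analysis filter is the prototype h modulated by exp(j2πkn/M), and after decimation by M the branch outputs combine via an M-point (inverse) DFT. This is the standard textbook construction, not the paper's least-squares design; the prototype below is a toy window.

```python
import numpy as np

def direct_analysis(x, h, M):
    """Direct (inefficient) uniform-DFT analysis: modulate, filter, decimate."""
    n = np.arange(len(h))
    bands = []
    for k in range(M):
        hk = h * np.exp(2j * np.pi * k * n / M)   # k-th modulated analysis filter
        bands.append(np.convolve(x, hk)[::M])     # filter, keep t = 0, M, 2M, ...
    return np.array(bands)

def polyphase_analysis(x, h, M):
    """Efficient version: polyphase filtering, then an M-point scaled IDFT."""
    L, R = len(h), len(h) // M
    assert len(h) % M == 0
    frames = (len(x) + L - 1 + M - 1) // M        # matches the decimated length
    xp = np.concatenate([np.zeros(L), np.asarray(x, float), np.zeros(L)])
    u = np.zeros((M, frames), dtype=complex)
    for p in range(M):
        ep = h[p::M]                              # p-th polyphase component of h
        for m in range(frames):
            u[p, m] = sum(ep[r] * xp[L + (m - r) * M - p] for r in range(R))
    # Combining branches is a DFT across the branch index (ifft * M gives the
    # positive-exponent sum with numpy's convention).
    return M * np.fft.ifft(u, axis=0)

rng = np.random.default_rng(0)
M = 4
h = np.hanning(16)            # toy prototype, length a multiple of M
x = rng.standard_normal(64)
```

    The loop form is kept for clarity; in practice the inner sums are the polyphase convolutions that make this structure so cheap per output sample.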

  9. Propagation effects handbook for satellite systems design. A summary of propagation impairments on 10 to 100 GHz satellite links with techniques for system design

    NASA Technical Reports Server (NTRS)

    Ippolito, Louis J.

    1989-01-01

    The NASA Propagation Effects Handbook for Satellite Systems Design provides a systematic compilation of the major propagation effects experienced on space-Earth paths in the 10 to 100 GHz frequency band region. It provides both a detailed description of the propagation phenomenon and a summary of the impact of the effect on the communications system design and performance. Chapter 2 through 5 describe the propagation effects, prediction models, and available experimental data bases. In Chapter 6, design techniques and prediction methods available for evaluating propagation effects on space-Earth communication systems are presented. Chapter 7 addresses the system design process and how the effects of propagation on system design and performance should be considered and how that can be mitigated. Examples of operational and planned Ku, Ka, and EHF satellite communications systems are given.

  11. Utilizing numerical techniques in turbofan inlet acoustic suppressor design

    NASA Astrophysics Data System (ADS)

    Baumeister, K. J.

    Numerical theories, in conjunction with previously published analytical results, are used to augment current analytical theories in the acoustic design of a turbofan inlet nacelle. In particular, a finite element-integral theory is used to study the effect of the inlet lip radius on the far field radiation pattern and to determine the optimum impedance in an actual engine environment. For some single mode JT15D data, the numerical theory and experiment are found to be in good agreement.

  12. Experimental design for research on shock-turbulence interaction

    NASA Technical Reports Server (NTRS)

    Radcliffe, S. W.

    1969-01-01

    Report investigates the production of acoustic waves in the interaction of a supersonic shock and a turbulence environment. The five stages of the investigation are apparatus design, development of instrumentation, preliminary experiment, turbulence generator selection, and main experiments.

  13. International Thermonuclear Experimental Reactor (ITER) neutral beam design

    SciTech Connect

    Myers, T.J.; Brook, J.W.; Spampinato, P.T.; Mueller, J.P.; Luzzi, T.E.; Sedgley, D.W. . Space Systems Div.)

    1990-10-01

    This report discusses the following topics on ITER neutral beam design: ion dump; neutralizer and module gas flow analysis; vacuum system; cryogenic system; maintainability; power distribution; and system cost.

  14. Fungal mediated silver nanoparticle synthesis using robust experimental design and its application in cotton fabric

    NASA Astrophysics Data System (ADS)

    Velhal, Sulbha Girish; Kulkarni, S. D.; Latpate, R. V.

    2016-09-01

    Among the different methods employed for the synthesis of nanoparticles, the biological method is most favorable and quite well established. Among microorganisms, fungi have a greater advantage over other microbial mediators in the biosynthesis of silver nanoparticles. In this study, intracellular synthesis of silver nanoparticles from Aspergillus terreus (Thom) MTCC632 was carried out. We observed that synthesis of silver nanoparticles depended on factors such as temperature, amount of biomass and concentration of silver ions in the reaction mixture. Hence, optimization of biosynthesis over these parameters was carried out using the statistical tool of robust experimental design. The size and morphology of the synthesized nanoparticles were determined using X-ray diffraction, field emission scanning electron microscopy, energy dispersive spectroscopy, and transmission electron microscopy. Nano-embedded cotton fabric was then prepared and studied for its antibacterial properties.
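    "Robust experimental design" here refers to Taguchi-style optimization, where each factor setting is scored by a signal-to-noise ratio that rewards both a good mean response and low variability. A sketch of the larger-the-better criterion, with invented replicate responses rather than the study's data:

```python
import math

def sn_larger_is_better(y):
    """Taguchi larger-the-better S/N ratio: -10*log10(mean(1/y^2))."""
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in y) / len(y))

# Two hypothetical factor settings with the same mean response (46) but
# different spread; the S/N ratio penalises the noisier one.
run_a = [45.0, 47.0, 46.0]
run_b = [60.0, 30.0, 48.0]
better = max([run_a, run_b], key=sn_larger_is_better)
```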

  15. Integrating RFID technique to design mobile handheld inventory management system

    NASA Astrophysics Data System (ADS)

    Huang, Yo-Ping; Yen, Wei; Chen, Shih-Chung

    2008-04-01

    An RFID-based mobile handheld inventory management system is proposed in this paper. Differing from the manual inventory management method, the proposed system works on the personal digital assistant (PDA) with an RFID reader. The system identifies electronic tags on the properties and checks the property information in the back-end database server through a ubiquitous wireless network. The system also provides a set of functions to manage the back-end inventory database and assigns different levels of access privilege according to various user categories. In the back-end database server, to prevent improper or illegal accesses, the server not only stores the inventory database and user privilege information, but also keeps track of the user activities in the server including the login and logout time and location, the records of database accessing, and every modification of the tables. Some experimental results are presented to verify the applicability of the integrated RFID-based mobile handheld inventory management system.

  16. Using a hybrid approach to optimize experimental network design for aquifer parameter identification.

    PubMed

    Chang, Liang-Cheng; Chu, Hone-Jay; Lin, Yu-Pin; Chen, Yu-Wen

    2010-10-01

    This research develops an optimal design model for groundwater networks using a genetic algorithm (GA) and a modified Newton approach, based on experimental design concepts. The goal of the experimental design is to minimize parameter uncertainty, represented by the determinant of the covariance matrix of the estimated parameters. The design problem is constrained by a specified cost and solved by the GA together with a parameter identification model. The latter estimates the optimal parameter values and their associated sensitivity matrices. The general problem is simplified into two classes of network design problems: an observation network design problem and a pumping network design problem. Results explore the relationship between the experimental design and the physical processes. The proposed model provides an alternative for solving optimization problems in groundwater experimental design. PMID:19757116
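    The design criterion described above is essentially D-optimality: among candidate networks, prefer the one minimising det(cov) of the estimated parameters, where cov ≈ σ²·(JᵀJ)⁻¹ is built from the sensitivity (Jacobian) matrix. A sketch with synthetic Jacobians standing in for model sensitivities:

```python
import numpy as np

def covariance_determinant(J, sigma=1.0):
    """det of the approximate parameter covariance sigma^2 * (J^T J)^-1,
    where J is the sensitivity matrix (observations x parameters)."""
    JtJ = J.T @ J
    return np.linalg.det(sigma ** 2 * np.linalg.inv(JtJ))

# Candidate monitoring networks, each with a synthetic 8-observation,
# 3-parameter sensitivity matrix (placeholders for real model sensitivities).
rng = np.random.default_rng(1)
candidates = {f"network_{i}": rng.standard_normal((8, 3)) for i in range(4)}

# D-optimal choice: the network whose design minimises parameter uncertainty.
best = min(candidates, key=lambda k: covariance_determinant(candidates[k]))
```

    In the paper this inner evaluation sits inside a GA loop subject to a cost constraint; the sketch shows only the criterion being optimised.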

  17. Cluster LEDs mixing optimization by lens design techniques.

    PubMed

    Chien, Ming-Chin; Tien, Chung-Hao

    2011-07-01

    This paper presents a methodology analogous to a general lens design rule to optimize step-by-step the spectral power distribution of a white-light LED cluster with the highest possible color rendering and efficiency in a defined range of color temperatures. By examining a platform composed of four single-color LEDs and a phosphor-converted cool-white (CW) LED, we successfully validate the proposed algorithm and suggest the optimal operation range (correlated color temperature = 2600-8500 K) accompanied by a high color quality scale (CQS > 80 points) as well as high luminous efficiency (97% of cluster's theoretical maximum value).

  18. Wood lens design philosophy based on a binary additive manufacturing technique

    NASA Astrophysics Data System (ADS)

    Marasco, Peter L.; Bailey, Christopher

    2016-04-01

    Using additive manufacturing techniques in optical engineering to construct a gradient index (GRIN) optic may overcome a number of limitations of GRIN technology. Such techniques are maturing quickly, yielding additional design degrees of freedom for the engineer. How best to employ these degrees of freedom is not completely clear at this time. This paper describes a preliminary design philosophy, including assumptions, pertaining to a particular printing technique for GRIN optics. It includes an analysis based on simulation and initial component measurement.

  19. Designing Free Energy Surfaces That Match Experimental Data with Metadynamics

    DOE PAGES

    White, Andrew D.; Dama, James F.; Voth, Gregory A.

    2015-04-30

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. Previously we introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. We also introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. Finally, the example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.

  20. Designing Free Energy Surfaces That Match Experimental Data with Metadynamics

    SciTech Connect

    White, Andrew D.; Dama, James F.; Voth, Gregory A.

    2015-04-30

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. Previously we introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. We also introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. Finally, the example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.

  1. Designing free energy surfaces that match experimental data with metadynamics.

    PubMed

    White, Andrew D; Dama, James F; Voth, Gregory A

    2015-06-01

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. We previously introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. In this work, we introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. The example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model.
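
    The average-matching idea behind EDS can be illustrated with a toy reweighting calculation (a hedged sketch of the principle, not the authors' implementation; the function name, step size, and iteration count are mine): given samples from an unbiased ensemble, find a linear bias coefficient alpha so that the exp(-alpha*x)-reweighted mean of an observable matches an experimental target.

```python
import math
import random

def eds_bias(samples, target, lr=0.05, steps=400):
    """Find a linear bias coefficient alpha such that the ensemble
    average of x, reweighted by w_i = exp(-alpha * x_i), matches the
    experimental target value (simple fixed-step iteration)."""
    alpha = 0.0
    for _ in range(steps):
        weights = [math.exp(-alpha * x) for x in samples]
        z = sum(weights)
        mean = sum(w * x for w, x in zip(weights, samples)) / z
        alpha += lr * (mean - target)  # push the biased mean toward the target
    return alpha

random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(5000)]
alpha = eds_bias(samples, target=0.5)
```

    For a unit Gaussian the exponentially tilted mean is -alpha, so the iteration settles near alpha = -0.5; EDM extends this average-matching idea from single observables to entire free energy surfaces.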

  2. Designing free energy surfaces that match experimental data with metadynamics.

    PubMed

    White, Andrew D; Dama, James F; Voth, Gregory A

    2015-06-01

    Creating models that are consistent with experimental data is essential in molecular modeling. This is often done by iteratively tuning the molecular force field of a simulation to match experimental data. An alternative method is to bias a simulation, leading to a hybrid model composed of the original force field and biasing terms. We previously introduced such a method called experiment directed simulation (EDS). EDS minimally biases simulations to match average values. In this work, we introduce a new method called experiment directed metadynamics (EDM) that creates minimal biases for matching entire free energy surfaces such as radial distribution functions and phi/psi angle free energies. It is also possible with EDM to create a tunable mixture of the experimental data and free energy of the unbiased ensemble with explicit ratios. EDM can be proven to be convergent, and we also present proof, via a maximum entropy argument, that the final bias is minimal and unique. Examples of its use are given in the construction of ensembles that follow a desired free energy. The example systems studied include a Lennard-Jones fluid made to match a radial distribution function, an atomistic model augmented with bioinformatics data, and a three-component electrolyte solution where ab initio simulation data is used to improve a classical empirical model. PMID:26575545

  3. Experimental Evaluation of Three Designs of Electrodynamic Flexural Transducers.

    PubMed

    Eriksson, Tobias J R; Laws, Michael; Kang, Lei; Fan, Yichao; Ramadas, Sivaram N; Dixon, Steve

    2016-01-01

    Three designs for electrodynamic flexural transducers (EDFT) for air-coupled ultrasonics are presented and compared. An all-metal housing was used for robustness, which makes the designs more suitable for industrial applications. The housing is designed such that there is a thin metal plate at the front, with a fundamental flexural vibration mode at ∼50 kHz. By using a flexural resonance mode, good coupling to the load medium was achieved without the use of matching layers. The front radiating plate is actuated electrodynamically by a spiral coil inside the transducer, which produces an induced magnetic field when an AC current is applied to it. The transducers operate without the use of piezoelectric materials, which can simplify manufacturing and prolong the lifetime of the transducers, as well as open up possibilities for high-temperature applications. The results show that different designs perform best for the generation and reception of ultrasound. All three designs produced large acoustic pressure outputs, with a recorded sound pressure level (SPL) above 120 dB at a 40 cm distance from the highest output transducer. The sensitivity of the transducers was low, however, with single-shot signal-to-noise ratio (SNR) ≃ 15 dB in transmit-receive mode, with transmitter and receiver 40 cm apart. PMID:27571075

  4. Experimental Evaluation of Three Designs of Electrodynamic Flexural Transducers

    PubMed Central

    Eriksson, Tobias J. R.; Laws, Michael; Kang, Lei; Fan, Yichao; Ramadas, Sivaram N.; Dixon, Steve

    2016-01-01

    Three designs for electrodynamic flexural transducers (EDFT) for air-coupled ultrasonics are presented and compared. An all-metal housing was used for robustness, which makes the designs more suitable for industrial applications. The housing is designed such that there is a thin metal plate at the front, with a fundamental flexural vibration mode at ∼50 kHz. By using a flexural resonance mode, good coupling to the load medium was achieved without the use of matching layers. The front radiating plate is actuated electrodynamically by a spiral coil inside the transducer, which produces an induced magnetic field when an AC current is applied to it. The transducers operate without the use of piezoelectric materials, which can simplify manufacturing and prolong the lifetime of the transducers, as well as open up possibilities for high-temperature applications. The results show that different designs perform best for the generation and reception of ultrasound. All three designs produced large acoustic pressure outputs, with a recorded sound pressure level (SPL) above 120 dB at a 40 cm distance from the highest output transducer. The sensitivity of the transducers was low, however, with single shot signal-to-noise ratio (SNR)≃15 dB in transmit–receive mode, with transmitter and receiver 40 cm apart. PMID:27571075

  5. "Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"

    ERIC Educational Resources Information Center

    Konstantopoulos, Spyros

    2009-01-01

    Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
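
    The kind of two-level power computation the abstract refers to can be sketched under a normal approximation (the function name and parameterization are illustrative assumptions, not the author's tables):

```python
from math import sqrt
from statistics import NormalDist

def power_cluster_design(delta, clusters, n, rho, alpha=0.05):
    """Approximate power for a balanced two-arm cluster-randomized design:
    `clusters` clusters in total, n units per cluster, standardized effect
    size delta, intraclass correlation rho (two-sided z-test approximation)."""
    design_effect = 1 + (n - 1) * rho           # variance inflation from nesting
    se = sqrt(4 * design_effect / (clusters * n))
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    lam = delta / se                            # noncentrality parameter
    return (1 - NormalDist().cdf(z_crit - lam)) + NormalDist().cdf(-z_crit - lam)

power = power_cluster_design(delta=0.5, clusters=40, n=20, rho=0.1)
```

    The design effect 1 + (n - 1)ρ is the key difference from the one-level tables: setting ρ = 0 recovers the simple-random-sample case and overstates the power of a nested design.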

  6. Comparison of visibility measurement techniques for forklift truck design factors.

    PubMed

    Choi, Chin-Bong; Park, Peom; Kim, Young-Ho; Susan Hallbeck, M; Jung, Myung-Chul

    2009-03-01

    This study applied the light bulb shadow test, a manikin vision assessment test, and an individual test to a forklift truck to identify forklift truck design factors influencing visibility. The light bulb shadow test followed the ISO/DIS 13564-1 standard for traveling and maneuvering tests with four test paths (Test Nos. 1, 3, 4, and 6). Digital human and forklift truck models were developed for the manikin vision assessment test with CATIA V5R13 human modeling solutions. Six participants performed the individual tests. Both the manikin and individual tests employed parameters similar to those of the light bulb shadow test. The individual test showed better visibility, with fewer shadowed grids and a greater distribution of them than the other two tests, due to eye movement and anthropometric differences. The design factors of load backrest extension, lift chain, hose, dashboard, and steering wheel should be the first considered to improve visibility, especially when a forklift truck mainly performs forward traveling tasks in an open area.

  7. Design and experimental characterization of a bandpass sampling receiver

    NASA Astrophysics Data System (ADS)

    Singh, Avantika; Kumar, Devika S.; Venkateswaran, Gomathy; Manjukrishna, S.; Singh, Amrendra Kumar; Kurup, Dhanesh G.

    2016-03-01

    In this paper, we present a robust and efficient approach for designing reconfigurable radio receivers based on bandpass sampling. The direct-sampled RF front end is followed by signal-processing blocks implemented on an FPGA, consisting of a PLL based on a second-order Costas technique and Kaiser-window-based lowpass filtering. The proposed method can be used to implement a cost-effective multi-channel receiver for data, audio, video, etc., over various channels.
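
    The bandpass sampling constraint behind such a front end can be sketched as follows (an illustrative helper under standard uniform-sampling theory; it is not code from the paper): a real band [f_L, f_H] can be sampled without aliasing at any rate f_s with 2·f_H/n ≤ f_s ≤ 2·f_L/(n-1) for some integer n up to floor(f_H / (f_H - f_L)).

```python
def valid_bandpass_rates(f_low, f_high):
    """Enumerate the alias-free uniform sampling-rate intervals (Hz) for a
    real signal occupying the band [f_low, f_high], per the bandpass
    sampling theorem: 2*f_high/n <= fs <= 2*f_low/(n-1)."""
    bandwidth = f_high - f_low
    n_max = int(f_high // bandwidth)  # highest usable integer band index
    ranges = []
    for n in range(1, n_max + 1):
        lo = 2.0 * f_high / n
        hi = 2.0 * f_low / (n - 1) if n > 1 else float("inf")
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges

# Example: a 5 MHz-wide band at 20-25 MHz can be sampled as low as 10 MHz,
# far below the 50 MHz a lowpass (Nyquist) design would require.
ranges = valid_bandpass_rates(20e6, 25e6)
```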

  8. Design and experimental validation of a compact collimated Knudsen source.

    PubMed

    Wouters, Steinar H W; Ten Haaf, Gijs; Mutsaers, Peter H A; Vredenbregt, Edgar J D

    2016-08-01

    In this paper, the design and performance of a collimated Knudsen source, which has the benefit of a simple design over recirculating sources, is discussed. Measurements of the flux, transverse velocity distribution, and brightness of the resulting rubidium beam at different source temperatures were conducted to evaluate the performance. The scaling of the flux and brightness with the source temperature follows the theoretical predictions. The transverse velocity distribution in the transparent operation regime also agrees with the simulated data. The source was tested up to a temperature of 433 K and was able to produce a flux in excess of 10¹³ s⁻¹. PMID:27587111
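
    The quoted flux can be cross-checked against elementary kinetic theory (a generic sketch; the 1 Pa vapor pressure and 1 mm² aperture below are assumed illustration values, not figures from the paper):

```python
import math

K_B = 1.380649e-23               # Boltzmann constant, J/K
M_RB = 85.468 * 1.66054e-27      # mass of a rubidium atom, kg

def effusive_flux(pressure, temperature, aperture_area):
    """Total atom flux (atoms/s) through a thin aperture in the effusive
    (Knudsen) regime: Phi = n * v_mean * A / 4, with number density
    n = P / (k_B T) and mean speed v_mean = sqrt(8 k_B T / (pi m))."""
    n = pressure / (K_B * temperature)
    v_mean = math.sqrt(8 * K_B * temperature / (math.pi * M_RB))
    return n * v_mean * aperture_area / 4.0

# Illustration only: assumed 1 Pa vapor pressure at 433 K, 1 mm^2 aperture.
flux = effusive_flux(1.0, 433.0, 1e-6)
```

    For these assumed inputs the flux is of order 10¹⁶ atoms/s, comfortably above the 10¹³ s⁻¹ quoted for the actual collimated source.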

  9. Optimization of preservatives in a topical formulation using experimental design.

    PubMed

    Rahali, Y; Pensé-Lhéritier, A-M; Mielcarek, C; Bensouda, Y

    2009-12-01

    Optimizing the preservative regime for a preparation requires the antimicrobial effectiveness of several preservative combinations to be determined. In this study, three preservatives were tested: benzoic acid, sorbic acid and benzyl alcohol. Their preservative effects were evaluated using the antimicrobial preservative efficacy test (challenge test) of the European Pharmacopeia (EP). A D-optimal mixture design was used to provide a maximum of information from a limited number of experiments. The results of this study were analysed with the help of the Design Expert software and enabled us to formulate emulsions satisfying both requirements A and B of the EP.

  10. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The paper summarizes the results obtained in an exploratory evaluation of ceramics for automobile thermal reactors. Candidate ceramic materials were evaluated in several reactor designs using both engine dynamometer and vehicle road tests. Silicon carbide contained in a corrugated metal support structure exhibited the best performance, lasting 1100 hours in engine dynamometer tests and more than 38,600 kilometers (24,000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  11. Design and evaluation of experimental ceramic automobile thermal reactors

    NASA Technical Reports Server (NTRS)

    Stone, P. L.; Blankenship, C. P.

    1974-01-01

    The results obtained in an exploratory evaluation of ceramics for automobile thermal reactors are summarized. Candidate ceramic materials were evaluated in several reactor designs by using both engine-dynamometer and vehicle road tests. Silicon carbide contained in a corrugated-metal support structure exhibited the best performance, lasting 1100 hr in engine-dynamometer tests and more than 38,600 km (24,000 miles) in vehicle road tests. Although reactors containing glass-ceramic components did not perform as well as those containing silicon carbide, the glass-ceramics still offer good potential for reactor use with improved reactor designs.

  12. Landing on Enceladus: Mission Design Parameters and Techniques

    NASA Astrophysics Data System (ADS)

    Spilker, T. R.

    2006-12-01

    Since Cassini/Huygens mission results revealed the intriguing nature of Enceladus, scientists have discussed various ways to obtain more detailed information about the south-polar geysers and the subsurface conditions that produce them. This includes potential science instruments and investigations, and also the kinds of spacecraft platforms that could deliver and support the instruments. The three most commonly discussed platforms are Saturn orbiters that perform multiple close Enceladus flybys, Enceladus orbiters, and landers (soft or hard). Some high-value science investigations, such as producing an accurate description of the gravity field to infer internal structure, are best done from an orbiter. Some, such as seismic investigations, can be done only with a landed package. Unlike larger satellites such as Europa and Ganymede, Enceladus's low mass yields low surface gravity (~0.11 m/s²), low orbital speeds (<200 m/s), and other mission design characteristics that make it a manageable destination for a practical, high-value lander mission. The main mission design challenge is deceleration from Enceladus approach to a direct landing approach or orbit insertion. A Hohmann transfer from Titan approaches Enceladus with a V-infinity of >4 km/s, most of which would have to be decelerated away propulsively - a sizeable, multi-stage task for current propulsion systems - if no gravity-assist pump-down is used. Preliminary conclusions from JPL mission designers suggest that a pump-down tour could reduce that V-infinity to 2 km/s or less, possibly as little as 1 km/s if a lengthy pump-down is tolerable (Strange, Russell, and Lam, 2006). Once in orbit, landing from a moderately stable, 100-km circular orbit can be accomplished with as little as 210 m/s delta-V, a relatively simple task for a simple propulsion system. Temporary use of marginally stable orbits could reduce that figure. Low surface gravity allows use of small, light thrusters and provides ample reaction time
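
    The figures quoted above are easy to sanity-check from two-body relations (the gravitational parameter and radius below are approximate values supplied here, not numbers from the abstract):

```python
import math

GM_ENCELADUS = 7.21e9   # m^3/s^2, approximate gravitational parameter
R_ENCELADUS = 252.1e3   # m, approximate mean radius

def surface_gravity():
    """Surface gravitational acceleration, g = GM / R^2."""
    return GM_ENCELADUS / R_ENCELADUS**2

def circular_speed(altitude):
    """Circular-orbit speed at a given altitude, v = sqrt(GM / r)."""
    return math.sqrt(GM_ENCELADUS / (R_ENCELADUS + altitude))
```

    These give roughly 0.11 m/s² at the surface and about 140 m/s for a 100-km circular orbit, consistent with the ~210 m/s landing budget once gravity losses during descent are added.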

  13. Experimental observation of silver and gold penetration into dental ceramic by means of a radiotracer technique

    SciTech Connect

    Moya, F.; Payan, J.; Bernardini, J.; Moya, E.G.

    1987-12-01

    A radiotracer technique was used to study silver and gold diffusion into dental porcelain under experimental conditions close to the real conditions in prosthetic laboratories for porcelain bakes. It was clearly shown that these non-oxidizable elements were able to diffuse into the ceramic as well as oxidizable ones. The penetration depth varied widely according to the element. The ratio DAg/DAu was about 10³ around 850 °C. In contrast to gold, the silver diffusion rate was high enough to allow silver, from the metallic alloy, to be present at the external ceramic surface after diffusion into the ceramic. Hence, the greening of dental porcelains baked on silver-rich alloys could be explained mainly by a solid-state diffusion mechanism.

  14. Some human factors issues in enhanced vision system: an experimental approach through stimulation techniques

    NASA Astrophysics Data System (ADS)

    Leger, Alain; Fleury, Lionel; Aymeric, Bruno

    1996-05-01

    Among the numerous human factors issues related to Enhanced Vision Systems, the decision-making process appears quite critical for certification purposes. An experimental setup based on a simplified aircraft environment including a HUD was developed in the framework of the FANSTIC II program. A stimulation technique based on recordings from IR sensors obtained during weather-penetration test flights was used to study the visual cues involved in the decision process during approaches on IR imagery. A study of visual scanning strategies was also conducted in order to follow the process dynamically. Results show good consistency in the pattern of visual cues used by different pilots in making their decision. Decision delays were found to be in the region of 5-6 seconds, with little difference between FLIR and visible images. The introduction of symbology superimposed on the imagery noticeably modifies visual scanning patterns; in this case, scanning is deeply influenced by the pilot's previous experience.

  15. Design of motorcycle rider protection systems using numerical techniques.

    PubMed

    Miralbes, R

    2013-10-01

    The goal of this paper is the development of a design methodology, based on the use of finite element numerical tools and dummies, to study the damage and injuries that occur when a motorcyclist collides with a motorcyclist protection system (MPS). In accordance with the existing regulation, a Hybrid III dummy FEM model has been used as a starting point and some modifications have been included; for instance, a new finite element helmet model has been developed and later added to the dummy model. Moreover, some structural elements affecting the simulation results, such as the connecting bolts or the ground, have been adequately modeled. Finally, diverse types of current motorcyclist protection systems have been analyzed, and a comparative numerical-experimental analysis has been made to validate the numerical results and the methodology used.

  16. Design of motorcycle rider protection systems using numerical techniques.

    PubMed

    Miralbes, R

    2013-10-01

    The goal of this paper is the development of a design methodology, based on the use of finite element numerical tools and dummies, to study the damage and injuries that occur when a motorcyclist collides with a motorcyclist protection system (MPS). In accordance with the existing regulation, a Hybrid III dummy FEM model has been used as a starting point and some modifications have been included; for instance, a new finite element helmet model has been developed and later added to the dummy model. Moreover, some structural elements affecting the simulation results, such as the connecting bolts or the ground, have been adequately modeled. Finally, diverse types of current motorcyclist protection systems have been analyzed, and a comparative numerical-experimental analysis has been made to validate the numerical results and the methodology used. PMID:23792610

  17. Design on intelligent gateway technique in home network

    NASA Astrophysics Data System (ADS)

    Hu, Zhonggong; Feng, Xiancheng

    2008-12-01

    Because they can provide diverse and personalized integrated services in information, communication, entertainment, education, health care, and so on, based on digitization, multimedia, mobility, broadband, and real-time interaction, home networks are receiving more and more attention from the market, and home network product development has become a focus of the related industries. In this paper, the concept of the home network and its overall reference model are introduced first. Then the core techniques and the communication standards related to the home network are presented. A key analysis is made of the functions of the home gateway, the software function modules, the key technologies of the client-side software architecture, and the development trend of home-network audio-visual entertainment services. The present state of home gateway products, future development trends, and application solutions for digital home services are introduced. Finally, it is shown how home network products drive the digital home network industry, spurring the development of software-related industries such as communications, electrical appliances, computing, and gaming, as well as the real estate industry.

  18. Development of experimental verification techniques for non-linear deformation and fracture.

    SciTech Connect

    Moody, Neville Reid; Bahr, David F.

    2003-12-01

    This project covers three distinct features of thin film fracture and deformation in which the current experimental technique of nanoindentation demonstrates limitations. The first feature is film fracture, which can be generated either by nanoindentation or by bulge testing of thin films. Examples of both tests will be shown, in particular oxide films on metallic or semiconductor substrates. Nanoindentations were made into oxide films on aluminum and titanium substrates for two cases: one where the metal was a bulk (effectively single crystal) material and the other where the metal was a 1 μm thick film grown on a silica or silicon substrate. In both cases indentation was used to produce discontinuous loading curves, which indicate film fracture after plastic deformation of the metal. For the oxides on bulk metals, fracture occurred at reproducible loads, and the tensile stresses in the films at fracture were approximately 10 and 15 GPa for the aluminum and titanium oxides, respectively. Similarly, bulge tests of piezoelectric oxide films have been carried out and demonstrate film fracture at stresses of only hundreds of MPa, suggesting the importance of defects and film thickness in evaluating film strength. The second feature of concern is film adhesion. Several qualitative and quantitative tests exist today that measure the adhesion properties of thin films. A relatively new technique that uses stressed overlayers to measure adhesion has been proposed and extensively studied. Delamination of thin films manifests itself in the form of either telephone-cord or straight buckles. The buckles are used to calculate the interfacial fracture toughness of the film-substrate system. Nanoindentation can be utilized if more energy is needed to initiate buckling of the film system. Finally, deformation in metallic systems can lead to non-linear deformation due to 'bursts' of dislocation activity during nanoindentation. An experimental study to examine the structure of dislocations around

  19. The Inquiry Flame: Scaffolding for Scientific Inquiry through Experimental Design

    ERIC Educational Resources Information Center

    Pardo, Richard; Parker, Jennifer

    2010-01-01

    In the lesson presented in this article, students learn to organize their thinking and design their own inquiry experiments through careful observation of an object, situation, or event. They then conduct these experiments and report their findings in a lab report, poster, trifold board, slide, or video that follows the typical format of the…

  20. Optimizing Experimental Designs Relative to Costs and Effect Sizes.

    ERIC Educational Resources Information Center

    Headrick, Todd C.; Zumbo, Bruno D.

    A general model is derived for the purpose of efficiently allocating integral numbers of units in multi-level designs given prespecified power levels. The derivation of the model is based on a constrained optimization problem that maximizes a general form of a ratio of expected mean squares subject to a budget constraint. This model provides more…
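
    The flavor of budget-constrained allocation described can be sketched for a two-level design using the classic optimal cluster-size result (a simplified illustration; the function, rounding, and parameter names are assumptions, not the authors' model):

```python
import math

def optimal_allocation(budget, cost_cluster, cost_unit, rho):
    """Cost-optimal allocation for a two-level (cluster) design:
    units per cluster n* = sqrt((cost_cluster / cost_unit) * (1 - rho) / rho),
    then as many clusters as the budget allows. The integral rounding here
    is a naive simplification of the integer-allocation problem."""
    n = max(1, round(math.sqrt((cost_cluster / cost_unit) * (1 - rho) / rho)))
    clusters = int(budget // (cost_cluster + n * cost_unit))
    return clusters, n

# Example: a 10,000-unit budget, 100 per cluster, 10 per unit, ICC of 0.1.
plan = optimal_allocation(10000, 100, 10, 0.1)
```

    The tension the paper formalizes is visible here: cheaper units or a smaller intraclass correlation favor larger clusters, while a tight budget caps the number of clusters.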

  1. Acting Like a Physicist: Student Approach Study to Experimental Design

    ERIC Educational Resources Information Center

    Karelina, Anna; Etkina, Eugenia

    2007-01-01

    National studies of science education have unanimously concluded that preparing our students for the demands of the 21st century workplace is one of the major goals. This paper describes a study of student activities in introductory college physics labs, which were designed to help students acquire abilities that are valuable in the workplace. In…

  2. Creativity in Advertising Design Education: An Experimental Study

    ERIC Educational Resources Information Center

    Cheung, Ming

    2011-01-01

    Have you ever thought about why qualities whose definitions are elusive, such as those of a sunset or a half-opened rose, affect us so powerfully? According to de Saussure (Course in general linguistics, 1983), the making of meanings is closely related to the production and interpretation of signs. All types of design, including advertising…

  3. Optimization and evaluation of clarithromycin floating tablets using experimental mixture design.

    PubMed

    Uğurlu, Timucin; Karaçiçek, Uğur; Rayaman, Erkan

    2014-01-01

    The purpose of the study was to prepare and evaluate clarithromycin (CLA) floating tablets using an experimental mixture design for the treatment of Helicobacter pylori, providing prolonged gastric residence time and a controlled plasma level. Ten different formulations were generated based on different molecular weights of hypromellose (HPMC K100, K4M, K15M) by using a simplex lattice design (a sub-class of mixture design) with Minitab 16 software. Sodium bicarbonate and anhydrous citric acid were used as gas-generating agents. Tablets were prepared by a wet granulation technique. All of the process variables were fixed. Results of cumulative drug release at the 8th hour (CDR 8th) were statistically analyzed to obtain the optimized formulation (OF). The optimized formulation, which gave a floating lag time lower than 15 s and a total floating time of more than 10 h, was analyzed and compared with the target for CDR 8th (80%). Good agreement was shown between predicted and actual values of CDR 8th, with a variation lower than 1%. The activity against H. pylori of the clarithromycin contained in the optimized formula was quantified using a well diffusion agar assay. Diameters of inhibition zones vs. log10 clarithromycin concentrations were plotted in order to obtain a standard curve and the clarithromycin activity. PMID:25272652
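
    The simplex-lattice construction used to generate candidate blends can be sketched generically (an illustration of the design class, not the authors' Minitab workflow):

```python
from fractions import Fraction
from itertools import product

def simplex_lattice(q, m):
    """All points of a {q, m} simplex-lattice mixture design: q component
    proportions, each a multiple of 1/m, summing to exactly 1."""
    points = []
    for combo in product(range(m + 1), repeat=q):
        if sum(combo) == m:
            points.append(tuple(Fraction(c, m) for c in combo))
    return points

# A {3, 3} lattice over three components yields 10 candidate blends --
# the same count as the ten formulations reported for the three HPMC grades.
points = simplex_lattice(3, 3)
```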

  4. On the construction of experimental designs for a given task by jointly optimizing several quality criteria: Pareto-optimal experimental designs.

    PubMed

    Sánchez, M S; Sarabia, L A; Ortiz, M C

    2012-11-19

    Experimental designs for a given task should be selected on the basis of the problem being solved and of some criteria that measure their quality. There are several such criteria because there are several aspects to be taken into account when making a choice. The most used criteria are probably the so-called alphabetical optimality criteria (for example, the A-, E-, and D-criteria related to the joint estimation of the coefficients, or the I- and G-criteria related to the prediction variance). Selecting a proper design to solve a problem implies finding a balance among these several criteria that measure the performance of the design in different aspects. Technically this is a problem of multi-criteria optimization, which can be tackled from different views. The approach presented here addresses the problem in its real vector nature, so that ad hoc experimental designs are generated with an algorithm based on evolutionary algorithms to find the Pareto-optimal front. There is no theoretical limit to the number of criteria that can be studied and, contrary to other approaches, not just one experimental design is computed but a set of experimental designs, all of them with the property of being Pareto-optimal in the criteria needed by the user. Besides, the use of an evolutionary algorithm makes it possible to search in both continuous and discrete domains and avoids the need for a set of candidate points, usual in exchange algorithms.
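
    The Pareto-dominance filter at the heart of such an approach can be written compactly (a minimal sketch; practical evolutionary algorithms add ranking, crowding, and variation operators on top of this test):

```python
def pareto_front(points):
    """Return the Pareto-optimal subset of criterion vectors, assuming
    every criterion is to be minimized and the points are distinct.
    A point p is dominated if some other point q is no worse than p in
    every criterion (and differs from p somewhere)."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and q != p
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

    Applied to, say, (A-criterion, G-criterion) score pairs, this keeps every design that no other design beats on all criteria simultaneously, which is exactly the set of designs the paper proposes to hand back to the user.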

  5. New experimental technique for detecting the effect of low-frequency electric fields on enzyme structure.

    PubMed

    Greco, G; Gianfreda, L; d'Ambrosio, G; Massa, R; Scaglione, A; Scarfi, M R

    1990-01-01

    A new experimental approach has been developed to determine kinetic and thermodynamic parameters of the inactivation of an enzyme under labile conditions both with and without exposure to electrical currents as sources of perturbation. Studies were undertaken to investigate if low-frequency electric currents can accelerate the thermal inactivation of an enzyme through interactions with dipole moments in enzymatic molecules and through related mechanical stresses. The experiments were conducted with the enzyme acid phosphatase. The enzyme was exposed to a 50-Hz current at different densities (10 to 60 mA/cm2 rms) or to a sinusoidal or square-wave current at an average density of 3 mA/cm2 and frequencies from, respectively, 50 Hz to 20 kHz and 500 pulses per second (pps) to 50,000 pps. Positive-control experiments were performed in the presence of a stabilizer or a deactivator. The results indicate that the technique is sensitive to conformational changes that otherwise may be impossible to detect. However, exposure to electric currents under the experimental conditions described herein showed no effects of the currents.

  6. Combustion behavior of single coal-water slurry droplets, Part 1: Experimental techniques

    SciTech Connect

    Levendis, Y.A.; Metghalchi, M.; Wise, D.

    1991-12-31

    Techniques to produce single droplets of coal-water slurries have been developed in order to study the combustion behavior of the slurries. All stages of slurry combustion are of interest to the present study; however, emphasis will be given to the combustion of the solid agglomerate char which remains upon the termination of the water evaporation and devolatilization periods. An experimental facility is under construction where combustion of coal-water slurries will be monitored over a variety of furnace temperatures and oxidizing atmospheres. The effects of the initial size of the slurry droplet and the solids loading (coal-to-water ratio) will be investigated. A drop-tube, laminar-flow furnace coupled to a near-infrared ratio pyrometer will be used to monitor temperature-time histories of single particles from ignition to extinction. This paper describes the experimental setup built to date and presents results obtained by numerical analysis that help in understanding the convective and radiative environment in the furnace.

  7. Development of experimental techniques to study protein and nucleic acid structures

    SciTech Connect

    Trewhella, J.; Bradbury, E.M.; Gupta, G.; Imai, B.; Martinez, R.; Unkefer, C.

    1996-04-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This research project sought to develop experimental tools for structural biology, specifically those applicable to three-dimensional, biomolecular-structure analysis. Most biological systems function in solution environments, and the ability to study proteins and polynucleotides under physiologically relevant conditions is of paramount importance. The authors have therefore adopted a three-pronged approach which involves crystallographic and nuclear magnetic resonance (NMR) spectroscopic methods to study protein and DNA structures at high (atomic) resolution as well as neutron and x-ray scattering techniques to study the complexes they form in solution. Both the NMR and neutron methods benefit from isotope labeling strategies, and all provide experimental data that benefit from the computational and theoretical tools being developed. The authors have focused on studies of protein-nucleic acid complexes and DNA hairpin structures important for understanding the regulation of gene expression, as well as the fundamental interactions that allow these complexes to form.

  8. Experimental research on radius of curvature measurement of spherical lenses based on laser differential confocal technique

    NASA Astrophysics Data System (ADS)

    Ding, Xiang; Sun, Ruoduan; Li, Fei; Zhao, Weiqian; Liu, Wenli

    2011-11-01

    A new approach based on the laser differential confocal technique has the potential to achieve high accuracy in radius of curvature (ROC) measurement. It utilizes two digital microscopes with virtual pinholes on the CCD detectors to precisely locate the cat's-eye and confocal positions, which enhances the focus-identification resolution. An instrumental system was established and experimental research was carried out to determine how error sources, such as optical axis misalignment, dead path of the interferometer, surface figure error of the tested lenses, and temperature fluctuation, contribute to the uncertainty of ROC measurement. Suggestions were also proposed on how these factors could be avoided or suppressed. The system performance was tested by employing four pairs of template lenses with a series of ROC values. The relative expanded uncertainty, calculated from theoretical analysis and experimental determination, was smaller than 2x10-5 (k=2). The results were supported by a comparison measurement between the differential confocal radius measurement (DCRM) system and an ultra-high-accuracy three-dimensional profilometer, showing good consistency. This demonstrates that the DCRM system is capable of high-accuracy ROC measurement.
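
In differential confocal radius measurement the ROC follows from the axial translation between the cat's-eye and confocal null positions, and independent error sources combine into an expanded uncertainty. A minimal sketch with a purely hypothetical error budget (the numbers are not from the paper):

```python
import math

def radius_of_curvature(z_cats_eye, z_confocal):
    # In DCRM the ROC equals the translation between the cat's-eye
    # and confocal null positions along the optical axis.
    return abs(z_confocal - z_cats_eye)

def expanded_relative_uncertainty(R, u_components, k=2):
    # Combine independent standard uncertainties (in metres) in quadrature,
    # then expand with coverage factor k and normalise by R.
    u_c = math.sqrt(sum(u ** 2 for u in u_components))
    return k * u_c / R

R = radius_of_curvature(0.0, 0.500)      # a 500 mm radius, in metres
budget = [2e-6, 1.5e-6, 1e-6, 0.5e-6]    # hypothetical axis-misalignment, dead-path,
                                         # surface-figure and temperature terms
print(expanded_relative_uncertainty(R, budget))  # ~1.1e-5, below 2e-5
```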

  9. Advanced Techniques for Seismic Protection of Historical Buildings: Experimental and Numerical Approach

    SciTech Connect

    Mazzolani, Federico M.

    2008-07-08

    The seismic protection of historical and monumental buildings, dating from the ancient age up to the 20th century, is being looked at with greater and greater interest, above all in the Euro-Mediterranean area, whose cultural heritage is strongly susceptible to severe damage or even collapse due to earthquakes. The cultural importance of historical and monumental constructions limits, in many cases, the possibility of upgrading them from the seismic point of view, due to the fear that intervention techniques could have detrimental effects on their cultural value. Consequently, great interest is growing in the development of sustainable methodologies based on Reversible Mixed Technologies (RMTs) for the seismic protection of existing constructions. RMTs, in fact, are conceived to exploit the peculiarities of innovative materials and special devices, and they allow ease of removal when necessary. This paper deals with the experimental and numerical studies, framed within the EC PROHITECH research project, on the application of RMTs to historical and monumental constructions mainly belonging to the cultural heritage of the Euro-Mediterranean area. The experimental tests and numerical analyses are carried out at five different levels: full-scale models, large-scale models, sub-systems, devices, and materials and elements.

  10. Tocorime Apicu: design and validation of an experimental search engine

    NASA Astrophysics Data System (ADS)

    Walker, Reginald L.

    2001-07-01

    In the development of an integrated, experimental search engine, Tocorime Apicu, the incorporation and emulation of the evolutionary aspects of the chosen biological model (honeybees) and of the field of high-performance knowledge discovery in databases result in the coupling of diverse fields of research: evolutionary computation, biological modeling, machine learning, statistical methods, information retrieval systems, active networks, and data visualization. Computer systems provide inherent sources of self-similar traffic that result from the interaction of file transmission, caching mechanisms, and user-related processes. These user-related processes are initiated by the user, application programs, or the operating system (OS) on the user's behalf. The effect of Web transmission patterns, coupled with the inherent sources of self-similarity associated with the above file-system characteristics, provides an environment for studying network traffic. The goal of the study was client-based, but with no user interaction. New methodologies and approaches were needed as network packet traffic increased in the LAN, LAN+WAN, and WAN. Statistical tools and methods for analyzing datasets were used to organize data captured at the packet level for network traffic between individual source/destination pairs. Emulation of the evolutionary aspects of the biological model equips the experimental search engine with an adaptive system model which will eventually have the capability to evolve with an ever-changing World Wide Web environment. The results were generated using a LINUX OS.
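
Self-similarity in packet traffic is commonly quantified by the Hurst exponent. As an illustration of the kind of statistical analysis the abstract alludes to (the estimator and parameters here are generic, not taken from the paper), the aggregated-variance method can be sketched as follows:

```python
import random, math

def hurst_aggregated_variance(x, block_sizes):
    """Estimate the Hurst exponent via the aggregated-variance method:
    the variance of block means scales as m**(2H - 2) for self-similar series."""
    pts = []
    for m in block_sizes:
        means = [sum(x[i:i + m]) / m for i in range(0, len(x) - m + 1, m)]
        mu = sum(means) / len(means)
        var = sum((v - mu) ** 2 for v in means) / (len(means) - 1)
        pts.append((math.log(m), math.log(var)))
    # least-squares slope of log-variance vs log-block-size
    n = len(pts)
    sx = sum(p[0] for p in pts); sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts); sxy = sum(p[0] * p[1] for p in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    return 1.0 + slope / 2.0

random.seed(1)
iid = [random.gauss(0, 1) for _ in range(20000)]
print(hurst_aggregated_variance(iid, [10, 20, 50, 100, 200]))  # near 0.5 for short-range data
```

Long-range-dependent traffic (e.g. aggregated Web transfers) would instead give estimates well above 0.5.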

  11. Development and Validation of a Rubric for Diagnosing Students’ Experimental Design Knowledge and Difficulties

    PubMed Central

    Dasgupta, Annwesa P.; Anderson, Trevor R.

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students’ responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students’ experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658

  12. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students' responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students' experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658

  13. Experimental Evaluation of Quantitative Diagnosis Technique for Hepatic Fibrosis Using Ultrasonic Phantom

    NASA Astrophysics Data System (ADS)

    Koriyama, Atsushi; Yasuhara, Wataru; Hachiya, Hiroyuki

    2012-07-01

    Since clinical diagnosis using ultrasonic B-mode images depends on the skill of the doctor, a quantitative diagnosis method using the ultrasound echo signal is highly desirable. We have been investigating a quantitative diagnosis technique, mainly for hepatic disease. In this paper, we present basic experimental results on the accuracy of the proposed quantitative diagnosis technique for hepatic fibrosis, obtained using a simple ultrasonic phantom. By placing a region of interest across the boundary between two scatterer areas with different densities in the phantom, we can simulate the change of the echo amplitude distribution from normal tissue to fibrotic tissue in liver disease. The probability density function is well approximated by our fibrosis distribution model, a mixture of normal and fibrotic tissue components. The fibrosis parameters of the amplitude distribution model can be estimated relatively well at mixture rates from 0.2 to 0.6. In the inversion processing, the standard deviation of the estimated fibrosis parameters at mixture rates of less than 0.2 or larger than 0.6 is relatively large. Although the probability density is not large at high amplitude, the estimated variance ratio and mixture rate of the model are strongly affected by higher-amplitude data.
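
A common way to model echo-amplitude statistics of this kind is a two-component mixture of Rayleigh distributions, with the mixture rate playing the role of the fibrosis parameter. The sketch below is illustrative only (the paper's actual model and inversion procedure may differ): it estimates the mixture rate by a maximum-likelihood grid search on synthetic amplitude data with known component scales.

```python
import math, random

def rayleigh_pdf(x, s):
    return (x / s ** 2) * math.exp(-x ** 2 / (2 * s ** 2))

def mixture_pdf(x, alpha, s_normal, s_fib):
    # alpha = mixture rate of the fibrotic component
    return (1 - alpha) * rayleigh_pdf(x, s_normal) + alpha * rayleigh_pdf(x, s_fib)

def sample(alpha, s_normal, s_fib, n, rng):
    out = []
    for _ in range(n):
        s = s_fib if rng.random() < alpha else s_normal
        out.append(s * math.sqrt(-2 * math.log(1.0 - rng.random())))  # Rayleigh draw
    return out

def fit_alpha(data, s_normal, s_fib, grid=101):
    # maximum-likelihood grid search over the mixture rate only
    best, best_ll = 0.0, -1e18
    for i in range(grid):
        a = i / (grid - 1)
        ll = sum(math.log(mixture_pdf(x, a, s_normal, s_fib) + 1e-300) for x in data)
        if ll > best_ll:
            best, best_ll = a, ll
    return best

rng = random.Random(7)
data = sample(0.4, 1.0, 2.5, 5000, rng)
print(fit_alpha(data, 1.0, 2.5))  # close to the true mixture rate 0.4
```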

  14. Facilitating Preemptive Hardware System Design Using Partial Reconfiguration Techniques

    PubMed Central

    Rincon, Fernando; Vaderrama, Carlos; Villanueva, Felix; Caba, Julian; Lopez, Juan Carlos

    2014-01-01

    In FPGA-based control system design, partial reconfiguration is especially well suited to implement preemptive systems. In real-time systems, the deadline of a critical task can compel the preemption of a noncritical one. Besides, an asynchronous event can demand immediate attention and thus force launching a reconfiguration process for a high-priority task implementation. If the asynchronous event is scheduled in advance, an explicit activation of the reconfiguration process is performed. If the event cannot be programmed in advance, as in dynamically scheduled systems, an implicit activation of the reconfiguration process is required. This paper provides a hardware-based approach to explicit and implicit activation of the partial reconfiguration process in dynamically reconfigurable SoCs and includes all the tasks necessary to cope with this issue. Furthermore, the reconfiguration service introduced in this work allows remote invocation of the reconfiguration process, and hence the remote integration of off-chip components. A model that offers component location transparency is also presented to enhance and facilitate system integration. PMID:24672292

  15. MSE spectrograph optical design: a novel pupil slicing technique

    NASA Astrophysics Data System (ADS)

    Spanò, P.

    2014-07-01

    The Maunakea Spectroscopic Explorer shall be mainly devoted to performing deep, wide-field spectroscopic surveys at spectral resolutions from ~2000 to ~20000, at visible and near-infrared wavelengths. Simultaneous spectral coverage is required at low resolution, while at high resolution only selected windows can be covered. Moreover, very high multiplexing (3200 objects) must be obtained at low resolution; at higher resolutions a smaller number of objects (~800) can be observed. To meet such demanding requirements, a fiber-fed multi-object spectrograph concept has been designed by pupil-slicing the collimated beam, followed by multiple dispersive and camera optics. Different resolution modes are obtained by introducing anamorphic lenslets in front of the fiber arrays. The spectrograph is able to switch between three resolution modes (2000, 6500, 20000) by removing the anamorphic lenses and exchanging gratings. Camera lenses are fixed in place to increase stability. To enhance throughput, VPH first-order gratings have been preferred over echelle gratings. Moreover, throughput is kept high over all wavelength ranges by splitting light into multiple arms with dichroic beamsplitters and optimizing efficiency for each channel through proper selection of glass materials, coatings, and grating parameters.

  16. Time domain and frequency domain design techniques for model reference adaptive control systems

    NASA Technical Reports Server (NTRS)

    Boland, J. S., III

    1971-01-01

    Some problems associated with the design of model-reference adaptive control systems are considered and solutions to these problems are advanced. The stability of the adapted system is a primary consideration in the development of both the time-domain and the frequency-domain design techniques. Consequently, the use of Liapunov's direct method forms an integral part of the derivation of the design procedures. The application of sensitivity coefficients to the design of model-reference adaptive control systems is considered. An application of the design techniques is also presented.
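
A classical example of a design derived from Liapunov's direct method is model-reference adaptive control of a first-order plant with unknown parameters. The adaptation laws below come from the standard Lyapunov argument (V = e^2/2 + |b|(θ̃1^2 + θ̃2^2)/(2γ) gives V̇ = am·e^2 ≤ 0); the specific plant, gains, and Euler discretization are illustrative assumptions, not taken from the report.

```python
# plant: dx/dt = a*x + b*u (a, b unknown to the controller, sign(b) known)
# model: dxm/dt = am*xm + bm*r
a, b = 1.0, 2.0          # hypothetical unknown plant parameters
am, bm = -4.0, 4.0       # stable reference model
gamma = 5.0              # adaptation gain
dt, T = 1e-3, 20.0

x = xm = 0.0
th1 = th2 = 0.0          # adaptive feedforward / feedback gains
errs = []
t = 0.0
while t < T:
    r = 1.0 if (t % 4.0) < 2.0 else -1.0   # square-wave reference
    u = th1 * r + th2 * x
    e = x - xm
    # Lyapunov-derived adaptation laws (sign(b) = +1 assumed)
    th1 += -gamma * e * r * dt
    th2 += -gamma * e * x * dt
    x  += (a * x + b * u) * dt             # forward-Euler integration
    xm += (am * xm + bm * r) * dt
    errs.append(abs(e))
    t += dt

print(max(errs[:2000]), max(errs[-2000:]))  # tracking error shrinks as gains adapt
```

With the gains converged toward their ideal values (θ1* = bm/b, θ2* = (am - a)/b), the closed loop matches the reference model and the tracking error decays.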

  17. Improved Experimental Techniques for Analyzing Nucleic Acid Transport Through Protein Nanopores in Planar Lipid Bilayers

    NASA Astrophysics Data System (ADS)

    Costa, Justin A.

    The translocation of nucleic acid polymers across cell membranes is a fundamental requirement for complex life and has greatly contributed to genomic molecular evolution. The diversity of pathways that have evolved to transport DNA and RNA across membranes include protein receptors, active and passive transporters, endocytic and pinocytic processes, and various types of nucleic acid conducting channels known as nanopores. We have developed a series of experimental techniques, collectively known as "Wicking", that greatly improve the biophysical analysis of nucleic acid transport through protein nanopores in planar lipid bilayers. We have verified the Wicking method using numerous types of classical ion channels including the well-studied chloride selective channel, CLIC1. We used the Wicking technique to reconstitute α-hemolysin and found that DNA translocation events of types A and B could be routinely observed using this method. Furthermore, measurable differences were observed in the duration of blockade events as DNA length and composition were varied, consistent with previous reports. Finally, we tested the ability of the Wicking technology to reconstitute the dsRNA transporter Sid-1. Exposure to dsRNAs of increasing length and complexity showed measurable differences in the current transitions suggesting that the charge carrier was dsRNA. However, the translocation events occurred so infrequently that a meaningful electrophysiological analysis was not possible. Alterations in the lipid composition of the bilayer had a minor effect on the frequency of translocation events but not to such a degree as to permit rigorous statistical analysis. We conclude that in many instances the Wicking method is a significant improvement to the lipid bilayer technique, but is not an optimal method for analyzing transport through Sid-1. Further refinements to the Wicking method might have future applications in high throughput DNA sequencing, DNA computation, and
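
Translocation events in nanopore recordings like those described above are typically extracted as transient blockades of the open-pore current, with dwell time and blockade depth as the key observables. A minimal threshold-based event detector (an illustrative sketch, not the analysis used in the dissertation):

```python
def detect_blockades(trace, open_level, threshold_frac=0.5, dt=1.0):
    """Return (start_time, duration, mean_depth) for each blockade event,
    flagged whenever the current drops below a fraction of the open-pore level."""
    thr = threshold_frac * open_level
    events, start = [], None
    for i, cur in enumerate(trace):
        if cur < thr and start is None:
            start = i
        elif cur >= thr and start is not None:
            seg = trace[start:i]
            events.append((start * dt, len(seg) * dt, sum(seg) / len(seg)))
            start = None
    return events

# synthetic trace: 100 pA open pore with two blockades of different dwell time
trace = [100.0] * 50 + [20.0] * 10 + [100.0] * 40 + [30.0] * 25 + [100.0] * 30
print(detect_blockades(trace, open_level=100.0))
# two events, with the longer one lasting 25 samples
```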

  18. Experimental Techniques for Evaluating the Effects of Aging on Impact and High Strain Rate Properties of Triaxial Braided Composite Materials

    NASA Technical Reports Server (NTRS)

    Pereira, J. Michael; Roberts, Gary D.; Ruggeri, Charles R.; Gilat, Amos; Matrka, Thomas

    2010-01-01

    An experimental program is underway to measure the impact and high strain rate properties of triaxial braided composite materials and to quantify any degradation in properties as a result of thermal and hygroscopic aging typically encountered during service. Impact tests are being conducted on flat panels using a projectile designed to induce high rate deformation similar to that experienced in a jet engine fan case during a fan blade-out event. The tests are being conducted on as-fabricated panels and panels subjected to various numbers of aging cycles. High strain rate properties are being measured using a unique Hopkinson bar apparatus that has a larger diameter than conventional Hopkinson bars. This larger diameter is needed to measure representative material properties because of the large unit cell size of the materials examined in this work. In this paper the experimental techniques used for impact and high strain rate testing are described and some preliminary results are presented for both as-fabricated and aged composites.
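
In conventional split-Hopkinson (Kolsky) bar analysis, the specimen stress follows from the transmitted strain pulse and the strain rate from the reflected pulse. A sketch of the standard one-dimensional relations, with assumed steel-bar properties and hypothetical cross-sections (none of the values are from the paper):

```python
# One-dimensional SHPB analysis: specimen stress from the transmitted
# strain pulse and strain rate from the reflected pulse.
E_bar = 200e9        # bar modulus, Pa (steel, assumed)
rho_bar = 7800.0     # bar density, kg/m^3
A_bar, A_spec = 7.85e-4, 5.0e-4   # hypothetical cross-sections, m^2
L_spec = 5e-3        # specimen gauge length, m
c0 = (E_bar / rho_bar) ** 0.5     # elastic wave speed in the bar

def specimen_stress(eps_transmitted):
    # force equilibrium across the transmission bar interface
    return E_bar * (A_bar / A_spec) * eps_transmitted

def specimen_strain_rate(eps_reflected):
    # rate follows from the velocity difference between bar faces
    return -2.0 * c0 / L_spec * eps_reflected

print(c0)                               # ~5064 m/s
print(specimen_stress(1e-3))            # ~314 MPa
print(specimen_strain_rate(-2e-3))      # ~4051 1/s
```

The large-diameter bar noted in the abstract changes A_bar and the dispersion characteristics, not these governing relations.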

  19. A validated spectrofluorimetric method for the determination of nifuroxazide through coumarin formation using experimental design

    PubMed Central

    2013-01-01

    Background Nifuroxazide (NF) is an oral nitrofuran antibiotic with a wide range of bactericidal activity against gram-positive and gram-negative enteropathogenic organisms. It is formulated either in single form, as an intestinal antiseptic, or in combination with drotaverine (DV) for the treatment of gastroenteritis accompanied by gastrointestinal spasm. Spectrofluorimetry is a convenient and sensitive technique for pharmaceutical quality control. The proposed spectrofluorimetric method allows the determination of NF either in single form or in binary mixture with DV. Furthermore, experimental conditions were optimized using an experimental design approach, which has many advantages over the traditional one-variable-at-a-time (OVAT) approach. Results A novel and sensitive spectrofluorimetric method was designed and validated for the determination of NF in pharmaceutical formulations. The method was based upon the formation of a highly fluorescent coumarin compound by the reaction between NF and ethyl acetoacetate (EAA) using sulfuric acid as catalyst. The fluorescence was measured at 390 nm upon excitation at 340 nm. Experimental design was used to optimize the experimental conditions: the volumes of EAA and sulfuric acid, temperature, and heating time were considered the critical factors to be studied in order to establish optimum fluorescence. Each pair of factors was tested jointly at three levels. Regression analysis revealed good correlation between fluorescence intensity and concentration over the range 20–400 ng ml-1. The suggested method was successfully applied for the determination of NF in pure and capsule forms. The procedure was validated in terms of linearity, accuracy, precision, limit of detection, and limit of quantification. The selectivity of the method was investigated by analysis of NF in the presence of the co-formulated drug DV, where no interference was observed. The reaction pathway was suggested and the structure of the fluorescent product was proposed.
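
Studying each pair of factors at three levels corresponds to a three-level factorial design with a quadratic response-surface model. A sketch on simulated readings (the design matrix, model terms, and coefficients are illustrative, not the paper's data): the fit exactly recovers the generating coefficients because the 3^2 design is full rank for this model.

```python
import itertools

# full 3^2 factorial in coded units (-1, 0, +1) for two factors,
# e.g. reagent volume and heating time (hypothetical)
levels = [-1, 0, 1]
design = list(itertools.product(levels, levels))

def row(x1, x2):
    # quadratic response-surface model terms
    return [1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2]

def fit(X, y):
    # normal equations (X^T X) b = X^T y, solved by Gaussian elimination
    n = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    v = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):                       # elimination with partial pivoting
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]; v[i], v[p] = v[p], v[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]; v[r] -= f * v[i]
    b = [0.0] * n
    for i in range(n - 1, -1, -1):
        b[i] = (v[i] - sum(A[i][j] * b[j] for j in range(i + 1, n))) / A[i][i]
    return b

# simulated fluorescence readings generated from a known quadratic surface
true = [50.0, 8.0, 5.0, 2.0, -6.0, -4.0]
y = [sum(t * f for t, f in zip(true, row(x1, x2))) for x1, x2 in design]
coef = fit([row(x1, x2) for x1, x2 in design], y)
print([round(c, 6) for c in coef])  # recovers the generating coefficients
```

The fitted surface can then be maximized over the coded region to pick the optimum factor settings.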

  20. Design Techniques for Power-Aware Combinational Logic SER Mitigation

    NASA Astrophysics Data System (ADS)

    Mahatme, Nihaar N.

    SEUs. This was mainly because the operating frequencies were much lower for older technology generations. The Intel Pentium II, for example, was fabricated in a 0.35 μm technology and operated between 200-330 MHz. With technology scaling, however, operating frequencies have increased tremendously, and the contribution of soft errors due to latched SETs from combinational logic could account for a significant proportion of the chip-level soft error rate [Sief-12][Maha-11][Shiv02][Bu97]. Therefore there is a need to systematically characterize the problem of combinational logic single-event effects (SEE) and understand the various factors that affect the combinational logic single-event error rate. Just as scaling has led to soft errors emerging as a reliability-limiting failure mode for modern digital ICs, the problem of increasing power consumption has arguably been a bigger bane of scaling. While Moore's law loftily states the blessing of technology scaling to be smaller and faster transistors, it fails to highlight that the power density increases exponentially with every technology generation. The power density problem was partially solved in the 1970s and 1980s by moving from bipolar and GaAs technologies to full-scale silicon CMOS technologies. Following this, however, the technology miniaturization that enabled high-speed, multicore, and parallel computing has steadily increased the power density and the power consumption problem. Today, minimizing power consumption is as critical for power-hungry server farms as it is for portable devices, all-pervasive sensor networks, and future eco-bio-sensors. Low-power consumption is now regularly part of design philosophies for various digital products with diverse applications from computing to communication to healthcare. Thus designers in today's world are left grappling with both a "power wall" and a "reliability wall".
Unfortunately, when it comes to improving reliability through soft error mitigation, most

  1. Experimental characterisation of a novel viscoelastic rectifier design

    PubMed Central

    Ejlebjerg Jensen, Kristian; Szabo, Peter; Okkels, Fridolin; Alves, M. A.

    2012-01-01

    A planar microfluidic system with contractions and obstacles is characterized in terms of anisotropic flow resistance due to viscoelastic effects. The working mechanism is illustrated using streak photography, while the diodicity performance is quantified by pressure drop measurements. The point of maximum performance is found to occur at relatively low elasticity levels, with diodicity around 3.5. Based on a previously published numerical work [Ejlebjerg et al., Appl. Phys. Lett. 100, 234102 (2012)], 2D simulations of the FENE-CR differential constitutive model are also presented, but limited reproducibility and uncertainties of the experimental data prevent a direct comparison at low elasticity, where the flow is essentially two-dimensional. PMID:24324532
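
Diodicity, the figure of merit quantified above, is the ratio of pressure drops between the two flow directions at matched flow rate. For example (the pressure readings are hypothetical; only the ratio of 3.5 matches the reported maximum):

```python
def diodicity(dp_hard, dp_easy):
    """Diodicity Di = ratio of pressure drops for the two flow directions
    at the same volumetric flow rate; Di > 1 means rectification."""
    return dp_hard / dp_easy

# hypothetical pressure-drop readings (kPa) at matched flow rate
print(diodicity(70.0, 20.0))  # Di = 3.5, matching the reported maximum
```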

  2. Apollo-Soyuz pamphlet no. 4: Gravitational field. [experimental design

    NASA Technical Reports Server (NTRS)

    Page, L. W.; From, T. P.

    1977-01-01

    Two Apollo Soyuz experiments designed to detect gravity anomalies from spacecraft motion are described. The geodynamics experiment (MA-128) measured large-scale gravity anomalies by detecting small accelerations of Apollo in the 222 km orbit, using Doppler tracking from the ATS-6 satellite. Experiment MA-089 measured 300 km anomalies on the earth's surface by detecting minute changes in the separation between Apollo and the docking module. Topics discussed in relation to these experiments include the Doppler effect, gravimeters, and the discovery of mascons on the moon.

  3. Experimental Design for Stochastic Models of Nonlinear Signaling Pathways Using an Interval-Wise Linear Noise Approximation and State Estimation

    PubMed Central

    Zimmer, Christoph

    2016-01-01

    Background Computational modeling is a key technique for analyzing models in systems biology. There are well established methods for the estimation of the kinetic parameters in models of ordinary differential equations (ODE). Experimental design techniques aim at devising experiments that maximize the information encoded in the data. For ODE models there are well established approaches for experimental design and even software tools. However, data from single cell experiments on signaling pathways in systems biology often shows intrinsic stochastic effects, prompting the development of specialized methods. While simulation methods have been developed for decades and parameter estimation has been targeted in recent years, only very few articles focus on experimental design for stochastic models. Methods The Fisher information matrix is the central measure for experimental design, as it evaluates the information an experiment provides for parameter estimation. This article suggests an approach for calculating a Fisher information matrix for models containing intrinsic stochasticity and high nonlinearity. The approach makes use of the recently suggested multiple shooting for stochastic systems (MSS) objective function. The Fisher information matrix is calculated by evaluating pseudo data with the MSS technique. Results The performance of the approach is evaluated with simulation studies on an Immigration-Death, a Lotka-Volterra, and a Calcium oscillation model. The Calcium oscillation model is a particularly appropriate case study as it contains the challenges inherent to signaling pathways: high nonlinearity, intrinsic stochasticity, a qualitatively different behavior from an ODE solution, and partial observability. The computational speed of the MSS approach for the Fisher information matrix allows for an application in realistic-size models. PMID:27583802
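
For a model observed with approximately Gaussian noise, the Fisher information matrix reduces to a sum of outer products of output sensitivities. The sketch below illustrates this for the deterministic mean of the Immigration-Death model mentioned in the abstract; the MSS-based pseudo-data evaluation of the paper is not reproduced, and all parameter values are illustrative.

```python
import math

def mean_traj(k1, k2, x0, times):
    # Immigration-Death model: dx/dt = k1 - k2*x has the closed-form mean below
    return [k1 / k2 + (x0 - k1 / k2) * math.exp(-k2 * t) for t in times]

def fisher_information(k1, k2, x0, times, sigma2, h=1e-6):
    # FIM_ij = sum_t (dmu_t/dtheta_i)(dmu_t/dtheta_j) / sigma^2,
    # with sensitivities computed by central finite differences
    def sens(i):
        d = [h if j == i else 0.0 for j in range(2)]
        up = mean_traj(k1 + d[0], k2 + d[1], x0, times)
        dn = mean_traj(k1 - d[0], k2 - d[1], x0, times)
        return [(a - b) / (2 * h) for a, b in zip(up, dn)]
    S = [sens(0), sens(1)]
    return [[sum(S[i][t] * S[j][t] for t in range(len(times))) / sigma2
             for j in range(2)] for i in range(2)]

times = [0.5 * k for k in range(1, 21)]  # hypothetical sampling schedule
F = fisher_information(k1=10.0, k2=0.1, x0=0.0, times=times, sigma2=4.0)
det = F[0][0] * F[1][1] - F[0][1] * F[1][0]
print(det)  # D-optimal designs choose the schedule maximizing this determinant
```

Comparing det(F) across candidate sampling schedules is the basic operation behind the experimental-design criteria the article discusses.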

  4. A propagation effects handbook for satellite systems design. A summary of propagation impairments on 10-100 GHz satellite links, with techniques for system design. [tropospheric scattering

    NASA Technical Reports Server (NTRS)

    Kaul, R.; Wallace, R.; Kinal, G.

    1980-01-01

    This handbook provides satellite system engineers with a concise summary of the major propagation effects experienced on Earth-space paths in the 10 to 100 GHz frequency range. The dominant effect, attenuation due to rain, is dealt with in terms of both experimental data from measurements made in the U.S. and Canada, and the mathematical and conceptual models devised to explain the data. Rain systems, rain and attenuation models, depolarization and experimental data are described. The design techniques recommended for predicting propagation effects in Earth-space communications systems are presented. The questions of where in the system design process the effects of propagation should be considered, and what precautions should be taken when applying the propagation results are addressed in order to bridge the gap between the propagation research data and the classical link budget analysis of Earth-space communications system.
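
Rain attenuation on Earth-space links of the kind covered by the handbook is commonly modeled by a power law in rain rate, with specific attenuation gamma = k * R**alpha in dB/km. The coefficients below are placeholders; actual designs take k and alpha from frequency- and polarization-dependent tables and use a climate-dependent effective path length.

```python
def rain_attenuation_db(rain_rate_mm_h, path_km, k, alpha):
    """Power-law rain attenuation: specific attenuation gamma = k * R**alpha
    (dB/km), multiplied by an effective slant-path length through rain."""
    gamma = k * rain_rate_mm_h ** alpha
    return gamma * path_km

# hypothetical coefficients for a Ku-band link
print(rain_attenuation_db(rain_rate_mm_h=25.0, path_km=4.0, k=0.03, alpha=1.1))
```

The resulting attenuation in dB is what enters the link budget as a rain margin.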

  5. A propagation effects handbook for satellite systems design. A summary of propagation impairments on 10-100 GHz satellite links, with techniques for system design

    NASA Astrophysics Data System (ADS)

    Kaul, R.; Wallace, R.; Kinal, G.

    1980-03-01

    This handbook provides satellite system engineers with a concise summary of the major propagation effects experienced on Earth-space paths in the 10 to 100 GHz frequency range. The dominant effect, attenuation due to rain, is dealt with in terms of both experimental data from measurements made in the U.S. and Canada, and the mathematical and conceptual models devised to explain the data. Rain systems, rain and attenuation models, depolarization and experimental data are described. The design techniques recommended for predicting propagation effects in Earth-space communications systems are presented. The questions of where in the system design process the effects of propagation should be considered, and what precautions should be taken when applying the propagation results are addressed in order to bridge the gap between the propagation research data and the classical link budget analysis of Earth-space communications system.

  6. Studying Ice Formation from Aircraft: Experimental Constraints on Techniques for Sampling Ice and Ice Forming Particles

    NASA Astrophysics Data System (ADS)

    Stith, J. L.

    2015-12-01

    A major experimental pathway to study the role of ice forming particles in clouds involves evaporating ice particles in a counterflow virtual impactor (CVI), measuring the residue with airborne instrumentation to determine the IFP concentration, and then comparing these concentrations with simultaneous measurements of ice concentrations, as determined from various types of instruments designed to measure hydrometeor concentrations. In order for these types of experiments to provide meaningful results, they must consider a number of factors, such as the impact of the CVI on the ice particles and the effects of probe tip shattering on the measurement of ice concentrations. These problems can be minimized by careful selection of sampling conditions and by studying the morphology of the sampled ice particles.

  7. An experimental technique for performing 3-D LDA measurements inside whirling annular seals

    NASA Technical Reports Server (NTRS)

    Morrison, Gerald L.; Johnson, Mark C.; Deotte, Robert E., Jr.; Thames, H. Davis, III.; Wiedner, Brian G.

    1992-01-01

    During the last several years, the Fluid Mechanics Division of the Turbomachinery Laboratory at Texas A&M University has developed a rather unique facility with the experimental capability for measuring the flow field inside journal bearings, labyrinth seals, and annular seals. The facility consists of a specially designed 3-D LDA system which is capable of measuring the instantaneous velocity vector within 0.2 mm of a wall while the laser beams are aligned almost perpendicular to the wall. This capability was required to measure the flow field inside journal bearings, labyrinth seals, and annular seals. A detailed description of this facility along with some representative results obtained for a whirling annular seal are presented.

  8. The ISR Asymmetrical Capacitor Thruster: Experimental Results and Improved Designs

    NASA Technical Reports Server (NTRS)

    Canning, Francis X.; Cole, John; Campbell, Jonathan; Winet, Edwin

    2004-01-01

    A variety of Asymmetrical Capacitor Thrusters has been built and tested at the Institute for Scientific Research (ISR). The thrust produced for various voltages has been measured, along with the current flowing, both between the plates and to ground through the air (or other gas). VHF radiation due to Trichel pulses has been measured and correlated over short time scales to the current flowing through the capacitor. A series of designs were tested, which were increasingly efficient. Sharp features on the leading capacitor surface (e.g., a disk) were found to increase the thrust. Surprisingly, combining that with sharp wires on the trailing edge of the device produced the largest thrust. Tests were performed for both polarizations of the applied voltage, and for grounding one or the other capacitor plate. In general (but not always) it was found that the direction of the thrust depended on the asymmetry of the capacitor rather than on the polarization of the voltage. While no force was measured in a vacuum, some suggested design changes are given for operation in reduced pressures.

  9. Shock-driven mixing: Experimental design and initial conditions

    NASA Astrophysics Data System (ADS)

    Friedman, Gavin; Prestridge, Katherine; Mejia-Alvarez, Ricardo; Leftwich, Megan

    2012-03-01

    A new Vertical Shock Tube (VST) has been designed to study shock-induced mixing due to the Richtmyer-Meshkov Instability (RMI) developing on a 3-D multi-mode interface between two gases. These studies characterize how interface contours, gas density difference, and Mach No. affect the ensuing mixing by using simultaneous measurements of velocity/density fields. The VST allows for the formation of a single stably-stratified interface, removing complexities of the dual interface used in prior RMI work. The VST also features a new diaphragmless driver, making feasible larger ensembles of data by reducing intra-shot time, and a larger viewing window allowing new observations of late-time mixing. The initial condition (IC) is formed by a co-flow system, chosen to minimize diffusion at the gas interface. To ensure statistically stationary ICs, a contoured nozzle has been manufactured to form repeatable co-flowing jets that are manipulated by a flapping splitter plate to generate perturbations that span the VST. This talk focuses on the design of the IC flow system and shows initial results characterizing the interface.

  10. Shock-Driven Mixing: Experimental Design and Initial Conditions

    NASA Astrophysics Data System (ADS)

    Friedman, Gavin; Prestridge, Kathy; Mejia-Alvarez, Ricardo; Leftwich, Megan

    2011-06-01

    A new Vertical Shock Tube (VST) has been designed to study shock-induced mixing due to the Richtmyer-Meshkov Instability (RMI) developing on a 3-D multi-mode interface between two gases. These studies characterize how interface contours, gas density difference, and Mach No. affect the ensuing mixing by using simultaneous measurements of velocity/density fields. The VST allows for the formation of a single stably-stratified interface, removing complexities of the dual interface used in prior RMI work. The VST also features a new diaphragmless driver, making feasible larger ensembles of data by reducing intra-shot time, and a larger viewing window allowing new observations of late-time mixing. The initial condition (IC) is formed by a co-flow system, chosen to minimize diffusion at the gas interface. To ensure statistically stationary ICs, a contoured nozzle has been manufactured to form repeatable co-flowing jets that are manipulated by a flapping splitter plate to generate perturbations that span the VST. This talk focuses on the design of the IC flow system and shows initial results characterizing the interface.

  11. The resisted rise of randomisation in experimental design: British agricultural science, c.1910-1930.

    PubMed

    Berry, Dominic

    2015-09-01

    The most conspicuous form of agricultural experiment is the field trial, and within the history of such trials, the arrival of the randomised control trial (RCT) is considered revolutionary. Originating with R.A. Fisher within British agricultural science in the 1920s and 1930s, the RCT has since become one of the most prodigiously used experimental techniques throughout the natural and social sciences. Philosophers of science have already scrutinised the epistemological uniqueness of RCTs, undermining their status as the 'gold standard' in experimental design. The present paper introduces a historical case study from the origins of the RCT, uncovering the initially cool reception given to this method by agricultural scientists at the University of Cambridge and the (Cambridge based) National Institute of Agricultural Botany. Rather than giving further attention to the RCT, the paper focuses instead on a competitor method-the half-drill strip-which both predated the RCT and remained in wide use for at least a decade beyond the latter's arrival. In telling this history, John Pickstone's Ways of Knowing is adopted, as the most flexible and productive way to write the history of science, particularly when sciences and scientists have to work across a number of different kinds of place. It is shown that those who resisted the RCT did so in order to preserve epistemic and social goals that randomisation would have otherwise run a tractor through. PMID:26205200

  12. A Novel Experimental Technique for the Study of High-Speed Friction under Elastic Loading Conditions

    NASA Astrophysics Data System (ADS)

    Crawford, Paula; Rainey, Kevin; Rightley, Paul; Hammerberg, J. E.

    2004-07-01

    The role of friction in high strain-rate events is not well understood despite being an important constitutive relationship in modern modeling and simulation studies of explosive events. There is a lack of experimental data available for the validation of models of dynamic sliding. The Rotating Barrel Gas Gun (RBGG) is a novel, small-scale experimental facility designed to investigate interfacial dynamics at high loads and sliding speeds. The RBGG utilizes a low-pressure gas gun to propel a rotating annular projectile towards an annular target rod. Upon striking the target, the projectile imparts both an axial and a torsional impulse into the target on a timescale relevant to explosively-driven events. The resulting elastic waves are measured using strain gages attached to the target rod. The coefficient of friction is obtained through an analysis of the resulting strain wave data. Initial experiments have been performed using dry copper/copper interfaces. We find that the measured coefficient of friction can evolve significantly over a 30 μs event.
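The abstract does not spell out the strain-to-friction analysis. One simple estimate, assuming linear elastic waves so that Hooke's law maps measured gauge strains to interface stresses, can be sketched as follows (the function name, strain values, and copper moduli below are illustrative assumptions, not the RBGG analysis):

```python
def friction_coefficient(axial_strain, shear_strain, E, G):
    """Estimate the instantaneous friction coefficient mu = tau / sigma
    from gauge strains, assuming linear elasticity: sigma = E * eps,
    tau = G * gamma."""
    sigma = E * axial_strain    # normal stress carried by the axial wave
    tau = G * shear_strain      # shear stress carried by the torsional wave
    return tau / sigma

# Illustrative values for copper: E ~ 110 GPa, G ~ 44 GPa
mu = friction_coefficient(1e-3, 5e-4, 110e9, 44e9)
```

Tracking this ratio sample-by-sample over the strain records is one way the evolution of friction during the 30 μs event could be followed.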

  13. Single-Case Experimental Designs: A Systematic Review of Published Research and Current Standards

    ERIC Educational Resources Information Center

    Smith, Justin D.

    2012-01-01

    This article systematically reviews the research design and methodological characteristics of single-case experimental design (SCED) research published in peer-reviewed journals between 2000 and 2010. SCEDs provide researchers with a flexible and viable alternative to group designs with large sample sizes. However, methodological challenges have…

  14. A rational design change methodology based on experimental and analytical modal analysis

    SciTech Connect

    Weinacht, D.J.; Bennett, J.G.

    1993-08-01

    A design methodology that integrates analytical modeling and experimental characterization is presented. This methodology represents a powerful tool for making rational design decisions and changes. An example of its implementation in the design, analysis, and testing of a precision machine tool support structure is given.

  15. A Modified Experimental Hut Design for Studying Responses of Disease-Transmitting Mosquitoes to Indoor Interventions: The Ifakara Experimental Huts

    PubMed Central

    Okumu, Fredros O.; Moore, Jason; Mbeyela, Edgar; Sherlock, Mark; Sangusangu, Robert; Ligamba, Godfrey; Russell, Tanya; Moore, Sarah J.

    2012-01-01

    Differences between individual human houses can confound results of studies aimed at evaluating indoor vector control interventions such as insecticide treated nets (ITNs) and indoor residual insecticide spraying (IRS). Specially designed and standardised experimental huts have historically provided a solution to this challenge, with the added advantage that they can be fitted with special interception traps to sample entering or exiting mosquitoes. However, many of these experimental hut designs have a number of limitations, for example: 1) inability to sample mosquitoes on all sides of huts, 2) increased likelihood of live mosquitoes flying out of the huts, leaving mainly dead ones, 3) difficulties of cleaning the huts when a new insecticide is to be tested, and 4) the generally small size of the experimental huts, which can misrepresent actual local house sizes or airflow dynamics in the local houses. Here, we describe a modified experimental hut design - the Ifakara Experimental Huts - and explain how these huts can be used to more realistically monitor behavioural and physiological responses of wild, free-flying disease-transmitting mosquitoes, including the African malaria vectors of the species complexes Anopheles gambiae and Anopheles funestus, to indoor vector control technologies including ITNs and IRS. Important characteristics of the Ifakara experimental huts include: 1) interception traps fitted onto eave spaces and windows, 2) use of eave baffles (panels that direct mosquito movement) to control exit of live mosquitoes through the eave spaces, 3) use of replaceable wall panels and ceilings, which allow safe insecticide disposal and reuse of the huts to test different insecticides in successive periods, 4) the kit format of the huts, allowing portability, and 5) an improved suite of entomological procedures to maximise data quality. PMID:22347415

  16. Submillimeter Measurements of Photolysis Products in Interstellar Ice Analogs: A New Experimental Technique

    NASA Technical Reports Server (NTRS)

    Milam, Stefanie N.; Weaver, Susanna Widicus

    2012-01-01

    Over 150 molecular species have been confirmed in space, primarily by their rotational spectra at millimeter/submillimeter wavelengths, which yield an unambiguous identification. Many of the known interstellar organic molecules cannot be explained by gas-phase chemistry. It is now presumed that they are produced by surface reactions of the simple ices and/or grains observed and released into the gas phase by sublimation, sputtering, etc. Additionally, the chemical complexity found in meteorites and samples returned from comets far surpasses that of the remote detections for the interstellar medium (ISM), comets, and planetary atmospheres. Laboratory simulations of interstellar/cometary ices have found, from the analysis of the remnant residue of the warmed laboratory sample, that such molecules are readily formed; however, it has yet to be determined if they are formed during the warm phase or within the ice during processing. Most analysis of the ice during processing reveals molecular changes, though the exact quantities and species formed are highly uncertain with current techniques due to overwhelming features of simple ices. Remote sensing with high resolution spectroscopy is currently the only method to detect trace species in the ISM and the primary method for comets and icy bodies in the Solar System due to limitations of sample return. We have recently designed an experiment to simulate interstellar/cometary/planetary ices and detect trace species employing the same techniques used for remote observations. Preliminary results will be presented.

  17. Experimental, computational, and analytical techniques for diagnosing breast cancer using optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Palmer, Gregory M.

    This dissertation presents the results of an investigation into experimental, computational, and analytical methodologies for diagnosing breast cancer using fluorescence and diffuse reflectance spectroscopy. First, the optimal experimental methodology for tissue biopsy studies was determined using an animal study. It was found that the use of freshly excised tissue samples preserved the original spectral line shape and magnitude of the fluorescence and diffuse reflectance. Having established the optimal experimental methodology, a clinical study investigating the use of fluorescence and diffuse reflectance spectroscopy for the diagnosis of breast cancer was undertaken. In addition, Monte Carlo-based models of diffuse reflectance and fluorescence were developed and validated to interpret these data. These models enable the extraction of physically meaningful information from the measured spectra, including absorber concentrations, and scattering and intrinsic fluorescence properties. The model was applied to the measured spectra, and using a support vector machine classification algorithm based on physical features extracted from the diffuse reflectance spectra, it was found that breast cancer could be diagnosed with a cross-validated sensitivity and specificity of 82% and 92%, respectively, which are substantially better than that obtained using a conventional, empirical algorithm. It was found that malignant tissues had lower hemoglobin oxygen saturation, were more scattering, and had lower beta-carotene concentration, relative to the non-malignant tissues. It was also found that the fluorescence model could successfully extract the intrinsic fluorescence line shape from tissue samples. One limitation of the previous study is that a priori knowledge of the tissue's absorbers and scatterers is required. To address this limitation, and to improve upon the method with which fiber optic probes are designed, an alternate approach was developed. This method used a
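The cross-validated sensitivity and specificity quoted above reduce, in each validation fold, to counts from a confusion matrix. A minimal sketch of that final scoring step (an illustrative helper, not the dissertation's code):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
    with 1 labeling malignant and 0 labeling non-malignant samples."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

In a cross-validated setting these counts are accumulated over the held-out folds before the two ratios are formed.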

  18. Computational Design of Creep-Resistant Alloys and Experimental Validation in Ferritic Superalloys

    SciTech Connect

    Liaw, Peter

    2014-12-31

    A new class of ferritic superalloys containing B2-type zones inside parent L21-type precipitates in a disordered solid-solution matrix, also known as hierarchical-precipitate-strengthened ferritic alloys (HPSFAs), has been developed for high-temperature structural applications in fossil-energy power plants. These alloys were designed by adding Ti to a previously-studied NiAl-strengthened ferritic alloy (denoted as FBB8 in this study). In the present research, systematic investigations, including advanced experimental techniques, first-principles calculations, and numerical simulations, have been integrated and conducted to characterize the complex microstructures and excellent creep resistance of HPSFAs. The experimental techniques include transmission-electron microscopy, scanning-transmission-electron microscopy, neutron diffraction, and atom-probe tomography, which provide detailed microstructural information on HPSFAs. Systematic tension/compression creep tests revealed that HPSFAs exhibit superior creep resistance compared with the FBB8 and conventional ferritic steels (i.e., the creep rates of HPSFAs are about 4 orders of magnitude slower than those of the FBB8 and conventional ferritic steels). First-principles calculations include interfacial free energies, anti-phase boundary (APB) free energies, elastic constants, and impurity diffusivities in Fe. Combined with kinetic Monte-Carlo simulations of interdiffusion coefficients and the integration of computational thermodynamics and kinetics, these calculations provide great understanding of the thermodynamic and mechanical properties of HPSFAs. In addition to the systematic experimental approach and first-principles calculations, a series of numerical tools and algorithms, which assist in the optimization of creep properties of ferritic superalloys, are utilized and developed. These numerical simulation results are compared with the available experimental data and previous first

  19. Design and testing of 15 kV to 35 kV porcelain terminations using new connection techniques

    SciTech Connect

    Fox, J.W.; Hill, R.J.

    1982-07-01

    New techniques for conductor connection in underground cable terminations were investigated in a design of a new distribution class porcelain cable termination. Connections to the conductor were accomplished using set screws, building upon previous designs with additions to assure a conservative design approach. The connector design was tested according to applicable standards for load cycling of connections, and the result appears capable of conservative performance in the operating environment.

  20. Experimental design and simulation of a metal hydride hydrogen storage system

    NASA Astrophysics Data System (ADS)

    Gadre, Sarang Ajit

    internal geometric design point of view. At the same time, the statistical design of experiments approach was shown to be a very efficient technique for identifying the most important process parameters that affect the performance of the metal hydride hydrogen storage unit with minimal experimental effort.

  1. Patient reactions to personalized medicine vignettes: An experimental design

    PubMed Central

    Butrick, Morgan; Roter, Debra; Kaphingst, Kimberly; Erby, Lori H.; Haywood, Carlton; Beach, Mary Catherine; Levy, Howard P.

    2011-01-01

    Purpose Translational investigation on personalized medicine is in its infancy. Exploratory studies reveal attitudinal barriers to “race-based medicine” and cautious optimism regarding genetically personalized medicine. This study describes patient responses to hypothetical conventional, race-based, or genetically personalized medicine prescriptions. Methods Three hundred eighty-seven participants (mean age = 47 years; 46% white) recruited from a Baltimore outpatient center were randomized to this vignette-based experimental study. They were asked to imagine a doctor diagnosing a condition and prescribing them one of three medications. The outcomes are emotional response to the vignette, belief in vignette medication efficacy, experience of respect, trust in the vignette physician, and adherence intention. Results Race-based medicine vignettes were appraised more negatively than conventional vignettes across the board (Cohen’s d = −0.51, −0.57, −0.64; P < 0.001). Participants rated genetically personalized medicine comparably with conventional medicine (Cohen’s d = −0.14, −0.15, −0.17; P = 0.47), with the exception of reduced adherence intention to genetically personalized medicine (Cohen’s d = −0.38, −0.41, −0.44; P = 0.009). This relative reluctance to take genetically personalized medicine was pronounced for racial minorities (Cohen’s d = −0.38, −0.31, −0.25; P = 0.02) and was related to trust in the vignette physician (change in R2 = 0.23, P < 0.001). Conclusions This study demonstrates a relative reluctance to embrace personalized medicine technology, especially among racial minorities, and highlights enhancement of adherence through improved doctor-patient relationships. PMID:21270639
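The effect sizes above are standardized mean differences. For reference, Cohen's d divides the difference between two group means by the pooled standard deviation; a minimal sketch of the standard formula (not the study's analysis code):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: standardized mean difference using the pooled SD
    (sample variances weighted by their degrees of freedom)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    s_pooled = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / s_pooled
```

By the usual convention, |d| around 0.2, 0.5, and 0.8 is read as a small, medium, and large effect, which puts the race-based-vignette contrasts above in the medium range.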

  2. Design of a digital voice data compression technique for orbiter voice channels

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Candidate techniques were investigated for digital voice compression to a transmission rate of 8 kbps. Good voice quality, speaker recognition, and robustness in the presence of error bursts were considered. The technique of delayed-decision adaptive predictive coding is described and compared with conventional adaptive predictive coding. Results include a set of experimental simulations recorded on analog tape. The two FM broadcast segments produced show the delayed-decision technique to be virtually undegraded or only minimally degraded at 0.001 and 0.01 Viterbi decoder bit error rates. Preliminary estimates of the hardware complexity of this technique indicate potential for implementation in space shuttle orbiters.
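Delayed-decision adaptive predictive coding itself involves tree-searched quantization of the residual. As a much simpler illustration of the predictive-coding idea it builds on, the sketch below runs an LMS-adaptive linear predictor and returns the low-energy residual stream that a coder would quantize and transmit (an assumption-laden toy, not the orbiter design; function names and parameters are illustrative):

```python
import math

def lms_residuals(signal, mu=0.05, order=4):
    """Predict each sample from the previous `order` samples with an
    LMS-adapted linear filter and return the prediction residuals."""
    w = [0.0] * order
    residuals = []
    for n in range(len(signal)):
        past = [signal[n - k - 1] if n - k - 1 >= 0 else 0.0 for k in range(order)]
        pred = sum(wk * xk for wk, xk in zip(w, past))
        e = signal[n] - pred                       # residual to be coded
        residuals.append(e)
        w = [wk + 2 * mu * e * xk for wk, xk in zip(w, past)]  # LMS update
    return residuals

# Demo: a pure tone is highly predictable, so after adaptation the
# residual carries far less energy than the raw signal (coding gain).
tone = [math.sin(0.2 * n) for n in range(400)]
res = lms_residuals(tone)
```

The coding gain comes from quantizing the small residual instead of the full-amplitude waveform; the delayed-decision variant defers the quantizer choice over several samples to pick the jointly best residual path.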

  3. Development and Experimental Verification of Key Techniques to Validate Remote Sensing Products

    NASA Astrophysics Data System (ADS)

    Li, X.; Wang, S. G.; Ge, Y.; Jin, R.; Liu, S. M.; Ma, M. G.; Shi, W. Z.; Li, R. X.; Liu, Q. H.

    2013-05-01

    Validation of remote sensing land products is a fundamental issue for Earth observation. The Ministry of Science and Technology of the People's Republic of China (MOST) launched a high-tech R&D program named 'Development and experimental verification of key techniques to validate remote sensing products' in 2011. This paper introduces the background, scientific objectives, and research contents of the project, along with results already achieved. The objectives of the project are (1) to build a technical specification for the validation of remote sensing products; (2) to evaluate its performance through a comprehensive satellite-aircraft-ground remote sensing experiment, revising the specification until it meets the predefined requirements; and (3) to establish a validation network for remote sensing products in China. In summer 2012, with support of the Heihe Watershed Allied Telemetry Experimental Research (HiWATER), field observations were successfully conducted in the central stream of the Heihe River Basin, a typical inland river basin in northwest China. A flux observation matrix composed of eddy covariance (EC) systems and large aperture scintillometers (LAS), together with a densely distributed eco-hydrological wireless sensor network, has been established to capture multi-scale heterogeneities of evapotranspiration (ET), leaf area index (LAI), soil moisture, and temperature. Airborne missions have been flown with payloads of an imaging spectrometer, light detection and ranging (LiDAR), an infrared thermal imager, and a microwave radiometer, providing aerial remote sensing observations at various scales. High-resolution satellite images, e.g. PROBA-CHRIS and TerraSAR-X, have been collected and pre-processed. Simultaneously, ground measurements have been conducted over specific sampling plots and transects to obtain validation data sets. With this setup, complex problems are addressed, e.g. heterogeneity, scaling, uncertainty, and eventually to

  4. Experimental Studies of Active and Passive Flow Control Techniques Applied in a Twin Air-Intake

    PubMed Central

    Joshi, Shrey; Jindal, Aman; Maurya, Shivam P.; Jain, Anuj

    2013-01-01

    The flow control in twin air-intakes is necessary to improve the performance characteristics, since the flow traveling through curved and diffused paths becomes complex, especially after merging. The paper presents a comparison between two well-known techniques of flow control: active and passive. It presents an effective design of a vortex generator jet (VGJ) and a vane-type passive vortex generator (VG) and uses them in a twin air-intake duct in different combinations to establish their effectiveness in improving the performance characteristics. The VGJ is designed to inject flow from the side wall at pitch angles of 90 degrees and 45 degrees. Corotating (parallel) and counter-rotating (V-shape) configurations of the vane-type VG are used. It is observed that the VGJ has the potential to change the flow pattern drastically compared to the vane-type VG. When the VGJ is directed perpendicular to the side walls of the air-intake at a pitch angle of 90 degrees, static pressure recovery is increased by 7.8% and total pressure loss is reduced by 40.7%, the best among all VGJ cases tested. For the larger VG attached to the side walls of the air-intake, static pressure recovery is increased by 5.3%, but total pressure loss is reduced by only 4.5% compared to all other VG cases. PMID:23935422
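The percentage figures above refer to the standard duct performance coefficients, which are conventionally normalized by the inlet dynamic pressure. A minimal sketch of those definitions (variable names and the sample numbers are assumptions, not the paper's data):

```python
def duct_performance(p_s_in, p_s_out, p0_in, p0_out, rho, v_in):
    """Static pressure recovery and total pressure loss coefficients
    for a diffusing duct, normalized by inlet dynamic pressure."""
    q_in = 0.5 * rho * v_in ** 2            # inlet dynamic pressure [Pa]
    c_pr = (p_s_out - p_s_in) / q_in        # static pressure recovery coeff.
    c_loss = (p0_in - p0_out) / q_in        # total pressure loss coeff.
    return c_pr, c_loss

# Illustrative numbers: air at rho = 1.2 kg/m^3 entering at 50 m/s
c_pr, c_loss = duct_performance(101000.0, 101750.0, 102500.0, 102350.0, 1.2, 50.0)
```

A flow-control device that raises c_pr while lowering c_loss, as the VGJ does here, improves both the diffusion and the efficiency of the intake.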

  5. Experimental studies of active and passive flow control techniques applied in a twin air-intake.

    PubMed

    Paul, Akshoy Ranjan; Joshi, Shrey; Jindal, Aman; Maurya, Shivam P; Jain, Anuj

    2013-01-01

    The flow control in twin air-intakes is necessary to improve the performance characteristics, since the flow traveling through curved and diffused paths becomes complex, especially after merging. The paper presents a comparison between two well-known techniques of flow control: active and passive. It presents an effective design of a vortex generator jet (VGJ) and a vane-type passive vortex generator (VG) and uses them in a twin air-intake duct in different combinations to establish their effectiveness in improving the performance characteristics. The VGJ is designed to inject flow from the side wall at pitch angles of 90 degrees and 45 degrees. Corotating (parallel) and counter-rotating (V-shape) configurations of the vane-type VG are used. It is observed that the VGJ has the potential to change the flow pattern drastically compared to the vane-type VG. When the VGJ is directed perpendicular to the side walls of the air-intake at a pitch angle of 90 degrees, static pressure recovery is increased by 7.8% and total pressure loss is reduced by 40.7%, the best among all VGJ cases tested. For the larger VG attached to the side walls of the air-intake, static pressure recovery is increased by 5.3%, but total pressure loss is reduced by only 4.5% compared to all other VG cases.

  6. Estimating intervention effects across different types of single-subject experimental designs: empirical illustration.

    PubMed

    Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Onghena, Patrick; Heyvaert, Mieke; Beretvas, S Natasha; Van den Noortgate, Wim

    2015-03-01

    The purpose of this study is to illustrate the multilevel meta-analysis of results from single-subject experimental designs of different types, including AB phase designs, multiple-baseline designs, ABAB reversal designs, and alternating treatment designs. Current methodological work on the meta-analysis of single-subject experimental designs often focuses on combining simple AB phase designs or multiple-baseline designs. We discuss the estimation of the average intervention effect estimate across different types of single-subject experimental designs using several multilevel meta-analytic models. We illustrate the different models using a reanalysis of a meta-analysis of single-subject experimental designs (Heyvaert, Saenen, Maes, & Onghena, in press). The intervention effect estimates using univariate 3-level models differ from those obtained using a multivariate 3-level model that takes the dependence between effect sizes into account. Because different results are obtained and the multivariate model has multiple advantages, including more information and smaller standard errors, we recommend researchers to use the multivariate multilevel model to meta-analyze studies that utilize different single-subject designs.
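The multivariate multilevel models recommended above require specialized software. As a far simpler point of comparison, classical univariate random-effects pooling of per-study effect sizes (the DerSimonian-Laird estimator) can be sketched as follows; this is a stand-in illustration of effect-size pooling, not the authors' multilevel model:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling: estimate between-study variance tau^2
    via the method of moments, then inverse-variance weight."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    y_fe = sum(wi * yi for wi, yi in zip(w, effects)) / sw   # fixed-effect mean
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)                            # heterogeneity
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2
```

The multilevel models in the paper generalize this idea by nesting effect sizes within cases and cases within studies, and the multivariate variant additionally models the dependence between effect sizes, which this univariate sketch ignores.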

  7. Visions of visualization aids: Design philosophy and experimental results

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.

    1990-01-01

    Aids for the visualization of high-dimensional scientific or other data must be designed. Simply casting multidimensional data into a two- or three-dimensional spatial metaphor does not guarantee that the presentation will provide insight or parsimonious description of the phenomena underlying the data. Indeed, the communication of the essential meaning of some multidimensional data may be obscured by presentation in a spatially distributed format. Useful visualization is generally based on pre-existing theoretical beliefs concerning the underlying phenomena which guide selection and formatting of the plotted variables. Two examples from chaotic dynamics are used to illustrate how a visualization may be an aid to insight. Two examples of displays to aid spatial maneuvering are described. The first, a perspective format for a commercial air traffic display, illustrates how geometric distortion may be introduced to insure that an operator can understand a depicted three-dimensional situation. The second, a display for planning small spacecraft maneuvers, illustrates how the complex counterintuitive character of orbital maneuvering may be made more tractable by removing higher-order nonlinear control dynamics, and allowing independent satisfaction of velocity and plume impingement constraints on orbital changes.

  8. Experimental techniques for characterizing the thermo-electro-mechanical shakedown response of SMA wires and tubes

    NASA Astrophysics Data System (ADS)

    Churchill, Christopher B.

    Shape Memory Alloys (SMAs) are a unique and valuable group of active materials. NiTi, the most popular SMA, has a power density orders of magnitude greater than any other known material, making it valuable in the medical and transportation industries where weight and space are at a premium. In the nearly half-century since its discovery, the adoption of NiTi has been slowed primarily by the engineering difficulties associated with its use: strong thermal coupling, material level instabilities, and rapid shakedown of material properties during cycling. Material properties change drastically with minute changes in alloy composition, so it is common to require a variety of experiments to fully characterize a new SMA material, all of which must be performed and interpreted with specialized techniques. This thesis collects many of these techniques into a series of characterization experiments, documenting several new phenomena in the process. First, three different alloys of NiTi wire are characterized through differential scanning calorimetry, isothermal tension, and constant load thermal cycling experiments. New techniques are presented for ER measurement and temperature control of SMA wires and temperature measurement of SMA tubes. It is shown that the shakedown of material properties with thermal cycling is not only dependent on the applied load and number of cycles, but has a large association with the direction of phase transformation. Several of these techniques are then applied to a systematic characterization of NiTi tubes in tension, compression, and bending. Particular attention is given to the nucleation and propagation of transformation fronts in tensile specimens. Compression experiments show dramatic asymmetry in the uniaxial response, with compression characterized by a lower transformation strain, higher transformation stress, and uniform transformations (no fronts). A very simple SMA actuator model is introduced. After identifying the relevant non

  9. Development and Comparison of Techniques for Generating Permeability Maps using Independent Experimental Approaches

    NASA Astrophysics Data System (ADS)

    Hingerl, Ferdinand; Romanenko, Konstantin; Pini, Ronny; Balcom, Bruce; Benson, Sally

    2014-05-01

    We have developed and evaluated methods for creating voxel-based 3D permeability maps of a heterogeneous sandstone sample using independent experimental data from single-phase flow (Magnetic Resonance Imaging, MRI) and two-phase flow (X-ray Computed Tomography, CT) measurements. Fluid velocities computed from the generated permeability maps using computational fluid dynamics simulations fit measured velocities very well and significantly outperform empirical porosity-permeability relations, such as the Kozeny-Carman equation. Acquiring meso-scale images from porous rocks using MRI had until recently been a great challenge, due to short spin relaxation times and large field gradients within the sample. The combination of the 13-interval Alternating-Pulsed-Gradient Stimulated-Echo (APGSTE) scheme with three-dimensional Single Point Ramped Imaging with T1 Enhancement (SPRITE) - a technique recently developed at the UNB MRI Center - can overcome these challenges and enables quantitative three-dimensional maps of porosities and fluid velocities. Using porosity and (single-phase) velocity maps from MRI and (multi-phase) saturation maps from CT measurements, we employed three different techniques to obtain permeability maps. In the first approach, we applied the Kozeny-Carman relationship to porosities measured using MRI. In the second approach, we computed permeabilities using a J-Leverett scaling method, which is based on saturation maps obtained from N2-H2O multi-phase experiments. The third set of permeabilities was generated using a new inverse iterative-updating technique, which is based on porosities and measured velocities obtained in single-phase flow experiments. The resulting three permeability maps then provided input for computational fluid dynamics simulations - employing the Stanford CFD code AD-GPRS - to generate velocity maps, which were compared to velocity maps measured by MRI. The J-Leverett scaling method and the iterative-updating method
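The Kozeny-Carman relation used in the first approach maps porosity to permeability. A minimal sketch using the common grain-diameter form, k = d_p^2 * phi^3 / (180 * (1 - phi)^2), where the prefactor 180 is a conventional choice (the study's exact form and calibration may differ):

```python
def kozeny_carman(porosity, grain_diameter):
    """Kozeny-Carman permeability estimate in m^2 for a packed bed,
    given porosity (0..1) and a characteristic grain diameter in m."""
    phi = porosity
    return (grain_diameter ** 2) * phi ** 3 / (180.0 * (1.0 - phi) ** 2)

# Illustrative sandstone-like values: phi = 0.2, d_p = 100 micrometres
k = kozeny_carman(0.2, 1e-4)
```

Because the relation depends only on porosity for a fixed grain size, it cannot capture pore-scale connectivity, which is one reason the velocity-constrained iterative-updating maps in the abstract outperform it.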

  10. Strong Lens Time Delay Challenge. I. Experimental Design

    NASA Astrophysics Data System (ADS)

    Dobler, Gregory; Fassnacht, Christopher D.; Treu, Tommaso; Marshall, Phil; Liao, Kai; Hojjati, Alireza; Linder, Eric; Rumbaugh, Nicholas

    2015-02-01

    The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters. The number of lenses with measured time delays is growing rapidly; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ~103 strongly lensed quasars. In an effort to assess the present capabilities of the community, to accurately measure the time delays, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we have invited the community to take part in a "Time Delay Challenge" (TDC). The challenge is organized as a set of "ladders," each containing a group of simulated data sets to be analyzed blindly by participating teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs' data sets increasing in complexity and realism. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of data sets, and is designed to be used as a practice set by the participating teams. The (non-mandatory) deadline for completion of TDC0 was the TDC1 launch date, 2013 December 1. The TDC1 deadline was 2014 July 1. Here we give an overview of the challenge, we introduce a set of metrics that will be used to quantify the goodness of fit, efficiency, precision, and accuracy of the algorithms, and we present the results of TDC0. Thirteen teams participated in TDC0 using 47 different methods. Seven of those teams qualified for TDC1, which is described in the companion paper.
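The four metric families named above (goodness of fit, efficiency, precision, accuracy) can be sketched as simple per-submission averages over the successfully measured delays. The exact normalizations below are assumptions for illustration, not the TDC definitions verbatim:

```python
def tdc_style_metrics(true_dt, est_dt, est_sigma, n_total):
    """Plausible challenge-style summaries: efficiency f, reduced
    chi-square against truth, mean fractional uncertainty (precision),
    and mean fractional bias (accuracy)."""
    n = len(true_dt)
    f = n / n_total  # efficiency: fraction of light curves with a submitted delay
    chi2 = sum(((e - t) / s) ** 2
               for t, e, s in zip(true_dt, est_dt, est_sigma)) / n
    precision = sum(s / abs(t) for t, s in zip(true_dt, est_sigma)) / n
    accuracy = sum((e - t) / t for t, e in zip(true_dt, est_dt)) / n
    return f, chi2, precision, accuracy
```

A well-calibrated method would show chi2 near 1 (claimed uncertainties match the errors), accuracy near 0 (no systematic bias), and small precision at high efficiency.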

  11. STRONG LENS TIME DELAY CHALLENGE. I. EXPERIMENTAL DESIGN

    SciTech Connect

    Dobler, Gregory; Fassnacht, Christopher D.; Rumbaugh, Nicholas; Treu, Tommaso; Liao, Kai; Marshall, Phil; Hojjati, Alireza; Linder, Eric

    2015-02-01

    The time delays between point-like images in gravitational lens systems can be used to measure cosmological parameters. The number of lenses with measured time delays is growing rapidly; the upcoming Large Synoptic Survey Telescope (LSST) will monitor ∼10{sup 3} strongly lensed quasars. In an effort to assess the present capabilities of the community, to accurately measure the time delays, and to provide input to dedicated monitoring campaigns and future LSST cosmology feasibility studies, we have invited the community to take part in a ''Time Delay Challenge'' (TDC). The challenge is organized as a set of ''ladders'', each containing a group of simulated data sets to be analyzed blindly by participating teams. Each rung on a ladder consists of a set of realistic mock observed lensed quasar light curves, with the rungs' data sets increasing in complexity and realism. The initial challenge described here has two ladders, TDC0 and TDC1. TDC0 has a small number of data sets, and is designed to be used as a practice set by the participating teams. The (non-mandatory) deadline for completion of TDC0 was the TDC1 launch date, 2013 December 1. The TDC1 deadline was 2014 July 1. Here we give an overview of the challenge, we introduce a set of metrics that will be used to quantify the goodness of fit, efficiency, precision, and accuracy of the algorithms, and we present the results of TDC0. Thirteen teams participated in TDC0 using 47 different methods. Seven of those teams qualified for TDC1, which is described in the companion paper.

  12. Tensile-shear correlations obtained from shear punch test technique using a modified experimental approach

    NASA Astrophysics Data System (ADS)

    Karthik, V.; Visweswaran, P.; Vijayraghavan, A.; Kasiviswanathan, K. V.; Raj, Baldev

    2009-09-01

    Shear punch testing has been a very useful technique for evaluating the mechanical properties of irradiated alloys using a very small volume of material. The load-displacement data are influenced by the compliance of the fixture components. This paper describes a modified experimental approach in which the compliances of the punch and die components are eliminated. Analysis of the load-displacement data obtained with the modified setup for various alloys, such as low-carbon steel, SS316, modified 9Cr-1Mo, and 2.25Cr-1Mo, indicates that the shear yield strength evaluated at a 0.2% offset of normalized displacement relates to the tensile YS according to the von Mises yield relation (σys = 1.73 τys). A universal correlation of the type UTS = mτmax, where m is a function of the strain-hardening exponent, is seen to hold for all the materials in this study. The use of analytical models developed for the blanking process is explored for evaluating the strain-hardening exponent from the load-displacement data. This study is directed toward rationalizing the tensile-shear empirical correlations for a more reliable prediction of tensile properties from shear punch tests.
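    The tensile-shear conversion quoted above follows from the von Mises criterion, σys = √3 τys ≈ 1.73 τys. A minimal sketch (the 150 MPa sample value is illustrative, not data from the paper):

```python
import math

VON_MISES_FACTOR = math.sqrt(3.0)   # sigma_ys = sqrt(3) * tau_ys ~= 1.73 * tau_ys

def tensile_yield_from_shear(tau_ys_mpa):
    """Estimate tensile yield strength (MPa) from the shear yield strength
    measured in a shear punch test, per the von Mises relation."""
    return VON_MISES_FACTOR * tau_ys_mpa

# Illustrative: a measured shear yield strength of 150 MPa implies ~260 MPa YS
print(round(tensile_yield_from_shear(150.0)))  # -> 260
```

    The UTS correlation in the abstract has the same shape, UTS = m·τmax, except that m depends on the strain-hardening exponent rather than being a fixed constant.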

  13. Bayesian experimental design of a multichannel interferometer for Wendelstein 7-X

    NASA Astrophysics Data System (ADS)

    Dreier, H.; Dinklage, A.; Fischer, R.; Hirsch, M.; Kornejew, P.

    2008-10-01

    Bayesian experimental design (BED) is a framework for the optimization of diagnostics based on probability theory. In this work it is applied to the design of a multichannel interferometer at the Wendelstein 7-X stellarator experiment. BED offers the possibility of comparing diverse designs quantitatively, which will be shown for beam-line designs resulting from different plasma configurations. The applicability of this method is discussed with respect to its computational effort.
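    The quantitative design comparison that BED enables can be sketched in the linear-Gaussian limit, where the expected information gain of a design has a closed form. The toy "chord geometry" matrices, prior, and noise level below are invented for illustration (the real W7-X design problem is nonlinear and solved numerically):

```python
import numpy as np

def expected_info_gain(F, prior_cov, noise_var):
    """Expected Kullback-Leibler information gain (nats) of a linear-Gaussian
    experiment y = F @ theta + noise: 0.5 * logdet(I + F Sigma F^T / sigma^2)."""
    m = F.shape[0]
    S = np.eye(m) + F @ prior_cov @ F.T / noise_var
    return 0.5 * np.linalg.slogdet(S)[1]

# Toy 'interferometer': theta = density in 5 radial zones; each chord (row)
# integrates a subset of zones. Design B spreads its chords over the radius.
prior = np.eye(5)
design_a = np.array([[1, 1, 0, 0, 0],
                     [1, 1, 0, 0, 0],
                     [1, 1, 1, 0, 0]], dtype=float)   # chords bunched near core
design_b = np.array([[1, 1, 0, 0, 0],
                     [0, 0, 1, 1, 0],
                     [0, 0, 0, 1, 1]], dtype=float)   # chords spread out
print(expected_info_gain(design_b, prior, 0.1) >
      expected_info_gain(design_a, prior, 0.1))        # -> True
```

    Ranking candidate beam-line layouts by such an expected-utility score is the essence of the BED comparison described in the abstract.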

  14. An Experimentally Validated Numerical Modeling Technique for Perforated Plate Heat Exchangers

    PubMed Central

    Nellis, G. F.; Klein, S. A.; Zhu, W.; Gianchandani, Y.

    2010-01-01

    Cryogenic and high-temperature systems often require compact heat exchangers with a high resistance to axial conduction in order to control the heat transfer induced by axial temperature differences. One attractive design for such applications is a perforated plate heat exchanger that utilizes high conductivity perforated plates to provide the stream-to-stream heat transfer and low conductivity spacers to prevent axial conduction between the perforated plates. This paper presents a numerical model of a perforated plate heat exchanger that accounts for axial conduction, external parasitic heat loads, variable fluid and material properties, and conduction to and from the ends of the heat exchanger. The numerical model is validated by experimentally testing several perforated plate heat exchangers that are fabricated using microelectromechanical systems based manufacturing methods. This type of heat exchanger was investigated for potential use in a cryosurgical probe. One of these heat exchangers included perforated plates with integrated platinum resistance thermometers. These plates provided in situ measurements of the internal temperature distribution in addition to the temperature, pressure, and flow rate measured at the inlet and exit ports of the device. The platinum wires were deposited between the fluid passages on the perforated plate and are used to measure the temperature at the interface between the wall material and the flowing fluid. The experimental testing demonstrates the ability of the numerical model to accurately predict both the overall performance and the internal temperature distribution of perforated plate heat exchangers over a range of geometry and operating conditions. The parameters that were varied include the axial length, temperature range, mass flow rate, and working fluid. PMID:20976021
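    As background for the numerical model, the ideal counterflow limit with no axial conduction is given by the standard effectiveness-NTU relation; the model described above quantifies how axial conduction and parasitic loads pull real performance below this ideal. A minimal sketch with illustrative NTU values:

```python
import math

def counterflow_effectiveness(ntu, cr=1.0):
    """Ideal counterflow epsilon-NTU relation (no axial conduction);
    cr = Cmin/Cmax is the capacity-rate ratio."""
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)          # balanced-flow limit
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

# A balanced recuperator with NTU = 5 reaches ~83% effectiveness in the ideal
# limit; axial conduction through the plates degrades this in practice.
print(round(counterflow_effectiveness(5.0), 3))  # -> 0.833
```

    The low-conductivity spacers in the perforated-plate design exist precisely to keep the real exchanger close to this ideal limit.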

  15. Monte Carlo techniques for scattering foil design and dosimetry in total skin electron irradiations.

    PubMed

    Ye, Sung-Joon; Pareek, Prem N; Spencer, Sharon; Duan, Jun; Brezovich, Ivan A

    2005-06-01

    Total skin electron irradiation (TSEI) with single fields requires large electron beams having good dose uniformity, dmax at the skin surface, and low bremsstrahlung contamination. To satisfy these requirements, energy degraders and scattering foils have to be specially designed for the given accelerator and treatment room. We used Monte Carlo (MC) techniques based on EGS4 user codes (BEAM, DOSXYZ, and DOSRZ) as a guide in the beam modifier design of our TSEI system. The dosimetric characteristics at the treatment distance of 382 cm source-to-surface distance (SSD) were verified experimentally using a linear array of 47 ion chambers, a parallel plate chamber, and radiochromic film. By matching MC simulations to standard beam measurements at 100 cm SSD, the parameters of the electron beam incident on the vacuum window were determined. Best match was achieved assuming that electrons were monoenergetic at 6.72 MeV, parallel, and distributed in a circular pattern having a Gaussian radial distribution with full width at half maximum = 0.13 cm. These parameters were then used to simulate our TSEI unit with various scattering foils. Two of the foils were fabricated and experimentally evaluated by measuring off-axis dose uniformity and depth doses. A scattering foil, consisting of a 12 x 12 cm2 aluminum plate of 0.6 cm thickness and placed at isocenter perpendicular to the beam direction, was considered optimal. It produced a beam that was flat within +/-3% up to 60 cm off-axis distance, dropped by not more than 8% at a distance of 90 cm, and had an x-ray contamination of <3%. For stationary beams, MC-computed dmax, Rp, and R50 agreed with measurements within 0.5 mm. The MC-predicted surface dose of the rotating phantom was 41% of the dose rate at dmax of the stationary phantom, whereas our calculations based on a semiempirical formula in the literature yielded a drop to 42%. The MC simulations provided the guideline of beam modifier design for TSEI and estimated the
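    The reconstructed source model (monoenergetic, parallel electrons with a Gaussian radial profile of FWHM = 0.13 cm) can be sampled outside the EGS4 toolchain in a few lines; this is a sketch of the source geometry only, not the BEAM source routine:

```python
import numpy as np

# Beam model from the abstract: monoenergetic 6.72 MeV electrons, parallel,
# circular spot with a Gaussian radial intensity profile, FWHM = 0.13 cm.
FWHM_CM = 0.13
SIGMA_CM = FWHM_CM / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM ~= 2.355 sigma

rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(0.0, SIGMA_CM, n)    # transverse positions; the beam is
y = rng.normal(0.0, SIGMA_CM, n)    # parallel, so all directions are +z

# Sanity check: for a 2D Gaussian, exactly half the particles land within
# a radius of FWHM/2 of the axis.
frac = float(np.mean(np.hypot(x, y) <= FWHM_CM / 2))
print(round(frac, 2))   # ~= 0.5
```

    Matching such source parameters against standard beam measurements at 100 cm SSD, as described above, is what anchors the downstream TSEI simulations.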

  16. Experimental concept and design of DarkLight, a search for a heavy photon

    SciTech Connect

    Cowan, Ray F.

    2013-11-01

    This talk gives an overview of the DarkLight experimental concept: a search for a heavy photon A′ in the 10-90 MeV/c² mass range. After briefly describing the theoretical motivation, the talk focuses on the experimental concept and design. Topics include operation using a half-megawatt, 100 MeV electron beam at the Jefferson Lab FEL, detector design and performance, and expected backgrounds estimated from beam tests and Monte Carlo simulations.

  17. Experimental concept and design of DarkLight, a search for a heavy photon

    SciTech Connect

    Cowan, Ray F.; Collaboration: DarkLight Collaboration

    2013-11-07

    This talk gives an overview of the DarkLight experimental concept: a search for a heavy photon A′ in the 10-90 MeV/c{sup 2} mass range. After briefly describing the theoretical motivation, the talk focuses on the experimental concept and design. Topics include operation using a half-megawatt, 100 MeV electron beam at the Jefferson Lab FEL, detector design and performance, and expected backgrounds estimated from beam tests and Monte Carlo simulations.

  18. Neuroimaging in aphasia treatment research: Issues of experimental design for relating cognitive to neural changes

    PubMed Central

    Rapp, Brenda; Caplan, David; Edwards, Susan; Visch-Brink, Evy; Thompson, Cynthia K.

    2012-01-01

    The design of functional neuroimaging studies investigating the neural changes that support treatment-based recovery of targeted language functions in acquired aphasia faces a number of challenges. In this paper, we discuss these challenges and focus on experimental tasks and experimental designs that can be used to address the challenges, facilitate the interpretation of results and promote integration of findings across studies. PMID:22974976

  19. Scaffolded Instruction Improves Student Understanding of the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    D'Costa, Allison R.; Schlueter, Mark A.

    2013-01-01

    Implementation of a guided-inquiry lab in introductory biology classes, along with scaffolded instruction, improved students' understanding of the scientific method, their ability to design an experiment, and their identification of experimental variables. Pre- and postassessments from experimental versus control sections over three semesters…

  20. Development of Observation Techniques in Reactor Vessel of Experimental Fast Reactor Joyo

    NASA Astrophysics Data System (ADS)

    Takamatsu, Misao; Imaizumi, Kazuyuki; Nagai, Akinori; Sekine, Takashi; Maeda, Yukimoto

    In-Vessel Observation (IVO) techniques for sodium-cooled fast reactors (SFRs) are important for confirming reactor safety and integrity, and several IVO devices for SFRs have been developed. However, in order to establish the reliability of IVO techniques, it was necessary to demonstrate their performance in the actual reactor environment, with its high temperature, high radiation dose and residual sodium. During the investigation of an incident that occurred at Joyo, IVO using a standard video camera (VC) and a radiation-resistant fiberscope (RRF) took place at (1) the top of the subassemblies (S/As) and the in-vessel storage rack (IVS) and (2) the bottom face of the upper core structure (UCS). A simple overhead view of each S/A, taken from 6 m above through the fuel handling and inspection holes, was photographed using a VC for observations of the tops of the S/As and the IVS. About 650 photographs were required to create a composite photograph of the tops of all the S/As and the IVS, and the resolution was estimated to be approximately 1 mm. To observe the bottom face of the UCS, a remote handling device (RHD) equipped with RRFs (approximately 13 m long) was specifically developed for Joyo, with a tip that could be inserted into the 70 mm gap between the top of the S/As and the bottom of the UCS. A total of about 35,000 photographs were needed for the full investigation. Regarding resolution, the sodium flow regulating grid, 0.8 mm thick, could be discriminated. The performance of the IVO equipment in the actual reactor environment was successfully confirmed, and the results provided useful information for the incident investigation. In addition, fundamental findings and the experience gained during this study, which included the design of equipment, operating procedures, resolution, lighting adjustments, photograph composition and the durability of the RRF under radiation exposure, provided valuable insights into further improvements and verifications for IVO techniques to

  1. Design of Optical Systems with Extended Depth of Field: An Educational Approach to Wavefront Coding Techniques

    ERIC Educational Resources Information Center

    Ferran, C.; Bosch, S.; Carnicer, A.

    2012-01-01

    A practical activity designed to introduce wavefront coding techniques as a method to extend the depth of field in optical systems is presented. The activity is suitable for advanced undergraduate students since it combines different topics in optical engineering such as optical system design, aberration theory, Fourier optics, and digital image…

  2. Electro fluido dynamic techniques to design instructive biomaterials for tissue engineering and drug delivery

    NASA Astrophysics Data System (ADS)

    Guarino, Vincenzo; Altobelli, Rosaria; Cirillo, Valentina; Ambrosio, Luigi

    2015-12-01

    A large variety of processes and tools is continuously being investigated to discover new solutions for designing instructive materials with controlled chemical, physical and biological properties for tissue engineering and drug delivery. Among them, electro fluido dynamic techniques (EFDTs) are emerging as an interesting strategy, based on highly flexible and low-cost processes, to revisit older biomaterial manufacturing approaches by utilizing electrostatic forces as the driving force for the fabrication of 3D architectures with controlled physical and chemical functionalities to guide in vitro and in vivo cell activities. By a rational selection of polymer solution properties and process conditions, EFDTs allow the production of fibres and/or particles at the micro- and/or nanometric scale, which may be variously assembled by tailored experimental setups, thus giving the chance to generate a plethora of different 3D devices able to incorporate biopolymers (i.e., proteins, polysaccharides) or active molecules (e.g., drugs) for different applications. Here, we focus on the optimization of basic EFDTs - namely electrospinning, electrospraying and electrodynamic atomization - to develop active platforms (i.e., monocomponent, protein- and drug-loaded scaffolds and µ-scaffolds) made of synthetic (PCL, PLGA) or natural (chitosan, alginate) polymers. In particular, we investigate how to set material and process parameters to impart specific morphological, biochemical or physical cues to trigger all the fundamental cell-biomaterial and cell-cell cross-talking elicited during regenerative processes, in order to reproduce the complex microenvironment of native or pathological tissues.

  3. Electro fluido dynamic techniques to design instructive biomaterials for tissue engineering and drug delivery

    SciTech Connect

    Guarino, Vincenzo Altobelli, Rosaria; Cirillo, Valentina; Ambrosio, Luigi

    2015-12-17

    A large variety of processes and tools is continuously being investigated to discover new solutions for designing instructive materials with controlled chemical, physical and biological properties for tissue engineering and drug delivery. Among them, electro fluido dynamic techniques (EFDTs) are emerging as an interesting strategy, based on highly flexible and low-cost processes, to revisit older biomaterial manufacturing approaches by utilizing electrostatic forces as the driving force for the fabrication of 3D architectures with controlled physical and chemical functionalities to guide in vitro and in vivo cell activities. By a rational selection of polymer solution properties and process conditions, EFDTs allow the production of fibres and/or particles at the micro- and/or nanometric scale, which may be variously assembled by tailored experimental setups, thus giving the chance to generate a plethora of different 3D devices able to incorporate biopolymers (i.e., proteins, polysaccharides) or active molecules (e.g., drugs) for different applications. Here, we focus on the optimization of basic EFDTs - namely electrospinning, electrospraying and electrodynamic atomization - to develop active platforms (i.e., monocomponent, protein- and drug-loaded scaffolds and µ-scaffolds) made of synthetic (PCL, PLGA) or natural (chitosan, alginate) polymers. In particular, we investigate how to set material and process parameters to impart specific morphological, biochemical or physical cues to trigger all the fundamental cell-biomaterial and cell-cell cross-talking elicited during regenerative processes, in order to reproduce the complex microenvironment of native or pathological tissues.

  4. Design studies for the transmission simulator method of experimental dynamic substructuring.

    SciTech Connect

    Mayes, Randall Lee; Arviso, Michael

    2010-05-01

    In recent years, a successful method for generating experimental dynamic substructures has been developed using an instrumented fixture, the transmission simulator. The transmission simulator method solves many of the problems associated with experimental substructuring. These solutions effectively address: (1) rotation and moment estimation at connection points; (2) providing substructure Ritz vectors that adequately span the connection motion space; and (3) adequately addressing multiple and continuous attachment locations. However, the transmission simulator method may fail if the transmission simulator is poorly designed. Four areas of the design addressed here are: (1) designating response sensor locations; (2) designating force input locations; (3) physical design of the transmission simulator; and (4) modal test design. In addition to the transmission simulator design investigations, a review of the theory with an example problem is presented.

  5. Shape and Surface: The challenges and advantages of 3D techniques in innovative fashion, knitwear and product design

    NASA Astrophysics Data System (ADS)

    Bendt, E.

    2016-07-01

    This presentation shows the kinds of problems fashion and textile designers face in 3D knitwear design, especially regarding fashionable flat-knit styles, and how they can use different techniques and processes to generate new types of 3D designs and structures. To create genuinely new things, we have to overcome standard development methods and traditional thinking, and should start to open our minds again to the material itself in order to generate new advanced textile solutions. This paper mainly introduces results of research projects worked out in the master program “Textile Produkte” during lectures in “Innovative Product Design” and “Experimental Knitting”.

  6. Artificial tektites: an experimental technique for capturing the shapes of spinning drops

    NASA Astrophysics Data System (ADS)

    Baldwin, K. A.

    2014-12-01

    Tektites are small stones formed from rapidly cooling drops of molten rock ejected from high-velocity asteroid impacts with the Earth, which freeze into a myriad of shapes during flight. Many splash-form tektites have an elongated or dumb-bell shape owing to their rotation prior to solidification[1]. Here we present a novel method for creating 'artificial tektites' from spinning drops of molten wax, using diamagnetic levitation to suspend the drops[2]. We find that the solid wax models produced this way are the stable equilibrium shapes of a spinning liquid droplet held together by surface tension. In addition to the geophysical interest in tektite formation, the stable equilibrium shapes of liquid drops have implications for many physical phenomena, covering a wide range of length scales, from nuclear physics (e.g. in studies of rapidly rotating atomic nuclei) to astrophysics (e.g. in studies of the shapes of astronomical bodies such as asteroids, rapidly rotating stars and the event horizons of rotating black holes). For liquid drops bound by surface tension, analytical and numerical methods predict a series of stable equilibrium shapes with increasing angular momentum. Slowly spinning drops have an oblate-like shape. With increasing angular momentum these shapes become secularly unstable to a series of triaxial pseudo-ellipsoids that then evolve into a family of two-lobed 'dumb-bell' shapes as the angular momentum is increased still further. Our experimental method allows accurate measurements of the drops to be taken, which are useful for validating numerical models. This method has provided a means for observing tektite formation, and has additionally confirmed experimentally the stable equilibrium shapes of liquid drops, distinct from the equivalent shapes of rotating astronomical bodies. Potentially, this technique could be applied to observe the non-equilibrium dynamic processes that are also important in real tektite formation, involving, e.g. viscoelastic

  7. A smart experimental technique for the optimization of dielectric elastomer actuator (DEA) systems

    NASA Astrophysics Data System (ADS)

    Hodgins, M.; Rizzello, G.; York, A.; Naso, D.; Seelecke, S.

    2015-09-01

    In order to aid in moving dielectric elastomer actuator (DEA) technology from the laboratory into a commercial product, DEA prototypes should be tested against a variety of loading conditions and, eventually, under end-user conditions. An experimental test setup that can seamlessly perform mechanical characterization and loading of the DEA would be a great asset toward this end. Therefore, this work presents the design, control and systematic validation of a benchtop testing station for miniature silicone-based circular DEAs. The versatile benchtop tester is able to characterize and apply programmable loading forces to the DEA while measuring actuator performance. The tester successfully applied mechanical loads to the DEA (including positive, constant and negative stiffness loads), simulating biasing systems via an electromagnetic linear motor operating in closed loop with a force/mechanical-impedance control scheme. The tester expedites mechanical testing of the DEA by eliminating the need to build intricate pre-load mechanisms or use multiple testing jigs for characterizing the DEA response. The results show that proper mechanical loading of the DEA increases the overall electromechanical sensitivity of the system and thereby the actuator output. This approach to characterizing and applying variable loading forces to DEAs in a single test system will enable faster realization of higher-performance actuators.

  8. Simulation and Prototype Design of Variable Step Angle Techniques Based Asteroid Deflection for Future Planetary Mission

    NASA Astrophysics Data System (ADS)

    Sathiyavel, C.

    2016-07-01

    Asteroids are minor planets, especially those of the inner Solar System. The most desirable asteroids crossing the geosynchronous orbit are the carbonaceous C-type asteroids, which are deemed by the astronomy community to have a planetary protection categorization of unrestricted Earth return. The mass of a near-Earth asteroid (assumed spherical) is a function of its diameter, which varies from 2 m to 10 m, and its density, which ranges from 1.9 g/cm3 to 3.8 g/cm3. For example, a 6.5-m diameter asteroid with a density of 2.8 g/cm3 has a mass of order 400,000 kg. If such an asteroid were to fall on Earth, great destruction would result when the inclination angles of the Earth and the asteroid coincide. The proposed work addresses how this danger could be averted in the near future for an asteroid of the above mass. The present work is the simulation and prototype design of a variable-step-angle-technique-based asteroid deflection for a future planetary mission. The proposed method is compared with a previous method and is useful for directing the ion velocity at the asteroid surface in several directions in the static position of the asteroid deviation mission (ADM). The deflection angle is varied from α1 to α2 with the help of the variable-step-angle technique, which consists of a stepper motor with an attached ion propulsion module system. The VASAT module is located at the top edge of the three-axis-stabilized platform in the ADM. The three-axis stabilization method includes devices such as a gyroscope sensor, an Arduino microcontroller system and ion propulsion. The Arduino microcontroller system determines the orientation from the gyroscope sensor and then uses the ion propulsion modules to control the required motion, i.e., the pitch, yaw and roll attitude of the ADM. The exhaust thrust value is 1500 mN and the velocity is 10,000 m/s [from simulation results; the experimental output is smaller because low-quality components were used in the research lab]. The propulsion technique is also used in the static position of the ADM mission [both
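    The mass figure quoted in the abstract can be checked directly from the sphere volume and the unit conversion 1 g/cm³ = 1000 kg/m³:

```python
import math

def asteroid_mass_kg(diameter_m, density_g_cm3):
    """Mass of a spherical asteroid from diameter (m) and density (g/cm^3)."""
    radius_m = diameter_m / 2.0
    volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    return volume_m3 * density_g_cm3 * 1000.0    # 1 g/cm^3 = 1000 kg/m^3

# The abstract's example: 6.5 m diameter at 2.8 g/cm^3 -> ~4e5 kg
print(f"{asteroid_mass_kg(6.5, 2.8):.0f} kg")   # -> 402621 kg
```

    This confirms the order-400,000 kg figure used to motivate the deflection mission.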

  9. Enhancements of Tow-Steering Design Techniques: Design of Rectangular Panel Under Combined Loads

    NASA Technical Reports Server (NTRS)

    Tatting, Brian F.; Setoodeh, Shahriar; Gurdal, Zafer

    2005-01-01

    An extension to existing design tools that utilize tow-steering is presented which is used to investigate the use of elastic tailoring for a flat panel with a central hole under combined loads of compression and shear. The elastic tailoring is characterized by tow-steering within individual lamina as well as a novel approach based on selective reinforcement, which attempts to minimize compliance through the use of Cellular Automata design concepts. The selective reinforcement designs lack any consideration of manufacturing constraints, so a new tow-steered path definition was developed to translate the prototype selective reinforcement designs into manufacturable plies. The minimum weight design of a flat panel under combined loading was based on a model provided by NASA-Langley personnel and analyzed by STAGS within the OLGA design environment. Baseline designs using traditional straight fiber plies were generated, as well as tow-steered designs which incorporated parallel, tow-drop, and overlap plies within the laminate. These results indicated that the overlap method provided the best improvement with regards to weight and performance as compared to traditional constant stiffness monocoque panels, though the laminates did not measure up to similar designs from the literature using sandwich and isogrid constructions. Further design studies were conducted using various numbers of the selective reinforcement plies at the core and outer surface of the laminate. None of these configurations exhibited notable advantages with regard to weight or buckling performance. This was due to the fact that the minimization of the compliance tended to direct the major stresses toward the center of the panel, which decreased the ability of the structure to withstand loads leading to instability.

  10. Determination of calibration constants for the hole-drilling residual stress measurement technique applied to orthotropic composites. II - Experimental evaluations

    NASA Technical Reports Server (NTRS)

    Prasad, C. B.; Prabhakaran, R.; Tompkins, S.

    1987-01-01

    The first step in the extension of the semidestructive hole-drilling technique for residual stress measurement to orthotropic composite materials is the determination of the three calibration constants. Attention is presently given to an experimental determination of these calibration constants for a highly orthotropic, unidirectionally-reinforced graphite fiber-reinforced polyimide composite. A comparison of the measured values with theoretically obtained ones shows agreement to be good, in view of the many possible sources of experimental variation.

  11. An experimental evaluation of error seeding as a program validation technique

    NASA Technical Reports Server (NTRS)

    Knight, J. C.; Ammann, P. E.

    1985-01-01

    A previously reported experiment in error seeding as a program validation technique is summarized. The experiment was designed to test the validity of three assumptions on which the alleged effectiveness of error seeding is based. Errors were seeded into 17 functionally identical but independently programmed Pascal programs in such a way as to produce 408 programs, each with one seeded error. Using mean time to failure as a metric, results indicated that it is possible to generate seeded errors that are arbitrarily but not equally difficult to locate. Examination of indigenous errors demonstrated that these are also arbitrarily difficult to locate. These two results support the assumption that seeded and indigenous errors are approximately equally difficult to locate. However, the assumption that, for each type of error, all errors are equally difficult to locate was not borne out. Finally, since a seeded error occasionally corrected an indigenous error, the assumption that errors do not interfere with each other was proven wrong. Error seeding can be made useful by taking these results into account in modifying the underlying model.
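    The assumptions under test are those behind classical seeding estimators of residual error content, such as the capture-recapture form below (often attributed to Mills); the estimator and the numbers are the standard textbook illustration, not data from the experiment:

```python
def seeding_estimate(seeded_total, seeded_found, indigenous_found):
    """Capture-recapture estimate of total indigenous errors, assuming
    seeded and indigenous errors are equally easy to find -- one of the
    assumptions the experiment above tests."""
    if seeded_found == 0:
        raise ValueError("no seeded errors found; estimate is undefined")
    return indigenous_found * seeded_total / seeded_found

# Illustrative: 20 errors seeded, 10 recovered, 15 indigenous errors found
# -> estimated 30 indigenous errors in total, i.e. ~15 still latent
print(seeding_estimate(20, 10, 15))  # -> 30.0
```

    The experiment's finding that errors are arbitrarily (but not equally) difficult to locate, and that seeded errors can interfere with indigenous ones, is exactly what limits the naive use of this estimator.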

  12. Experimental Study on Rebar Corrosion Using the Galvanic Sensor Combined with the Electronic Resistance Technique

    PubMed Central

    Xu, Yunze; Li, Kaiqiang; Liu, Liang; Yang, Lujia; Wang, Xiaona; Huang, Yi

    2016-01-01

    In this paper, a new kind of carbon steel (CS) and stainless steel (SS) galvanic sensor system was developed for the study of rebar corrosion in different pore solution conditions. Through the special design of the CS and SS electronic coupons, the electronic resistance (ER) method and the zero resistance ammeter (ZRA) technique were used simultaneously for the measurement of both the galvanic current and the corrosion depth. The corrosion processes in different solution conditions were also studied by linear polarization resistance (LPR) and measurements of polarization curves. The test results show that the galvanic current noise can provide detailed information about the corrosion processes. When localized corrosion occurs, the corrosion rate measured by the ER method is lower than the real corrosion rate, whereas the value measured by the LPR method is higher than the real corrosion rate. The galvanic current and the corrosion current measured by the LPR method show a linear correlation in chloride-containing saturated Ca(OH)2 solution. The relationship between the corrosion current differences measured by the CS electronic coupons and the galvanic current between the CS and SS electronic coupons can also be used to evaluate localized corrosion in reinforced concrete. PMID:27618054
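    The ER method infers metal loss from the resistance increase of a corroding element; for a strip of uniform cross-section, resistance scales inversely with thickness. The sketch below uses this uniform-loss assumption, which is exactly what breaks down under localized corrosion; the coupon thickness and resistances are illustrative values, not measurements from the paper:

```python
def er_corrosion_depth_um(t0_um, r0_ohm, r_ohm):
    """Uniform metal loss inferred by the ER method: for a strip element
    R = rho * L / (w * t), thickness scales as 1/R, so the depth of loss
    is d = t0 * (1 - R0 / R)."""
    return t0_um * (1.0 - r0_ohm / r_ohm)

# Illustrative: a 100-um-thick coupon whose resistance rose 5% from its
# initial value has lost ~4.8 um of thickness -- if the loss is uniform.
print(round(er_corrosion_depth_um(100.0, 1.00, 1.05), 1))   # -> 4.8
```

    When pitting concentrates the attack in a small area, the average-thickness value returned here understates the true local penetration, consistent with the ER underestimate reported above.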

  13. Experimental Study on Rebar Corrosion Using the Galvanic Sensor Combined with the Electronic Resistance Technique.

    PubMed

    Xu, Yunze; Li, Kaiqiang; Liu, Liang; Yang, Lujia; Wang, Xiaona; Huang, Yi

    2016-09-08

    In this paper, a new kind of carbon steel (CS) and stainless steel (SS) galvanic sensor system was developed for the study of rebar corrosion in different pore solution conditions. Through the special design of the CS and SS electronic coupons, the electronic resistance (ER) method and the zero resistance ammeter (ZRA) technique were used simultaneously for the measurement of both the galvanic current and the corrosion depth. The corrosion processes in different solution conditions were also studied by linear polarization resistance (LPR) and measurements of polarization curves. The test results show that the galvanic current noise can provide detailed information about the corrosion processes. When localized corrosion occurs, the corrosion rate measured by the ER method is lower than the real corrosion rate, whereas the value measured by the LPR method is higher than the real corrosion rate. The galvanic current and the corrosion current measured by the LPR method show a linear correlation in chloride-containing saturated Ca(OH)₂ solution. The relationship between the corrosion current differences measured by the CS electronic coupons and the galvanic current between the CS and SS electronic coupons can also be used to evaluate localized corrosion in reinforced concrete.

  15. Longitudinal mixing in meandering channels: new experimental data set and verification of a predictive technique.

    PubMed

    Boxall, J B; Guymer, I

    2007-01-01

    Evaluation of longitudinal mixing processes in open channel flows is important in environmental management, requiring the quantification of mixing coefficients. Estimates of these coefficients sufficiently accurate for environmental impact assessments cannot be achieved using current theoretical or semi-empirical methods for natural channels. This inaccuracy is caused by a limited understanding and quantification of the interaction of the dominant mechanisms resulting from natural channel features, such as plan form curvature and changes in cross-sectional shape. Experimental results are presented here from studies conducted in three self-formed channels, developed by known discharges. Longitudinal mixing was investigated at various flow rates within each of the channels by monitoring the development of tracer plumes during transit through the channels. Using an optimisation procedure, coefficients required for solution of the one-dimensional advection-dispersion equation (1D-ADE) were found in the range 0.02–0.2 m²/s. The coefficients were found to vary as functions of longitudinal meander location, channel form and discharge. Predictions of these longitudinal mixing coefficients were made using a mathematical technique requiring only channel form properties and flow rate as inputs. Predicted values were typically within 20% of the measured values, although deviation of up to 50% was found for the lowest discharge in each channel. This large error is likely to have been caused by increased dead zone effects associated with channel bathymetry at low discharges that are not captured by the method. The method was shown to be capable of capturing the variation in the longitudinal mixing coefficient with longitudinal meander location, with channel form and with discharge.
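
    The mixing coefficients quoted above parameterize the one-dimensional advection-dispersion equation (1D-ADE). As a sketch, the classical analytic solution for an instantaneous tracer release, with a hypothetical mean velocity and a dispersion coefficient inside the paper's reported range:

```python
import math

def ade_concentration(x, t, u, d, m_per_area=1.0):
    """C(x,t) for a slug of mass M/A released at x=0, t=0:
    C = (M/A)/sqrt(4*pi*D*t) * exp(-(x - u*t)^2 / (4*D*t))."""
    return (m_per_area / math.sqrt(4.0 * math.pi * d * t)
            * math.exp(-(x - u * t) ** 2 / (4.0 * d * t)))

# Peak travels with the mean velocity u and flattens as D grows.
u, d = 0.3, 0.1  # m/s and m^2/s (D within the paper's 0.02-0.2 range)
print(ade_concentration(x=30.0, t=100.0, u=u, d=d))  # at the peak, x = u*t
```

    Fitting u and D of this solution to measured tracer profiles is the essence of the optimisation procedure the abstract mentions.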

  16. A Sino-Finnish Initiative for Experimental Teaching Practices Using the Design Factory Pedagogical Platform

    ERIC Educational Resources Information Center

    Björklund, Tua A.; Nordström, Katrina M.; Clavert, Maria

    2013-01-01

    The paper presents a Sino-Finnish teaching initiative, including the design and experiences of a series of pedagogical workshops implemented at the Aalto-Tongji Design Factory (DF), Shanghai, China, and the experimentation plans collected from the 54 attending professors and teachers. The workshops aimed to encourage trying out interdisciplinary…

  17. Experimental Design for Local School Districts (July 18-August 26, 1966). Final Report.

    ERIC Educational Resources Information Center

    Norton, Daniel P.

    A 6-week summer institute on experimental design was conducted for public school personnel who had been designated by their school administrations as having responsibility for research together with some time released for devotion to research. Of the 32, 17 came from Indiana, 15 from 12 other states. Lectures on statistical principles of design…

  18. Assessing the Effectiveness of a Computer Simulation for Teaching Ecological Experimental Design

    ERIC Educational Resources Information Center

    Stafford, Richard; Goodenough, Anne E.; Davies, Mark S.

    2010-01-01

    Designing manipulative ecological experiments is a complex and time-consuming process that is problematic to teach in traditional undergraduate classes. This study investigates the effectiveness of using a computer simulation--the Virtual Rocky Shore (VRS)--to facilitate rapid, student-centred learning of experimental design. We gave a series of…

  19. Using a Discussion about Scientific Controversy to Teach Central Concepts in Experimental Design

    ERIC Educational Resources Information Center

    Bennett, Kimberley Ann

    2015-01-01

    Students may need explicit training in informal statistical reasoning in order to design experiments or use formal statistical tests effectively. By using scientific scandals and media misinterpretation, we can explore the need for good experimental design in an informal way. This article describes the use of a paper that reviews the measles mumps…

  20. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties

    ERIC Educational Resources Information Center

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological…

  1. Exploiting Distance Technology to Foster Experimental Design as a Neglected Learning Objective in Labwork in Chemistry

    ERIC Educational Resources Information Center

    d'Ham, Cedric; de Vries, Erica; Girault, Isabelle; Marzin, Patricia

    2004-01-01

    This paper deals with the design process of a remote laboratory for labwork in chemistry. In particular, it focuses on the mutual dependency of theoretical conjectures about learning in the experimental sciences and technological opportunities in creating learning environments. The design process involves a detailed analysis of the expert task and…

  2. Scaffolding a Complex Task of Experimental Design in Chemistry with a Computer Environment

    ERIC Educational Resources Information Center

    Girault, Isabelle; d'Ham, Cédric

    2014-01-01

    When solving a scientific problem through experimentation, students may have the responsibility to design the experiment. When students work in a conventional condition, with paper and pencil, the designed procedures stay at a very general level. There is a need for additional scaffolds to help the students perform this complex task. We propose a…

  3. The application of analysis of variance (ANOVA) to different experimental designs in optometry.

    PubMed

    Armstrong, R A; Eperjesi, F; Gilmartin, B

    2002-05-01

    Analysis of variance (ANOVA) is the most efficient method available for the analysis of experimental data. Analysis of variance is a method of considerable complexity and subtlety, with many different variations, each of which applies in a particular experimental context. Hence, it is possible to apply the wrong type of ANOVA to data and, therefore, to draw an erroneous conclusion from an experiment. This article reviews the types of ANOVA most likely to arise in clinical experiments in optometry including the one-way ANOVA ('fixed' and 'random effect' models), two-way ANOVA in randomised blocks, three-way ANOVA, and factorial experimental designs (including the varieties known as 'split-plot' and 'repeated measures'). For each ANOVA, the appropriate experimental design is described, a statistical model is formulated, and the advantages and limitations of each type of design discussed. In addition, the problems of non-conformity to the statistical model and determination of the number of replications are considered.
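
    As a concrete illustration of the simplest design the review covers, the one-way fixed-effects ANOVA F statistic can be computed from first principles. The three groups below are hypothetical clinical measurements, not data from the article:

```python
def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares: spread of group means about the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations about their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_b, df_w = k - 1, n - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Three hypothetical treatment groups of three replicates each:
f, df_b, df_w = one_way_anova_f([[4.1, 4.3, 3.9], [5.0, 5.2, 4.8], [4.4, 4.6, 4.5]])
print(f, df_b, df_w)
```

    The F value is then referred to the F distribution with (df_between, df_within) degrees of freedom; choosing the wrong layout changes those degrees of freedom, which is exactly the error the article warns about.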

  4. Key techniques and applications of adaptive growth method for stiffener layout design of plates and shells

    NASA Astrophysics Data System (ADS)

    Ding, Xiaohong; Ji, Xuerong; Ma, Man; Hou, Jianyun

    2013-11-01

    The application of the adaptive growth method is limited because several key techniques in the design process need manual intervention by designers. Key techniques of the method, including ground structure construction and seed selection, are studied so as to improve the effectiveness and applicability of the adaptive growth method in stiffener layout design optimization of plates and shells. Three schemes of ground structures, composed of different shell and beam elements, are proposed. It is found that the main stiffener layouts resulting from the different ground structures are almost the same, but the ground structure composed of 8-node shell elements together with 3-node and 2-node beam elements yields the clearest stiffener layout, with good adaptability and low computational cost. An automatic seed selection approach is proposed, based on the rules that seeds should be positioned where the structural strain energy is greatest (for the minimum compliance problem) and should satisfy a dispersancy (spacing) requirement. The adaptive growth method with the suggested key techniques is integrated into an ANSYS-based program, which provides a design tool for stiffener layout design optimization of plates and shells. Typical design examples, including plate and shell structures designed for minimum compliance and maximum buckling stability, are illustrated. In addition, as a practical mechanical design example, the stiffener layout of an inlet structure for a large-scale electrostatic precipitator is also demonstrated. The results show that the adaptive growth method integrated with the suggested key techniques can effectively and flexibly deal with stiffener layout design problems for plates and shells with complex geometry and loading conditions to achieve various design objectives, thus providing a new solution method for engineering structural topology design optimization.
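
    The seed-selection rule described above (highest strain energy, subject to a minimum spacing) can be sketched as a greedy ranking. The node coordinates, energies, and spacing threshold below are hypothetical, not taken from the paper:

```python
def select_seeds(nodes, min_dist, n_seeds):
    """nodes: list of (x, y, strain_energy) candidates.
    Greedily accept the highest-energy nodes that keep min_dist spacing."""
    ranked = sorted(nodes, key=lambda node: node[2], reverse=True)
    seeds = []
    for x, y, e in ranked:
        # Dispersancy requirement: reject nodes too close to chosen seeds.
        if all((x - sx) ** 2 + (y - sy) ** 2 >= min_dist ** 2
               for sx, sy, _ in seeds):
            seeds.append((x, y, e))
        if len(seeds) == n_seeds:
            break
    return seeds

nodes = [(0, 0, 9.0), (0.1, 0, 8.5), (2, 0, 7.0), (4, 4, 1.0)]
print(select_seeds(nodes, min_dist=1.0, n_seeds=2))  # second node is too close
```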

  5. All optical experimental design for neuron excitation, inhibition, and action potential detection

    NASA Astrophysics Data System (ADS)

    Walsh, Alex J.; Tolstykh, Gleb; Martens, Stacey; Sedelnikova, Anna; Ibey, Bennett L.; Beier, Hope T.

    2016-03-01

    Recently, infrared light has been shown to both stimulate and inhibit excitatory cells. However, studies of infrared light for excitatory cell inhibition have been constrained by the use of invasive and cumbersome electrodes for cell excitation and action potential recording. Here, we present an all-optical experimental design for neuronal excitation, inhibition, and action potential detection. Primary rat neurons were transfected with plasmids containing the light-sensitive ion channel CheRiff. CheRiff has a peak excitation around 450 nm, allowing excitation of transfected neurons with pulsed blue light. Additionally, primary neurons were transfected with QuasAr2, a fast and sensitive fluorescent voltage indicator. QuasAr2 is excited with yellow or red light and therefore does not spectrally overlap with CheRiff, enabling simultaneous imaging and action potential activation. Using an optic fiber, neurons were exposed to blue light sequentially to generate controlled action potentials. A second optic fiber delivered a single pulse of 1869 nm light to the neuron, inhibiting the action potentials evoked by the blue light. Used in concert, these optical techniques enable electrode-free neuron excitation, inhibition, and action potential recording, allowing research into neuronal behaviors with high spatial fidelity.

  6. Optimum Design of Aluminum Beverage Can Ends Using Structural Optimization Techniques

    SciTech Connect

    Yamazaki, Koetsu; Watanabe, Masato; Itoh, Ryouiti; Han, Jing; Nishiyama, Sadao

    2005-08-05

    This paper applies the response surface approximation method from structural optimization to the development of aluminum beverage can ends. Geometrical parameters of the end shell are selected as design variables. The analysis points in the design space are assigned using an orthogonal array from the design-of-experiments technique. A finite element analysis code is used to simulate the deforming behavior and to calculate the buckling strength and central panel displacement of the end shell under internal pressure. On the basis of the numerical analysis results, response surfaces for the buckling strength and panel growth are approximated in terms of the design variables. Using a numerical optimization program, the weight of the end shell is minimized subject to constraints on buckling strength, panel growth suppression, and other design requirements. A numerical example on a 202 end shell optimization problem is shown.
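
    The workflow above (a few expensive analyses at DOE points, a fitted surrogate, then optimization on the surrogate) can be sketched in a single design variable. The DOE levels and "response" values below are hypothetical, standing in for finite element results:

```python
def quadratic_through(p0, p1, p2):
    """Return (a, b, c) with y = a + b*x + c*x**2 through three points,
    via divided differences."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    c = ((y2 - y0) / (x2 - x0) - (y1 - y0) / (x1 - x0)) / (x2 - x1)
    b = (y1 - y0) / (x1 - x0) - c * (x0 + x1)
    a = y0 - b * x0 - c * x0 ** 2
    return a, b, c

# Three simulated responses at DOE levels of a geometry parameter:
a, b, c = quadratic_through((1.0, 5.2), (1.5, 4.1), (2.0, 4.6))
x_star = -b / (2.0 * c)  # stationary point of the quadratic surrogate
print(a, b, c, x_star)
```

    Optimizing on the cheap surrogate, rather than re-running the finite element model at every candidate design, is the whole point of the response surface method; in practice the surrogate is multivariate and refit as the search narrows.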

  7. Combining simulation techniques and design expertise in a renewable energy system design package, RESSAD

    SciTech Connect

    Jennings, S.U.; Pryor, T.L.; Remmer, D.P.

    1996-10-01

    Computer simulation is an increasingly popular tool for determining the most suitable renewable energy system type, design, and control for an isolated community or homestead. However, for a user without expertise in system design, the complicated process of selecting system components and controls via computer simulation becomes one of trial and error. Our renewable energy system design package, RESSAD, has been developed to simulate a wide range of renewable power supply systems and to go beyond system simulation by combining design expertise with the simulation model. The knowledge of the system designer is incorporated into the package through a range of analysis tools that assist in the selection process without removing or restricting individual choices. The system selection process is analysed from the early stages of renewable resource assessment to the final evaluation of the results from a simulation of the chosen system. The approach of the RESSAD package to this selection process is described and its use is illustrated by two case studies in Western Australia. 11 refs., 3 tabs.

  8. Experimental techniques for measuring Rayleigh-Taylor instability in inertial confinement fusion (ICF)

    SciTech Connect

    Smalyuk, V A

    2012-06-07

    Rayleigh-Taylor (RT) instability is one of the major concerns in inertial confinement fusion (ICF) because it amplifies target modulations in both the acceleration and deceleration phases of implosion, which leads to shell disruption and performance degradation of imploding targets. This article reviews experimental results of RT growth experiments performed on the OMEGA laser system, where targets were driven directly with laser light. RT instability was studied in the linear and nonlinear regimes. The experiments were performed in the acceleration phase, using planar and spherical targets, and in the deceleration phase of spherical implosions, using spherical shells. Initial target modulations consisted of 2-D pre-imposed modulations, and of 2-D and 3-D modulations imprinted on targets by non-uniformities in the laser drive. In planar geometry, the nonlinear regime was studied using 3-D modulations with broadband spectra near nonlinear saturation levels. In the acceleration phase, the measured modulation Fourier spectra and nonlinear growth velocities are in good agreement with those predicted by Haan's model [Haan S W 1989 Phys. Rev. A 39 5812]. In a real-space analysis, bubble merger was quantified by a self-similar evolution of bubble size distributions [Oron D et al 2001 Phys. Plasmas 8, 2883]. The 3-D, inner-surface modulations were measured to grow throughout the deceleration phase of spherical implosions. RT growth rates are very sensitive to the drive conditions; they can therefore be used to test and validate drive physics in the hydrodynamic codes used to design ICF implosions. Measured growth rates of pre-imposed 2-D target modulations below nonlinear saturation levels were used to validate a non-local thermal electron transport model in laser-driven experiments.
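
    For orientation, the classical inviscid linear-regime growth rate, gamma = sqrt(A*k*g) with A the Atwood number, is the textbook baseline against which stabilized (e.g. ablative) growth rates such as those measured in these experiments are compared. The densities, wavelength, and acceleration below are illustrative only, not data from the experiments:

```python
import math

def rt_growth_rate(rho_heavy, rho_light, wavelength_m, g):
    """Classical RT linear growth rate gamma = sqrt(A*k*g), in 1/s."""
    atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)
    k = 2.0 * math.pi / wavelength_m  # perturbation wavenumber
    return math.sqrt(atwood * k * g)

# A 60-um wavelength ripple under a strong (hypothetical) acceleration:
print(rt_growth_rate(rho_heavy=3.0, rho_light=1.0, wavelength_m=60e-6, g=1e14))
```

    Real ICF targets grow more slowly than this estimate because ablation and density gradients stabilize short wavelengths, which is one reason measured growth rates are such a sensitive code-validation diagnostic.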

  9. Rotor burst protection program: Experimentation to provide guidelines for the design of turbine rotor burst fragment containment rings

    NASA Technical Reports Server (NTRS)

    Mangano, G. J.; Salvino, J. T.; Delucia, R. A.

    1977-01-01

    Empirical guidelines for the design of minimum weight turbine rotor disk fragment containment rings made from a monolithic metal were generated by experimentally establishing the relationship between a variable that provides a measure of containment ring capability and several other variables that both characterized the configurational aspects of the rotor fragments and containment ring, and had been found from exploratory testing to have had significant influence on the containment process. Test methodology and data analysis techniques are described. Results are presented in graphs and tables.

  10. A Computational/Experimental Study of Two Optimized Supersonic Transport Designs and the Reference H Baseline

    NASA Technical Reports Server (NTRS)

    Cliff, Susan E.; Baker, Timothy J.; Hicks, Raymond M.; Reuther, James J.

    1999-01-01

    Two supersonic transport configurations designed by use of non-linear aerodynamic optimization methods are compared with a linearly designed baseline configuration. One optimized configuration, designated Ames 7-04, was designed at NASA Ames Research Center using an Euler flow solver, and the other, designated Boeing W27, was designed at Boeing using a full-potential method. The two optimized configurations and the baseline were tested in the NASA Langley Unitary Plan Supersonic Wind Tunnel to evaluate the non-linear design optimization methodologies. In addition, the experimental results are compared with computational predictions for each of the three configurations from the Euler flow solver AIRPLANE. The computational and experimental results both indicate moderate to substantial performance gains for the optimized configurations over the baseline configuration. The computed performance changes with and without diverters and nacelles were in excellent agreement with experiment for all three models. Comparisons of the computational and experimental cruise drag increments for the optimized configurations relative to the baseline show excellent agreement for the model designed by the Euler method, but poorer comparisons were found for the configuration designed by the full-potential code.

  11. Designing for Damage: Robust Flight Control Design using Sliding Mode Techniques

    NASA Technical Reports Server (NTRS)

    Vetter, T. K.; Wells, S. R.; Hess, Ronald A.; Bacon, Barton (Technical Monitor); Davidson, John (Technical Monitor)

    2002-01-01

    A brief review of sliding mode control is undertaken, with particular emphasis upon the effects of neglected parasitic dynamics. Sliding mode control design is interpreted in the frequency domain. The inclusion of asymptotic observers and control 'hedging' is shown to reduce the effects of neglected parasitic dynamics. An investigation into the application of observer-based sliding mode control to the robust longitudinal control of a highly unstable aircraft is described. The sliding mode controller is shown to exhibit stability and performance robustness superior to that of a classical loop-shaped design when significant changes in vehicle and actuator dynamics are employed to model airframe damage.

  12. Evaluation of the Doppler technique for fat emboli detection in an experimental flow model.

    PubMed

    Wikstrand, Victoria; Linder, Nadja; Engström, Karl Gunnar

    2008-09-01

    Pericardial suction blood (PSB) is known to be contaminated with fat droplets, which may cause embolic brain damage during cardiopulmonary bypass (CPB). This study aimed to investigate the possibility of detecting fat emboli by a Doppler technique. An in vitro flow model was designed, with a main pump, a filter, a reservoir, and an injector. A Hatteland Doppler probe was attached to the circulation loop to monitor particle counts and their size distribution. Suspended soya oil or heat-extracted human wound fat was analyzed in the model. The concentrations of these fat emboli were calibrated to simulate clinical conditions with either a continuous return of PSB to the systemic circulation or PSB collected for rapid infusion at CPB weaning. For validation purposes, air and solid emboli were also analyzed. Digital image analysis was performed to characterize the nature of the tested emboli. With the soya suspension, there was an apparent dose response between Doppler counts and the nominal fat concentration. This pattern was seen for computed Doppler output (p = .037) but not for Doppler raw counts (p = .434). No correlation was seen when human fat suspensions were tested. Conversely, the image analysis showed an obvious relationship between microscopy particle count and the nominal fat concentration (p < .001). However, the scatter plot between image analysis counting and Doppler recordings showed a random distribution (p = .873). It was evident that the Doppler heavily underestimated the true number of injected fat emboli. When the image analysis data were subdivided into diameter intervals, it was discovered that the few large droplets accounted for a majority of the total fat volume compared with the numerous small particles (< 10 µm). Our findings strongly suggest that the echogenicity of fat droplets is insufficient for detection by means of the tested Doppler method. PMID:18853829

  13. Integrated flight/propulsion control design for a STOVL aircraft using H-infinity control design techniques

    NASA Technical Reports Server (NTRS)

    Garg, Sanjay; Ouzts, Peter J.

    1991-01-01

    Results are presented from an application of H-infinity control design methodology to a centralized integrated flight/propulsion control (IFPC) system design for a supersonic Short Takeoff and Vertical Landing (STOVL) fighter aircraft in transition flight. The emphasis is on formulating the H-infinity control design problem such that the resulting controller provides robustness to modeling uncertainties and model parameter variations with flight condition. Experience gained from a preliminary H-infinity based IFPC design study performed earlier is used as the basis to formulate the robust H-infinity control design problem and improve upon the previous design. Detailed evaluation results are presented for a reduced-order controller obtained from the improved H-infinity design, showing that the control design meets the specified nominal performance objectives as well as provides stability robustness for variations in plant system dynamics with changes in aircraft trim speed within the transition flight envelope. A controller scheduling technique which accounts for changes in plant control effectiveness with variation in trim conditions is developed, and off-design performance results are presented.
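
    The quantity an H-infinity design bounds is the peak closed-loop gain over all frequencies. A brute-force log-spaced frequency sweep approximates that norm for a toy transfer function; the function and sweep range are arbitrary assumptions, unrelated to the paper's STOVL model:

```python
def hinf_norm(g, w_lo=1e-3, w_hi=1e3, n=20000):
    """Approximate sup over omega of |G(j*omega)| by a log-spaced sweep."""
    best = 0.0
    for i in range(n):
        w = w_lo * (w_hi / w_lo) ** (i / (n - 1))  # log spacing of frequencies
        best = max(best, abs(g(1j * w)))
    return best

# Toy sensitivity-like transfer function; its gain climbs toward 1.5
# at high frequency, so the sweep should report just under 1.5:
g = lambda s: s / (s + 1.0) + 0.5
print(hinf_norm(g))
```

    Production tools compute this norm from a state-space bisection rather than a sweep, but the sweep makes the "worst gain over frequency" interpretation concrete.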

  14. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design

    PubMed Central

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges. PMID:27458364
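
    The sequential-hypothesis-testing core of ASAP can be sketched with two toy Gaussian "perceptual models": update the posterior log-odds after each observation and stop as soon as one model is decisively favoured. The models, data, and stopping threshold below are hypothetical stand-ins for the paper's neurocognitive models:

```python
import math

def gauss_logpdf(y, mu, sigma):
    return (-0.5 * math.log(2.0 * math.pi * sigma ** 2)
            - (y - mu) ** 2 / (2.0 * sigma ** 2))

def sequential_compare(data, mu_a=0.0, mu_b=1.0, sigma=1.0, stop=5.0):
    """Stop as soon as the posterior log-odds pass +/- stop (flat prior)."""
    log_odds = 0.0  # log P(model A | data) - log P(model B | data)
    for n, y in enumerate(data, start=1):
        log_odds += gauss_logpdf(y, mu_a, sigma) - gauss_logpdf(y, mu_b, sigma)
        if abs(log_odds) >= stop:
            return ("A" if log_odds > 0 else "B"), n
    return ("A" if log_odds > 0 else "B"), len(data)

# Observations drawn near mu_b = 1, so model B should win quickly:
obs = [1.5, 0.9, 1.2, 1.1, 0.7, 1.4, 1.0, 1.3, 0.8, 1.6]
print(sequential_compare(obs))
```

    Here the odds cross the threshold after eight observations in favour of model B. A full ASAP loop would additionally choose each next stimulus online to maximize the expected separation between the candidate models.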

  15. Integration of artificial intelligence and numerical optimization techniques for the design of complex aerospace systems

    SciTech Connect

    Tong, S.S.; Powell, D.; Goel, S. (GE Consulting Services, Albany, NY)

    1992-02-01

    A new software system called Engineous combines artificial intelligence and numerical methods for the design and optimization of complex aerospace systems. Engineous combines the advanced computational techniques of genetic algorithms, expert systems, and object-oriented programming with the conventional methods of numerical optimization and simulated annealing to create a design optimization environment that can be applied to computational models in various disciplines. Engineous has produced designs with higher predicted performance gains than current manual design processes (on average a 10-to-1 reduction in turnaround time) and has yielded new insights into product design. It has been applied to the aerodynamic preliminary design of an aircraft engine turbine, concurrent aerodynamic and mechanical preliminary design of an aircraft engine turbine blade and disk, a space superconductor generator, a satellite power converter, and a nuclear-powered satellite reactor and shield. 23 refs.

  16. Systematic design of output filters for audio class-D amplifiers via Simplified Real Frequency Technique

    NASA Astrophysics Data System (ADS)

    Hintzen, E.; Vennemann, T.; Mathis, W.

    2014-11-01

    In this paper a new filter design concept is proposed and implemented which takes the complex loudspeaker impedance into account. By means of broadband-matching techniques, which have been successfully applied in radio engineering, we are able to optimize the reconstruction filter to achieve an overall linear frequency response. Here, a passive filter network is inserted between source and load that matches the complex load impedance to the complex source impedance within a desired frequency range. The design and calculation of the filter is usually done using numerical approximation methods known as Real Frequency Techniques (RFT). A first approach to the systematic design of reconstruction filters for class-D amplifiers is proposed, using the Simplified Real Frequency Technique (SRFT). Some fundamental considerations are introduced, as well as the benefits and challenges of impedance matching between class-D amplifiers and loudspeakers. Simulation data obtained with MATLAB are presented and support some first conclusions.
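
    For contrast with the paper's complex-impedance matching, the textbook design it improves on is the second-order Butterworth LC reconstruction filter sized for an ideal purely resistive load. The 8-ohm load and 30 kHz corner below are illustrative assumptions:

```python
import math

def butterworth_lc(r_load, f_corner):
    """Series-L, shunt-C low-pass into a resistive load.
    H(s) = 1 / (L*C*s^2 + (L/R)*s + 1), so w0 = 1/sqrt(L*C)
    and Q = R/(w0*L); Butterworth response needs Q = 1/sqrt(2)."""
    w0 = 2.0 * math.pi * f_corner
    q = 1.0 / math.sqrt(2.0)
    l = r_load / (q * w0)        # from Q = R/(w0*L)
    c = 1.0 / (w0 ** 2 * l)      # from w0 = 1/sqrt(L*C)
    return l, c

l, c = butterworth_lc(8.0, 30e3)
print(l * 1e6, "uH,", c * 1e9, "nF")
```

    The paper's point is precisely that a real loudspeaker is not this resistive load, so the corner and Q of such a filter shift with the voice-coil impedance; the SRFT network absorbs that complex impedance instead.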

  17. Study on the Ring Type Stator Design Technique for a Traveling Wave Rotary Type Ultrasonic Motor

    NASA Astrophysics Data System (ADS)

    Oh, Jin-Heon; Yuk, Hyung-Sang; Lim, Kee-Joe

    2012-09-01

    In this paper, a design technique for the stator of a traveling-wave rotary ultrasonic motor is proposed. To establish the technique, the distribution of internal stresses in the stator was analyzed by applying the cylindrical-body contact model of Hertz theory, and the concept of the “horn effect” was used to account for the influence of the projection structure. To verify the proposed technique, a prototype motor was fabricated according to the projection dimensions and the design specification, and its performance was evaluated. Extrapolation of the experimental results confirmed that the values obtained in the verification experiment agreed well with those predicted by the proposed method.
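
    The Hertzian line-contact model invoked above gives closed-form expressions for the contact half-width and peak pressure between two parallel elastic cylinders. The load, geometry, and material values below are hypothetical placeholders, not the stator's actual parameters:

```python
import math

def hertz_line_contact(force_n, length_m, r1, r2, e1, nu1, e2, nu2):
    """Return (half_width_b, p_max) for two parallel elastic cylinders:
    b = sqrt(4*F*R/(pi*L*E*)), p_max = 2*F/(pi*b*L)."""
    r_eff = 1.0 / (1.0 / r1 + 1.0 / r2)                       # effective radius
    e_star = 1.0 / ((1 - nu1 ** 2) / e1 + (1 - nu2 ** 2) / e2)  # contact modulus
    b = math.sqrt(4.0 * force_n * r_eff / (math.pi * length_m * e_star))
    p_max = 2.0 * force_n / (math.pi * b * length_m)
    return b, p_max

# Steel on steel (E = 200 GPa, nu = 0.3), 5 mm and 20 mm radii,
# 10 mm contact length, 100 N load (all hypothetical):
b, p_max = hertz_line_contact(100.0, 0.01, 5e-3, 20e-3, 200e9, 0.3, 200e9, 0.3)
print(b, p_max)
```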

  18. Experimental determination of Grüneisen gamma for two dissimilar materials (PEEK and Al 5083) via the shock-reverberation technique

    NASA Astrophysics Data System (ADS)

    Roberts, Andrew; Appleby-Thomas, Gareth; Hazell, Paul

    2011-06-01

    Following multiple loading events, the resultant shock state of a material will lie away from the principal Hugoniot. Prediction of such states requires knowledge of a material's equation of state. The material-specific variable Grüneisen gamma (Γ) defines the locus of ``off-Hugoniot'' points in energy-volume-pressure space. Experimentally, the shock-reverberation technique (based on the principle of impedance matching) has previously allowed estimation of the first-order Grüneisen gamma term (Γ1) for a silicone elastomer. Here, this approach was employed to calculate Γ1 for two dissimilar materials, polyether ether ketone (PEEK) and the armour-grade aluminium alloy 5083 (H32), thereby allowing discussion of the limitations of this technique in the context of plate-impact experiments employing manganin stress gauges. Finally, the experimentally determined values for Γ1 were further refined by comparison between experimental records and numerical simulations carried out using the commercial code ANSYS Autodyn®.
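
    Grüneisen gamma enters through the Mie-Grüneisen equation of state, which references an off-Hugoniot state to the Hugoniot at the same specific volume: P = P_H + (Γ/V)(E − E_H). A sketch of that relation with hypothetical state values, not data from these experiments:

```python
def mie_gruneisen_pressure(p_hugoniot, e, e_hugoniot, gamma, v):
    """Off-Hugoniot pressure at specific volume v (m^3/kg), internal
    energy e (J/kg), referenced to the Hugoniot state (p_H, e_H) at
    the same volume: P = P_H + (gamma / v) * (e - e_H)."""
    return p_hugoniot + (gamma / v) * (e - e_hugoniot)

# A reheated state at the same volume as a 5 GPa Hugoniot point:
p = mie_gruneisen_pressure(p_hugoniot=5.0e9, e=1.2e5, e_hugoniot=1.0e5,
                           gamma=1.1, v=3.0e-4)
print(p / 1e9, "GPa")
```

    The reverberation experiments constrain gamma precisely because each re-shock lands on a different off-Hugoniot point governed by this correction term.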

  19. Euromech 260: Advanced non-intrusive experimental techniques in fluid and plasma flows

    NASA Astrophysics Data System (ADS)

    The following topics are discussed: coherent anti-Stokes and elastic Rayleigh scattering; elastic scattering and non linear dynamics; fluorescence; molecular tracking techniques and particle image velocimetry.

  20. A system identification technique based on the random decrement signatures. Part 2: Experimental results

    NASA Technical Reports Server (NTRS)

    Bedewi, Nabih E.; Yang, Jackson C. S.

    1987-01-01

    Identification of the system parameters of a randomly excited structure may be treated using a variety of statistical techniques. Of all these techniques, the Random Decrement is unique in that it provides the homogeneous component of the system response. Using this quality, a system identification technique was developed based on a least-squares fit of the signatures to estimate the mass, damping, and stiffness matrices of a linear randomly excited system. The results of an experiment conducted on an offshore platform scale model to verify the validity of the technique and to demonstrate its application in damage detection are presented.
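
    The random decrement signature itself is straightforward to sketch: average segments of the random response that begin at up-crossings of a trigger level, which cancels the random forced component and leaves an estimate of the (scaled) free decay. The synthetic single-degree-of-freedom system and all parameters below are hypothetical:

```python
import random

def random_decrement(signal, trigger, seg_len):
    """Average all segments starting at up-crossings of the trigger level."""
    segs = [signal[i:i + seg_len]
            for i in range(1, len(signal) - seg_len)
            if signal[i - 1] < trigger <= signal[i]]  # up-crossing condition
    n = len(segs)
    return [sum(s[j] for s in segs) / n for j in range(seg_len)], n

# Synthetic randomly excited single-degree-of-freedom response:
random.seed(1)
dt, wn, zeta = 0.01, 10.0, 0.02       # step, natural freq (rad/s), damping
x = v = 0.0
signal = []
for _ in range(20000):
    f = random.gauss(0.0, 1.0)                  # broadband random excitation
    a = f - 2.0 * zeta * wn * v - wn ** 2 * x   # SDOF equation of motion
    v += a * dt                                 # semi-implicit Euler step
    x += v * dt
    signal.append(x)

sig, n = random_decrement(signal, trigger=0.01, seg_len=200)
print(n, sig[0])  # sig decays like the free response; sig[0] >= trigger
```

    Fitting a decaying sinusoid to `sig` then yields the natural frequency and damping, and with multiple channels the least-squares mass/damping/stiffness estimation the abstract describes.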

  1. Experimental validation of optimization-based integrated controls-structures design methodology for flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Joshi, Suresh M.; Walz, Joseph E.

    1993-01-01

    An optimization-based integrated design approach for flexible space structures is experimentally validated using three types of dissipative controllers: static, dynamic, and LQG. The nominal phase-0 controls-structures interaction evolutionary model (CEM) structure is redesigned to minimize the average control power required to maintain a specified root-mean-square line-of-sight pointing error under persistent disturbances. The redesigned structure, the phase-1 CEM, was assembled and tested against the phase-0 CEM. It is demonstrated, both analytically and experimentally, that the integrated controls-structures design is substantially superior to that obtained through the traditional sequential approach. The capability of a software design tool based on an automated design procedure in a unified environment for structural and control design is also demonstrated.

  2. Experimental hydrogen-fueled automotive engine design data-base project. Volume 2. Main technical report

    SciTech Connect

    Swain, M.R.; Adt, R.R. Jr.; Pappas, J.M.

    1983-05-01

    Operational performance and emissions characteristics of hydrogen-fueled engines are reviewed. The project activities are reviewed including descriptions of the test engine and its components, the test apparatus, experimental techniques, experiments performed and the results obtained. Analyses of other hydrogen engine project data are also presented and compared with the results of the present effort.

  3. Experimental research on No-oil ignition technique of pulverized coal/coal-water-slurry

    SciTech Connect

    Zhou Zhijun; Fan Haojie; Tu Jianhua

    1997-07-01

    With new coal-fired boilers entering operation and the widespread application of oil-substitute fuels such as coal-water slurry, many oil-fired boilers may stop firing oil. However, igniting coal-fired boilers and stabilizing their combustion under low load still require large amounts of oil. Published figures indicate that a 50 MW unit boiler consumes 5 t of oil per start-up and a 125 MW unit 15 t; a 200 MW unit boiler consumes 50 t of oil per start-up and 1000 t/year for combustion stabilization. According to data from the USA, a 600 MW unit consumes 300 t of oil per start-up and 23,300 t of oil per year. The amount of oil used to ignite coal and stabilize combustion is therefore considerable. With growing emphasis on conserving oil, novel ignition and stabilization techniques (such as the pulverized-coal pre-combustion chamber, the blunt-body burner, the boat-shaped burner, great-velocity-difference combustion stabilization, dense-thin-phase combustion stabilization, and plasma ignition) have emerged over the past decade, and oil consumption for ignition and stabilization has decreased greatly. Among these, only plasma ignition requires no oil, yet it is not suited to igniting coal-water slurry; although the other techniques conserve a large amount of oil during ignition and low-load operation, total oil consumption remains considerable. Therefore, this paper presents a novel ignition technique: an electrical thermal chamber ignition technique suited to both pulverized coal (PC) and coal-water slurry (CWS), which retains the advantages of the pre-combustion-chamber technique while consuming no oil.

  4. Research in advanced formal theorem-proving techniques. [design and implementation of computer languages

    NASA Technical Reports Server (NTRS)

    Raphael, B.; Fikes, R.; Waldinger, R.

    1973-01-01

    The results of a project aimed at the design and implementation of computer languages to aid in expressing problem-solving procedures in several areas of artificial intelligence, including automatic programming, theorem proving, and robot planning, are summarized. The principal results of the project were the design and implementation of two complete systems, QA4 and QLISP, and their preliminary experimental use. The various applications of both QA4 and QLISP are given.

  5. Fuzzy Controller Design Using Evolutionary Techniques for Twin Rotor MIMO System: A Comparative Study

    PubMed Central

    Hashim, H. A.; Abido, M. A.

    2015-01-01

    This paper presents a comparative study of fuzzy controller design for the twin rotor multi-input multioutput (MIMO) system (TRMS) considering most promising evolutionary techniques. These are gravitational search algorithm (GSA), particle swarm optimization (PSO), artificial bee colony (ABC), and differential evolution (DE). In this study, the gains of four fuzzy proportional derivative (PD) controllers for TRMS have been optimized using the considered techniques. The optimization techniques are developed to identify the optimal control parameters for system stability enhancement, to cancel high nonlinearities in the model, to reduce the coupling effect, and to drive TRMS pitch and yaw angles into the desired tracking trajectory efficiently and accurately. The most effective technique in terms of system response due to different disturbances has been investigated. In this work, it is observed that GSA is the most effective technique in terms of solution quality and convergence speed. PMID:25960738
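
    Of the evolutionary techniques compared, particle swarm optimization is the simplest to sketch. The snippet below tunes PD gains for a double-integrator plant used as a stand-in for the TRMS dynamics; the plant, cost function, bounds, and PSO hyperparameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(f, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer minimizing f over the box [lo, hi]."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()              # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2,) + x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

def tracking_cost(gains):
    """Integral absolute error of a PD-controlled double integrator
    tracking a unit step (an illustrative stand-in for TRMS pitch/yaw)."""
    kp, kd = gains
    y = v = cost = 0.0
    dt = 0.01
    for _ in range(500):                            # 5 s of simulated time
        e = 1.0 - y
        v += (kp * e - kd * v) * dt                 # plant: y'' = u
        y += v * dt
        cost += abs(e) * dt
    return cost

gains, best = pso(tracking_cost, lo=[0.0, 0.0], hi=[50.0, 50.0])
```

    GSA, ABC, and DE differ only in how the candidate gain vectors are updated; the simulate-and-score loop around them is the same.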

  6. Experimental techniques for determination of the role of diffusion and convection in crystal growth from solution

    NASA Technical Reports Server (NTRS)

    Zefiro, L.

    1980-01-01

    Various studies of the concentration of the solution around a growing crystal using interferometric techniques are reviewed. A holographic interferometric technique used in laboratory experiments shows that a simple description of the solution based on the assumption of a purely diffusive mechanism appears inadequate since the convection, effective even in reduced columns, always affects the growth.

  7. Cost-Optimal Design of a 3-Phase Core Type Transformer by Gradient Search Technique

    NASA Astrophysics Data System (ADS)

    Basak, R.; Das, A.; Sensarma, A. K.; Sanyal, A. N.

    2014-04-01

    3-phase core type transformers are extensively used as power and distribution transformers in power systems, and their cost is a sizable proportion of the total system cost; they should therefore be designed cost-optimally. The design methodology for reaching cost-optimality has been discussed in detail by authors such as Ramamoorty, and in brief in some textbooks on electrical design. The paper gives a method for optimizing the design, in the presence of constraints specified by the customer and the regulatory authorities, through a gradient search technique. The starting point has been chosen within the allowable parameter space, and the steepest-descent path has been followed to convergence. The step length has been judiciously chosen and the program has been maneuvered to avoid local minima. The method appears to be the best of those considered, as its convergence is the quickest among the different optimization techniques.
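
    The constrained steepest-descent iteration described above can be sketched as a projected gradient search. The two-variable quadratic cost below is a hypothetical stand-in for the transformer cost function, which in practice depends on core dimensions, flux and current densities, and the imposed constraints.

```python
import numpy as np

def steepest_descent(cost, grad, x0, lo, hi, step=0.1, tol=1e-8, max_iter=5000):
    """Projected steepest descent: follow -grad(cost) with a fixed step,
    clipping each iterate into the box [lo, hi] imposed by the design
    constraints; stop when the cost decrease falls below tol."""
    x = np.clip(np.asarray(x0, float), lo, hi)
    for _ in range(max_iter):
        x_new = np.clip(x - step * np.asarray(grad(x)), lo, hi)
        if cost(x) - cost(x_new) < tol:
            break
        x = x_new
    return x

# Hypothetical stand-in for the transformer cost surface
cost = lambda x: (x[0] - 3.0) ** 2 + 2.0 * (x[1] - 1.0) ** 2
grad = lambda x: np.array([2.0 * (x[0] - 3.0), 4.0 * (x[1] - 1.0)])
x_opt = steepest_descent(cost, grad, x0=[0.0, 0.0], lo=0.0, hi=2.5)
# the upper bound is active for the first variable: x_opt is about (2.5, 1.0)
```

    A fixed step is used here for brevity; the paper's "judiciously chosen" step length corresponds to a line search, and restarting from several points in the allowable parameter space is the usual guard against local minima.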

  8. Experimental design for stable genetic manipulation in mammalian cell lines: lentivirus and alternatives.

    PubMed

    Shearer, Robert F; Saunders, Darren N

    2015-01-01

    The use of third-generation lentiviral vectors is now commonplace in most areas of basic biology. These systems provide a fast, efficient means of modulating gene expression, but experimental design needs to be carefully considered to minimize potential artefacts arising from off-target effects and other confounding factors. This review offers a starting point for those new to lentiviral-based vector systems: it addresses the main issues involved in the use of lentiviral systems in vitro and outlines considerations that should be taken into account during experimental design, such as selecting an appropriate system and controls and practical titration of viral transduction. We also briefly describe some of the more recent advances in genome-editing technology: TALENs and CRISPRs offer an alternative to lentivirus, providing endogenous gene editing with reduced off-target effects, often at the expense of efficiency.

  9. Propagation effects handbook for satellite systems design. A summary of propagation impairments on 10 to 100 GHz satellite links with techniques for system design

    NASA Technical Reports Server (NTRS)

    Ippolito, L. J.; Kaul, R. D.; Wallace, R. G.

    1983-01-01

    This Propagation Handbook provides satellite system engineers with a concise summary of the major propagation effects experienced on Earth-space paths in the 10 to 100 GHz frequency range. The dominant effect, attenuation due to rain, is dealt with in some detail, in terms of both experimental data from measurements made in the U.S. and Canada, and the mathematical and conceptual models devised to explain the data. In order to make the Handbook readily usable to many engineers, it has been arranged in two parts. Chapters 2-5 comprise the descriptive part. They deal in some detail with rain systems, rain and attenuation models, depolarization and experimental data. Chapters 6 and 7 make up the design part of the Handbook and may be used almost independently of the earlier chapters. In Chapter 6, the design techniques recommended for predicting propagation effects in Earth-space communications systems are presented. Chapter 7 addresses the questions of where in the system design process the effects of propagation should be considered, and what precautions should be taken when applying the propagation results.
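
    The rain-attenuation prediction techniques summarized in the handbook are commonly expressed as a power law in rain rate. A minimal sketch follows; the coefficients k and alpha are illustrative placeholders (in practice they depend on frequency and polarization), not values from the handbook.

```python
def rain_attenuation_db(rain_rate_mm_h, path_km, k=0.0188, alpha=1.217):
    """Power-law rain attenuation: specific attenuation gamma = k * R**alpha
    in dB/km, multiplied by the (effective) path length through rain."""
    gamma = k * rain_rate_mm_h ** alpha
    return gamma * path_km

# heavier rain on the same 5 km effective path gives more attenuation
light, heavy = rain_attenuation_db(10, 5), rain_attenuation_db(50, 5)
```

    Full link-design procedures of the kind given in Chapter 6 combine such a model with the rain-rate statistics of the site to predict attenuation exceeded for a given percentage of the year.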

  10. Propagation effects handbook for satellite systems design. A summary of propagation impairments on 10 to 100 GHz satellite links with techniques for system design

    NASA Astrophysics Data System (ADS)

    Ippolito, L. J.; Kaul, R. D.; Wallace, R. G.

    1983-06-01

    This Propagation Handbook provides satellite system engineers with a concise summary of the major propagation effects experienced on Earth-space paths in the 10 to 100 GHz frequency range. The dominant effect, attenuation due to rain, is dealt with in some detail, in terms of both experimental data from measurements made in the U.S. and Canada, and the mathematical and conceptual models devised to explain the data. In order to make the Handbook readily usable to many engineers, it has been arranged in two parts. Chapters 2-5 comprise the descriptive part. They deal in some detail with rain systems, rain and attenuation models, depolarization and experimental data. Chapters 6 and 7 make up the design part of the Handbook and may be used almost independently of the earlier chapters. In Chapter 6, the design techniques recommended for predicting propagation effects in Earth-space communications systems are presented. Chapter 7 addresses the questions of where in the system design process the effects of propagation should be considered, and what precautions should be taken when applying the propagation results.

  11. Development of the Neuron Assessment for Measuring Biology Students’ Use of Experimental Design Concepts and Representations

    PubMed Central

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy J.

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students’ competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not measure how well students use standard symbolism to visualize biological experiments. We propose an assessment-design process that 1) provides background knowledge and questions for developers of new “experimentation assessments,” 2) elicits practices of representing experiments with conventional symbol systems, 3) determines how well the assessment reveals expert knowledge, and 4) determines how well the instrument exposes student knowledge and difficulties. To illustrate this process, we developed the Neuron Assessment and coded responses from a scientist and four undergraduate students using the Rubric for Experimental Design and the Concept-Reasoning Mode of representation (CRM) model. Some students demonstrated sound knowledge of concepts and representations. Other students demonstrated difficulty with depicting treatment and control group data or variability in experimental outcomes. Our process, which incorporates an authentic research situation that discriminates levels of visualization and experimentation abilities, shows potential for informing assessment design in other disciplines. PMID:27146159

  12. Development of the Neuron Assessment for Measuring Biology Students' Use of Experimental Design Concepts and Representations.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy J

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students' competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not measure how well students use standard symbolism to visualize biological experiments. We propose an assessment-design process that 1) provides background knowledge and questions for developers of new "experimentation assessments," 2) elicits practices of representing experiments with conventional symbol systems, 3) determines how well the assessment reveals expert knowledge, and 4) determines how well the instrument exposes student knowledge and difficulties. To illustrate this process, we developed the Neuron Assessment and coded responses from a scientist and four undergraduate students using the Rubric for Experimental Design and the Concept-Reasoning Mode of representation (CRM) model. Some students demonstrated sound knowledge of concepts and representations. Other students demonstrated difficulty with depicting treatment and control group data or variability in experimental outcomes. Our process, which incorporates an authentic research situation that discriminates levels of visualization and experimentation abilities, shows potential for informing assessment design in other disciplines. PMID:27146159

  13. Presentation of clinical laboratory results: an experimental comparison of four visualization techniques

    PubMed Central

    Torsvik, Torbjørn; Lillebo, Børge; Mikkelsen, Gustav

    2013-01-01

    Objective To evaluate how clinical chemistry test results were assessed by volunteers when presented with four different visualization techniques. Materials and methods A total of 20 medical students reviewed quantitative test results from 4 patients using 4 different visualization techniques in a balanced, crossover experiment. The laboratory data represented relevant patient categories, including simple, emergency, chronic and complex patients. Participants answered questions about trend, overall levels and covariation of test results. Answers and assessment times were recorded and participants were interviewed on their preference of visualization technique. Results Assessment of results and the time used varied between visualization techniques. With sparklines and relative multigraphs participants made faster assessments. With relative multigraphs participants identified more covarying test results. With absolute multigraphs participants found more trends. With sparklines participants more often assessed laboratory results to be within reference ranges. Different visualization techniques were preferred for the four different patient categories. No participant preferred absolute multigraphs for any patient. Discussion Assessments of clinical chemistry test results were influenced by how they were presented. Importantly though, this association depended on the complexity of the result sets, and none of the visualization techniques appeared to be ideal in all settings. Conclusions Sparklines and relative multigraphs seem to be favorable techniques for presenting complex long-term clinical chemistry test results, while tables seem to suffice for simpler result sets. PMID:23043123

  14. Design and control of energy efficient food drying processes with specific reference to quality; Model development and experimental studies: Moisture movement and dryer design

    SciTech Connect

    Kim, M.; Litchfield, B.; Singh, R.; Liang, H.; Narsimhan, G.; Waananen, K.

    1989-08-01

    The ultimate goal of the project is to develop procedures, techniques, data and other information that will aid in the design of cost effective and energy efficient drying processes that produce high quality foods. This objective has been sought by performing studies to determine the pertinent properties of food products, by developing models to describe the fundamental phenomena of food drying and by testing the models at laboratory scale. Finally, this information is used to develop recommendations and strategies for improved dryer design and control. This volume, Model Development and Experimental Studies, emphasizes the direct and indirect drying processes. An extensive literature review identifies key characteristics of drying models, including controlling process resistances, internal mechanisms of moisture movement, structural and thermodynamic assumptions, and methods for determining model coefficients and material properties, solving the models, and validating them. Similarities and differences between previous works are noted, and strategies for future drying-model development are suggested.

  15. Perspectives on Prediction Variance and Bias in Developing, Assessing, and Comparing Experimental Designs

    SciTech Connect

    Piepel, Gregory F.

    2010-12-01

    The vast majority of response surface methods used in practice to develop, assess, and compare experimental designs focus on variance properties of designs. Because response surface models only approximate the true unknown relationships, models are subject to bias errors as well as variance errors. Beginning with the seminal paper of Box and Draper (1959) and over the subsequent 50 years, methods that consider bias and mean-squared-error (variance and bias) properties of designs have been presented in the literature. However, these methods are not widely implemented in software and are not routinely used to develop, assess, and compare experimental designs in practice. Methods for developing, assessing, and comparing response surface designs that account for variance properties are reviewed. Brief synopses of publications that consider bias or mean-squared-error properties are provided. The difficulties and approaches for addressing bias properties of designs are summarized. Perspectives on experimental design methods that account for bias and/or variance properties and on future needs are presented.
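
    The variance property most often used in such comparisons is the scaled prediction variance N f(x)' (F'F)^(-1) f(x) of the fitted response surface. The sketch below evaluates it for a 2^2 factorial with a centre run under a first-order model; this design and model are textbook examples, not ones taken from the paper.

```python
import numpy as np

def scaled_prediction_variance(F, f_x):
    """Scaled prediction variance N * f(x)' (F'F)^-1 f(x), where F is the
    model matrix of the design and f_x the model expansion at point x."""
    N = F.shape[0]
    return N * f_x @ np.linalg.inv(F.T @ F) @ f_x

# 2^2 factorial plus one centre run, first-order model f(x) = (1, x1, x2)
pts = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0]], float)
F = np.column_stack([np.ones(len(pts)), pts])
v_centre = scaled_prediction_variance(F, np.array([1.0, 0.0, 0.0]))  # 1.0
v_corner = scaled_prediction_variance(F, np.array([1.0, 1.0, 1.0]))  # 3.5
```

    Bias-aware criteria in the tradition of Box and Draper add to this a term for the systematic error committed when the true surface contains effects the fitted model omits.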

  16. DESIGN NOTE: A preliminary study on temperature change monitoring using the MR current density imaging technique

    NASA Astrophysics Data System (ADS)

    Khang, H. S.; Oh, S. H.; Han, B. H.; Lee, S. Y.; Cho, M. H.; Woo, E. J.

    2002-04-01

    Based on the fact that the electrical impedance of biological tissues is very sensitive to temperature, we have proposed a method to monitor local temperature changes inside the tissues. Using an analytic model and a finite element method model, we have analysed the effect of the local temperature change on the phase image obtained by the magnetic resonance current density imaging technique. We show preliminary experimental results of the temperature change monitoring performed with a 0.3 T magnetic resonance imaging system. We expect that the proposed method can be utilized for the development of non-invasive temperature imaging techniques.

  17. Experimental Design and Data collection of a finishing end milling operation of AISI 1045 steel

    PubMed Central

    Dias Lopes, Luiz Gustavo; de Brito, Tarcísio Gonçalves; de Paiva, Anderson Paulo; Peruchi, Rogério Santana; Balestrassi, Pedro Paulo

    2016-01-01

    In this Data in Brief paper, a central composite experimental design was planned to collect the surface roughness of an end-milling operation on AISI 1045 steel. The surface roughness values are subject to variation caused by several factors. The main objective here was to present a multivariate experimental design and data collection, including control factors, noise factors, and two correlated responses, capable of achieving a reduced surface roughness with minimal variance. Lopes et al. (2016) [1], for example, explore the influence of noise factors on the process performance. PMID:26909374

  18. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1992-01-01

    Off-design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static-to-inlet total pressure ratios were calculated using a quasi-three-dimensional inviscid code. The off-design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  19. A comparison of the calculated and experimental off-design performance of a radial flow turbine

    NASA Technical Reports Server (NTRS)

    Tirres, Lizet

    1991-01-01

    Off-design aerodynamic performance of the solid version of a cooled radial inflow turbine is analyzed. Rotor surface static pressure data and other performance parameters were obtained experimentally. Overall stage performance and turbine blade surface static-to-inlet total pressure ratios were calculated using a quasi-three-dimensional inviscid code. The off-design prediction capability of this code for radial inflow turbines shows accurate static pressure prediction. Solutions show a difference of 3 to 5 points between the experimentally obtained efficiencies and the calculated values.

  20. The effectiveness of family planning programs evaluated with true experimental designs.

    PubMed Central

    Bauman, K E

    1997-01-01

    OBJECTIVES: This paper describes the magnitude of effects for family planning programs evaluated with true experimental designs. METHODS: Studies that used true experimental designs to evaluate family planning programs were identified and their results subjected to meta-analysis. RESULTS: For the 14 studies with the information needed to calculate effect size, the Pearson r between program and effect variables ranged from -.08 to .09 and averaged .08. CONCLUSIONS: The programs evaluated in the studies considered have had, on average, smaller effects than many would assume and desire. PMID:9146451

  1. Experimental techniques for the characterization and development of thermal barrier coating bond coat alloys

    NASA Astrophysics Data System (ADS)

    Thompson, Robert J.

    Thermal barrier coatings, commonly used in modern gas turbines and jet engines, are dynamic, multilayered structures consisting of a superalloy substrate, an Al-rich bond coat, a thermally grown oxide, and a ceramic top coat. Knowledge of the disparate material properties for each of the constituents of a thermal barrier coating is crucial to both better understanding and improving the performance of the system. The efforts of this dissertation quantify fundamental aspects of two intrinsic strain mechanisms that arise during thermal cycling. This includes measurement of the thermal expansion behavior for bond coats and superalloys as well as establishing specific ternary compositions associated with a strain-inducing martensitic phase transformation, which is known to occur in Ni-rich bond coat alloys. In order to quantify the coefficient of thermal expansion for a number of actual alloys extracted from contemporary thermal barrier coating systems, this work employs a noncontact high temperature digital image correlation technique to nearly 1100°C. The examined materials include: two commercial superalloys, two as-deposited commercial bond coat alloys, and three experimental bond coat alloys. The as-deposited specimens were created using a diffusion aluminizing and a low pressure plasma spray procedure to thicknesses on the order of 50 and 100 μm, respectively. For the plasma sprayed bond coat, a comparison with a bulk counterpart of identical composition indicated that deposition procedures have little effect on thermal expansion. An analytical model of oxide rumpling is used to show that the importance of thermal expansion mismatch between a commercial bond coat and its superalloy substrate is relatively small. Considerably higher expansion values are noted for a Ni-rich bond coat alloy, however, and modeling which includes this layer suggests that it may have a substantial influence on rumpling. Combinatorial methods based on diffusion multiples are also

  2. The consequences of consumer diversity loss: different answers from different experimental designs.

    PubMed

    Byrnes, Jarrett E; Stachowicz, John J

    2009-10-01

    Predators are often the most vulnerable group to extinction, yet the consequences of changing predator diversity are poorly understood. One source of confusion has been different experimental designs. The multiple-predator effects literature typically employs an additive design, while the biodiversity ecosystem function literature typically uses a replacement design. Separately, these designs each detect only a subset of the changes in food web interactions caused by predator loss. Here, we measure the impact of consumer diversity on sessile marine invertebrates using a combination additive-replacement design. We couple this with a meta-analysis of previous combination experiments. We use these two approaches to explore how each design can detect different types of interactions among predators. We find that, while high diversity does lead to more negative interspecific interactions, the strength of these interactions is often weaker than negative intraspecific interactions caused by increasing the density of a single species alone. We conclude that a hybrid design is the optimal method to explore the mechanisms behind the effects of changing predator diversity. If researchers merely want to know the consequences of changing predator diversity, at a bare minimum, the experimental design must mimic the actual changes in both predator density and diversity in their system of interest. However, only a hybrid design can distinguish the consequences of shifting the balance of interspecific and intraspecific interactions within a community, an issue of great importance when considering both natural diversity loss and pest biocontrol.

  3. Adhesive Measurements of Polymer Bonded Explosive Constituents using the JKR Experimental Technique and Finite Element Modelling of Viscoelastic Adhesive Contact

    NASA Astrophysics Data System (ADS)

    Hamilton, Neil; Williamson, David; Lewis, Daniel; Glauser, Annette; Jardine, Andrew

    2015-06-01

    It has been shown experimentally that under many circumstances the strength limiting factor of Polymer Bonded Explosives (PBXs) is the adhesion which exists between the filler crystals and the polymer matrix. Experimental measurements of the Work of Adhesion between different binders and glass have been conducted using the JKR experimental technique; a reversible axisymmetric fracture experiment in which the area of contact and the applied force are both measured during loading and unloading of the interface. The data taken with this technique show a rate dependence not described by the analytical JKR theory, which considers only elastic bodies, that arises from the viscoelastic properties of the bulk polymer. To understand and describe the effects of viscosity on the adhesive measurements a finite element model (ABAQUS) of the idealized geometry used in the JKR experiment has been constructed. It is intended to bridge the gap between the purely elastic analytical JKR theory and the viscoelastic experimental results. Together, the experimental data and the computational model are intended to inform the development, and validate the predictions of, microstructural models of PBX deformation and failure.
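
    The analytical JKR relations against which the rate-dependent data are compared are standard elastic contact theory: the contact radius under an applied load, and the work of adhesion recovered from the pull-off force. The numerical values below are hypothetical, chosen only to exercise the formulas.

```python
import math

def jkr_contact_radius(P, R, w, K):
    """JKR contact radius: a^3 = (R/K) * (P + 3*pi*w*R
    + sqrt(6*pi*w*R*P + (3*pi*w*R)**2)), with K = (4/3) * E_reduced."""
    term = 3.0 * math.pi * w * R
    return ((R / K) * (P + term + math.sqrt(2.0 * P * term + term ** 2))) ** (1.0 / 3.0)

def work_of_adhesion_from_pulloff(P_off, R):
    """Invert the JKR pull-off force P_off = -(3/2)*pi*w*R for w."""
    return 2.0 * abs(P_off) / (3.0 * math.pi * R)

# hypothetical sphere-on-flat: R = 1 mm, w = 50 mJ/m^2, K = 1 MPa
R, w, K = 1e-3, 0.05, 1e6
a_zero_load = jkr_contact_radius(0.0, R, w, K)   # adhesion keeps this finite
w_recovered = work_of_adhesion_from_pulloff(-1.5 * math.pi * w * R, R)
```

    Because the real binder is viscoelastic, measured loading and unloading curves deviate from these elastic predictions; that gap is what the finite element model described in the abstract is built to capture.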

  4. Information System Design Methodology Based on PERT/CPM Networking and Optimization Techniques.

    ERIC Educational Resources Information Center

    Bose, Anindya

    The dissertation attempts to demonstrate that the program evaluation and review technique (PERT)/Critical Path Method (CPM) or some modified version thereof can be developed into an information system design methodology. The methodology utilizes PERT/CPM which isolates the basic functional units of a system and sets them in a dynamic time/cost…

  5. Designing to Motivate: Motivational Techniques to Incorporate in E-Learning Experiences

    ERIC Educational Resources Information Center

    Hodges, Charles B.

    2004-01-01

    This paper addresses the construct of motivation as it relates to learning. Questions that will be discussed are (a) what is motivation, (b) how can motivation be incorporated in the instructional design process, and (c) what motivational techniques have been used successfully in e-learning settings? Some general background information…

  6. The Ticket to Retention: A Classroom Assessment Technique Designed to Improve Student Learning

    ERIC Educational Resources Information Center

    Divoll, Kent A.; Browning, Sandra T.; Vesey, Winona M.

    2012-01-01

    Classroom assessment techniques (CATs) or other closure activities are widely promoted for use in college classrooms. However, research on whether CATs improve student learning is mixed. The authors posit that the results are mixed because CATs were designed to "help teachers find out what students are learning in the classroom and how well…

  7. Collaborative Sketching (C-Sketch)--An Idea Generation Technique for Engineering Design.

    ERIC Educational Resources Information Center

    Shah, Jami J.; Vargas-Hernandez, Noe; Summers, Joshua D.; Kulkarni, Santosh

    2001-01-01

    This paper describes the development and evaluation of C-Sketch, a technique for concept generation in a collaborative engineering design setting. Analysis of experiments conducted over five years concludes that C-Sketch not only has intrinsic merit, but also measures higher in all outcomes when compared to other approaches including Method 6-3-5…

  8. Assessment of the Design Efficacy of a Preschool Vocabulary Instruction Technique

    ERIC Educational Resources Information Center

    Roskos, Kathleen; Burstein, Karen

    2011-01-01

    Broad-stroke approaches to vocabulary teaching in preschool include effective instructional elements, yet may be too ill-structured to affect the vocabulary learning of children experiencing serious delays. Using a formative research approach, this study examines the design potential of a supplemental vocabulary instruction technique that…

  9. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), for a realistically scaled model the problem may be difficult, if not impossible, to solve through traditional mathematical programming techniques. Genetic Algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, Proper Orthogonal Decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
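
    As a toy illustration of the combinatorial search described above (our own sketch, not the authors' code), the snippet below brute-forces the maximal information criterion: it picks the subset of candidate wells whose summed squared sensitivities is largest. The sensitivity values are random placeholders for derivatives of simulated heads with respect to hydraulic conductivity; at realistic scale, a GA searching over a POD-reduced model would replace the exhaustive enumeration.

```python
# Illustrative sketch: select k observation wells maximizing the
# sum of squared sensitivities (the maximal information criterion).
import itertools
import random

random.seed(1)
n_candidates, n_params, k = 8, 3, 3
# J[i][p]: sensitivity of the head at candidate well i to parameter p
# (hypothetical placeholder values for a groundwater model's Jacobian)
J = [[random.gauss(0, 1) for _ in range(n_params)] for _ in range(n_candidates)]

def information(design):
    # sum of squared sensitivities over the selected wells
    return sum(J[i][p] ** 2 for i in design for p in range(n_params))

# Small problem: brute-force the combinatorial search a GA would do at scale
best = max(itertools.combinations(range(n_candidates), k), key=information)
print(best, round(information(best), 3))
```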

  10. Cross-cultural patterns in emotion recognition: highlighting design and analytical techniques.

    PubMed

    Elfenbein, Hillary Anger; Mandal, Manas K; Ambady, Nalini; Harizuka, Susumu; Kumar, Surender

    2002-03-01

    This article highlights a range of design and analytical tools for studying the cross-cultural communication of emotion using forced-choice experimental designs. American, Indian, and Japanese participants judged facial expressions from all 3 cultures. A factorial experimental design is used, balanced n x n across cultures, to separate "absolute" cultural differences from "relational" effects characterizing the relationship between the emotion expressor and perceiver. Use of a response bias correction is illustrated for the tendency to endorse particular multiple-choice categories more often than others. Treating response bias also as an opportunity to gain insight into attributional style, the authors examined similarities and differences in response patterns across cultural groups. Finally, the authors examined patterns in the errors or confusions that participants make during emotion recognition and documented strong similarity across cultures.
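
    The balanced n x n design described above lends itself to a simple additive decomposition. The sketch below is our own illustration (the accuracy values are invented): it splits a 3 x 3 expressor-by-perceiver accuracy matrix into main effects and a residual whose diagonal captures the "relational" in-group component, separated from "absolute" cultural differences.

```python
# Decompose a balanced expressor-by-perceiver accuracy matrix into
# grand mean, row (expressor) effects, column (perceiver) effects,
# and a residual; positive diagonal residuals suggest in-group advantage.
acc = [[0.80, 0.62, 0.60],   # rows: expressor culture
       [0.58, 0.78, 0.61],   # cols: perceiver culture
       [0.57, 0.60, 0.79]]   # (made-up accuracies for illustration)
n = 3
grand = sum(sum(row) for row in acc) / n**2
row_eff = [sum(acc[i]) / n - grand for i in range(n)]
col_eff = [sum(acc[i][j] for i in range(n)) / n - grand for j in range(n)]
resid = [[acc[i][j] - grand - row_eff[i] - col_eff[j] for j in range(n)]
         for i in range(n)]
print([round(resid[i][i], 3) for i in range(n)])  # diagonal residuals
```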

  11. Reducing wrong patient selection errors: exploring the design space of user interface techniques.

    PubMed

    Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben

    2014-01-01

    Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients' identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty-seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed.

  12. Reducing Wrong Patient Selection Errors: Exploring the Design Space of User Interface Techniques

    PubMed Central

    Sopan, Awalin; Plaisant, Catherine; Powsner, Seth; Shneiderman, Ben

    2014-01-01

    Wrong patient selection errors are a major issue for patient safety; from ordering medication to performing surgery, the stakes are high. Widespread adoption of Electronic Health Record (EHR) and Computerized Provider Order Entry (CPOE) systems makes patient selection using a computer screen a frequent task for clinicians. Careful design of the user interface can help mitigate the problem by helping providers recall their patients’ identities, accurately select their names, and spot errors before orders are submitted. We propose a catalog of twenty-seven distinct user interface techniques, organized according to a task analysis. An associated video demonstrates eighteen of those techniques. EHR designers who consider a wider range of human-computer interaction techniques could reduce selection errors, but verification of efficacy is still needed. PMID:25954415

  13. Using R in experimental design with BIBD: An application in health sciences

    NASA Astrophysics Data System (ADS)

    Oliveira, Teresa A.; Francisco, Carla; Oliveira, Amílcar; Ferreira, Agostinho

    2016-06-01

    Considering the implementation of an Experimental Design, in any field, the experimenter must pay particular attention to, and seek the best strategies for, the following steps: planning the design selection, conducting the experiments, collecting the observed data, and proceeding to analysis and interpretation of results. The focus is on providing both a deep understanding of the problem under research and a powerful experimental process at a reduced cost. Mainly because it allows sources of variation to be separated, the importance of Experimental Design in the Health Sciences has long been recognized. Particular attention has been devoted to Block Designs and more precisely to Balanced Incomplete Block Designs; in this case the relevance stems from the fact that these designs allow simultaneous testing of more treatments than the block size. Our example refers to a possible study of inter-rater reliability in Parkinson's disease, using the UPDRS (Unified Parkinson's Disease Rating Scale) to test whether there are significant differences between the specialists who evaluate the patients' performances. Statistical studies on this disease were described, for example, in Richards et al. (1994), where the authors investigate the inter-rater reliability of the Unified Parkinson's Disease Rating Scale motor examination. We consider a simulation of a practical situation in which the patients were observed by different specialists and the UPDRS assessment of the impact of Parkinson's disease on patients was recorded. Assigning treatments to the subjects following a particular BIBD(9,24,8,3,2) structure, we illustrate that BIB Designs can be used as a powerful tool to solve emerging problems in this area. Since a structure with repeated blocks allows some block contrasts to have minimum variance, see Oliveira et al. (2006), the design with cardinality 12 was selected for the example. R software was used for computations.
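
    The BIBD(9,24,8,3,2) cited above can be checked against the standard counting identities for balanced incomplete block designs. The sketch below is our own verification (not part of the study) for v = 9 treatments, b = 24 blocks, r = 8 replicates per treatment, block size k = 3, and concurrence lambda = 2.

```python
# Sanity-check the parameter identities of BIBD(9, 24, 8, 3, 2).
v, b, r, k, lam = 9, 24, 8, 3, 2

# Every BIBD must satisfy these two counting identities:
assert b * k == v * r                 # total plots counted two ways: 24*3 == 9*8
assert lam * (v - 1) == r * (k - 1)   # pairs through one treatment: 2*8 == 8*2

# Fisher's inequality: an incomplete block design needs at least v blocks
assert b >= v
print("BIBD(9,24,8,3,2) parameters are consistent")
```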

  14. Effects of experimental design on calibration curve precision in routine analysis.

    PubMed

    Pimentel, M F; Neto, B de B; Saldanha, T C; Araújo, M C

    1998-01-01

    A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data.
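
    The link between design choice and calibration precision can be illustrated with the simplest case, a straight-line calibration, where D-optimality reduces to maximizing det(X'X). The sketch below is our own toy comparison with hypothetical concentration levels, not the program described in the abstract.

```python
# Compare two calibration designs by the determinant of X'X, which
# governs confidence-interval width for a straight line y = b0 + b1*x.
def det_xtx(levels):
    # X has columns [1, x]; det(X'X) = n*sum(x^2) - (sum x)^2
    n = len(levels)
    s1 = sum(levels)
    s2 = sum(x * x for x in levels)
    return n * s2 - s1 * s1

# Six standards spread evenly vs. six pushed to the range extremes (0..10)
even = [0, 2, 4, 6, 8, 10]
extreme = [0, 0, 0, 10, 10, 10]
print(det_xtx(even), det_xtx(extreme))  # → 420 900
```

For estimating only slope and intercept, concentrating standards at the extremes maximizes det(X'X); in practice intermediate levels are kept to check linearity, which is exactly the kind of trade-off such a program can quantify.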

  15. Application of direct inverse analogy method (DIVA) and viscous design optimization techniques

    NASA Technical Reports Server (NTRS)

    Greff, E.; Forbrich, D.; Schwarten, H.

    1991-01-01

    A direct-inverse approach to the transonic design problem was presented in its initial state at the First International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES-1). Further applications of the direct inverse analogy (DIVA) method to the design of airfoils and incremental wing improvements, together with experimental verification, are reported. First results of a new viscous design code, also of the residual correction type with semi-inverse boundary layer coupling, are compared with DIVA; this may enhance the accuracy of trailing-edge design for highly loaded airfoils. Finally, the capabilities of an optimization routine coupled with the two viscous full potential solvers are investigated in comparison to the inverse method.

  16. Experimental Design for Hanford Low-Activity Waste Glasses with High Waste Loading

    SciTech Connect

    Piepel, Gregory F.; Cooley, Scott K.; Vienna, John D.; Crum, Jarrod V.

    2015-07-24

    This report discusses the development of an experimental design for the initial phase of the Hanford low-activity waste (LAW) enhanced glass study. This report is based on a manuscript written for an applied statistics journal. Appendices A, B, and E include additional information relevant to the LAW enhanced glass experimental design that is not included in the journal manuscript. The glass composition experimental region is defined by single-component constraints (SCCs), linear multiple-component constraints (MCCs), and a nonlinear MCC involving 15 LAW glass components. Traditional methods and software for designing constrained mixture experiments with SCCs and linear MCCs are not directly applicable because of the nonlinear MCC. A modification of existing methodology to account for the nonlinear MCC was developed and is described in this report. One of the glass components, SO3, has a solubility limit in glass that depends on the composition of the balance of the glass. A goal was to design the experiment so that SO3 would not exceed its predicted solubility limit for any of the experimental glasses. The SO3 solubility limit had previously been modeled by a partial quadratic mixture model expressed in the relative proportions of the 14 other components. The partial quadratic mixture model was used to construct a nonlinear MCC in terms of all 15 components. In addition, there were SCCs and linear MCCs. This report describes how a layered design was generated to (i) account for the SCCs, linear MCCs, and nonlinear MCC and (ii) meet the goals of the study. A layered design consists of points on an outer layer, an inner layer, and a center point. There were 18 outer-layer glasses chosen using optimal experimental design software to augment 147 existing glass compositions that were within the LAW glass composition experimental region. Then 13 inner-layer glasses were chosen with the software to augment the existing and outer

  17. Using the Continuum of Design Modelling Techniques to Aid the Development of CAD Modeling Skills in First Year Industrial Design Students

    ERIC Educational Resources Information Center

    Storer, I. J.; Campbell, R. I.

    2012-01-01

    Industrial Designers need to understand and command a number of modelling techniques to communicate their ideas to themselves and others. Verbal explanations, sketches, engineering drawings, computer aided design (CAD) models and physical prototypes are the most commonly used communication techniques. Within design, unlike some disciplines,…

  18. A technique for designing active control systems for astronomical telescope mirrors

    NASA Technical Reports Server (NTRS)

    Howell, W. E.; Creedon, J. F.

    1973-01-01

    The problem of designing a control system to achieve and maintain the required surface accuracy of the primary mirror of a large space telescope was considered. Control over the mirror surface is obtained through the application of a corrective force distribution by actuators located on the rear surface of the mirror. The design procedure is an extension of a modal control technique developed for distributed parameter plants with known eigenfunctions to include plants whose eigenfunctions must be approximated by numerical techniques. Instructions are given for constructing the mathematical model of the system, and a design procedure is developed for use with typical numerical data in selecting the number and location of the actuators. Examples of actuator patterns and their effect on various errors are given.

  19. Analysis of the experimental level scheme of {sup 61}Cu using computational technique

    SciTech Connect

    Gupta, Anuradha Verma, Preeti; Bharti, Arun

    2015-08-28

    The high-spin structure of the {sup 61}Cu nucleus is studied in terms of an effective two-body interaction. In order to take the deformed BCS basis into account, the basis states are expanded in terms of the core eigenfunctions. The yrast band, along with some other bands, has been obtained; the back-bending in the moment of inertia has also been calculated and compared with the available experimental data for the {sup 61}Cu nucleus. On comparison with the available experimental as well as other theoretical data, it is found that the treatment with PSM provides a satisfactory explanation of the available data.

  20. Monitoring and optimizing the co-composting of dewatered sludge: a mixture experimental design approach.

    PubMed

    Komilis, Dimitrios; Evangelou, Alexandros; Voudrias, Evangelos

    2011-09-01

    The management of dewatered wastewater sludge is a major issue worldwide. Sludge disposal to landfills is not sustainable and thus alternative treatment techniques are being sought. The objective of this work was to determine optimal mixing ratios of dewatered sludge with other organic amendments in order to maximize the degradability of the mixtures during composting. This objective was achieved using mixture experimental design principles. An additional objective was to study the impact of the initial C/N ratio and moisture content on the co-composting process of dewatered sludge. The composting process was monitored through measurements of O(2) uptake rates, CO(2) evolution, temperature profile and solids reduction. Eight (8) runs were performed in 100 L insulated air-tight bioreactors under a dynamic air flow regime. The initial mixtures were prepared using dewatered wastewater sludge, mixed paper wastes, food wastes, tree branches and sawdust at various initial C/N ratios and moisture contents. According to empirical modeling, mixtures of sludge and food waste at a 1:1 ratio (w/w, wet weight) maximize degradability. Structural amendments should be maintained below 30% to reach thermophilic temperatures. The initial C/N ratio and initial moisture content of the mixture were not found to influence the decomposition process. The bio C/bio N ratio started from around 10 for all runs, decreased during the middle of the process and increased to up to 20 at the end of the process. The solid carbon reduction of the mixtures without the branches ranged from 28% to 62%, whilst solid N reductions ranged from 30% to 63%. Respiratory quotients had a decreasing trend throughout the composting process. PMID:21565440

  1. 78 FR 5162 - Designation of a Nonessential Experimental Population of Central Valley Spring-Run Chinook Salmon...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-24

    ... January 16, 2013 we, NMFS, published a proposed rule (78 FR 3381) to designate a nonessential experimental... Experimental Population of Central Valley Spring-Run Chinook Salmon Below Friant Dam in the San Joaquin River..., published a proposed rule to designate a nonessential experimental population of Central Valley...

  2. High-temperature materials testing with full-field strain measurement: experimental design and practice.

    PubMed

    Novak, Mark D; Zok, Frank W

    2011-11-01

    Experimental characterization of the thermomechanical response of ceramic composites at very high temperatures is plagued by challenges associated with imaging and strain measurement. The problems involve illumination, heat haze, and surface contrast. Techniques that address these challenges have been developed and implemented in a laser heating facility, enabling non-contact strain measurement via digital image correlation. The thermomechanical characterization of both a Ni-based superalloy and a C/SiC composite are used to demonstrate the efficacy of experimental practices in realizing such measurements at temperatures up to 1500 °C. PMID:22129007

  3. EXPERIMENTAL PROGRAM IN ENGINEERING AND DESIGN DATA PROCESSING TECHNOLOGY. FINAL REPORT.

    ERIC Educational Resources Information Center

    KOHR, RICHARD L.; WOLFE, GEORGE P.

    AN EXPERIMENTAL PROGRAM IN ENGINEERING AND DESIGN DATA PROCESSING TECHNOLOGY WAS UNDERTAKEN TO DEVELOP A PROPOSED CURRICULUM OUTLINE AND ADMISSION STANDARDS FOR OTHER INSTITUTIONS IN THE PLANNING OF PROGRAMS TO TRAIN COMPUTER PROGRAMMERS. OF THE FIRST CLASS OF 26 STUDENTS, 17 COMPLETED THE PROGRAM AND 12 (INCLUDING ONE WHO DID NOT GRADUATE) WERE…

  4. OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN

    EPA Science Inventory

    An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...

  5. Design and Experimental Investigation of a Single-stage Turbine with a Downstream Stator

    NASA Technical Reports Server (NTRS)

    Plohr, Henry W; Holeski, Donald E; Forrette, Robert E

    1957-01-01

    The high-work-output turbine had an experimental efficiency of 0.830 at the design point and a maximum efficiency of 0.857. The downstream stator was effective in providing axial flow out of the turbine for almost the whole range of turbine operation.

  6. Characterizing Variability in Smestad and Gratzel's Nanocrystalline Solar Cells: A Collaborative Learning Experience in Experimental Design

    ERIC Educational Resources Information Center

    Lawson, John; Aggarwal, Pankaj; Leininger, Thomas; Fairchild, Kenneth

    2011-01-01

    This article describes a collaborative learning experience in experimental design that closely approximates what practicing statisticians and researchers in applied science experience during consulting. Statistics majors worked with a teaching assistant from the chemistry department to conduct a series of experiments characterizing the variation…

  7. Guided-Inquiry Labs Using Bean Beetles for Teaching the Scientific Method & Experimental Design

    ERIC Educational Resources Information Center

    Schlueter, Mark A.; D'Costa, Allison R.

    2013-01-01

    Guided-inquiry lab activities with bean beetles ("Callosobruchus maculatus") teach students how to develop hypotheses, design experiments, identify experimental variables, collect and interpret data, and formulate conclusions. These activities provide students with real hands-on experiences and skills that reinforce their understanding of the…

  8. Experimental design applied to the formulation of lipsticks with particular features.

    PubMed

    Zanotti, F; Masiello, S; Bader, S; Guarneri, M; Vojnovic, D

    1998-08-01

    In our work a non-classical experimental design was applied to obtain lipsticks endowed with particular characteristics. Our aim was to formulate lipsticks that leave a brilliant and shiny colour on application and have a transparent look. The emollient substances and the waxes (consistency factors) were identified as the main variables of the system. A two-phase experimental strategy was devised: the optimal quantities of consistency factors were selected using a Doehlert experimental matrix, whereas the correct mixtures of emollients were determined using a Scheffé simplex-centroid design. These two designs were combined and a set of 49 experiments was obtained. The experiments carried out allowed the definition of a zone of two phases in which the objectives were attained: the correct types and appropriate quantities of emollients and waxes were determined. To find a possible correlation between some mixtures and the lipsticks' sensorial behaviour, differential scanning calorimetry was used. These results, in addition to those obtained using the experimental design, allowed us to select the best lipstick formula. (c) Rapid Science Ltd. 1998. PMID:18505505

  9. An Experimental Two-Way Video Teletraining System: Design, Development and Evaluation.

    ERIC Educational Resources Information Center

    Simpson, Henry; And Others

    1991-01-01

    Describes the design, development, and evaluation of an experimental two-way video teletraining (VTT) system by the Navy that consisted of two classrooms linked by a land line to enable two-way audio/video communication. Trends in communication and computer technology for training are described, and a cost analysis is included. (12 references)…

  10. A Course on Experimental Design for Different University Specialties: Experiences and Changes over a Decade

    ERIC Educational Resources Information Center

    Martinez Luaces, Victor; Velazquez, Blanca; Dee, Valerie

    2009-01-01

    We analyse the origin and development of an Experimental Design course which has been taught in several faculties of the Universidad de la Republica and other institutions in Uruguay, over a 10-year period. At the end of the course, students were assessed by carrying out individual work projects on real-life problems, which was innovative for…

  11. Building upon the Experimental Design in Media Violence Research: The Importance of Including Receiver Interpretations.

    ERIC Educational Resources Information Center

    Potter, W. James; Tomasello, Tami K.

    2003-01-01

    Argues that the inclusion of viewer interpretation variables in experimental design and analysis procedures can greatly increase the methodology's ability to explain variance. Focuses attention on the between-group differences, while an analysis of how individual participants interpret the cues in the stimulus material focused attention on the…

  13. Whither Instructional Design and Teacher Training? The Need for Experimental Research

    ERIC Educational Resources Information Center

    Gropper, George L.

    2015-01-01

    This article takes a contrarian position: an "instructional design" or "teacher training" model, because of the sheer number of its interconnected parameters, is too complex to assess or to compare with other models. Models may not be the way to go just yet. This article recommends instead prior experimental research on limited…

  14. Multiple Measures of Juvenile Drug Court Effectiveness: Results of a Quasi-Experimental Design

    ERIC Educational Resources Information Center

    Rodriguez, Nancy; Webb, Vincent J.

    2004-01-01

    Prior studies of juvenile drug courts have been constrained by small samples, inadequate comparison groups, or limited outcome measures. The authors report on a 3-year evaluation that examines the impact of juvenile drug court participation on recidivism and drug use. A quasi-experimental design is used to compare juveniles assigned to drug court…

  15. Quiet Clean Short-haul Experimental Engine (QCSEE) Over The Wing (OTW) design report

    NASA Technical Reports Server (NTRS)

    1977-01-01

    The design, fabrication, and testing of two experimental high bypass geared turbofan engines and propulsion systems for short haul passenger aircraft are described. The propulsion technology required for future externally blown flap aircraft with engines located both under the wing and over the wing is demonstrated. Composite structures and digital engine controls are among the topics included.

  16. Development of the Neuron Assessment for Measuring Biology Students' Use of Experimental Design Concepts and Representations

    ERIC Educational Resources Information Center

    Dasgupta, Annwesa P.; Anderson, Trevor R.; Pelaez, Nancy J.

    2016-01-01

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students' competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not…

  17. SELF-INSTRUCTIONAL SUPPLEMENTS FOR A TELEVISED PHYSICS COURSE, STUDY PLAN AND EXPERIMENTAL DESIGN.

    ERIC Educational Resources Information Center

    KLAUS, DAVID J.; LUMSDAINE, ARTHUR A.

    THE INITIAL PHASES OF A STUDY OF SELF-INSTRUCTIONAL AIDS FOR A TELEVISED PHYSICS COURSE WERE DESCRIBED. THE APPROACH, EXPERIMENTAL DESIGN, PROCEDURE, AND TECHNICAL ASPECTS OF THE STUDY PLAN WERE INCLUDED. THE MATERIALS WERE PREPARED TO SUPPLEMENT THE SECOND SEMESTER OF HIGH SCHOOL PHYSICS. THE MATERIAL COVERED STATIC AND CURRENT ELECTRICITY,…

  18. Bias Corrections for Standardized Effect Size Estimates Used with Single-Subject Experimental Designs

    ERIC Educational Resources Information Center

    Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2014-01-01

    A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…

  19. Trade-offs in experimental designs for estimating post-release mortality in containment studies

    USGS Publications Warehouse

    Rogers, Mark W.; Barbour, Andrew B; Wilson, Kyle L

    2014-01-01

    Estimates of post-release mortality (PRM) facilitate accounting for unintended deaths from fishery activities and contribute to development of fishery regulations and harvest quotas. The most popular method for estimating PRM employs containers for comparing control and treatment fish, yet guidance for experimental design of PRM studies with containers is lacking. We used simulations to evaluate trade-offs in the number of containers (replicates) employed versus the number of fish-per container when estimating tagging mortality. We also investigated effects of control fish survival and how among container variation in survival affects the ability to detect additive mortality. Simulations revealed that high experimental effort was required when: (1) additive treatment mortality was small, (2) control fish mortality was non-negligible, and (3) among container variability in control fish mortality exceeded 10% of the mean. We provided programming code to allow investigators to compare alternative designs for their individual scenarios and expose trade-offs among experimental design options. Results from our simulations and simulation code will help investigators develop efficient PRM experimental designs for precise mortality assessment.
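
    The container trade-off the authors simulate can be sketched in miniature (our own toy model, not the study's code): control and treatment containers draw their mortality rates with among-container noise, and a crude two-standard-error rule checks whether the additive mortality is detected.

```python
# Toy simulation of the replicates-versus-fish-per-container trade-off
# in post-release mortality (PRM) containment studies.
import random

random.seed(7)

def detect_rate(n_containers, fish_per, add_mort,
                ctrl_mort=0.1, sd=0.1, trials=500):
    """Fraction of simulated studies flagging the added mortality."""
    hits = 0
    for _ in range(trials):
        arms = []
        for extra in (0.0, add_mort):      # control arm, then treatment arm
            rates = []
            for _ in range(n_containers):
                # each container's mortality varies around the arm mean
                p = min(max(random.gauss(ctrl_mort, sd) + extra, 0.0), 1.0)
                rates.append(sum(random.random() < p
                                 for _ in range(fish_per)) / fish_per)
            arms.append(rates)
        ctrl, trt = arms
        mc, mt = sum(ctrl) / n_containers, sum(trt) / n_containers
        vc = sum((x - mc) ** 2 for x in ctrl) / (n_containers - 1)
        vt = sum((x - mt) ** 2 for x in trt) / (n_containers - 1)
        se = ((vc + vt) / n_containers) ** 0.5
        hits += (mt - mc) > 2 * se         # crude detection rule
    return hits / trials

# Same total fish (120 per arm): many small containers vs. few large ones
print(detect_rate(12, 10, 0.25), detect_rate(3, 40, 0.25))
```

Because among-container variability does not shrink with more fish per container, adding replicate containers tends to pay off more than adding fish once that variability is non-negligible, consistent with the trade-offs described above.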

  20. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    NASA Astrophysics Data System (ADS)

    Elshorbagy, A.; Corzo, G.; Srinivasulu, S.; Solomatine, D. P.

    2010-10-01

    In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike the two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and, if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which covers a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it should